This is an effort to get some discussion going.

I remember starting grad school and coming across Reddit posts with themes like, “What research area will be hot in the next 10 years?” In retrospect, the comments there were not very well informed (lots of talk of graphical models and Bayesian nonparametrics). But the heart of those posts was people sharing research areas they found exciting.

So, tell us what research area is currently exciting to you. Are you starting a new job, project, or graduate program to work on it?

  • karjudev · 1 year ago

    I’m looking forward to seeing the evolution of energy-based models, and I’d like to see how semantic knowledge (in the form of graph embeddings or some other tool) might interact with Transformer models to inject higher-order information into text.

  • joba2ca@feddit.de · 1 year ago

    I had the pleasure of conducting research into self-supervised learning (SSL) for computer vision.

    What stood out to me was the simplicity of the SSL algorithms combined with the astonishing performance of the self-supervised pre-trained models after supervised fine-tuning.

    There’s also the fact that SSL works across tasks and domains, e.g., text generation, image generation, semantic segmentation…
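    To make the “simplicity” point concrete, here is a minimal sketch of one popular SSL objective for vision, a SimCLR-style InfoNCE contrastive loss. The function name, NumPy framing, and toy setup are illustrative assumptions, not from the original post:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Minimal SimCLR-style InfoNCE contrastive loss (illustrative sketch).

    z1, z2: (n, d) embeddings of two augmented views of the same n images.
    Row i of z1 and row i of z2 form a positive pair; all other rows in
    z2 serve as negatives for row i.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # (n, n) similarity matrix
    # Cross-entropy where the "correct class" for row i is column i
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

    The entire objective is a few lines of linear algebra, yet pretraining with losses like this (plus augmentations and an encoder) yields features that fine-tune remarkably well.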

  • Peanut@sopuli.xyz · 1 year ago

    More of an off-and-on hobbyist when I’m not busy trying to survive, so I’ve still got a lot to study. For the most part, I’ve been enthralled by the progress we are making in the general understanding of neural function. I feel like the more we learn in machine learning, the more we can deconstruct the mountain of data we’ve gathered about the brain these past couple of decades. And the more we understand that, the more we can intentionally apply or avoid in the development of neural nets.

    This ranges from deconstructing the complex algorithms that allow brains and bodies to develop from a couple of cells, to understanding the absurd organic mess behind the tangle of processes we use to comprehend our own consciousness. A lot of people seem very excited about this problem from very different angles; I just hope they can cooperate more and argue less about how their method is the only viable one and everyone else is simply wrong.

    Take certain people’s dismissal of probabilistic models. I think it’s silly to argue that our brains definitely do not use such autoregressive functioning as one piece of the puzzle; I just think our brain has many different systems working in tandem.

    Sometimes we just let the brain push out words without much thought, and we may have to backtrack and correct ourselves if the wrong words come out. We often stage information or calculations in short-term memory before choosing to speak, and sometimes we stop mid-sentence to apply these processes. I still believe there is an element of stochastic word selection.
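    The “stochastic word selection” idea can be sketched as temperature sampling over a language model’s output logits. This toy NumPy function (the name and setup are my own, for illustration) shows how a temperature knob moves word choice between near-deterministic and highly random:

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw logits with temperature scaling.

    Low temperature approaches greedy argmax selection; higher
    temperature flattens the distribution, making selection more
    stochastic. Illustrative sketch, not tied to any specific model.
    """
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)
```

    At very low temperature the same token wins essentially every time; at high temperature the model “pushes out words” with much more randomness, which is loosely the analogy being drawn.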

    Yann LeCun has a really good model for developing an autonomous system, but I think he’s too eager to disregard how autoregressive models could be applied within a more complex system.

    Regardless, everything everyone is working on right now is so exciting on every level. I can’t wait to see what else comes. Any advancement from any angle could have profound effects on our lives at this point. Our economy has to adapt without sacrificing all of the poor, and we already have to deal with people who can’t comprehend how basic LLMs aren’t sentient, emotional beings.

    Hopefully I can keep learning more about machine learning and neurology. I hope the Lemmy community can grow until we see as much here as there was on Reddit. I don’t know if anyone can set up a bot like “AI lover” from /r/machinelearningnews; it was a nice feed of new and interesting papers. I just got tired of Reddit becoming progressively worse over time.