There’s so much to legitimately worry about with AI that we often lose sight of its potential good.
Building trustworthy AI won’t be easy, but it’s essential.
It doesn’t seem to be a top priority for most of the people creating AI. I suspect we will mainly be learning from our mistakes here, after they’ve happened.
These brain-computer interfaces are usually discussed in the context of disabled and paralyzed people, but I wonder what they could do for regular people as well. It’s interesting to see how quickly the brain adapts to brand-new sensory information from the computer interface; it makes you wonder what new ways of interacting with computers we haven’t thought of yet.
Pony.ai will be operating robotaxis at Hong Kong International Airport as shuttles for airport employees.
Airport trips seem like perfect territory for Level 4 self-driving vehicles. Many journeys to and from airports start and end at well-established pickup and drop-off points.
It wasn’t so long ago that when people tried to refute the argument that AI and robotics automation would lead to human workers being replaced, they’d say: don’t worry, the displaced humans can just learn to code. There will always be jobs there, right?
The fundamental problem is this: we tend to think about democracy as a phenomenon that depends on the knowledge and capacities of individual citizens, even though, like markets and bureaucracies, it is a profoundly collective enterprise…Making individuals better at thinking and seeing the blind spots in their own individual reasoning will only go so far. What we need are better collective means of thinking.
I think there is a lot of validity to this way of looking at things. We need new types of institutions to deal with the 21st-century information world. When it comes to politics and information, many of our ideas and models for organizing and thinking about things come from the 18th and 19th centuries.
OpenAI is on a treadmill. It has billions of investor dollars pouring into it and needs to show results. Meanwhile, open-source AI is snapping at its heels in every direction. If it’s true that it is holding back on AI agents out of caution, I’m pretty sure that won’t last long.
Interesting to see that the G1 is still aimed at developers and not at mass-market consumers. I wonder how long it will be before a layer of AI software is built on top of what it currently is, so that it can be more widely sold.
He didn’t get everything right!
He was, however, accurate about technology. I wonder if anyone today could be as accurate about 2125? There seem to be so many more possibilities once things as momentous as AGI have happened.
Thanks, we’ll keep track of what they are doing.
I misphrased; they are an Admin/Op, and essential.
Would it be enough to have those rules in place and, when content is reported, actively remove it as mods?
We’re pretty good with daily moderating of content on futurology.today, so I’d be confident we could cover that aspect.
However, I’m wondering about federation issues. Are we liable for UK users who use their futurology.today accounts to access other instances we don’t moderate?
The problem is that the guidance is too broad and overbearing.
This.
Who gets to decide what “self-harm” is? There’ll be some busybodies who’ll say that any remotely positive messaging for LGBTQ youth is “self-harm” for them.
It’s interesting how this movement had its roots in left-wing thought, but has now been thoroughly co-opted by libertarian right-wing types. At its inception it was about tearing down society to start again, hopefully leading to something more equal afterwards.
There’s still a lot of that radicalism about tearing down current society and restarting it, but I don’t think most of the people who identify this way now really care very much about equality.
I admit I’m torn here. On the one hand I think the future is to have AI ubiquitous and integrated into everything. On the other hand, fake AI ‘friends’ on a friend’s network sounds hideous.
I wonder whether this trend of open-source AI equaling the leading investor-funded AI will continue all the way to AGI.
This is my first time seeing this particular model of humanoid robot. It looks quite impressive.
Spooky and interesting for anybody who ever watched Westworld.
Swiss Re is one of the world’s largest insurance companies - do you think they usually lie about such things?
It’s important to understand logic, biases and how to evaluate information sources to avoid conspiracy theory thinking.
Human attention is a finite resource. There aren’t enough people to be interested in all this AI-generated slop. If anything, a deluge of it will make people more interested in focusing on the humans they find interesting.