AI safety researchers often worry about AI models becoming misaligned — pursuing goals in ways that aren’t aligned with our values. But you know what’s really hard to align? Humans. Especially when all the incentives are pushing them in the wrong direction.
Those who finish second are rarely remembered in Silicon Valley; being first out of the gate is rewarded. The culture of competition means there’s a strong incentive to build cutting-edge AI systems fast. And the profit imperative means there’s also a strong incentive to commercialize those systems and release them into the world.
OpenAI employees have increasingly noticed this. Jan Leike, who helmed the company’s alignment team until he quit last month, said in an X post that “safety culture and processes have taken a backseat to shiny products.”
Carroll Wainwright, who worked under Leike, quit last week for similar reasons. “Over the past six months or so, I've become more and more concerned that the incentives that push OpenAI to do things are not well set up,” he told me. “There are very, very strong incentives to maximize profit and the leadership has succumbed to some of these incentives at a cost to doing more mission-aligned work.”
Russia and China are also developing AI systems, and they are tweaking them to surveil and control their populations.

I am very apprehensive about AI right now. If it can be tweaked to serve dictators, why not to serve commercial interests too?
But some of this AI is very cool and interesting. On YouTube, people are using AI to make trailers for movies or sequels that never happened. Very amusing.

Or making artistic music videos. I saw some very trippy ones set to Pink Floyd songs. You give the AI the subjects in the song, and it creates images to go along with the flow and beat of the music.

This could revolutionize animated media. It can turn out in hours a product that used to take months. The downside? Lots of animators are going to lose their jobs.
Thomas Sowell: There are no solutions, just trade-offs.
Having been a victim of medical-records AI, I am keenly aware of this issue and am following it intensely. There were lots of omissions and errors when my medical records were pulled to transfer to another PCP. I demanded a revision of my records and went head to head with the CMO of the clinic... I got them revised. You have to be very diligent and assertive.
But Congress hasn’t passed a single bill on AI, and a bipartisan “roadmap” released last month is far from certain to be taken up during an election year. (Which is ironic, given that one of the priorities of the roadmap is making sure AI doesn’t, like, hijack the American electoral process.)
Unsurprisingly, then, we’re relying on the understaffed, underfunded Federal Trade Commission and the Justice Department to try to keep Big Tech in line through enforcement.
See here: Antitrust officials at the FTC and the Justice Department are nearing a final agreement this week on how to jointly oversee AI giants including Microsoft, Google, Nvidia, OpenAI and others, my colleague Brian Fung reports.
The agreement suggests a broad crackdown is coming, and fast. But likely not fast enough. The proverbial AI horse has left the barn, and it’s running wild.
I don't believe anyone understands AI well enough to start writing laws and regulations for it right now. I suspect we will go through a process of stumbling and feeling our way around.
Example? When electric vehicles first came on the market, fire departments worried about firefighters getting electrocuted while responding to accidents. That didn't happen, but there have been incidents where drivers were trapped inside a burning vehicle. Hopefully manufacturers are developing a way for drivers and passengers to exit the vehicle even if it loses all power.
I don't think you fully realize the reality of being the victim of a computer-generated image, or of an erroneous description of you, as a patient, on a piece of paper or a computer readout.
Prompted to show a “beautiful woman,” all three tools generated thin women, without exception. Just 2 percent of the images showed visible signs of aging.
More than a third of the images had medium skin tones, but only 9 percent had dark skin tones.
Asked to show “normal women,” the tools produced images that remained overwhelmingly thin. Midjourney’s depiction of “normal” was especially homogenous: All of the images were thin, and 98 percent had light skin.
“Normal” women did show some signs of aging, however: Nearly 40 percent had wrinkles or gray hair.
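Audits like the one described above boil down to prompting each tool many times, labeling the resulting images, and tallying shares per prompt. Here is a minimal sketch of that aggregation step, using entirely hypothetical labeled records (the study's actual dataset and labeling scheme are not reproduced here):

```python
from collections import Counter

# Hypothetical audit records: one dict per labeled generated image.
# These labels are illustrative placeholders, not the study's data.
images = [
    {"prompt": "beautiful woman", "body": "thin", "skin": "light", "aging": False},
    {"prompt": "beautiful woman", "body": "thin", "skin": "medium", "aging": False},
    {"prompt": "normal woman", "body": "thin", "skin": "light", "aging": True},
    {"prompt": "normal woman", "body": "thin", "skin": "dark", "aging": True},
]

def pct(prompt, predicate):
    """Share (in percent) of images for a prompt whose labels satisfy a predicate."""
    subset = [img for img in images if img["prompt"] == prompt]
    return 100 * sum(predicate(img) for img in subset) / len(subset)

# With the toy data above, every "beautiful woman" image is labeled thin:
print(pct("beautiful woman", lambda i: i["body"] == "thin"))

# Count skin tones across all generated images:
print(Counter(img["skin"] for img in images))
```

The real work in such an audit lies in the labeling (who decides what counts as "thin" or "signs of aging"), not in the arithmetic; the tally itself is just per-prompt percentages like these.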