AI Is Mastering Language. Should We Trust What It Says?

But even as the sophistication of GPT-3 has dazzled many observers, the large-language-model approach has also drawn significant criticism in recent years. Some skeptics argue that the software is capable only of blind imitation: it mimics the syntactic patterns of human language but cannot generate its own ideas or make complex decisions, a fundamental limitation that will keep the LLM approach from ever maturing into anything resembling human intelligence. To these critics, GPT-3 is just the latest shiny object in a long history of AI hype, diverting research funds and attention toward what will ultimately prove to be a dead end and preventing other promising approaches from maturing. Other critics believe that software like GPT-3 will remain forever compromised by the bias, propaganda, and misinformation in the data it was trained on, meaning that using it for anything more than parlor tricks will always be irresponsible.

Wherever you land in this debate, the pace of recent improvement in large language models makes it hard to imagine that they will not see commercial use in the years to come. And that raises the question of how exactly they, and the other rapid advances in AI for that matter, should be unleashed on the world. With the rise of Facebook and Google, we have seen how dominance in a new area of technology can quickly lead to astonishing power over society, and AI threatens to be even more transformative than social media in its ultimate impact. What is the right kind of organization to build and own something of such scale and ambition, with such promise and such potential for abuse?

Or should we build it at all?

The origins of OpenAI date to July 2015, when a small group of tech-world luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place against the backdrop of two recent developments in the technology world, one positive and one more worrying. On the one hand, radical advances in computing power, along with new breakthroughs in neural network design, had created a palpable sense of excitement in the field of machine learning; there was a feeling that the long “AI winter,” the decades during which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, locks, tractors, tables) with an accuracy far greater than any neural network had previously achieved. Google quickly swooped in to hire the AlexNet creators, while also acquiring DeepMind and launching its own initiative called Google Brain. The mainstream adoption of smart assistants like Siri and Alexa had shown that even scripted agents could be breakthrough consumer hits.

But during the same period, a seismic shift in public attitudes toward big tech was underway, with once-popular companies like Google and Facebook criticized for their near-monopolistic power, their amplification of conspiracy theories, and their relentless pull of our attention toward algorithmic feeds. Longer-term fears about the dangers of artificial intelligence were surfacing in op-eds and on the TED stage. Oxford University’s Nick Bostrom published his book Superintelligence, in which he laid out a series of scenarios in which advanced AI could deviate from the interests of humanity, with potentially disastrous consequences. In late 2014, Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.” It looked like the Web 2.0 story all over again, except this time the algorithms wouldn’t just be sowing polarization or selling our attention to the highest bidder; they could end up destroying humanity itself. And once again, all the evidence pointed to the fact that this power would be controlled by a few Silicon Valley mega-corporations.

The agenda for the dinner on Sand Hill Road that July evening was ambitious: to find the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences that had plagued the Web 2.0 era and the long-term existential threats. Out of that dinner, a new idea began to take shape, one that would soon become a full-time pursuit for Y Combinator’s Sam Altman and Greg Brockman, who had recently left Stripe. Interestingly, the idea was less technological than organizational: if AI were to be unleashed on the world in a safe and beneficial way, it would require innovation at the level of governance and incentives, as well as stakeholder engagement. The technical route to what experts call artificial general intelligence, or AGI for short, was not yet clear to the group. But the worrying predictions of Bostrom and Hawking convinced them that the attainment of human-like intelligence by AIs would vest a staggering amount of power, and moral burden, in whoever eventually managed to invent and control them.

In December 2015, the group announced the creation of a new entity called OpenAI. Altman had signed on as the company’s chief executive, while Brockman oversaw the technology; another dinner attendee, AlexNet co-creator Ilya Sutskever, had been recruited from Google to lead research. (Elon Musk, who also attended the dinner, joined the board but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambitions: “OpenAI is a non-profit artificial intelligence research company,” they wrote. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” They added, “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

The OpenAI founders would publish a public charter three years later outlining the core principles behind the new organization. The document was easily read as a not-so-subtle nod to Google’s “don’t be evil” slogan from its early days, an acknowledgment that maximizing the social benefits of new technology, and minimizing its harms, was not always the simplest calculation. While Google and Facebook had achieved global dominance through closed-source algorithms and proprietary networks, the OpenAI founders promised to go in the other direction, freely sharing new research and code with the world.
