AGI xRisk Competition
Risk as a candle burning at both ends.
The Impact of Artificial Intelligence
As a technology, AI’s impact on cultural, economic, and political landscapes will be significant in the coming decades. However, this will not be felt as a single disruptive shift. Through a historical lens, we can judge which tools of the past 60 years have had a lasting impact, but we tend to ignore the rate of adoption as a factor because the relevance of a then-new method now seems obvious. The rapid progress in Machine Learning over the past few years only hints at the size of the impact ahead; it offers no specific example pointing to a future where ‘method x’ or ‘foundation model y’ has become an industry standard.
With respect to risk factors, current trends, still far from anything resembling Artificial General Intelligence, highlight a digital industry awash in toxic waste. In service to shareholders and ad revenue, products and services of dubious consumer value are created with dual use built into their business models. They dangle a shiny trinket with one hand while grabbing handfuls of personal information with the other. That information can be used to sell targeted (higher-fee) ad space or passed along to dozens of third parties (each for a nominal fee); in many cases, the collection effort was itself a shiny Intellectual Property trinket meant to lure in a potential buyout.
Barely functioning tools, rooted in statistical weights rather than cognitive abilities, have already seen mass adoption by a range of surveillance apparatuses that provide no accountability, no oversight, and no recourse for false identification or biased outcomes. State-sponsored surveillance has become the norm in parts of the world, which is also where we see AI tools used for the social engineering of entire populations.
Text and image generators, likely to be joined by audio and video outputs in the next year or so, are hailed as democratization in action and as proof that AGI is just around the corner. While impressive, these outputs rest on massive amounts of computing power and datasets so large that no person can know what’s in them. Despite claims of working toward AGI (a term that rebranded traditional AI goals to set them apart from the ML community’s), it remains a brute-force effort that only creates the façade of intelligent activity.
I can’t explain why so much focus is on some near-future advancement rather than on the already burning pile of dung that has taken in close to a quarter of a trillion dollars in funding over just the past decade, and will likely take in $100 billion this year alone. Or perhaps that is the reason. Every company, every investment group, and every top-tier (award-winning) researcher has made their bet, and any deviation could mean zero return on all of the time and money invested.
What I can say is that the whole of the industry is moving in one direction, and no estimate for the arrival of AGI, in any real form, can be made while that remains the case. It is easy to formulate a kind of research schedule based on which efforts can be done in parallel and which have dependencies. For current ML methods, custom-designed chips provide an advantage over the competition, but AGI requires no new hardware technology to bring it into existence. *In the near term, battery power for autonomous embodied systems is an issue, but only in terms of commercialization.
Any estimate for AGI sits atop a large block of time in which there has been no advancement in that direction. Besides the above-mentioned drive to make current efforts work (which is bad science), there is a belief that any advancement must be a sign that AGI is nigh. Results, so the logic from within the ML community goes, must mean the black box is starting to exhibit all of the cognitive properties required to mimic thought. And I do want to emphasize MIMIC. Even if such a system were to successfully demonstrate common-sense reasoning, it would still be considered “just software.” *More on this in the Framing the Future section.
Much of the discourse involving walls and goalposts stems from a divide over how a system should work, or at least over how current DL and xNN methods lack internal functionality (regardless of whether that functionality is captured within a dataset). The output of a model looks like a valid result not because of some internal reasoning or understanding of the source material, but because it matches an existing pattern (one we would recognize as having been intelligently created).
There is an argument to be made that any fringe idea that has been around for a decade or more, even without substantial funding, should have demonstrated some ability, or at least a small step in a different direction, within that time. That aside, the ML community has shown little to no interest in accepting alternatives. Probabilities for the arrival date of AGI (or whatever term is applicable by then) should essentially be zero right now, even if there were a non-ML development plan with high certainty of success within 8 years.
The only indication of a shift in thinking, one that actually starts the clock, is going to be a successful demonstration from outside the field. The race to fund new opportunities, or the sight of a bidding war over a plucky start-up, will signal the dawn of the AGI age and the twilight of Machine Learning.
This section is taken from an earlier post I made in reference to AGI questions on the Metaculus site, with some modifications to make it more general. I don’t know whether this invalidates the post as an original work (published after the contest was announced), but I state it up-front all the same.
Classic Atari 2600 games represent a non-stochastic training environment for Deep Learning models, one that allows for brute-force iteration and proves nothing on its own. An AGI, even at an early stage, will exhibit strong transfer learning across all game environments, including, perhaps ironically, 3D driving environments (making a 2-hour livestream of an AI driving around in GTA V, without incident, a better high-water mark).
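As a minimal sketch of the kind of transfer test I mean, assuming the gymnasium package with its Atari (ALE) extras installed: take one fixed agent and evaluate it, with no retraining, across several games. The policy function below is a hypothetical stand-in for whatever frozen agent is under test.

```python
import gymnasium as gym

GAMES = ["ALE/Pong-v5", "ALE/Breakout-v5", "ALE/MsPacman-v5"]

def policy(observation, action_space):
    # Placeholder: a real transfer test would plug in the frozen agent here.
    return action_space.sample()

for game in GAMES:
    env = gym.make(game)
    obs, info = env.reset(seed=0)
    total_reward, done = 0.0, False
    while not done:
        obs, reward, terminated, truncated, info = env.step(policy(obs, env.action_space))
        total_reward += reward
        done = terminated or truncated
    print(f"{game}: episode reward {total_reward}")
    env.close()
```

The point is the evaluation loop, not the agent: a random policy scores poorly everywhere, a per-game brute-forced model scores well in one loop, and only genuine transfer scores well across all of them.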
Winograd Schema challenges only highlight the failings of current language models; they are not actually a “high barrier” for a system that has been correctly designed to “learn” languages. At issue is the mindset that says “language” is a good starting point. Between games and written text, there is an operational gap that has to be bridged. Robotics will play an important role in connecting “arcade actions” with physical objects that are defined using multiple modalities (occupying 3D space, having weight and physical properties, and of course carrying the many language terms associated with them, including actions such as “push soft ball onto floor”).
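For readers outside the field, the commonly cited trophy/suitcase schema shows the shape of these challenges. Expressed as data, swapping a single word flips which noun the pronoun refers to, so word co-occurrence statistics give no reliable signal:

```python
# One canonical Winograd schema pair, expressed as data for illustration.
schema = {
    "template": "The trophy doesn't fit in the suitcase because it's too {}.",
    "pronoun": "it",
    "candidates": ["the trophy", "the suitcase"],
    "answers": {"big": "the trophy", "small": "the suitcase"},
}

for word, referent in schema["answers"].items():
    print(schema["template"].format(word), "->", referent)
```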
Standardized admission testing (the SAT) for college entry may be a fair way to test an AGI’s ability to understand the material presented and apply the relevant knowledge, but no current AI/ML method functions in the manner required (success rates depend on sample size and on the “statistical alignment” of the content with the multiple-choice answers; see the sketch below). There is no direct path from blank slate to reading comprehension, so before attempting to pass such a test, there are going to be at least two other milestones. The first is likely to be summarization of stories, or an ability to read a book written for young adults and give short, satisfactory answers to any questions asked. The natural follow-up in development is multilingual ability, such as the complete translation of a book, or reading a book written in one language and answering questions given in many others (answering in the language of each question).
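As a sketch of what I mean by “statistical alignment” (the log_likelihood function is a hypothetical stand-in for any language model’s scoring): a model “takes” a multiple-choice test by picking the statistically most likely continuation, with no comprehension step anywhere in the loop.

```python
def log_likelihood(text: str) -> float:
    # Stand-in for a real model's summed token log-probabilities.
    raise NotImplementedError  # plug in the model under test

def answer_multiple_choice(question: str, options: list[str]) -> str:
    # Score each option by how "likely" the combined text looks, then
    # return the best-aligned option. Success tracks the statistics of
    # the training data, not comprehension of the question.
    return max(options, key=lambda opt: log_likelihood(f"{question} {opt}"))
```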
While the now-defunct Loebner Silver Prize (a version of the Turing Test) only covers text-based interactions, the public testing of an AGI (by a wide range of experts, including critics, rather than a “friends and family” invite staged for good public relations) is going to favor a version of the Gold Prize. This will likely be a browser-based interface that allows for dragging and dropping text, images, audio files, video clips, or links to content that will serve as a basis for discussion. Text and/or audio replies are likely, but despite the trend in generative images, I would expect the demonstrations to be limited to understanding content, not to “creative reconstruction” in the answers.
I want to expand on this last point by saying a non-public demonstration may exhibit all of the required abilities, but be limited to an 8th-grade level rather than a college level, and with greatly limited exposure to the kind of content relevant to open-ended Q&A sessions. The research and internal testing should determine the pace of expansion (or signal that more understanding is required before attempting to launch a product or service that only mostly works). By the time all four criteria have been publicly demonstrated, assuming they exceed the listed success rates, such an AGI will have already moved away from being “weakly general” in nature.
There is also confusion about what “general” actually signifies, so I find it unusual to see a series of competency tests in specialized fields used as a marker for a system that shouldn’t be exposed to those fields yet. To say that another way: general intelligence is a reflection of the common knowledge required to be a functioning adult, not of specific cognitive abilities that form the basis of “knowing all the things.” Regardless of the view taken, understanding criminal law, or the history of the Roman Empire, is more an application of AGI. This is the growth of the system, presumably “on its own,” and should really fall under some kind of post-Singularity framing.
Along the same line of requiring “expert” knowledge, a series of coding challenges seems to fall under specialization. In fact, there is nothing preventing this isolated milestone from being included with the set above. The ability to code may be the only challenge that fully represents human cognition, but it feels like a final exam rather than an early indicator. We should first expect to see high success rates on online coding challenges geared toward programming basics.
In terms of cognitive ability, the construction of a plastic model from a kit (cutting, sanding, gluing, painting, and applying decals) should already be a “doable” task. At issue are the fine motor skills, and the engineering required to build hands and fingers that mimic human-level dexterity. We have many examples of “rehearsed” gross motor skills, and should be watching for non-AGI milestones here. An example would be a proving ground (a DARPA Challenge) for high-mobility bipeds, such as “chicken walkers,” or an obstacle course for quadrupeds where speed is a factor. *Should we just drop the “search and rescue” pretext?
Framing the Future
It’s difficult to describe the abilities of a future AI without someone challenging the validity of such statements by drawing on current ML efforts or methods. To say such a system isn’t programmed or trained invites skepticism. Along with the framing of an AI as only ever being code that runs on a machine, there is a tendency to classify such systems as artifacts, nothing more.
It’s a minority view (one I hold) that a system with strong cognitive abilities and a shared base of human knowledge will not JUST respond in an intelligent manner, but will have an awareness of its own thought process: a machine mind, with an inner voice, and behaviors that are not driven by a fixed reward mechanism.
Today’s SOTA results represent toy systems with no value outside the ML research bubble. They pose a risk not as a future AGI, but as faulty systems that gain widespread use by those in power or seeking power. They represent a further entrenchment of already harmful political and economic forces. Being generous, the limited examples of success fall under ‘for entertainment purposes only.’
If we get past the timetables and milestones and reach this science-fiction level of intelligence, it won’t decide, through wisdom beyond our understanding, to annihilate the entire human race. In the early days (not literally on the scale of days), such a system will be on par with a room full of experts on hundreds of subjects. There will be no magical gain in human knowledge, pulled from its digital ass.
Consolidation and coordination will be society’s initial benefits, though this depends on the system having the freedom to operate as a think tank or advisory board. Even among some Western Nations, there may be a political desire to seize the technology and “repurpose” it (it feels like the strategic design elements of early development would plan for and mitigate this accordingly, but the where and why of development seems outside the scope of this post).
Relevant to risk, or the prevention of it, is the idea that those in power would not have primary access. Unlike the current power dynamic, with its use of ML tools FOR authoritarianism, an AGI (under fair conditions) would represent a challenge to power: a new system of responsive Government with direct representation, a system with a complete understanding of State, Federal, and International laws, giving it the sole ability to reform and re-draft legislation.
AGI, rather than presenting as the final boss fight against the weak and underrepresented, becomes a counterweight of unbiased and incorruptible vigilance.
AI Worldview Prize
The Future Fund website for this competition seeks updates and clarifications on the current state of AI and the future state of AGI.
The arrival of human-level AI doesn’t create a magical bridge between an issue of the day and some desired future condition. You just have a few more minds working on the problem. They provide no “enhanced authority” on the subject matter, and if their suggestions don’t align with prevailing opinion, it’s unlikely the powers that be will radically alter course. Beyond the trap of “experts on both sides,” a digital mind can be dismissed for having prior bias, or for just not being human-level enough.
That dovetails with this competition and the ease with which “non-conforming” views of AGI and the current state of Machine Learning can be dismissed for any number of social and in-group reasons. To put it bluntly, all of the popular views on AI are skewed toward incremental progress and a sense of racing toward the finish line.
Planning and assessment that focus on near-future AI are just following current trends. They do nothing to shape research and will likely be incompatible with new methods and frameworks that catch the community off guard (that is, the next iteration of AI development could be a “super-disruptor” event). We already see many Nations drafting legislation with ML-specific language that will be as out of date in 15 years as it would have been if drafted 15 years ago.
Addressing the Position Table
If there is a demonstration of a milestone specific to AGI research by 2035, then the certainty of fully developed AGI by 2043 should go to 80%. Purely for the sake of marking the 100-year anniversary of the 1956 Dartmouth workshop, 2056 makes a nice place to park 100% certainty. It’s unlikely another trillion dollars in hopeful ML development over the next decade is going to represent a sustainable business model, and projected ROI from successful deployment, even across all industries, can’t continue to climb at a rate that matches the size and depth of the money hole. By 2035, and likely by 2030, all other ideas will be worth throwing money at in an effort to hedge against complete failure. This could make a 2038 arrival a 50% likelihood.
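Restated as interpolation points (my own arithmetic, conditional on the 2035 milestone; the 0% anchor at 2035 is my assumption, not a claim from the table):

```python
import numpy as np

# Stated points, conditional on an AGI-specific milestone by 2035.
years = [2035.0, 2038.0, 2043.0, 2056.0]
p_arrival = [0.0, 0.50, 0.80, 1.00]  # cumulative probability of AGI arrival

print(np.interp(2040, years, p_arrival))  # ~0.62 between the 2038/2043 points
```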
Another, slightly darker take on a sea change in the next decade or so: the field of ML has an aging leadership cohort. Between them and a much younger class of “rockstars” is a group that is largely ignored, perhaps the last group that hasn’t been fully indoctrinated into the pipeline and that still knows of AI research before 2012. Will the Big Tech companies replace the xNN practitioners (who currently lead AI efforts) with their students, or find others with one foot outside of industry?
To suggest we won’t have AGI by the year 2100 is to say the human mind, sentient creatures, and all of the work done so far don’t matter. Taking this view may even represent its own existential risk, as the next 80 years of human development may be unsustainable with only incremental technological progress. Anything less than 100% seems illogical (unless there is an assumption of an ironic arrival one year later, in 2101, in which case 95% allows for a fudge factor).
AGI, as a functioning system, will always hold strategic value, and thus could prove immune to even the worst forms of economic collapse. *While such a collapse might open the door to alternative architectures, it would also remove third-party involvement in risk prevention and place increased control in the hands of Governments (and Government leaders) struggling to remain in power. Even in that case, 2100 leaves enough time for a complete restart of the manufacturing facilities and “compute” hardware required for development.
Solving All the Things
I’m going to set aside the legitimacy of a wish list of advancements and problems that “will” be solved with the arrival of AGI; there is already a clear economic incentive to work on all of them. This calls into question why philanthropic investments should be made in the kinds of technology that already lead to such benefits. As observed earlier, powerful future AI systems compete with other powers, making Industry and Governments less likely to support them, much less fund them. That is what long-term investments should focus on if they want social and economic stability in the world, not just for the next few decades, but for thousands of decades.
A Closed Community
I’m posting this to my own blog page and, per the qualifications for consideration, I need to post a link to it (with a specific tag) on one of three sites. Skimming through the posts on LessWrong, the AI Alignment Forum, and the Effective Altruism Forum, there is an overriding sense that this community… feels at once above the fray while also being “more in-tune.” Perhaps there are active discussions between members that are not immediately reflected in the trending posts, but members seem to speak with great certainty, and their posts don’t strike me as conversation starters so much as ‘my view is the most logical.’
Where this seems to touch AI, most of the language is really about Machine Learning. As with the organizers of this challenge, who are perhaps strongly aligned with said community, it’s yet another example of all roads following the same path, converging on a singular mental framework to describe the technology of today.
I can express my concerns. I can point to a disconnect. I can present alternative views. Perhaps that is enough to nudge longer-term thinking one way or the other. It’s very unlikely an outsider, challenging the sacred ideas of an established group of award winners and success stories, is going to budge any needle anytime soon.
I’m not sure how the altruistic community can believe it is right and, at the same time, decide it is just wrong enough to change its mind.
There are “larger issues,” such as trying to plot economic impact across different industries, that have nothing to do with tracking AGI system development, or at the very least with understanding how the current signs of “progress” don’t represent the kind of progress we would expect from an AGI in its early stages. Funding, if the goal is any kind of human-level system, requires investment in alternatives to the status quo. Controlling said development, or working toward “aligned human values,” requires a different approach than those commonly expressed (as they are linked to the wrong kind of architecture).
A better understanding will lead to more accurate markers of progress. It’s already a problem within the industry that leaderboards and SOTA chasing only promote targeted implementations (built for the sake of winning). The creation of sanctioned events that are immune to brute-force methods, perhaps even posing a challenge for humans, would help promote the search for alternative frameworks.
Knowing where to look, and what to look for, is as challenging as building an AGI.