Miles Brundage, Head of Policy Research at OpenAI, has left the company. He spent 6 years and 3 months at OpenAI advising its executives and board on AGI readiness.
Nextbigfuture covers what Miles thinks is needed for AGI readiness. I agree the world needs to be able to capture more value from rapidly improving AI.
Nextbigfuture projects that xAI will pull into a clear lead in AI large language models by building a 20X AI training advantage. Nvidia CEO Jensen Huang has described how xAI (and Tesla AI) can install the best GPU chips almost a year faster than companies like OpenAI and Meta. xAI has a 100,000-GPU Nvidia H100 AI data center operating about 3 months ahead of Meta and Microsoft, despite getting its chips 9 months after Meta and OpenAI. The first volume production runs of the new Nvidia B200 chips (roughly 4 times better than the H100) will be built and delivered in the next few months. AI training compute correlates with the performance of AI models, so a 20X training advantage would put xAI roughly two generations ahead of OpenAI and Meta. OpenAI will have GPT-5, but xAI could release a model equal to a future GPT-7 over a year ahead of OpenAI.
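As a rough illustration of how these numbers combine, here is a back-of-the-envelope sketch in Python of relative training compute. The cluster sizes, the 4X per-chip B200 speedup, and the timelines follow the estimates above; the specific scenario (a 3-month head start plus an assumed B200 expansion) is hypothetical.

```python
# Back-of-the-envelope comparison of relative AI training compute.
# GPU counts, speedups, and timelines are illustrative assumptions.

H100_RELATIVE_PERF = 1.0   # baseline: one H100
B200_RELATIVE_PERF = 4.0   # assumes ~4X an H100, per the estimate above

def cluster_compute(num_gpus: int, perf_per_gpu: float, months_online: float) -> float:
    """Relative training compute = GPUs x per-GPU performance x time online."""
    return num_gpus * perf_per_gpu * months_online

# Hypothetical scenario: a 100,000-GPU H100 cluster online for 12 months
# versus an identical rival cluster that comes online 3 months later.
leader = cluster_compute(100_000, H100_RELATIVE_PERF, 12)
rival = cluster_compute(100_000, H100_RELATIVE_PERF, 9)
print(f"Head-start advantage: {leader / rival:.2f}X")  # 1.33X

# Add a hypothetical 100,000-GPU B200 expansion online for 6 months.
leader += cluster_compute(100_000, B200_RELATIVE_PERF, 6)
print(f"With the assumed B200 expansion: {leader / rival:.1f}X")  # 4.0X
```

The point is that chip generation and time online multiply, which is how a faster installation cadence plus an earlier move to B200s could compound toward the projected 20X gap.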
Miles says the world is not ready to manage AGI (artificial general intelligence).
He wants to spend more time working on issues that cut across the whole AI industry, to have more freedom to publish, and to be more independent.
He will be starting a nonprofit (and/or joining an existing one) focused on AI policy research and advocacy, since he thinks AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so.
His areas of research interest include assessment and forecasting of AI progress, regulation of frontier AI safety and security, the economic impacts of AI, acceleration of beneficial AI applications, compute governance, and overall “AI grand strategy”.
He thinks OpenAI remains an exciting place for many kinds of work, and he is excited to see the team continue to ramp up investment in safety culture and processes.
He wants to talk to folks who might want to advise or collaborate on his next steps.
AGI Policy Scorecard Based Upon What Miles Thinks Is Needed – NOTE: Miles Can Be Wrong
What Miles Thinks we need to do: Create a shared understanding of the upsides and downsides of AI to align perceived interests and incentives. GRADE: B-
What Miles Thinks we need to do: Make rapid technical progress in safety and policy research (including alignment, interpretability, dangerous capability and proliferation evaluations, and proof-of-learning) so that we have the tools we need at the right time to ensure risks are appropriately managed. GRADE: C-
What Miles Thinks we need to do: Further incentivize sufficient safety by regulating high-stakes development and deployment via mechanisms like reporting and licensing requirements (to ensure regulatory visibility into frontier AI research and deployment), third-party auditing of risk assessment, incident reporting, liability insurance, compute governance, and confidence-building measures. Regulation should cover AI inputs (e.g. compute), processes (e.g. safe development), and outputs (e.g. first-party use, API-based deployment, and individual use cases), and should take an “ecosystem approach,” i.e. taking into account that AI risks cannot be fully managed at the point of inference (e.g. discovering and addressing a particular deceptive use of AI may depend heavily on context, and will require more wide-ranging interventions touching on social media policy enforcement, advertising regulation, etc.). GRADE: C+
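To make the compute-governance piece concrete, here is a minimal sketch of the kind of threshold check a reporting regime might run. The 1e26-operation figure echoes the reporting threshold in the 2023 US Executive Order on AI, and 6 × parameters × tokens is the standard rule of thumb for dense transformer training compute; the example model sizes are invented.

```python
# Minimal sketch of a compute-governance reporting check.
# The 1e26-operation threshold echoes the 2023 US Executive Order on AI;
# 6 * params * tokens is the standard dense-transformer training-FLOP
# approximation. The example run below is invented.

REPORTING_THRESHOLD_FLOP = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer: ~6 * N * D FLOP."""
    return 6.0 * n_params * n_tokens

def requires_reporting(n_params: float, n_tokens: float) -> bool:
    return training_flops(n_params, n_tokens) >= REPORTING_THRESHOLD_FLOP

# Hypothetical frontier run: 2 trillion parameters on 20 trillion tokens.
flops = training_flops(2e12, 20e12)
print(f"Estimated training compute: {flops:.1e} FLOP")         # 2.4e+26 FLOP
print("Reporting required:", requires_reporting(2e12, 20e12))  # True
```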
What Miles Thinks we need to do: Experiment with novel means of public input into private decisions, increase the extent to which democratic governments are informed about and involved in AI development and deployment, and increase the transparency of private decision-making. GRADE: D+
What Miles Thinks we need to do: To prevent pollution of the epistemic commons and increase humanity’s ability to navigate this turbulent period, there should be heavy investment in informed public deliberation via proof of personhood, media provenance, AI literacy, and assistive AI technologies. To prepare for rapid AI-enabled economic disruption, shore up social safety nets globally and begin global distribution of AI-enabled productivity gains (leveraging proof-of-personhood techniques). GRADE: F
We already have deepfakes, text-to-speech scams, etc. running amok, and there doesn’t yet seem to be an appreciation of the fact that provenance, watermarking, etc. are all “whack-a-mole” situations with respect to both modalities and providers. The politics of AI job displacement is about to blow up, and policymakers (and even most people in AI) don’t really appreciate the extent of this. Solving these things also seems hard: there are collective action issues around standard design, implementation, and building demand for their use, plus free speech concerns, privacy concerns with biometric-based proof-of-personhood solutions, an unclear and contested endgame for the future of work, etc.
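To see why provenance is whack-a-mole, consider a minimal hash-and-sign sketch. Real standards such as C2PA use asymmetric signatures and embedded manifests; this stdlib HMAC stand-in (with an illustrative key) captures the same core idea.

```python
# Minimal hash-and-sign provenance sketch. Real standards (e.g. C2PA)
# use asymmetric signatures and embedded manifests; this stdlib HMAC
# with an illustrative key stands in for the same idea.
import hashlib
import hmac

SIGNING_KEY = b"publisher-demo-key"  # illustrative only, not a real secret

def sign_media(data: bytes) -> str:
    """Bind the publisher's key to the exact bytes of a media file."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign_media(data), tag)

original = b"raw video bytes..."
tag = sign_media(original)
print(verify_media(original, tag))                   # True: exact signed bytes
print(verify_media(b"re-encoded video bytes", tag))  # False: any change breaks it
```

Verification only succeeds on the exact signed bytes: any re-encode, crop, or screenshot breaks the hash, and an unsigned copy circulates with no provenance at all, which is why coverage across modalities and providers is the hard part.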
Differential technological development
(context on the term: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4213670)
What we need to do: Leverage all of the asymmetric advantages that humanity has over future misaligned AIs, malicious actors, and authoritarian leaders, namely: more time to prepare, greater (initial) numbers of people and AIs, greater (initial) amounts of compute, a compelling objective many can rally around (avoiding human disempowerment or extinction), and having “the initiative.” Use these to invest in societal defenses such as dramatically improved cybersecurity, physical defenses, AI-enabled strategic advice, AI literacy, and a capacity for fast institutional adaptation.
GRADE: D+. Basically everything good happening with AI is happening because of “the market” (which is good at some things and not others). Many AI applications are of dubious value.