Foresight Institute is allocating $1.2M for Novel AI Safety Research
The Foresight Institute, an esteemed non-profit organization established in 1986, has been at the forefront of identifying and nurturing high-impact, early-stage technological developments such as nanotechnology, AI, advanced biotech, and longevity enhancement, and has created programs like the Feynman Prize to support them. Foresight also runs a program on “existential hope”, advancing the concept coined by Toby Ord and Owen Cotton-Barratt in their 2015 paper “Existential Risk and Existential Hope: Definitions”, in which they wrote:
“…we want to be able to refer to the chance of an existential eucatastrophe; upside risk on a large scale. We could call such a chance an existential hope. … Some people are trying to identify and avert specific threats to our future - reducing existential risk. Others are trying to steer us towards a world where we are robustly well-prepared to face whatever obstacles come - they are seeking to increase existential hope.”
When faced with the challenge of ensuring the safe and beneficial evolution of Artificial General Intelligence (AGI), the Institute recognized the need to solicit innovative and underexplored proposals, calling for unique perspectives and groundbreaking solutions.
The Institute has a legacy of supporting bleeding-edge research initiatives that are highly relevant to humanity’s future as a species. One way it has consistently and effectively navigated such fields is by cultivating a network of domain experts. This is how 5cube Labs began its collaboration with the Foresight Institute: helping identify the most promising and potentially impactful areas to fund in AI safety research.
The Talent
To handle the intricacies of this mission, Foresight enlisted the expertise of a consultant from 5cube Labs. With a rich history in machine learning, the consultant has carved a particular niche in “Security, cryptography, and auxiliary approaches for infosec and AI security”. They had just published a book on trustworthy AI practices, covering areas such as privacy-preserving machine learning, and had previously published research on stealing neural network weights using just noise (presented at ICML).
Their role? To serve as an advisor, bringing a unique lens to evaluate applicants for the grant, identifying the most promising and groundbreaking projects that would receive funding.
Project Challenges
Judging an AI safety grant of this scope and magnitude posed several unique challenges:
Deep Expertise Requirement: Given their specific focus on the AI security track, the consultant had to sift through complex proposals, distinguishing those that offered genuine innovation from those that were mere repackagings of existing ideas.
Wide-reaching Perspective: They had to account for a diverse range of proposals from many different institutions and domains (e.g., cryptography, cybersecurity, pure mathematics, law), ensuring a balanced representation of ideas and considering the unique challenges and perspectives each approach brought.
Collaborative Judgment: The consultant had to seamlessly integrate their insights with those of advisors from other top research organizations, creating a holistic evaluation framework.
Technical Approach
With a distinct focus on AI security, the consultant adopted a rigorous and meticulous approach:
Strategic Evaluation: They implemented a multi-layered review process, beginning with a heuristic evaluation to identify proposals that aligned with the grant’s primary objectives. This was followed by a deep technical assessment to ensure feasibility and potential impact.
Interdisciplinary Collaboration: Recognizing the interdisciplinary nature of the challenge, they collaborated on reviews with advisors from organizations like GovAI, Carnegie Mellon University, OpenAI, and The Future of Humanity Institute. This ensured that evaluations were both deep in their technical scrutiny and broad in their strategic implications.
Outcomes
The collaboration between the Foresight Institute and the 5cube Labs consultant is ongoing, and it has already started bearing fruit: the evaluation process has identified several proposals that promise to redefine the boundaries of AI security.
This approach, which combines deep technical expertise with strategic foresight and draws on collaboration with esteemed organizations, helps ensure that the future of AGI will be not only groundbreaking but also secure and beneficial for humanity.
If you are interested in following this program, visit https://foresight.org/ai-safety/ .
You can learn more about the Foresight Institute at https://foresight.org/, and find them on Twitter, LinkedIn, YouTube, Facebook, Spotify, and Apple Podcasts.
“The Foresight Institute really appreciates [the consultant’s] unique perspective in the area of AI.”
- Allison Duettmann, Chief Executive Officer of the Foresight Institute