Risk / Reward Framework for AI

Carlos E. Espinal
Sep 20, 2023


Conversations are emerging on both sides of the pond about how rapid technological innovation in artificial intelligence can co-exist with responsible practices and principles at individual, company, investor and government level.

On November 1st, the UK will host its AI Safety Summit at Bletchley Park, where Alan Turing once cracked the Enigma code. Last year, the White House Office of Science and Technology Policy released its Blueprint for an AI Bill of Rights. Other governments are racing to tackle the problem too, from the EU AI Act to China’s temporary measures of July 14th to manage the generative AI industry; Elon Musk has hailed the country’s framework for international cooperation. Venture capitalists, such as General Catalyst amongst several others, and technologists are also building their own charters to help guide accountable innovation.

The jury is still out on who the most appropriate collaborators in these national and international discussions should be. Sam Altman recently urged regulation at a Senate hearing, inviting accusations of regulatory capture and of ‘pulling up the ladder’ behind OpenAI. Equally, regulators need the expertise of the people building and investing in the technology. An approach that excludes academics, entrepreneurs and investors with ‘skin in the game’ seems myopic.

To ensure that the conversations we are all having are as productive as possible, an additional framework to calibrate them seems useful. Such a framework can help prevent discussions from degenerating into polarised positions when trust hasn’t been established early on, and can quickly bring a room onto the same page by serving as shorthand context. In this article we outline why a mutually recognised framework matters, how the Government might contribute to it, and how the long tail of AI risks could be governed by corporate-level certification and verification.

The importance of a framework

It is reasonably straightforward to establish high-level principles for privacy, safety and security, which broadly fall under a sort of Hippocratic Oath for the software age. It is much harder to demarcate the moment at which these concerns override shareholder interests. The more granular the framework’s starting point, the more productive conversations about alignment can be. For private and public companies, an acceptable risk-reward ratio for AI-driven disruption should act as a guardrail, or as a start and end point for conversations.

To what extent should a company be able to capture profits while generating risk to itself, its employees and its customers, and where should the line be drawn? The Facebook and Cambridge Analytica saga has become an uncontroversial parable of a corporate entity crossing to the wrong side of this line, but for everyone else the finer margins are less clear.

Another way to frame the problem is to consider the different tiers of decision-making and rule-making that might exist for artificial intelligence in order to calibrate a risk-reward ratio. This discussion spectrum could categorise AI applications based on the potential harm or benefit they could bring to society, the environment, or any affected stakeholders. It would also take into account the size of the company and its target market, to gauge the possible breadth of impact.
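
To make the idea concrete, here is a minimal, purely illustrative sketch in Python. The tier names, fields and thresholds are hypothetical placeholders invented for this example, not drawn from any existing or proposed regulation:

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    BASE = "base"          # light-touch disclosure, GDPR/ICO-style
    SOFT = "soft"          # voluntary adherence to a framework
    MODERATE = "moderate"  # certified, "B-Corp type" status
    SEVERE = "severe"      # statutory reporting and government oversight


@dataclass
class AIApplication:
    potential_harm: float     # 0-1, estimated severity of harm to stakeholders
    potential_benefit: float  # 0-1, estimated societal or environmental benefit
    company_headcount: int    # proxy for the size of the company
    people_exposed: int       # rough size of the target market


def categorise(app: AIApplication) -> Tier:
    """Map an application onto the risk-reward spectrum.

    The thresholds below are placeholders: in practice they would be the
    outcome of exactly the negotiation this article argues for.
    """
    if app.potential_harm > 0.8 or app.people_exposed > 10_000_000:
        return Tier.SEVERE
    if app.potential_harm > 0.5 or app.company_headcount > 1_000:
        return Tier.MODERATE
    if app.potential_harm > 0.2:
        return Tier.SOFT
    return Tier.BASE


# Example: a small startup with a low-risk recommendation engine
print(categorise(AIApplication(0.1, 0.6, 25, 50_000)))  # Tier.BASE
```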

The role of Government & setting of boundaries for severe concerns

A spectrum could be bounded at both ends by the Government. At the most extreme end sit laws under which a legal entity faces litigation if its AI applications deviate from regulation and cause severe harm. There will clearly be lines that companies are not permitted to cross under any circumstances where technologies have profound implications for individual rights, societal structures, or even global dynamics. Italy was one of the first countries to draw this line, when the Italian Data Protection Authority blocked ChatGPT for illegally collecting user data. I suspect a large part of the conversation at the upcoming UK AI Safety Summit will be about identifying these severe concerns and proposing a structure for addressing them, since their impacts can escalate to the geopolitical and national security level.

This upper bracket would contain companies with riskier exposure to artificial intelligence, which are also likely to be capturing more value from the technology. These companies might be required to comply with a standard of external reporting and government collaboration to ensure the appropriate deployment of models. A parallel here is the UK’s Financial Conduct Authority (FCA), which conducts audits and requires reporting, by statutory provision, from certain financial organisations to ensure that customers are protected and competition is maintained. Companies in this bucket might even be required to include external board experts who can help operationalise principles as part of a risk-reward framework, as well as take part in government-led, anonymised data-sharing schemes to help with matters of national risk.

The long-tail spectrum and ‘reward’ points

Further down this spectrum, base, soft and moderate tiers of categorisation could help the broader framing of conversations. At the bottom end, a base tier of compliance would exist to ensure ethical alignment and minimise general misuse, guaranteeing transparency and even optimising for positive societal impact. GDPR as it exists today is a good parallel: it is lightly enforced in most cases by the Information Commissioner’s Office (ICO) and largely exists to ensure that individuals are informed and aware of any risk they may be taking on. A bit like the ICO, the regulator of base-tier compliance would ask a series of questions about the use of data, how the application works and how the company intends to make money. Ultimately, where an individual or company is exposed to machine learning algorithms, they should know, and be able to make decisions accordingly.

“The basic rule is that you must: tell people the cookies are there; explain what the cookies are doing and why; and get the person’s consent to store a cookie on their device.” — ICO
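
As a purely illustrative sketch of what such a base-tier questionnaire might capture, the record below is hypothetical: the field names are invented for this example and are not drawn from ICO guidance or GDPR itself:

```python
from dataclasses import dataclass


@dataclass
class BaseTierDisclosure:
    """Hypothetical base-tier disclosure, loosely inspired by ICO-style questions."""
    data_collected: list[str]      # e.g. ["email address", "usage analytics"]
    application_description: str   # plain-language account of how the system works
    revenue_model: str             # how the company intends to make money
    user_notified_of_ml_use: bool  # is the individual told an ML system is involved?
    opt_out_available: bool        # can they decline and still use the service?

    def is_transparent(self) -> bool:
        # A minimal check: the individual knows what is collected, how the
        # system works, and that machine learning is in use at all.
        return (bool(self.data_collected)
                and bool(self.application_description)
                and self.user_notified_of_ml_use)


disclosure = BaseTierDisclosure(
    data_collected=["email address", "usage analytics"],
    application_description="Recommends articles based on reading history.",
    revenue_model="Subscription",
    user_notified_of_ml_use=True,
    opt_out_available=True,
)
print(disclosure.is_transparent())  # True
```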

The problem is more challenging where the permutations of impact across the risk-reward spectrum are difficult to derive or simply unknown. However, two further commitment tiers, modelled on something like the B-Corp certification (naturally, a new name would be in order to differentiate it from the existing B-Corp, but I’m bad at naming things), could address that issue. The moderate version could involve organisations voluntarily adhering to enhanced social responsibility standards, receiving certification, and accruing the market rewards of doing so. A softer-touch alternative to full “B-Corp type” status would be voluntary adherence to a framework. Organisations in this category would incorporate AI governance into their boardroom discussions, ensuring that ethical considerations are woven into decision-making processes. This internal governance, while not necessarily public-facing, is vital to ensure that even the most seemingly innocuous AI technologies are developed and deployed with a sense of intentional responsibility. It is plausible that organisations would be eager to earn their artificial intelligence certifications to enhance interest from customers and boost hiring.

In conclusion, the point is not just to highlight the main areas of discussion, but also to create a spectrum for categorising companies and issues; otherwise, conversations between many parties with diverging interests risk becoming very difficult to resolve. There will naturally be many more questions generated in these discussions, but by agreeing to and implementing a risk-reward spectrum, we might reach alignment more quickly and efficiently, with a clear direction for those wanting to engage constructively.

This post was co-written with my colleague Will Bennett.

Additional reading: Speed it up — 7 Key considerations for widespread AI adoption.

Additional resources: Enz.ai — A Seedcamp company addressing the problem of responsible AI.
