
Spilling the Chat GPTea

Updated: Apr 15, 2023

Welcome to a new series of opinion pieces and analysis that will explore the latest developments in generative AI.


Hot off the press: The Open Letter that launched a million opinions


Artificial intelligence experts are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity.

The letter, issued by the non-profit Future of Life Institute, has been signed by more than 1,100 people calling for a pause on advanced AI development until shared safety protocols for such designs are developed, implemented and audited by independent experts.



What is the Future of Life Institute?

According to the European Union's transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, along with London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation.


Europe's stance

EU police force Europol recently joined the ethical and legal debate, raising concerns over advanced AI like ChatGPT and warning about the potential misuse of such systems in phishing attempts, disinformation and cybercrime. Meanwhile, the UK government unveiled proposals for an "adaptable" regulatory framework around AI. The government's approach, outlined in a policy paper published on Wednesday, would split responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.


My two pence

The letter isn't perfect, but it is directionally right: AI is the defining technology of our time, and we need to slow down until we better understand the ramifications. At its best, AI has the potential to help each person and every organisation achieve more; at its worst, it can cause serious harm, as we have already seen. The risk is exacerbated by major players becoming increasingly secretive about what they are doing, which makes it even more difficult for society to defend against whatever harms may materialise.

Despite its name, OpenAI remains vague about its training data. In the paper released with GPT-4, the company stated:

"Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar."

No one from OpenAI signed this letter – they're probably too focused on GPT-5 – and no one from Anthropic, the team spun out of OpenAI to build a "safer" AI chatbot, did either.


We've seen similar approaches to regain control of experimentation in the past. The Asilomar Conference halted experiments using recombinant DNA technology until better guidelines were put in place, and research continued to progress afterwards. For now, the letter raises more questions than it answers: is six months long enough to put the right parameters in place? Who determines what counts as "more powerful than GPT-4", and how? Is this just about LLMs? (Stay tuned for an exploration of Large Language Models in the next post.)


1 Comment

Matteo Grassi
Mar 29, 2023

Great piece on the AI open letter! I agree that we need to approach AI development with caution. Just wanted to add a few friendly thoughts:

  1. Let's have AI developers, policymakers, and regulators collaborate for better guidelines and transparency.

  2. A global set of ethical principles for AI is essential.

  3. Investing in AI education and public awareness is crucial.

  4. We should also consider the positive impacts of advanced AI systems.

Looking forward to your next post on LLMs! Keep up the good work!
