Artificial Intelligence (AI) is developing faster than ever. It can help us solve huge problems, but it also comes with risks. How do we make sure this powerful technology is used safely and responsibly? In this lesson, we will look at a real international agreement designed to manage the risks of the most advanced AI. We'll analyze what the promises really mean and then create a mini-policy for using AI in our own community.
Warm-up
Think, Pair, Share
With a partner, think of one real risk a classroom chatbot (like ChatGPT or Gemini) could cause for students or teachers. It could be about academics, privacy, or something else. Be prepared to share your idea with the class.
Skim the Commitments
Watch, Listen, and Read for Key Actions
In May 2024, top AI companies met with global leaders in South Korea and signed the "Frontier AI Safety Commitments." Watch the news report about the event. Then, read the summary below and identify three concrete actions the companies promised to take.
Frontier AI Safety Commitments 2024
A news report on the AI Seoul Summit and the voluntary safety commitments signed by 16 AI tech companies.
Video Transcript
In business tonight, more than a dozen of the world's leading artificial intelligence firms have made fresh commitments to ensuring the safety of their technology. On the opening day of the AI Seoul Summit, an event co-hosted by the United Kingdom and South Korea, British Prime Minister Rishi Sunak says the commitment aims for transparency and accountability in the development of AI. [ 00:24 ]
The agreement builds on consensus reached at the inaugural Global AI Safety Summit in the UK last year. The 16 firms that have committed to the safety rules include US tech titans Microsoft, Amazon, Meta, and IBM, as well as France's Mistral AI and China's Zhipu AI. For more details, Ollie Barrett joins us live from London. Ollie, how significant is this deal and what does it entail? [ 00:56 ]
Well, the UK and South Korea certainly believe that it is significant. And one of the reasons they believe that is to do with the number of huge technology firms, some of which you mentioned there, that have signed up to this new agreement. And the British and the South Koreans certainly feel that that demonstrates a real move forward when it comes to the possible safety of AI in the future. It does see those firms committing to what are called safety outcomes when it comes to developing various AI products. It also means that they commit to effectively saying that they won't pursue certain models or products if they can't feel sure that the safety of those models and products might be guaranteed at some point in the future. [ 01:45 ]
If safety thresholds can't be guaranteed, then those firms would commit not to press ahead with one project or another on that basis. "Thresholds" is how it's described by the UK. And also we're told that there will be input from trusted actors, including home governments as appropriate, while these companies and governments work on further potential regulations and thresholds that they expect to discuss at a major AI summit in France in 2025. That's called the AI Action Summit. And the UK is very keen to point towards the list of companies that have signed up, particularly Zhipu of China. The UK is very keen to stress indeed that they have got the involvement of at least one major Chinese AI company as they move to what they say is a significant agreement on day one of this summit in Seoul co-hosted by the British and the South Koreans. [ 02:46 ]
It is day one and security is a very big deal as well as alignment globally, but what else can we expect to come out of this Seoul Summit, AI Seoul Summit? Yeah, so a lot of day one has been virtual. That's how the UK Prime Minister Rishi Sunak has been involved in the summit, which has had some very high-level figures getting together for discussions and to announce this agreement with these tech companies on day one. On day one, we haven't seen involvement from China at government level, but on day two, we're told that certainly the South Korean officials are expecting that China will send representation of some sort. [ 03:31 ]
Day two on Wednesday will be an in-person meeting. It'll involve ministers from governments and countries around the world and also experts in the AI field. And day two is going to have a particular focus actually on some of the potential positives from AI. And that's something that the UK Prime Minister Rishi Sunak has been stressing, which is that he believes that you do need to try and sort out safety outcomes and regulation as much as possible so that there can be safety and security around the future of AI. But he says it's also important to work out how you can grasp some of the positive opportunities that AI presents. And so that'll be a major focus on day two. Thanks to Ollie Barrett reporting live there from London. [ 04:16 ]
Summary: The Frontier AI Safety Commitments
A group of leading technology firms, including Anthropic, Google, Microsoft, and OpenAI, announces a new set of voluntary safety commitments. The agreement states that these companies will focus on three key areas to ensure the safe development of their most advanced AI models, known as "frontier AI."
First, each company commits to publishing a safety framework. This document will explain how they measure and manage risks like misuse by bad actors or loss of control over the AI. The companies claim this will increase transparency and public trust.
Second, the firms agree to focus on "red lines." This means they will define specific risk thresholds that they consider too dangerous to tolerate. If internal testing shows an AI model could cross these red lines, the company pledges to pause the model's development until it can be made safer.
Finally, the companies declare that their commitments apply to both their current and future frontier AI models. They also express their support for governments to create stronger, enforceable regulations in the future.
Language Focus: Reporting Verbs
When we report what a person, company, or government has said, we can use different verbs to add nuance. Using a variety of reporting verbs makes your language more precise and academic.
Notice the verbs in the text above: announces, states, commits, claims, agrees, pledges, declares.
A neutral verb like states simply reports the information. A stronger verb like pledges suggests a serious promise. A verb like claims can suggest some doubt about whether the statement is true.
Examples
Neutral: The report states that AI development is accelerating.
Strong Promise: The CEO pledged to prioritize safety over profit.
Slight Doubt: The company claims its new model is completely safe, but some experts are skeptical.
Gap Hunt
Compare Commitments to an Independent Review
A promise is one thing, but action is another. The AI Safety Index is an independent project that tracks what AI companies are actually doing. Read the summary of its findings below. Compare it to the Safety Commitments. What are the biggest gaps or weaknesses in the commitments?
Discuss with your group: Are the commitments strong enough? Where could they be improved?

Summary: 2025 AI Safety Index Findings
The Future of Life Institute's 2025 AI Safety Index rates companies on 33 indicators of responsible development. While some progress has been made, the index highlights several critical weaknesses across the industry.
Lack of Enforcement: The index notes that most safety commitments are voluntary. There are no legal penalties if a company fails to follow its own safety framework. This is a significant gap; without enforcement, promises might not be kept, especially when safety measures are expensive.
Insufficient Transparency: While companies have started publishing safety policies, the index finds that they often lack detail. They might not clearly define what counts as a "severe harm" or explain their testing methods. This makes it difficult for outside experts to verify their claims.
Focus on Future Risks: Many companies focus on extreme, long-term risks (like an AI taking over the world). The index suggests this distracts from current, real-world harms like algorithmic bias, misinformation, and job displacement. These immediate problems should receive more attention.
Language Focus: Cautious Modality & Conditionals
When discussing problems and proposing solutions, we often use cautious language with modal verbs like might, could, and should. We also use conditional sentences (If X, then Y) to talk about causes and results.
Form & Examples
Cautious Modality: Subject + modal verb (might/could/should) + base verb
A voluntary policy could be ignored by companies.
Governments should create stronger laws.
Without clear rules, AI might cause unintended harm.
First Conditional: If + simple present, ... will/can/might + base verb.
If a company ignores a safety pledge, nothing will happen.
If we regulate AI now, we can prevent future problems.
Policy Studio
Draft a Campus AI Policy
Now, let's bring this global issue to our local campus. Imagine your school needs a simple, clear policy for how students can use AI tools ethically.
In small groups, draft a 3-line campus AI policy. Your policy must include a rule for each of these three areas:
- Disclosure: How should students show when they have used AI in their work?
- Data Limits: What kind of personal or confidential information should students not share with an AI?
- Escalation: If a student finds an AI tool is producing biased or harmful content, what should they do?
Use modal verbs (e.g., should, must) and conditional sentences to make your policy clear. Prepare to present your 3-line policy to the class.
Exit Task
Identify a Red Flag
Based on our discussion today, what is one "red flag"—a warning sign or a dangerous behavior—that you will watch out for as you and others use AI in the coming months?
Think about your own work, the news, or your school environment. Be ready to share one example.