Look at the two photos. One shows a smartphone, a device in your pocket. The other shows a data center, a massive building full of powerful computers. Artificial Intelligence (AI) is rapidly changing our world, but a big question is emerging: where should it "live"? Should AI processing happen on our personal devices, or in the powerful, centralized "cloud" of data centers?
In this lesson, we will explore two major developments in AI from 2024: NVIDIA's powerful new chips for data centers and Apple's new on-device AI for its products. We will compare these two approaches and debate which is better for different situations.

In Your Pocket
On-device AI runs directly on your phone or laptop.

In the Cloud
Cloud-based AI runs in massive, powerful data centers.
Anchor Video: The Future of AI Hardware
NVIDIA Blackwell: The Journey From Die to Data Center
This video from NVIDIA explains the technology behind the powerful chips used for cloud-based AI.
Video Transcript
Blackwell is an engineering marvel. It begins as a blank silicon wafer at TSMC. Hundreds of chip processing and ultraviolet lithography steps build up each of the 200 billion transistors layer by layer on a 12-inch wafer. The wafer is scribed into individual Blackwell die, tested and sorted, separating the good dies to move forward. [ 00:32 ]
The chip-on-wafer-on-substrate process, done at TSMC, SPIL, and Amkor, attaches 32 Blackwell dies and 128 HBM stacks on a custom silicon interposer wafer. Metal interconnect traces are etched directly into it, connecting Blackwell GPUs and HBM stacks into each system-in-package unit, locking everything into place. Then the assembly is baked, molded, and cured, creating the Blackwell B200 Super Chip. [ 01:02 ]
At KYEC, each Blackwell is stress-tested in ovens at 125°C and pushed to its limits for several hours. Back at Foxconn, robots work around the clock to pick and place over 10,000 components onto the Grace Blackwell PCB. Meanwhile, additional components are being prepared at factories across the globe. Custom liquid cooling copper blocks from Cooler Master, Auras, and Delta keep the chips at optimal temperatures. [ 01:39 ]
At another Foxconn facility, ConnectX-7 SuperNICs are built to enable scale-out communications and BlueField-3 DPUs to offload and accelerate networking, storage, and security tasks. All these parts converge to be carefully integrated into GB200 compute trays. [ 02:01 ]
NVLink is the breakthrough high-speed link that NVIDIA invented to connect multiple GPUs and scale up into a massive virtual GPU. The NVLink switch tray is constructed with NVLink switch chips providing 14.4 terabytes per second of all-to-all bandwidth. [ 02:27 ]
NVLink spines form a custom blind-mated backplane, integrating 5,000 copper cables to deliver 130 terabytes per second of all-to-all bandwidth. This connects all 72 Blackwells, or 144 GPU dies, into one giant GPU. [ 02:46 ]
From around the world, parts arrive from Foxconn, Wistron, Quanta, Dell, Asus, Gigabyte, HPE, Super Micro, and other partners to be assembled by skilled technicians into a rack-scale AI supercomputer. In total: 1.2 million components, 2 miles of copper cable, 130 trillion transistors, weighing 1,800 kg. [ 03:16 ]
From the first transistor etched into a wafer to the last bolt fastening the Blackwell rack, every step carries the weight of our partners' dedication, precision, and craft. Blackwell is more than a technological wonder; it's a testament to the marvel of the Taiwan technology ecosystem. [ 03:39 ]
We couldn't be prouder of what we've achieved together. Thank you, Taiwan. [ 04:03 ]
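The rack-scale figures in the transcript can be sanity-checked with simple arithmetic. The sketch below uses only the numbers quoted in the video (72 GPUs, 2 dies per package, 130 TB/s of spine bandwidth); it is an illustration for the lesson, not an independent measurement.

```python
# Sanity-checking the rack-scale figures from the video transcript.
# All inputs are the numbers quoted in the video itself.

gpus_per_rack = 72          # "all 72 Blackwells"
dies_per_gpu = 2            # each Blackwell package carries two GPU dies

total_dies = gpus_per_rack * dies_per_gpu
print(total_dies)           # 144, matching "144 GPU dies" in the transcript

spine_bandwidth_tb_s = 130  # "130 terabytes per second of all-to-all bandwidth"
per_gpu_tb_s = spine_bandwidth_tb_s / gpus_per_rack
print(round(per_gpu_tb_s, 1))  # ~1.8 TB/s of NVLink bandwidth per GPU
```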
Gallery Read: Two Futures of AI
Read the four information cards below. In small groups, discuss the pros (advantages) and cons (disadvantages) of each approach. Think about cost, speed, privacy, and power.
Card 1: The Powerhouse — Cloud AI
NVIDIA, a major technology company, announced its Blackwell platform in March 2024. These are extremely powerful chips, or GPUs, designed for huge data centers. They enable the creation of "trillion-parameter" AI models, which are the largest and most complex models in the world. A system called the DGX SuperPOD connects thousands of these chips to work together like one giant brain.
- Pros: The most powerful AI available; accelerates training of new AI models; enables complex tasks that a phone could never handle.
- Cons: Consumes a massive amount of energy; very expensive to build and maintain; requires a constant internet connection to use.
Card 2: Greener and Cheaper?
NVIDIA claims the Blackwell platform is much more efficient than older systems. It can reduce the cost and energy use of running large AI models by up to 25 times. This makes operating powerful AI cheaper and potentially greener than before.
- Pros: More energy-efficient than previous data center chips; reduces the long-term cost of running AI services.
- Cons: Still uses far more energy than a personal device; the initial cost is incredibly high.
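To make the "up to 25 times" claim concrete, here is a tiny worked example. The starting figure is a made-up round number chosen for the exercise, not a real measurement; only the 25x ratio comes from the card above.

```python
# Illustrating the "up to 25x" efficiency claim with a hypothetical round number.
# old_energy is invented for this exercise; only the 25x factor is from the card.

old_energy = 25.0            # imagined energy (in some unit) for an AI workload on older chips
new_energy = old_energy / 25 # the claimed best-case reduction on Blackwell

print(new_energy)            # 1.0 -- the same workload at 1/25th the energy
```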
Card 3: The Butler — On-Device AI
Starting in late 2024, Apple introduced "Apple Intelligence" across its devices. This is AI that runs directly on the iPhone, iPad, or Mac. It helps with everyday tasks like writing emails, creating custom emojis ("Genmoji"), and organizing photos. Siri, the voice assistant, also becomes much smarter and more personal.
- Pros: Very private because your personal data doesn't leave your phone; very fast (low latency) because it doesn't need the internet; works offline.
- Cons: Much less powerful than cloud AI; limited to simpler tasks; uses your device's battery and processing power.
Card 4: Personal and Private?
The main advantage of on-device AI is privacy. Because the AI processes information on your phone, it can understand your personal context—your emails, your calendar, your photos—without sending that sensitive data to a company's server. Apple says this makes their AI more personal and safer.
- Pros: The highest level of privacy; features are tailored to your personal information securely.
- Cons: The AI's knowledge is limited to what's on your device; it can't access the vast information of a large cloud model.
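One way to summarize the tradeoffs across the four cards is a toy decision rule. The function below is an illustrative sketch for the debate, not a real product rule; the three inputs deliberately simplify the cards' pros and cons.

```python
# A toy decision helper summarizing the pros and cons from the four cards.
# Illustrative only: real systems weigh many more factors than these three.

def choose_ai(needs_privacy: bool, has_internet: bool, complex_task: bool) -> str:
    """Return 'on-device' or 'cloud' based on the cards' pros and cons."""
    if not has_internet:
        return "on-device"  # cloud AI requires a constant connection (Card 1)
    if complex_task:
        return "cloud"      # trillion-parameter models need data centers (Card 1)
    if needs_privacy:
        return "on-device"  # personal data stays on the phone (Card 4)
    return "on-device"      # default to the faster, cheaper local option (Card 3)

print(choose_ai(needs_privacy=True, has_internet=True, complex_task=False))  # on-device
print(choose_ai(needs_privacy=False, has_internet=True, complex_task=True))  # cloud
```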
Language Focus: Making Comparisons
When we evaluate technology, we often need to compare it. We use comparative and superlative adjectives to do this. We also use specific verbs to describe the impact of a new technology.
Comparatives & Superlatives
We use comparatives (-er or more/less) to compare two things. We use superlatives (-est or the most/least) to compare three or more things.
- Cloud AI is more powerful than on-device AI.
- On-device AI is often safer and faster for personal tasks.
- The Blackwell GPU is the most powerful chip NVIDIA has ever made.
- For users concerned with privacy, on-device AI is the best option.
- Training large models is now less expensive than it was last year.
Evaluation Verbs
These verbs help us explain the function or result of something.
Word | Example |
---|---|
enables | The new chip enables developers to create more complex AI. |
reduces | Running AI on the device reduces the delay, or latency. |
accelerates | Powerful hardware accelerates the training process for new AI models. |
improves | Apple Intelligence improves Siri's ability to understand context. |
optimizes | The software optimizes battery use while running AI tasks. |
Speaking: The Great Debate (Oxford-style)
Motion: "Schools should prefer on-device AI for privacy and cost."
- The class will be divided into two groups: For (you agree with the statement) and Against (you disagree with the statement).
- Your group will have 10 minutes to prepare your arguments. Use the information from the gallery read and the language focus section.
- Each team will present their main arguments.
- After the initial arguments, teams will have a chance to offer a rebuttal (respond to the other team's points).
Points to Consider
For the Motion (Agree)
- How does on-device AI protect student privacy?
- Is it cheaper for a school to use the AI already on student devices instead of paying for cloud services?
- Can on-device AI work without a perfect internet connection, which some schools may not have?
Against the Motion (Disagree)
- What powerful educational tools might schools miss out on if they only use on-device AI? (e.g., advanced research tools, complex simulations)
- Do all students have devices powerful enough to run on-device AI effectively?
- Is the cost of cloud services worth it for the advanced capabilities they provide?
Speaking: Local Use-Case Pitch
Design a new AI feature that solves a local problem.
In your small groups, you are a team of innovators. Your task is to design a new AI feature that uses either cloud AI or on-device AI to solve a local problem in your city or community.
- Brainstorm (10 minutes):
- Choose whether your solution will be on-device (private, fast, offline) or cloud-based (powerful, large-scale data).
- Identify a local problem. Examples include real-time translation on public transit, farming advice from satellite images, or instant emergency alert summaries.
- Name your feature and decide on its three main functions. Use the evaluation verbs (e.g., "Our app, 'TransitTalk,' enables tourists to understand announcements...").
- Prepare a Pitch (5 minutes):
- Prepare a short (2-minute) presentation to the class.
- Explain the problem, your solution, and why you chose on-device or cloud AI. Use comparatives to justify your choice (e.g., "We chose on-device because it's faster and more reliable in the subway...").
- Present (5 minutes):
- Each group will present its pitch to the class.