xAI has officially switched on Colossus 2, a record-breaking AI training cluster consuming one gigawatt of power. That is more electricity than the entire city of San Francisco uses at peak times. As the first operational system of this scale, it allows xAI to train massive frontier models at a single location rather than splitting the workload across different data centers. The facility is expected to scale up to 1.5 gigawatts by April, with plans to reach 2 gigawatts soon after. The system is powered by approximately 555,000 GPUs, representing an estimated hardware investment of $18 billion.
-

Silicon Valley’s New Battlefield: AI, Ethics and the Pentagon
It was supposed to be an ordinary government contract. Instead, it became a flashpoint that exposed how deeply Silicon Valley is divided over war, ethics and the future of artificial intelligence. When…
-

OpenAI confirms its first hardware device arrives in 2026
OpenAI is officially “on track” to launch its first AI hardware device in 2026. Executive Chris Lehane confirmed at Axios House Davos that the company is shifting focus from experimental concepts to stable, everyday consumer products. He described the upcoming hardware as a more natural way to interact with AI, moving away from reliance on traditional screens like phones and laptops. This timeline aligns with recent comments from CFO Sarah Friar regarding 2026 as a pivotal year for monetization. It also fits CEO Sam Altman’s vision to define a new category of AI-first devices, developed in collaboration with former Apple design chief Jony Ive.
-

OpenAI officially introduces ads to ChatGPT
OpenAI has begun testing ads for free users in the US, offering increased usage limits in exchange for viewing sponsored content. At the same time, the company is expanding “ChatGPT Go” globally, an $8-per-month plan that includes ads but unlocks extra features; the Plus and Enterprise tiers remain ad-free. OpenAI assures users that ads will be clearly labeled, won’t influence generated answers, and won’t rely on sold user data. Ads also won’t be shown to minors or appear alongside sensitive topics like health or politics. The move is designed to fund broader access to advanced AI, but critics warn it could erode user trust if the chatbot starts promoting products too aggressively.
-

New Jersey makes AI a state project
New Jersey’s outgoing governor Phil Murphy signed a deal with NVIDIA on his way out, committing $25 million to build a state-run supercomputer for universities, community colleges, and an AI hub. It’s pitched as shared infrastructure for students, researchers, and startups—essentially an industrial policy bet on building local AI capacity rather than relying on rented cloud services. The timing locks it in before the next administration takes over, making it harder to abandon. While other states have done similar deals, this one bundles education, workforce training, and compute into one package. The real test is whether it becomes genuinely shared infrastructure that keeps AI talent in-state, or just fades after the press conference.
-

The U.S. just hit Nvidia’s AI chips with a 25% tariff
The U.S. has imposed a 25% tariff on advanced AI chips, such as Nvidia’s H200 and AMD’s MI325X, that are produced abroad but exported from America to China. The move is aimed at limiting China’s access to cutting-edge AI technology while supporting domestic chipmakers, who would rather sell with tariffs than lose the market entirely. Chips used domestically for research or defense are exempt. The tariff arrives as Chinese companies rush to place early orders and weigh relaxing their own import restrictions to avoid falling behind in AI development. It also signals America’s intent to reduce dependence on foreign semiconductor production; the U.S. currently manufactures only 10% of its own chips. The tariff itself, however, doesn’t address the fundamental bottleneck: America’s limited domestic chip manufacturing capacity remains the core challenge to true technological independence.
Fact Station
Interesting Facts!
Because the universe didn’t come with documentation.
Why is reality often described as “information-limited” in physics?
Because of the Bekenstein bound, which sets a hard upper limit on how much information can exist inside any region of space with a given size and energy. Push a region to that limit and it collapses into a black hole, whose information capacity scales with its surface area rather than its volume, which is why physicists say information, not matter, is the true bottleneck.
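For the curious, the bound fits on a single line, with S the entropy (information content) of the region, R its radius, E the energy it contains, k_B Boltzmann’s constant, ħ the reduced Planck constant, and c the speed of light:

    S ≤ 2π k_B R E / (ħ c)

Dividing by k_B ln 2 converts the same limit into a maximum number of bits.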
Can AI actually discover new mathematics on its own?
Yes. In 2022, Google DeepMind’s AlphaTensor used reinforcement learning to discover new, faster matrix-multiplication algorithms, outperforming methods humans had refined for over 50 years, without being taught any math rules explicitly.
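To make that concrete, here is a minimal Python sketch (our own illustration, not DeepMind’s code) of Strassen’s 1969 scheme: the classic shortcut that multiplies 2×2 blocks with 7 scalar multiplications instead of the naive 8, and whose generalizations are exactly what AlphaTensor searched over:

    import numpy as np

    def strassen_2x2(A, B):
        # Strassen's scheme: 7 scalar multiplications instead of the naive 8.
        a11, a12, a21, a22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
        b11, b12, b21, b22 = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                         [m2 + m4, m1 - m2 + m3 + m6]])

    A, B = np.random.rand(2, 2), np.random.rand(2, 2)
    assert np.allclose(strassen_2x2(A, B), A @ B)  # agrees with ordinary multiplication

Applied recursively, saving one multiplication per block is what pushes the cost below the naive O(n³); AlphaTensor found schemes with even fewer multiplications for certain matrix sizes.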
How powerful is a modern smartphone compared to early space-age computers?
A single smartphone today is millions of times more powerful than the computers used by NASA during the Apollo Moon missions, which operated with less RAM than a digital watch.
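A quick back-of-the-envelope check in Python, assuming a typical 8 GB phone (the Apollo figures are historical record):

    # Apollo Guidance Computer: 2,048 words x 16 bits of erasable memory (~4 KB).
    agc_ram_bytes = 2048 * 16 // 8
    # Illustrative assumption: a typical modern phone with 8 GB of RAM.
    phone_ram_bytes = 8 * 1024**3
    print(phone_ram_bytes // agc_ram_bytes)  # 2097152 -- about two million to one

And that is memory alone; the gap in raw compute is larger still.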
Why aren’t quantum computers just “faster computers”?
Because their advantage comes from interference, not speed: incorrect solutions mathematically cancel out while the correct one amplifies. This behavior has no classical equivalent and is why quantum algorithms can beat classical ones so dramatically.
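You can watch that cancellation happen in a few lines of numpy (a toy state-vector sketch, not a real quantum runtime). Send |0⟩ through a Hadamard gate twice: each final outcome is reached along two paths, and the two paths leading to |1⟩ carry opposite signs and erase each other:

    import numpy as np

    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)  # Hadamard gate
    state = np.array([1.0, 0.0])          # start in |0>

    state = H @ state  # equal superposition: amplitudes (0.707, 0.707)
    state = H @ state  # paths to |1> arrive with opposite signs and cancel

    print(np.abs(state) ** 2)  # [1. 0.] -- the "wrong" outcome has probability zero

Algorithms like Grover’s and Shor’s choreograph the same effect across exponentially many paths at once.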








