# Legal & Regulatory Frameworks Weekly AI News
December 22 - December 30, 2025

## Europe Leads the Way with AI Rules
The European Union is making strong rules about artificial intelligence. This week, the EU published its first official guidebook, called the Transparency Code of Practice. The guidebook explains how AI companies must label content that AI created, so people know when they are reading something made by a computer instead of a person. The EU wants to make sure that ordinary people can trust the information they see online.
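The Code of Practice is a policy document, not a file format, but the basic idea of machine-readable labeling can be pictured with a small sketch. The snippet below is a hypothetical illustration in Python, not the EU's required schema: it attaches a simple provenance record to a piece of AI-generated content so that other software can detect and display the label.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_ai_label(content: bytes, generator: str, model: str) -> dict:
    """Build a hypothetical machine-readable label for AI-generated content.

    Illustrative sketch only; the EU Transparency Code of Practice does not
    mandate this exact schema.
    """
    return {
        "ai_generated": True,                     # the core disclosure
        "generator": generator,                   # organisation that ran the model
        "model": model,                           # model that produced the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties label to content
    }

if __name__ == "__main__":
    article = b"An article written by a language model..."
    label = make_ai_label(article, generator="Example News Ltd", model="example-llm-1")
    # Store the label as a JSON "sidecar" file next to the content it describes.
    print(json.dumps(label, indent=2))
```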
The EU already passed a big law called the AI Act, with key obligations starting in August 2025. This law divides AI systems into groups based on how risky they are. Very risky AI systems, like ones that make important decisions about people's lives, have to follow strict rules. Companies that want to sell AI in Europe must follow these rules or they will not be allowed to do business there. Europe is showing other countries around the world how to approach this.
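The AI Act's grouping can be pictured as a lookup from a use case to a risk tier. The sketch below is a simplified illustration, not legal advice: the tier names follow the Act's broad structure (prohibited, high-risk, transparency-only, minimal-risk), but the example use cases and their mapping are assumptions made up for this sketch.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"        # banned practices, e.g. social scoring
    HIGH_RISK = "high-risk"          # strict obligations before market entry
    TRANSPARENCY = "transparency"    # must disclose that AI is being used
    MINIMAL = "minimal"              # no extra obligations beyond existing law

# Illustrative, simplified mapping of example use cases to tiers.
EXAMPLE_TIERS = {
    "social_scoring_of_citizens": RiskTier.PROHIBITED,
    "cv_screening_for_hiring": RiskTier.HIGH_RISK,
    "credit_scoring": RiskTier.HIGH_RISK,
    "customer_service_chatbot": RiskTier.TRANSPARENCY,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known example use case, defaulting to minimal risk.

    A real assessment depends on context and legal review, not a lookup table.
    """
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for case in ("cv_screening_for_hiring", "spam_filter"):
        print(f"{case}: {classify(case).value}")
```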
## America Wants One Rule for Everyone
In the United States, the problem is that every state wants its own AI rules. This makes it very hard for companies because they have to follow different rules in different places. On December 11, the President signed an order saying the US government will try to create one national rule for AI instead of many state rules.
The President's order says the rule should help AI grow and help American companies compete with other countries. But it also says the rule must protect children and keep people safe. The government is building a special team of lawyers to look at state rules and decide if they are too strict. This has worried some people, who think the government might block rules that protect people.
## India Creates Its Own AI Rules
India released its own AI rules in November, showing another way to do things. India's rules focus on seven key ideas, including being fair, being honest, and taking responsibility. India also describes six pillars, which are like the main supports of a building, covering things like building AI technology, making rules, and creating the right organizations. India wants to be the main country that checks whether AI is being used correctly, especially AI that could be risky.
## The Big Problem: Checking AI Is Hard
Even though countries are making new rules, there is a big problem: companies do not have good ways to check if their AI is working correctly. A study showed that 74 out of 100 companies worry about keeping information safe, but only 24 out of 100 actually have a real system to watch their AI. Many companies just follow rules because they have to, not because they really understand how their AI works.
Experts say that AI systems need to be watched all the time to make sure they do not make mistakes or treat people unfairly. The best companies are putting checking tools inside the AI itself, so problems get caught right away. They are also making sure a real person can always look at what the AI is doing and explain it.
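To make "checking tools inside the AI" concrete, here is a minimal, hypothetical monitoring sketch in Python. Everything in it is an assumption for illustration: the `flag_terms` list, the fake `generate` function, and the review queue stand in for the far richer bias, drift, and accuracy checks a real system would run before sending cases to human reviewers.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

class MonitoredModel:
    """Wrap any text-generating function and log or flag every output.

    Hypothetical sketch: a real system would add bias, drift, and accuracy
    checks and route flagged cases to trained human reviewers.
    """

    def __init__(self, generate: Callable[[str], str], flag_terms: list[str]):
        self.generate = generate
        self.flag_terms = [t.lower() for t in flag_terms]
        self.review_queue: list[tuple[str, str]] = []  # (prompt, output) pairs

    def __call__(self, prompt: str) -> str:
        output = self.generate(prompt)
        log.info("prompt=%r output=%r", prompt, output)  # keep an audit trail
        if any(term in output.lower() for term in self.flag_terms):
            # Caught right away: queue the case for a human to review and explain.
            self.review_queue.append((prompt, output))
            log.warning("Output flagged for human review")
        return output

if __name__ == "__main__":
    fake_model = lambda p: f"Decision: denied. Reason unclear. ({p})"
    monitored = MonitoredModel(fake_model, flag_terms=["denied", "unclear"])
    monitored("Should this loan application be approved?")
    print(f"{len(monitored.review_queue)} output(s) waiting for human review")
```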
## AI Agents Need New Rules
A special type of AI called agentic AI is becoming very popular. Unlike regular AI that does one task and stops, agentic AI can think, make plans, and carry out multiple tasks on its own. Three big technology companies (OpenAI, Anthropic, and Block) decided to work together and created something called the Agentic AI Foundation to help everyone build on the same open standards.
One of those standards is the Model Context Protocol (MCP), which gives AI agents a common way to connect to outside tools and data sources so they can work together. By the end of 2025, about 10,000 different systems were using this protocol. In 2026, the real challenge will be controlling thousands of AI agents working at the same time. Companies will need new tools to keep track of all these agents and make sure they do what they are supposed to do.
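For readers curious what "using the protocol" looks like in practice, below is a minimal MCP server sketch. It assumes the official `mcp` Python SDK and its `FastMCP` helper; the server name, tool, and return value are made up for illustration, and the SDK's interface may evolve, so check its documentation before relying on this.

```python
# Minimal Model Context Protocol (MCP) server sketch.
# Assumes the official Python SDK:  pip install mcp
from mcp.server.fastmcp import FastMCP

# Name the server; MCP-aware agents discover its tools over the protocol.
mcp = FastMCP("compliance-checklist")

@mcp.tool()
def checklist_status(system_name: str) -> str:
    """Return a (made-up) compliance checklist status for an AI system."""
    # Hypothetical stand-in for a real compliance database lookup.
    return f"{system_name}: documentation on file, last audit 2025-11-30"

if __name__ == "__main__":
    # Runs over stdio by default, so an MCP client can launch and query it.
    mcp.run()
```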
## Banks Think Regulators Will Help More
Good news is coming from banks and financial companies. About 6 out of 10 bank leaders think that in the next few years, regulators will be more helpful to companies using AI instead of just saying "no". Some think regulators will even actively help companies use more AI. However, 7 out of 10 banks still worry about unclear rules, because they do not always know exactly what regulators want.