- South Korea’s AI Basic Act took effect on January 22, 2026, making it the second jurisdiction after the EU to implement a comprehensive AI law with binding obligations.
- Operators of “high-impact AI” must conduct risk assessments, provide meaningful explanations of AI outcomes, and maintain documentation on training data before deployment.
- Administrative fines reach up to 30 million KRW (approximately US$21,000) for violations including failure to notify users about AI use or to appoint a domestic representative.
- MSIT has granted a one-year grace period before enforcing penalties, giving businesses until January 2027 to achieve full compliance.
What Happened
South Korea’s Act on the Development of Artificial Intelligence and Establishment of Trust, known as the AI Basic Act, entered into force on January 22, 2026. The National Assembly passed the law in December 2024 after consolidating 19 separate AI-related bills that had been proposed by legislators across multiple parties into a single unified framework.
The Ministry of Science and ICT (MSIT) is responsible for finalizing the enforcement decrees that provide the technical compliance details. South Korea is now the second jurisdiction in the world — after the European Union — to operate under a comprehensive AI regulatory regime with binding legal obligations and enforcement mechanisms.
Why It Matters
The AI Basic Act establishes legally binding obligations that cover the full lifecycle of AI systems, from development through deployment and ongoing operation. Unlike Japan, whose AI law enacted months earlier relies on a voluntary, non-binding approach, South Korea chose to attach concrete penalties and specific compliance requirements to its framework.
According to Cooley’s legal analysis, the Act applies extraterritorially, meaning foreign companies offering AI-powered products or services to South Korean users must comply regardless of where they are headquartered. Foreign AI companies must designate a Korean representative to liaise with the government on compliance and enforcement matters.
MSIT has stated it “will grant subject businesses a grace period of one year before administrative fines are imposed to support effective implementation of the AI Basic Act and preparation by companies.” This means substantive enforcement begins in January 2027.
Technical Details
The law defines two categories requiring heightened obligations: “high-impact AI” and “generative AI.” High-impact AI refers to systems that significantly affect human life, safety, or fundamental rights. This includes AI used in hiring decisions, credit scoring, law enforcement, judicial proceedings, and medical diagnosis.
Operators of high-impact AI must assess whether their system qualifies before deployment and may request MSIT to conduct the classification assessment on their behalf. Once classified, operators must provide a “meaningful explanation” of the AI’s outcomes to affected individuals, disclose the key criteria and principles used to reach those outcomes, and publish a summary of the training data used to build the system.
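As a rough illustration of the pre-deployment screening step, the sketch below checks a system’s declared use domains against the high-impact areas the Act names. The domain list mirrors the examples above (hiring, credit scoring, law enforcement, judicial proceedings, medical diagnosis); the actual legal test is broader and will be detailed in MSIT’s enforcement decrees, so the function names and categories here are purely hypothetical.

```python
# Hypothetical pre-deployment screening sketch for the AI Basic Act's
# "high-impact AI" category. The domain keys are illustrative labels,
# not terms defined by the statute or MSIT's decrees.

HIGH_IMPACT_DOMAINS = {
    "hiring",
    "credit_scoring",
    "law_enforcement",
    "judicial_proceedings",
    "medical_diagnosis",
}

def screen_for_high_impact(declared_domains: set[str]) -> bool:
    """Return True if any declared use domain falls in a high-impact area,
    flagging the system for a full classification assessment."""
    return bool(declared_domains & HIGH_IMPACT_DOMAINS)

# Example: a resume-ranking feature touches the hiring domain.
print(screen_for_high_impact({"hiring", "analytics"}))   # True
print(screen_for_high_impact({"weather_forecasting"}))   # False
```

A real assessment would of course weigh the system’s actual effect on life, safety, and fundamental rights, not just a domain label; operators unsure of the outcome can, per the Act, ask MSIT to make the classification.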
Generative AI — defined as AI that produces text, images, audio, or video by mimicking the structure and features of input data — carries separate transparency requirements. Operators must clearly label AI-generated content so recipients can identify it as synthetic, and must inform users when they are interacting with an AI system rather than a human.
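The labeling duty above can be sketched as a thin wrapper that attaches both a machine-readable flag and a human-readable notice to model output before it reaches users. The field names and notice text are assumptions for illustration; the Act does not prescribe a specific format, and MSIT’s decrees will set the actual labeling standards.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Act's generative-AI transparency duty:
# mark synthetic content so recipients can identify it as AI-generated.
# Field names and notice wording are illustrative, not prescribed by law.

@dataclass
class LabeledOutput:
    content: str        # the generated text itself
    ai_generated: bool  # machine-readable flag for downstream systems
    notice: str         # human-readable disclosure shown to the recipient

def label_generated(content: str) -> LabeledOutput:
    """Wrap generative-model output with an AI-generated disclosure."""
    return LabeledOutput(
        content=content,
        ai_generated=True,
        notice="This content was generated by an AI system.",
    )

result = label_generated("Quarterly summary drafted by the assistant.")
print(result.notice)  # This content was generated by an AI system.
```

For images, audio, and video, the same idea would apply via embedded metadata or visible watermarks rather than a text field; either way, the disclosure travels with the content rather than being left to the user to infer.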
Administrative fines reach 30 million KRW (approximately US$21,000) for specific violations: failing to notify users about AI use, failing to appoint a domestic representative, violating corrective orders issued by MSIT, or refusing to cooperate with government inspections. Repeated or severe violations can trigger escalating enforcement actions.
Who’s Affected
The Act distinguishes between “AI development business operators” — companies that build and train AI systems — and “AI utilization business operators” — companies that integrate AI into their products or services for end users. Both categories face compliance obligations, though requirements scale with the risk level of the AI system deployed.
Foreign companies — particularly large language model providers, cloud-based AI platforms, and SaaS vendors serving Korean users — must now appoint local representatives and ensure their systems meet the Act’s transparency and notification requirements. Korean startups have raised concerns about the compliance burden, particularly around the risk assessment and documentation requirements for high-impact AI applications.
What’s Next
MSIT is working to finalize the detailed enforcement decrees that will specify technical standards, compliance procedures, and assessment methodologies. The one-year grace period provides a window for businesses to build compliance programs before penalties take effect in January 2027. The government has also established an AI Safety Research Institute and plans to form sector-specific AI ethics committees to develop tailored guidance for industries including healthcare, finance, and education.
Related Reading
- The EU AI Act Takes Effect in 4 Months — 90% of Companies Aren’t Ready
- South Korea Is Giving Thousands of Elderly People ChatGPT Robots — 1 in 5 Koreans Is Now Over 65
- Decoding the 2026 White House AI Blueprint: Federal AI Policy Takes Shape
- China AI Regulation: The World’s Most Layered AI Governance Framework
- NVIDIA Tested Whether GPT-5 Could Control a Robot — It Failed at Every Basic Task Without Human Help