TL;DR Thailand is moving forward with a draft AI Law aimed at regulating artificial intelligence through a risk-based, sector-specific approach. The law assigns oversight to industry regulators and introduces obligations for high-risk AI providers, including human oversight, incident reporting, and local legal representation. Foreign AI businesses must navigate Foreign Business Act restrictions but can potentially secure full ownership through BOI promotion.
Introduction
After a two-year delay, Thailand has restarted its plans to regulate artificial intelligence (AI). On May 2, 2025, the Electronic Transactions Development Agency (ETDA) held a public meeting to provide more information about the proposed AI legislation.
ETDA’s original draft law closely followed the European Union’s model. However, given the rapid growth of AI and the fast-changing legal and technological landscape around it, the ETDA has decided to amend the original draft and adapt it more closely to Thailand’s specific needs.
The newly proposed framework is based on international research and an analysis of how other countries are regulating AI. It focuses on five main areas to ensure that AI can be developed, deployed, and used safely and effectively in Thailand, while also protecting users and encouraging technological growth.
Key Points
- Thailand’s draft AI law uses a risk-based regulation approach, delegating responsibility to sector-specific regulators.
- High-risk AI providers must implement risk management frameworks, maintain human oversight, keep operational logs, and report serious incidents to authorities.
- The law supports innovation through controlled regulatory sandboxes for testing and allows use of public data for AI development (with commercial use requiring permission).
- Foreign AI companies must appoint legal representatives in Thailand and consider ownership structures as AI businesses fall under the Foreign Business Act restrictions.
- The AI Governance Center will oversee implementation, with regulators given powers to issue stop orders, request platform takedowns, seize AI products, or block internet access for non-compliant systems.
Main Areas of Focus in the Draft AI Law
The updated framework is based on international research and an analysis of how other nations are managing AI. It focuses on five main areas, each designed to balance innovation with responsible oversight:
Risk-Based AI Regulation
One of the main areas of focus for the proposed artificial intelligence law is to manage risk in a way that is practical and adaptable across different industries.
Delegation of Authority to Enforcement Bodies and Sectoral Regulators
To achieve this, the ETDA plans to delegate the responsibility for identifying and managing AI risks to industry-specific regulators. This differs from the original plan, which was to create a fixed list of banned or high-risk AI systems.
This approach has been chosen because every sector faces different kinds of risks when using AI. Sector-specific regulators are more familiar with the issues and risks in their fields and can therefore more accurately decide what should be considered “high-risk” AI.
These regulators will have the power to issue more detailed rules for their respective industries, but those rules must align with the overall artificial intelligence law.
A central enforcement agency will also be created to oversee the process, helping to coordinate between sectors and address any gaps where no specific regulator is in charge.
Duties for High-Risk AI Providers
Companies that use high-risk AI systems will be subject to a ‘duty of care’, which requires them to follow a set of responsibilities under the draft law to ensure accountability and protect the rights of individuals who may be affected by AI.
Under the draft AI law, organizations must:
- Maintain human oversight of high-risk AI systems to prevent harmful or unexpected outcomes.
- Keep detailed operational logs to track how the AI is functioning and being used.
- Ensure the accuracy and quality of input data, which directly impacts the AI system’s output.
- Notify individuals if an AI system could affect their rights or interests, such as decisions about credit, employment, or public services.
- Cooperate with investigations if something goes wrong, particularly if harm has been caused due to the AI.
Providers of AI systems classified as high-risk by regulators must also implement the following:
Risk Management Systems
High-risk AI providers must set up formal risk management frameworks, for example, following international standards like ISO/IEC 42001:2023 or the NIST AI Risk Management Framework.
These systems will help to identify, assess, and reduce potential harm. If providers do not meet these standards, they may still be held liable if their failure to follow best practices leads to real harm.
Local Legal Representation
AI providers who are based outside of Thailand must appoint a legal representative within the country. This ensures Thai authorities can properly enforce the law and hold providers accountable.
Serious Incident Reporting
If an AI system causes unexpected harm or poses a safety risk, providers will be required to report the incident to the relevant enforcement agency. This allows regulators to step in quickly and take appropriate action.
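To make these duties more concrete, the sketch below shows, in Python, one hypothetical way a provider might keep operational logs for a high-risk AI system and flag events that would need to be escalated as serious incidents. Everything in it (the field names, the severity scale, the reporting hook) is an illustrative assumption, not a structure prescribed by the draft law or by ETDA.

```python
# Illustrative sketch only: one possible way to log high-risk AI decisions
# and flag serious incidents for reporting. All names, fields, and the
# severity threshold are hypothetical assumptions, not requirements taken
# from the draft law.

import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_operational_log")

# Hypothetical severity level above which an event would be escalated
# to the relevant enforcement agency as a "serious incident".
SERIOUS_INCIDENT_THRESHOLD = 3


@dataclass
class DecisionRecord:
    """One operational log entry for an AI-assisted decision."""
    timestamp: str          # when the decision was made (UTC, ISO 8601)
    system_id: str          # identifier of the AI system
    subject_notified: bool  # was the affected individual told AI was involved?
    human_reviewer: str     # person responsible for oversight of this decision
    outcome: str            # the decision or recommendation produced
    severity: int           # 0 = routine, higher = more serious impact


def log_decision(record: DecisionRecord) -> None:
    """Append the decision to the operational log and escalate if needed."""
    logger.info(json.dumps(asdict(record)))
    if record.severity >= SERIOUS_INCIDENT_THRESHOLD:
        report_serious_incident(record)


def report_serious_incident(record: DecisionRecord) -> None:
    """Placeholder for notifying the relevant enforcement agency."""
    # In practice this would follow whatever reporting channel and format
    # the sector regulator eventually prescribes.
    logger.warning("Serious incident flagged for reporting: %s", record.system_id)


if __name__ == "__main__":
    log_decision(DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        system_id="credit-scoring-v2",
        subject_notified=True,
        human_reviewer="compliance.officer@example.com",
        outcome="loan application referred for manual review",
        severity=1,
    ))
```

In practice, the exact log contents and reporting channel would follow whatever rules the relevant sector regulator issues once the law is in force.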
Support for Innovation
To encourage AI development, the draft law introduces several policies that support innovation. The two main areas are:
Data Access for AI Development
Developers will be allowed to use publicly available online data for tasks such as text and data mining, which is an important part of training AI systems. This approach is similar to the EU’s.
However, if the developer wishes to use that data for commercial purposes, they will still need to get permission from the data’s owner or rights holder.
Real-World Testing Using Sandboxes
The draft also encourages the use of regulatory sandboxes, i.e., controlled environments where companies can test AI systems in real-life situations. These sandboxes will be set up through agreements between private companies and the relevant government agencies.
AI companies that operate within these sandboxes may be allowed to use personal data that was originally collected for other purposes, as long as the data is used only to serve the public interest.
To reduce the risk for innovators, the draft law includes a “safe harbor” rule, which protects companies from being punished for unintended harm that occurs during testing.
To be covered by the safe harbor rule, a company must act in good faith and follow the sandbox rules. However, the safe harbor does not protect companies from civil liability, and they may still have to compensate individuals if real harm is caused.
General Legal Principles
The draft AI law is based upon key legal principles that aim to promote fairness, transparency, and accountability. The following principles have been designed to protect individuals and encourage responsible AI use:
Nondiscrimination
AI-generated decisions, such as automated contracts or administrative rulings, will be treated as legally valid. People won’t be denied any rights or services simply because AI was used in the decision-making process.
AI Is a Tool, Not a Separate Entity
No matter how advanced an AI system is, its actions must always be traceable back to a human. Developers and users cannot disclaim responsibility simply because the AI acted in an unpredictable way. Ultimate responsibility remains with the people who develop or use the technology.
Protection from Unexpected AI Actions
The draft law also contains safeguards against AI errors that couldn’t have been reasonably predicted. If an AI system does something no one could have expected, and if the person affected didn’t know and couldn’t have known about the risk, they may be protected from being legally bound by that action.
Right to Explanation and Appeal
People should have the right to understand and challenge decisions made by AI. Examples include:
- Being told when AI is involved in making a decision that affects them
- Receiving an explanation of how the AI reached that decision
- Having a way to appeal or challenge the outcome, possibly through a human review of the action.
AI Regulation and Regulatory Oversight
The draft artificial intelligence law assigns oversight responsibilities to the AI Governance Center (AIGC), which already operates under Thailand’s Electronic Transactions Development Agency (ETDA).
The AIGC will play a leading role in properly implementing the law. Its main responsibilities include:
- Research and development in AI governance to help shape future policies
- Advising organizations on how to adopt and use AI responsibly
- Supporting pilot programs and regulatory sandboxes that allow businesses to test AI in real conditions under controlled oversight
- Monitoring global AI trends and best practices to keep Thailand aligned with international standards
- Collecting data to assess Thailand’s readiness for AI
- Creating partnerships both in Thailand and with international organizations to support consistent and collaborative AI governance
Enforcement Mechanisms
The draft AI law will grant regulators the power to take action when AI systems are used in illegal or unsafe ways.
In such situations, regulators will be able to:
Issue stop orders
If an AI provider or user is found using prohibited or high-risk AI, regulators can issue an official order requiring them to stop offering or using the service.
Request platform takedowns
If the AI system is available through a digital platform, e.g., a website or app, regulators will be able to request that the platform block or remove access to the service.
Seize physical AI products
If the AI is part of a physical product (like a robot or smart device), regulators will be able to seize the product.
Block internet access
If the noncompliant AI is hosted outside a digital platform or the platform doesn’t follow the stop order, regulators can work with the Ministry of Digital Economy and Society to block access to the AI system through internet service providers (ISPs) in Thailand.
What This Means for Businesses in Thailand
For businesses that operate in the AI space or rely heavily on AI tools, this draft law could have significant effects. If the draft legislation becomes law, it could substantially change how AI is developed, used, and regulated in Thailand.
Businesses involved in AI should closely review the draft law and consider how its proposed rules might impact their operations. This is particularly important for companies that use complex AI systems.

Foreign Ownership Rules for AI Companies in Thailand
The Foreign Business Act may restrict foreign ownership of AI development and other AI businesses, as they fall under the general restrictions on service activities. However, several options are available to AI companies in Thailand, including:
- A Thai Limited Company with a Thai Partner
- 100% foreign ownership via a BOI promotion and a Foreign Business Certificate
- 100% foreign ownership via a Foreign Business License
Limited Company with a Thai Partner
The Foreign Business Act restricts foreigners from undertaking about 50 types of business. AI businesses are considered service businesses, which are restricted under Clause 20 of List 3 of the Foreign Business Act.
However, a popular alternative for foreign investors who wish to operate an AI business is to create a Thai company (a company registered in Thailand with Thai shareholder(s) owning more than 50% of the share capital).
Such a setup means the restrictions of the FBA would not apply, as the company is not considered foreign. However, the Thai partners must not be nominees, as Thailand forbids the use of nominee shareholders.
BOI Promotions for AI
While the BOI does not currently offer promotions relating directly to AI, it does offer promotions that cover digital activities. These include software development, which could be suitable for some AI business activities and AI development.
BOI Incentives and Benefits for AI Projects
BOI-promoted companies in Thailand receive significant advantages compared to regular Thai limited companies.
One of the most important benefits is the ability to have 100% foreign ownership, allowing the company to avoid the typical 49% limit imposed by the Foreign Business Act. In addition, BOI-promoted businesses are issued a Foreign Business Certificate, which exempts them from restrictions on over 50 business categories otherwise restricted to foreign-owned companies.
Another important advantage is the flexibility in hiring foreign talent. Unlike standard companies, which must maintain a 4:1 ratio of Thai to foreign employees, BOI-promoted firms are not subject to these quotas when hiring skilled foreign professionals.
Tax Benefits
Successful applicants for a BOI software promotion will also be eligible for a corporate income tax exemption capped at 100 percent of the actual qualifying expenditure. Qualifying expenditure includes:
- Expenditure on obtaining the ISO 29110 quality-standard certificate, CMMI Level 2 or above, or other equivalent international standards.
- Expenditure on salaries for Thai information technology personnel employed in addition to those employed before the submission date of the application for investment promotion.
- Expenditure on information technology training courses to develop the skills of Thai personnel.
Our Thoughts
Thailand is putting a lot of effort into regulating artificial intelligence (AI) in a way that promotes innovation while ensuring responsible development. On May 2, 2025, the Electronic Transactions Development Agency (ETDA) introduced a revised draft of its AI legislation, which adopts a risk-based approach. The new framework encourages real-world testing through regulatory sandboxes, permits the use of public datasets for AI development, and outlines clear requirements for providers of high-risk AI systems.
While AI is a restricted service under Thailand’s Foreign Business Act, foreign companies can still operate with full ownership by applying for a promotion from the Board of Investment (BOI). Although the BOI does not yet offer a specific category for AI, many AI-related businesses, such as software and digital platform developers, may qualify under existing digital activity categories.
Receiving a BOI promotion allows 100 percent foreign ownership, grants a Foreign Business Certificate which permits a foreign company to engage in restricted activities under the FBA, removes local hiring quotas for skilled foreign professionals, and provides access to tax exemptions.
If you are planning to start your AI business in Thailand, now is the ideal time to take advantage of this rapidly developing sector. Contact us to learn how we can support your success in Thailand’s growing AI ecosystem.
Please note that this article is for information purposes only and does not constitute legal advice.
FAQ
What is Thailand’s new AI Law and when will it take effect?
Thailand is developing a comprehensive AI Law that uses a risk-based, sector-specific approach to regulate artificial intelligence. The Electronic Transactions Development Agency (ETDA) held public meetings in May 2025 and is accepting feedback until June 9, 2025, with a revised draft expected thereafter. At Lex Nova Partners, we help businesses navigate these evolving regulations and prepare for compliance before the law takes effect.
How does Thailand’s AI regulation approach differ from other countries?
Thailand’s AI Law delegates oversight to industry-specific regulators rather than creating a fixed list of banned AI systems, recognizing that each sector faces unique AI risks. This flexible approach allows sector experts to determine what constitutes ‘high-risk’ AI in their fields while maintaining overall coordination through a central AI Governance Center. Our legal experts at Lex Nova Partners can help you understand how these regulations will apply to your specific industry.
What obligations do high-risk AI providers have under Thailand’s draft AI Law?
High-risk AI providers must maintain human oversight, keep detailed operational logs, ensure data accuracy, notify individuals when AI affects their rights, and cooperate with investigations. They must also implement formal risk management frameworks following international standards like ISO/IEC 42001:2023 or NIST guidelines, and report serious incidents to authorities. Lex Nova Partners specializes in helping AI companies establish compliant operational frameworks and risk management systems.
Can foreign companies own AI businesses in Thailand under the new regulations?
AI businesses fall under Foreign Business Act restrictions, typically limiting foreign ownership to 49%. However, foreign companies can achieve 100% ownership through BOI promotion or Foreign Business Licenses, particularly for software development and digital activities that qualify under existing categories. At Lex Nova Partners, we have extensive experience helping foreign AI companies structure their operations for full ownership while ensuring regulatory compliance.
What are the benefits of BOI promotion for AI companies in Thailand?
BOI-promoted AI companies enjoy 100% foreign ownership, Foreign Business Certificate exemptions from FBA restrictions, flexible hiring ratios for foreign professionals, and significant tax benefits including corporate income tax exemptions up to 100% of qualifying expenditures. These benefits make Thailand highly attractive for AI investment. Our team at Lex Nova Partners can guide you through the BOI application process and maximize your incentive package.
What enforcement powers will regulators have under Thailand’s AI Law?
Regulators will have broad enforcement powers including issuing stop orders, requesting platform takedowns, seizing physical AI products, and blocking internet access through ISPs for non-compliant systems. The AI Governance Center will coordinate enforcement across sectors while industry-specific regulators handle their domains. Lex Nova Partners helps businesses develop compliance strategies to avoid enforcement actions and maintain operational continuity.
How does Thailand’s AI Law support innovation and testing?
The draft law encourages innovation through regulatory sandboxes that allow real-world AI testing under controlled oversight, permits use of public data for AI development, and includes ‘safe harbor’ protections for good-faith testing activities. Companies can use personal data originally collected for other purposes when serving public interest within sandboxes. Our legal experts at Lex Nova Partners can help you establish sandbox agreements and navigate innovation-friendly provisions.
What data usage rights do AI developers have under Thailand’s proposed regulations?
AI developers can use publicly available online data for text and data mining to train AI systems, similar to EU approaches. However, commercial use of such data requires permission from rights holders, and developers must ensure data accuracy and quality as this directly impacts AI system outputs. Lex Nova Partners can help you establish compliant data usage policies and secure necessary permissions for commercial AI applications.
Do foreign AI companies need legal representation in Thailand?
Yes, AI providers based outside Thailand must appoint local legal representatives to ensure Thai authorities can properly enforce the law and hold providers accountable. This requirement applies to all foreign AI companies operating in Thailand, regardless of their business structure. Lex Nova Partners can serve as your legal representative and ensure full compliance with all AI Law requirements while protecting your business interests.
What are the key principles governing AI decisions under Thailand’s AI Law?
The law establishes that AI-generated decisions are legally valid, but ultimate responsibility remains with humans who develop or use the technology. People have the right to know when AI affects their decisions, receive explanations of AI reasoning, and appeal outcomes through human review. The law also protects individuals from unexpected AI actions they couldn’t reasonably have predicted. At Lex Nova Partners, we help businesses implement transparent AI decision-making processes that comply with these principles.
How can businesses prepare for Thailand’s AI Law implementation?
Businesses should review their AI systems against proposed risk categories, establish human oversight procedures, implement data quality controls, and prepare incident reporting mechanisms. Companies using complex AI should particularly focus on risk management frameworks and documentation requirements. The ETDA is currently accepting public feedback until June 9, 2025, making this an ideal time to engage with the regulatory process. Lex Nova Partners offers comprehensive AI Law readiness assessments and compliance implementation services to ensure your business is fully prepared.