
Current AI policies, legislation, regulations, and more in the EU vs the US


These days it does not matter whether your startup is directly involved in AI or not. AI is here to stay, and it is not a matter of whether this technology will impact your business but how. If you feel flummoxed by the state of AI legislation, regulation, and standards – especially the differences in AI legislation in the EU vs. the US – there is a reason for that. AI is an emergent technology, and the jury is still out on how best to manage all of its uses.

With so much confusion, how can you even begin to think about drafting an AI policy to protect your startup and ensure that this technology is used ethically and within the confines of any existing laws? 

Here we will outline how the EU and the US governments have been handling AI, the considerations that need to be addressed within an internal AI policy, and how your Fractional COO can help.

Laws, regulations, and standards: what’s the difference?

Before delving into the differences between what’s happening on either side of the Atlantic, it’s a good idea to clarify the differences between laws, regulations, and standards.

Laws: Developed by legislative bodies (think: US Congress, EU Parliament). Obviously, anything that is included in a law is legally binding within the jurisdiction of the lawmaking body which passed it. 

Regulations: Rules, often sector-specific, based on the interpretation of the law that are made by government agencies.

Standards: Best-practices documents that can be developed and published by governmental or non-governmental organizations. A standard is not legally binding unless adherence to it is stipulated within a legal contract or it is incorporated by reference into a law.

As a startup founder, you should familiarize yourself with all of the above relevant to your industry, and especially those which intersect with AI, prior to developing any AI policy for your organization.


The US: A decentralized approach to AI

The US has taken a highly decentralized approach to regulating AI. Legislation at the federal level is relatively nascent, and only a few states have passed laws related to AI. Federal government agencies are still in the process of appointing Chief AI Officers, and a number of them have begun issuing draft reports and working on standards.

If you’re thinking about implementing your own AI policy, start with the regulations that govern your sector. What belongs in an AI policy depends on the regulations and standards in your industry, which is why having an AI operations expert in your business can be a tremendous help. They can help you navigate the rapidly changing world of AI regulation.

What this means is that depending on which sector you are in – and especially if you plan to seek grants or other funding from the US federal government or serve as a government contractor – you will need to do your homework.

Luckily, many government resources, such as the Federal Register, make it easy to subscribe to updates related to AI legislation, rulemaking, and calls for public input at the federal level. Similar resources also exist at the state and local government levels. 

Thinking of expanding into the US market in the new age of AI? We have a resource for you, a free PDF: The Essential Tech Startup Guide for Expanding into the US Market using AI. This resource gives you even more insight on AI legislation in EU vs. the US. Click the link to get your free copy!

What is the EU approach to regulating AI? Horizontal and vertical integration

Europe, in contrast, has taken a very centralized approach to AI. In January 2024, the European AI Office opened its doors to coordinate AI development and protections across all 27 member states. Shortly thereafter, on February 2, 2024, the European Parliament made history by passing the world’s first piece of legislation on artificial intelligence. The new EU law for AI, the EU AI Act, delineates AI activities by level of risk rather than by sector.

Risk categories include:

  • Unacceptable risk: AI systems which are considered a threat to people. These are completely banned, with the exception of certain uses of biometric technology by law enforcement. 

  • High risk: AI systems that can interfere with personal safety or fundamental rights. These systems will be assessed prior to being made available on the market and iteratively throughout the product lifecycle. High risk systems include, but are not limited to, consumer electronics and motorized vehicles as well as systems used in education and employment.

  • General purpose and generative AI: These technologies are required to disclose that their output was generated by AI. Developers of generative AI must also engineer their systems to prevent the generation of illegal content, and summaries of the copyrighted data used to train these systems must be published.

  • Limited risk: Low-risk AI technologies that are nevertheless subject to transparency requirements. Users of these systems must be informed that they are using AI technology and may cease using it at their own discretion.

AI policy insights, specifically for startups

A thematic, sector-specific treatment of AI technology by the EU is found in the GenAI4EU initiative, which was part of the same AI innovation package that led to the creation of the EU AI Office.

The stated purpose of the GenAI4EU program is to “support startups and SMEs in developing trustworthy AI that complies with EU values and rules.” The industrial sectors targeted by the program include robotics, health, biotech, manufacturing, mobility, climate, and virtual worlds.


Internationally Developed Standards: more insights on AI legislation in EU vs. US

Additionally, the International Organization for Standardization (ISO), based in Geneva, Switzerland, has published three international standards on AI:

  • ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system

  • ISO/IEC 23894:2023 Information technology — Artificial intelligence — Guidance on risk management

  • ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)

Because they have been developed with the consensus of international experts (including experts from the US and EU member states), policymakers often lean on these types of standards when drafting legislation. Other international standardization bodies currently producing AI standards include the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE).

As AI penetrates more industries, watch for new or revised standards that either incorporate existing AI standards by reference or are highly specific to particular use cases. Additionally, keep in mind that in the future it may be necessary to have your business’s use of AI audited, much the way ESG audits are conducted against international standards.

What this means for developing an AI policy for your startup (and how a Fractional COO can help)

The content of an AI policy for your startup is going to depend upon the following factors:

  • Whether you are developing a product or service based on AI technology or merely using an AI product or service to perform normal business tasks (e.g., customer support via a chatbot)

  • The purposes for which you rely on AI technologies today and those you anticipate in the future

  • The risks associated with those purposes

  • Where you are legally incorporated and where you do business

  • Where you anticipate doing business in the future

Before pen hits paper, a Fractional COO can assist in thoroughly mapping out your business processes. They should also be well versed in the differences between AI legislation in the EU and the US throughout that process.

Armed with an understanding of the product or service you provide, how your work is conducted, and what tools you use – and taking into account any pain points (high costs, poor communications) within your business processes – your Fractional COO can then help you assess the AI risks relevant to your startup.


Working together with your legal counsel, your Fractional COO can help you prepare a comprehensive AI policy that educates your staff and demonstrates to clients that you have performed your due diligence when it comes to AI risk.

Operating your startup without an AI policy means exposing your business to unnecessary risk. Protect yourself against litigation and ensure your operations produce, market, and/or utilize AI technology ethically by partnering with a Fractional COO today. 

And if you’re wondering, “How do I draft an AI policy?”, Bhuva’s Impact Global is positioned to help you answer that question and navigate the evolving international AI regulatory landscape. Give yourself the gift of peace of mind and schedule a consultation with Bhuva, your fractional COO, today!


This blog post can also be found on Bhuva Shakti’s LinkedIn newsletter “The BIG Bulletin.” Both the BIG Bulletin on LinkedIn and the BIG Blog are managed by Bhuva’s Impact Global. We encourage readers to visit Bhuva’s LinkedIn page for more insightful articles, posts, and resources.

