Artificial intelligence (AI) opens up unprecedented opportunities for businesses, but it also carries significant risks. To exploit the potential of this new technology successfully, these risks must be controlled and the desired quality of AI systems must be ensured.
With the AI Act, the EU is creating a framework for trustworthy AI. The legislation aims to ensure that ethical principles and human rights are respected.
The AI Act takes a risk-based approach. It defines four risk categories – from minimal risk to unacceptable risk. The higher the risk, the stricter the requirements for companies. The AI Act is expected to come into force in late 2023 or early 2024. Once the AI Act is passed, affected companies will have 24 months to bring each AI system into compliance.
Failure to comply could result in fines of up to €30 million or 6% of annual turnover. We therefore recommend early action.
Establishing appropriate governance processes can be painstaking.
But it can also be an opportunity to be seized!
What can you expect in the course?
- No previous experience is required.
- There will be no paragraph writing.
- Instead, we work out principles and rules based on practical examples that you can implement in your own company.
What is your specific benefit?
- Certificate of participation
- Clarity on what you can do right now
- Awareness of the current state of the legal and normative requirements
- Timely preparation for the EU AI Act
- A head start on the competition: you know what is required
How is the course structured?
- 10 teaching units
- Approximately 5 hours of video in total
- Weekly meetings to discuss any questions
- Duration: 11 days
Who is the training for?
For anyone involved in the development and deployment of AI systems:
- AI developers, data analysts, software developers
- Managing directors and company management
- IT managers
- Staff of public authorities involved with AI
- Compliance and quality managers
- Consultants and all those who manage AI projects
What you will learn
Week 1
Basics
- What is the global significance of the AI Act?
- What does Trustworthy AI mean?
- What are the deadlines?
- Which technologies are affected by the AI Act?
- Which risk classes are there?
- What is conformity assessment and how does it work?
- What is the role of technical standards?
- What are the responsibilities of an AI-producing company?
- How high can the penalties be?
Week 2
The requirements for high-risk systems
- What are the new regulations on generative AI (such as ChatGPT)?
- What do accuracy and robustness mean?
- What is necessary for cyber security?
- What does human supervision of AI systems mean?
- What does a risk management system look like?
- How must data be processed?
- What needs to be documented?
- What does a risk management system need?
- What does a quality management system need?
- How do you test an AI system?
Packages

Premium – €697 (my recommendation)
- Everything from the BASIC package, PLUS:
- Slack channel for shared exchange
- Weekly question-and-answer sessions
- Bonus material: determine the risk class

Deluxe – €997 (popular)
- Everything from the PREMIUM package, PLUS:
- 2 individual coaching sessions of 45 minutes each
Warranty
No one wants to pay for something they don’t need or want. That’s why I guarantee that if you contact me after watching two lessons and decide that the course is not for you, you will get your money back (no ifs or buts).
Time
The course is planned for early 2024. More details will follow.
Frequently asked questions
When will the AI Act come into force?
The AI Act is expected to come into force in late 2023 or early 2024.

Should I wait until the AI Act is final?
The AI Act’s basic requirements for AI systems are already clear, and they are expected to change only slightly. Since introducing AI governance takes about three years, it is important to start early.

Will I receive a certificate?
Yes, if you have completed all course units, you will receive a certificate of participation.

Why is determining the risk class not straightforward?
For one thing, determining the risk class is quite complex and requires a detailed analysis. For another, AI systems can change their risk class over time, for example through “concept drift” or through uses that were not originally considered. So it is good to be on the safe side.
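The answer above mentions “concept drift”: a deployed model’s environment changes over time and its behavior degrades. As a toy illustration only (the function name `drift_alert` and the 5-percentage-point tolerance are invented for this sketch, not part of the course or the AI Act), one simple monitoring approach compares a model’s recent accuracy against the accuracy measured at deployment:

```python
# Minimal sketch of a drift check (hypothetical names and threshold):
# flag possible concept drift when recent accuracy falls more than
# `tolerance` below the baseline accuracy measured at deployment.
from statistics import mean

def drift_alert(baseline_correct, recent_correct, tolerance=0.05):
    """Each argument is a list of 1 (correct prediction) / 0 (wrong).

    Returns True if recent accuracy dropped more than `tolerance`
    below the baseline accuracy.
    """
    baseline_acc = mean(baseline_correct)
    recent_acc = mean(recent_correct)
    return baseline_acc - recent_acc > tolerance

# A drop from 90% to 70% accuracy triggers the alert:
print(drift_alert([1] * 9 + [0], [1] * 7 + [0] * 3))  # prints True
```

In practice such a signal would feed into the risk management system: a sustained alert is a prompt to re-examine whether the system still sits in the risk class it was originally assessed for.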