AI has been integrated into almost every aspect of our lives, from everyday software we use at work, to the algorithms that determine what content is recommended to us at home.
While extraordinary in its capabilities, it isn’t infallible and will open up everyone to new and emerging risks. Legislation and regulations are finally catching up to the rapid adoption of this technology, such as the EU AI Act and new Best Practice Standards such as ISO 42001.
For those looking to integrate AI in a safe and ethical manner, ISO 42001 may be the answer.
Today Rachel Churchman, Technical Director at Blackmores, explains what ISO 42001 is, why you should conduct an ISO 42001 Gap analysis and what’s involved with taking the first step towards ISO 42001 Implementation.
You’ll learn
- What is ISO 42001?
- What are the key principles of ISO 42001?
- Why is ISO 42001 Important for companies either using or developing AI?
- Why conduct an ISO 42001 Gap Analysis?
- What should you be looking at in an ISO 42001 Gap Analysis?
Resources
In this episode, we talk about:
[00:30] Join the isologyhub – To get access to a suite of ISO related tools, training and templates. Simply head on over to isologyhub.com to either sign-up or book a demo.
[02:05] Episode summary: Rachel Churchman joins Steph to discuss what ISO 42001 is, it’s key principles and the importance of implementing ISO 42001 regardless of if you’re developing AI or simply just utilising it.
Rachel will also explain the first step towards implementation – an ISO 42001 Gap Analysis.
[02:45] Upcoming ISO 42001 Workshop– We have an upcoming ISO 42001 workshop where you can learn how to complete an AI System Impact Assessment, which is a key tool to help you effectively assess the potential risks and benefits of utilising AI.
Rachel Churchman, our Technical Director, will be hosting that workshop on the 5th December at 2pm GMT, but places are limited so make sure you register your place sooner rather than later!
[03:20] The impact of AI – AI is everywhere, and has largely outpaced any sort of regulation or legislation up until very recently. These are both needed as AI is like any other technology, and will bring it’s own risks, which is why a best practice Standard for AI Management has been created.
If you’d like a more in-depth breakdown of ISO 42001, check out our previous episodes: 166 & 173
[04:30] A brief summary of ISO 42001 – ISO 42001 is an Internationally recognised Standard for developing an Artificial Intelligence Management System. It provides a comprehensive framework for organisations to establish, implement, maintain, and continually improve how they implement and develop or consume AI in their business. It aims to ensure that AI risks are understood and mitigated and that AI systems are developed or deployed in an ethical, secure, and transparent manner, taking a fully risk-based approach to responsible use of AI.
Much like other ISO Standards, it follows the High-Level Structure and therefore can be integrated with existing ISO Management systems as many of the core requirements are very similar in nature.
[05:45] Why is ISO 42001 important for companies both developing and using AI? – AI is now becoming commonplace in our world, and has been for some time. A good example is the use or Alexa or Siri – both of these are Large Language AI Models that we all use routinely in our lives. But AI is now being introduced in many technologies that we consume in our working lives – all designed to help make us more efficient and effective. Some examples being:
- Microsoft 365 Copilot
- GitHub Copilot
- Google Workspace
- Adobe Photoshop
- Search Engines i.e. Google
Organisations need to be aware of where they’re consuming AI in their business as it may have crept in without them being fully aware. Awareness and governance of AI is crucial for several reasons:
For companies using AI they need to ensure they have assessed the potential risks of the AI such as unintended consequences and negative societal impacts, or potential commercial data leakage. They also need to ensure that if they are using AI to support decision making, that they have ensured that decisions made or supported by AI systems are fair and unbiased. It’s not all about risk – organisations can also use AI to streamlining processes helping to become more efficient and effective, or it could support innovation in ways previously not considered.
For companies developing AI, the standard promotes the ethical development and deployment of AI systems, ensuring they are fair, transparent, and accountable. It provides a structured approach to risk assessment and governance associated with AI, such as bias, data privacy breaches, and security vulnerabilities.
And for all, using ISO 42001 as the best practice framework, organisations can ensure that their AI initiatives are aligned with ethical principles, legal requirements, and industry best practices. This will ultimately lead to more trustworthy, reliable, and beneficial AI systems for all.
[10:00] Clause 7.4 Communication – The organisation shall determine the internal and external communications relevant to the system, and that includes what should be communicated when and to who.
[09:00] What are the key principles outlined in ISO 42001? –
- Fairness and Non-Discrimination – ensuring AI systems treat all individuals and groups fairly and without bias.
- Transparency and Explainability – Making AI systems understandable and accountable by providing clear explanations of their decision-making processes.
- Privacy and Security – Protecting personal data and privacy while ensuring the security of AI systems.
- Safety and Security – Prioritising the safety and well-being of individuals and the environment by mitigating potential risks associated with AI systems.
- Environmental & Social – Considering the impact of AI on the environment and society, promoting sustainable and responsible practices.
- Accountability and Human Oversight – Maintaining human control and responsibility for AI systems, ensuring they operate within ethical and legal boundaries. You’ll often hear the term ‘Human in the loop’. This is vital to ensure that AI is sanity checked by a human to ensure it hasn’t hallucinated or result ‘drifted’ in any way.
[11:10] Why conduct an ISO 42001 Gap Analysis? What is the main aim? – Any gap analysis is a strategic planning activity to help you understand where you are, where you want to be and how you’re going to get there. The ISO 42001 gap analysis will identify gaps and pinpoint areas where your AI practices need to meet the ISO 42001 requirements.
It aims to conduct a systematic review of how your organisation uses or develops AI to then assess your current AI management practices against the requirements of the ISO 42001 standard. This analysis will then help you to identify any “gaps” where your current practices do not fully meet the standard’s requirements. It also helps organisations to understand ‘what good looks like’ in terms of responsible use of AI.
It will help you to prioritise improvement areas that may require immediate attention, and those that can be addressed in a phased approach.
It will help you to understand and mitigate the risks associated with AI.
It will also help you to develop a roadmap for compliance to include plans with clear actions identified that can then be project managed through to completion, and as with all ISO standards it will support and enhance AI Governance.
[13:15] Does an ISO 42001 gap analysis differ from gap analysis for other standards? – Ultimately, no. The ISO 42001 gap analysis doesn’t differ massively from other ISO standard gap analysis, so anyone who already has an ISO Standard and has been through the gap analysis process will be familiar with it.
In terms of likeness, ISO 42001 is similar in nature to ISO 27001 in as much as there is a supporting ‘Annex’ of controls and objectives that need to be considered by the organisation. Therefore the questions being asked will extend beyond the standard High Level Structure format.
Now is probably a good time to note that the Standard itself is very informative and includes additional annex guidance information to include
- implementation guidance for the specific AI controls,
- an Annex for potential AI-related organisational objectives and risk sources,
- and an Annex that provides guidance on use of the AI management system across domains and sectors and integration with other management system standards.
[14:55] What should people be looking at in an ISO 42001 gap analysis? – The Gap Analysis will include areas such as looking at the ‘Context’ of your organisation to better understand what it is that you do, or the issues you are facing internally and externally in relation to AI – both now and in the reasonably foreseeable future, and also how you currently engage with AI in your business. This will help to identify your role in terms of AI.
It will also look at all the main areas typically captured within any ISO standard to include leadership and governance, policy, roles and responsibilities, AI Risks and your approach to risk assessment and treatment and AI system impact assessments. It also looks at AI objectives, the support resources you have in place to manage requirements, awareness within your business for AI best practice and use, through to KPI’s, internal audit, management review and how you manage and track issues through to completion in your business.
The AI specific controls look more in-depth at Policies related to AI, your internal organisation in relation to key roles & responsibilities and reporting of concerns, The resources for AI Systems, how you assess the impacts of AI Systems, The AI system lifecycle (AI Development), Data for AI Systems, Information provided to interested parties of AI Systems, and the use of AI Systems and 3rd party and customer relationships.
[18:10] Who should be involved in an ISO 42001 Gap analysis? – An ISO 42001 gap analysis looks at AI from a number of different angles to include organisational governance that includes strategic plans, policies and risk management, through to training and awareness of AI for all staff, through to technical knowledge of how and where AI is either used or potentially developed within the organisation. This means that it is likely that there will need to be multiple roles involved over the duration of a gap Analysis.
At Blackmores we always provide a Gap Analysis ‘Agenda’ that clearly defines what will be covered over the duration of the gap analysis, and who typically could be involved in the different sessions. We find this is the best way to help organisations plan the support needed to answer all the questions required.
It’s also important to treat the gap analysis as a ‘drains up’ review, to help get the most benefit out of the gap analysis. This will ensure that all gaps are identified so that a plan can then be devised to support the organisation to bridge these gaps, putting them on the path to AI best practice for their business.
If you’d find out more about ISO 42001 implementation, register for our upcoming Workshop on the 5th December 2024.
If you’d like to book a demo for the isologyhub, simply contact us and we’d be happy to give you a tour.
We’d love to hear your views and comments about the ISO Show, here’s how:
- Share the ISO Show on Twitter or Linkedin
- Leave an honest review on iTunes or Soundcloud. Your ratings and reviews really help and we read each one.
Subscribe to keep up-to-date with our latest episodes:
Stitcher | Spotify | YouTube |iTunes | Soundcloud | Mailing List
ISO 42001 was published in December of 2023, and is the first International Standard for Artificial Intelligence Management Systems.
It was introduced following growing calls for a common framework for organisations who develop or use AI, to help implement, maintain and improve AI management practices.
However, its benefits extends past simply establishing an effective AI Management System.
Join Steph Churchman, Communications Manager at Blackmores, on this episode as she discusses the top 10 reasons to adopt ISO 42001.
You’ll learn
- What is ISO 42001?
- What are the top 10 reasons to use ISO 42001?
- What risks can ISO 42001 help to mitigate?
- How can ISO 42001 benefit both users and developers of AI?
Resources
In this episode, we talk about:
[00:30] Join the isologyhub – To get access to a suite of ISO related tools, training and templates. Simply head on over to isologyhub.com to either sign-up or book a demo.
[02:30] What is ISO 42001?: Go back and listen to episode 166, where we discuss what ISO 42001 is, why it was introduced and how it can help businesses mitigate AI risks.
[02:45] Episode summary: We take a look at the top 10 reasons why you should consider implementing ISO 42001.
[02:55] #1: ISO 42001 helps to demonstrate responsible use of AI. – , ISO 42001 helps ensure fairness, non-discrimination, and respect for human rights in AI development and use.
Remember, AI can still be bias based on the fact that AI models are typically trained on existing data, so any existing bias will carry over into those AI models – an example of this is the existing lack of representation for minority groups.
We also need to take care in the use of AI over people, as staff being replaced by AI is a very real concern and should not be treated lightly. We’ve already seen a few cases where this has happened, especially across the tech support field where some companies mistakenly think that a chatbot can replace all human staff.
We also need to consider the ethics of AI content. It’s predicted that 90% of online content will be AI generated by 2026!
A lot of this generated content includes things like images, which poses a real concern over the values we’re translating to people. The content we consume shapes the way we think and if all we have is artificial, then what message is that conveying?
An example of this is Dove’s recent advert, which showed an example of AI generating images of very unobtainable ideals of a beautiful face. Which were predictably absolutely flawless, almost inhuman and something that can only be achieved through photo editing. If the internet was flooded with this sort of imagery, then that starts to become the expectation to live up to, which can be tremendously damaging to people’s self-esteem. They then went on to show actual unedited people, in all their varied and wonderful glory and stated that they will never use AI imagery in any of their future marketing or promotional material.
Which sends a very strong message – AI definitely has its place, but we need to fully consider the implications and consequences of it’s use and possible oversaturation.
[05:20] #2: Traceability, transparency and reliability – Information sourced via AI is not always correct – It collates information published online, and as many of us are aware, not everything on the internet is correct or accurate.
Data sets carelessly scrapped from online sources may also contain sensitive or unsavoury content. We’ve had cases where people have managed to ‘break’ Chat GPT, causing it to spew out nonsense answers which also contained sensitive information such as health data and personal phone numbers. While not usually accessible when requested, it does not stop the risk of this data being dug up through exploits. AI is like any other technology, and is not infallible.
So, it’s up to developers to ensure that the data used to train models is safe and appropriate for use. It should be expected that data sets will be scrutinised from a legal standpoint – either as a result misuse of AI or a mandatory exercise as a part of future legislation.
There’s also research that suggests data sets can be potentially poisoned to produce inaccurate results – which is another consideration for developers using live data sets, who will need to stay on top of these risks to ensure the integrity of their tools.
ISO 42001 provides specific guidance that covers how developers can ensure transparency and explainability within sample training data.
[06:45] #3: It’s a framework for managing risks and opportunities – AI, like any other new technology, is going to create new risks and opportunities.
Risks include the likes of inaccurate data being used, existing bias in data training sets, plagiarism, information security risks and data poisoning.
If you’re simply using AI to gather information, it’s also a good exercise to ensure that the information is coming from a reputable source. One easy way to so this is to simply ask for the source to be cited when pluging in a prompt into tools like Chat GPT and Gemini. You can then verify how legitimate that source is.
For web developers and SEO specialists, Google has recently updated it’s algorithm to punish those with a lot of AI generated content on their websites. So those within the SEO space may see some interesting trends over the course of 2024.
Another unfortunate risk is that of more complex scams being implemented through the use of AI. An example of this involves those who may use an AI assistant in their systems, which can be affected by malicious emails that contain prompt injections which could be used to send data from a victims machine to outside sources.
This is only touching on a few risks, but as you can see, there’s a lot to consider and I’ve no doubt that more complex risks will make themselves known as the technology evolves.
However, there are a lot of opportunities to be found with AI use.
There’s a huge potential for AI to be utilised to tackle mundane and routine tasks which could be automated.
AI also has the capability to scan masses of data and provide suggestions based on it’s findings. Obviously, humans can’t possibly compete with the sheer volume of data that AI can process, and so we can utilise it to help us make better more informed decisions.
A lot of commonly used software has already integrated various AI tools which offer great quality of life updates and help make a lot of tasks quicker. Which in turn means our time is better spent elsewhere on tackling the more complex issues that require a more human touch.
ISO 42001 can help you balance out these risks and opportunities by helping you build a robust management system to manage and mitigate risks, and drive forward opportunities through continual improvement.
[10:35] Join the isologyhub and get access to limitless ISO resources – From as little as £99 a month, you can have unlimited access to hundreds of online training courses and achieve certification for completion of courses along the way, which will take you from learner to practitioner to leader in no time. Simply head on over to the isologyhub to sign-up or book a demo.
[12:50] #4: Demonstrate that introducing AI is a strategic decision with clear objectives – Businesses looking to integrate AI should not make this decision lightly.
I know it’s tempting to play with the newest toy, but we should take care to look at any possible risks, and that it aligns with both your company objectives and ethics before rushing to utilise something.
For example, allowing your staff to use ChatGPT for content creation. You need to consider a few things:
You need to make sure Staff aren’t putting in any confidential or sensitive information into publicly available AI tools.
Also, ensuring that Staff understand that content provided by the likes of ChatGPT and Gemini could be plagiarised if used as is. You need to build, adapt and change the content so it’s something unique.
It’s all well and good introducing AI technology if it truly is going to be beneficial to your employees and to the business as a whole, however if you’re just introducing it because everyone else seems to be, then you really have to question if it’s worth it. If it’s not actively making your work lives easier and helping you to achieve your objectives, then is it really worth the potential cost and effort to implement?
It may also be worth looking into how the AI tool you’re using was created. There is sadly still a lot of exploitation involved in the development of new technology, so it’s up to you to ensure that the tools you’re using were created in an ethical way.
Ultimately, ensure that you are using AI safely, ethically and that it aligns with your businesses established objectives. This will need to be communicated clearly to everyone in the business.
ISO 42001 is, at its heart, a Management system standard. Like many other ISO Standards, it includes guidance on setting objectives and communicating these to your wider business.
[15:24] #5: ISO 42001 helps to implement safeguards – Certain features of AI may require safeguards to help protect businesses against the extra risks they pose, such as the increased potential of more sophisticated cyber attacks or compromised training data.
This can be applied within a particular process or an entire system.
Examples of features that may require these safeguards include:
- Automatic decision making
- Data analysis, insight and machine learning
- Continuous learning
Something you need to consider: Cyber scams are going to become a lot more complex with the help of AI, so you need to ensure you’re staff are both aware of this and how they can avoid falling prey to them. Safeguards may simply involve more training on these new risks, or updating to a more robust security software that is able to detect possible AI cyber scams.
Developers are also going to need to keep on top of any data being fed into their tools. Public live data tools especially will be more susceptible to being poisoned and tampered with, so it’s up to them to monitor and ensure the integrity of their data.
ISO 42001 provides guidance in it’s annexes for users and developers to implement these necessary safeguards.
[16:30] #6: ISO 42001 Supports compliance with legal and regulatory Standards – More AI focused legislation is an inevitability, with the new EU AI Act being a perfect example.
It’s important to ensure that you are prepared to comply with legislation as it’s released, or you may be held liable and be subject to fines.
Currently, the UK has no plans to introduce a new regulator for AI, instead relying on existing technology based regulators like the Information Commissioners Office (ICO), Ofcom and FCA.
ISO 42001 includes specific considerations for any potential applicable legislation.
[17:06] #7: ISO 42001 Can enhance your reputation – ISO Standards are internationally recognised and ensure you are complying with best practice.
Gaining certification to ISO 42001 will show you are confident in your AI related claims, and are happy to have this verified by a third party.
[17:30] #8: ISO 42001 Encourages innovation within your business – For as much as we’ve stressed the potential risks AI could expose your business to, ultimately AI is here to help make our lives easier. We just need to ensure we’re responsible when applying it.
ISO 42001 ensures you can safety integrate AI tools and systems within your business. It’s there to help guide the adoption of this new technology, and drive continual improvement as your management system matures.
[17:55] #9: ISO 42001 Can be easily integrated with existing systems – ISO 42001, like many ISO Standards, is based on the Annex SL format and can be easily integrated with existing ISO Management Systems such as an ISO 9001 (Quality management) or ISO 27001 (Information Security management) system.
Risks addressed in ISO 42001 include security, privacy and quality among others, and can help to enhance the effectiveness of your Management system in those areas.
[18:25] #10: ISO 42001 Does not require an existing Management System to implement – While ISO 42001 would make a great addition to any ISO Management System, it’s important to note that this can be implemented independently.
It is also not intended to replace or supersede any existing quality, safety or privacy Standards / existing management systems.
We’ll be releasing a suite of ISO 42001 related training content on the isologyhub, if you’d like to get notified as soon as this becomes available, please register your interest on our waitlist.
If you’d like to book a demo for the isologyhub, simply contact us and we’d be happy to give you a tour.
We’d love to hear your views and comments about the ISO Show, here’s how:
- Share the ISO Show on Twitter or Linkedin
- Leave an honest review on iTunes or Soundcloud. Your ratings and reviews really help and we read each one.
Subscribe to keep up-to-date with our latest episodes:
Stitcher | Spotify | YouTube |iTunes | Soundcloud | Mailing List
There’s no escaping it, AI is here to stay. Over the course of 2023 we’ve seen more general and public use of popular AI tools such as ChatGPT and Gemini (previously Google Bard).
It’s now even being integrated into everyday applications such as Microsoft Word and Teams. There is no doubt that there are a lot of benefits to using AI, however, with new technology comes new risks.
So how do we address the growing concerns around AI development and use? That’s where the new Standard for AI Management Systems, ISO 42001 comes in!
Join Mel this week as she explains exactly what ISO 42001 is, who it’s applicable to, why it was created and how ISO 42001 can help businesses manage AI risks.
You’ll learn
- What ISO 42001 AI Management Systems is
- Who it’s applicable to
- Why it was created
- How ISO 42001 can help businesses manage AI risks
Resources
In this episode, we talk about:
[00:30] Join the isologyhub – To get access to a suite of ISO related tools, training and templates. Simply head on over to isologyhub.com to either sign-up or book a demo.
[02:05] Episode summary: Today we’re touching on a very topical subject – AI, and more specifically the brand new AI Management System Standard – IS0 42001. We’ll also be exploring who it’s applicable to, why it was created and how it can help businesses manage AI risks.
[03:30] What is AI? – AI – otherwise known as Artificial intelligence, as it’s most simplest description is the science of making machines think like humans.
We’ve seen a lot of AI tools be released to the public over the last year or so, tools such as ChatGPT and Google Bard. It’s already being integrated with some of the most commonly used apps and programs like Microsoft word and Teams.
In short, AI integration is here to stay, so we may as well get to grips with it and make sure we’re using it responsibly.
[05:10] What is ISO 42001? – , ISO 42001 is the first International Standard for Artificial Intelligence Management Systems, designed to help organisations implement, maintain, and improve AI management practices.
It was jointly published in December 2023 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
The emphasis of ISO 42001 is on integrating an AI Management System with an organisations existing management system – i.e. ISO 9001 or ISO 27001 compliant management systems.
Interestingly, a lot of the specific mentions of Artificial Intelligence and Machine Learning are within the Annexes rather than the body of the Standard. The Standard itself is very similar to ISO 27001 in that it’s mostly about what organisations should be doing to manage computer systems regardless of any AI components.
[08:00] The 4 Annexes of ISO 42001:
Annex A: This acts as a Management guide for AI system development, with a focus on trustworthiness.
Annex B: This provides implementation guidance for AI controls, with specific measures for Artificial intelligence and Machine Learning – if you’d like to learn more about the difference between the two, go back and listen to episode 135.
Annex C: Which addresses AI-related organisational objectives and risk sources.
Annex D: This one is about the domains and sectors in which an AI system may be used. It also addresses certification, and we’re pleased to see that it actively encourages the use of third-party conformity assessment. This just ensures that your AI claims have more validity.
[09:15] Who is ISO 42001 applicable to? – Those annex descriptions may have you assuming that this Standard is only applicable to organisations developing AI technology but in actuality it’s applicable to any organisation who is involved in developing, deploying OR Using AI systems.
So if you’re a company who is only utilising AI in your day to day activities, it’s still very much applicable to you!
[10:20] Join the isologyhub and get access to limitless ISO resources – From as little as £99 a month, you can have unlimited access to hundreds of online training courses and achieve certification for completion of courses along the way, which will take you from learner to practitioner to leader in no time. Simply head on over to the isologyhub to sign-up or book a demo.
[12:25] Why was ISO 42001 created?:
- To address the unprecedented rapid growth of AI and all the risks that come with this new technology.
- To ensure that AI development and use are trustworthy and above all, ethical.
- The public are also reasonably wary of this new technology, so ISO 42001 aims to help build more public trust and confidence in the future use of AI .
- ISO 42001 acts as guidance for organisations on exactly how to integrate AI Management controls with their existing systems.
[14:05] AI risks you should be aware of – This isn’t an exhaustive list, as the technology develops, more risks will become known. However, as of the start of 2024, you should be aware of:
Inaccurate information – Many of the chat bots and public AI tools are trained on publicly available information, and as we all know, not everything on the internet is true. So the output from these chat bots will need to be checked and verified by a person before being used or published.
AI bias – Studies have proven that AI results can still be bias. As all the data fed into it is all based on existing information, it still presents the issue of a lack of information from underrepresented groups, or existing bias based on existing data.
Time sensitivity – Not all AI use live data sets. Google Bard does, however Chat GPT is only accurate up until 2021. So double check whichever tool you’re using to make sure the information it produces is up-to-date.
Plagiarism – Data gathered using AI came from somewhere! If you simply copy and paste information provided by AI platforms, there’s a chance you may be plagiarising existing content. Be sure to just use AI as a starting point!
Security risks – Use of AI can expose you to additional security risks, For example, malicious actors could send someone an email with a hidden prompt injection in it. If the receiver happened to use an AI virtual assistant, the attacker might be able to manipulate it into sending the attacker personal information from the victim’s emails.
Data Poisoning – AI uses large data sets to train its models, and we currently rely on these data sets being relatively accurate. However, researchers have found that it’s possible to poison data sets – so in future, AI may not be very reliable if preventative measures aren’t put in place by AI developers.
[17:45] How can ISO 42001 help business manage these risks? – Above all, it provides a structured approach to identify, assess, and mitigate AI risks. ISO 42001 includes the guidance needed to put this in place from the start to ensure you don’t fall prey to the risks mentioned, with a view to monitor and update to address new risks in future.
It promotes transparency and accountability throughout the AI life cycle.
It helps ensure fairness, non-discrimination, and respect for human rights in AI development and deployment.
It will help minimise potential legal and ethical liabilities associated with AI. The UK’s current GDPR and Data Protection Act can loosely cover aspects of AI, depending on how the terminology is applied, but there are already dedicated AI based regulations being developed within the EU which will likely be adopted by the UK.
It can foster innovation and accelerate adoption of responsible AI practices.
And lastly, it provides a common language and framework for collaboration on AI projects.
[21:35] Don’t miss out on our ISO 42001 webinar – We’re partnering with PJR to bring you a 2-part webinar series on ISO 42001. Catch the first part on the 5th March 2024 at 3pm GMT, register your interest here.
If you’d like to book a demo for the isologyhub, simply contact us and we’d be happy to give you a tour.
We’d love to hear your views and comments about the ISO Show, here’s how:
- Share the ISO Show on Twitter or Linkedin
- Leave an honest review on iTunes or Soundcloud. Your ratings and reviews really help and we read each one.
Subscribe to keep up-to-date with our latest episodes:
Stitcher | Spotify | YouTube |iTunes | Soundcloud | Mailing List
Our 7 Steps to Success
The Blackmores ISO Roadmap is a proven path to go from idea to launching your ISO Management System.
Whether you choose to work with one of our ISO Consultants, our isologists, or work your own way through the process on our isology Hub, we’re certain you’ll achieve certification in no time!
We have a proven step by step process that our ISO Consultants implement as soon as our working relationship begins. We use our specialist skills and industry knowledge to determine what is already on track and where improvements can be made. We live and breathe ISO standards, we know the standards inside out so you don’t have to.
Our ISO Consultants can help you implement systems for any ISO Standard. See the full list for specialised standards here.
What our clients have to say
Trusted by leading organisations across all sectors, we support companies of all sizes in any location.
Listen to our Podcast
Welcome to the ISO Show podcast, dispelling myths and sharing tips for success to improve your business with ISO Standards. Join us to hear interviews with successful business leaders as they share their ISO journey with you.
Get top tips via audio master classes “ISO Steps to Success” on the most popular ISO Standards.