Everyone knows that ChatGPT is for kids and Google Gemini is for your grandma, but if your company has already implemented Copilot, what need do you have for platforms like EA? In this article, we explain the differences between the various chatbots, where they are most useful, and where you need specialist support from platforms like EA.
What ‘Open’ means when paired with ‘AI’
When OpenAI first launched, many saw it as a vanguard for ‘open-source’ AI – a not-for-profit organisation that would herald a push for AI development for the benefit of humanity.
While OpenAI has certainly advanced progress, its philanthropic origins have become blurred, and it is now clear that the ‘open’ in the name is code for something else.
Just as fire requires fuel, oxygen, and heat to burn, AI has its own triangle: compute (a fancy way to describe CPUs, GPUs, and NPUs, the chips that run AI software), algorithms (the fancy maths that makes up AI software), and data (without which the AI is useless).
The breakthrough that powers today’s generative-AI era was the realisation that if you push sufficiently large quantities of data through the algorithms (and have sufficient compute available to process it all), you get transformative abilities. The early 2020s were thus characterised by AI companies like OpenAI harvesting every form of data they could in order to better train their models. GPT-5 is better than GPT-4 and its earlier incarnations not so much because of any breakthrough in AI research, but mostly because it has been trained on more data than before.
We’re probably living in a world of ‘peak data’. With smartphones having fully saturated the market and everyone live-streaming on social media, the amount of data being created by humans is at the highest level it has ever been. This means AI companies are running out of the big jumps in training-data volume that powered their earlier advances.
Why can’t they just use AI to create data and then train their models on that? This doesn’t work, due to a phenomenon called ‘model collapse’. It’s like inbreeding for AI: if a model’s outputs are fed back in as its inputs, things work for a while before becoming quite strange, as the toy simulation below illustrates.
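To make this concrete, here is a minimal sketch of model collapse using a deliberately trivial ‘model’ that learns only the mean and spread of its training data. It is a toy stand-in for a real generative system, not how any production model is trained.

```python
import numpy as np

# Toy illustration of model collapse: each generation trains on the
# previous generation's synthetic outputs instead of fresh human data.
rng = np.random.default_rng(42)

# Generation 0: "human" data.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 21):
    # "Train" on the current data by estimating its distribution...
    mu, sigma = data.mean(), data.std()
    # ...then "generate" synthetic data and reuse it as the next
    # generation's training set.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Run this for enough generations and the spread tends to decay: each round of training on its own outputs nudges the model towards forgetting the tails of the original human data, which is the ‘quite strange’ behaviour described above.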
With the web increasingly polluted by AI-generated content alongside human writing, articles like this one matter: 100% human-generated content will become a rarity in the future, and it is exactly the sort of content that AI craves.
Where else can AI companies get new data?
Well, from their usage. Platforms like ChatGPT and Gemini are given away free (or cheaply) partly because there is an incentive to have users upload documents, pictures, and text for the AI to transform in some way; that data can then be used to further advance the AI’s capabilities.
For the most part this is harmless, but when a corporate document contains sensitive information, uploading it can be akin to publishing it on the web.

While source data cannot be entirely reconstituted by prompting an AI system, facts contained in source documents can be revealed if the system is prompted cleverly enough and its safeguards are worked around.
This carries huge risks for corporate usage of these systems, and it is the primary reason that ChatGPT and Gemini should remain tools for the kids and grandparents, not for professionals in their day jobs.
What about Copilot?
Microsoft Copilot, just like our EasyAutofill platform, doesn’t use customer data to train its model. It relies instead on publicly available data for model training, and, unlike the ‘open’ AI platforms, it charges market rates for its usage.
Use of Microsoft Copilot in a corporate setting is therefore entirely safe and appropriate. There is no risk that a sensitive document analysed by Copilot could be accessed by another company, or that its facts could leak, which is equally true of EA.
But Copilot has a limitation: it is a data-output tool, not a data-entry tool. It can draft text for you, but it cannot complete the forms themselves.
Why EA is Different
EA doesn’t just generate answers to due diligence forms such as those contained in PDF, Word, and Excel files; it also has a comprehensive review, edit, and approval platform built into the software.
This means that we don’t just generate first drafts: we continuously see how much editing those answers require and, crucially, see the final output once it has been improved.
This also means EA ‘learns’ through usage. Unlike Copilot, we can optimise the generated answers to minimise editing, without ever exposing our customers’ data to ‘open’ AI services.
The principal difference is that EA is an ‘agentic’ AI platform, i.e. an AI worker skilled in a particular task (in our case, completing PDF, Word, and Excel files in their original format), whereas Copilot is a chatbot: an interface for communicating with an underlying AI model, not a workflow orchestration tool in its own right. The sketch below illustrates the distinction.
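In pseudocode terms, the difference looks something like this. Everything here is hypothetical and illustrative: `FormField`, `answer_question`, and `review_and_edit` are made-up names, not EA’s actual API; the point is which steps each kind of tool owns.

```python
from dataclasses import dataclass

@dataclass
class FormField:
    question: str
    draft: str = ""          # AI-generated first draft
    final: str = ""          # human-reviewed answer
    approved: bool = False   # sign-off before the form goes back

def answer_question(question: str) -> str:
    # Stand-in for a call to the underlying AI model.
    return f"Draft answer to: {question}"

def review_and_edit(draft: str) -> str:
    # Stand-in for the human review step; how much this step changes
    # the draft is the signal an agentic platform can learn from.
    return draft

def complete_form(fields: list[FormField]) -> None:
    for field in fields:
        field.draft = answer_question(field.question)  # a chatbot stops here
        field.final = review_and_edit(field.draft)     # review and edit
        field.approved = True                          # approval workflow
    # An agent finishes by writing the approved answers back into the
    # original PDF/Word/Excel file, preserving its format.

form = [FormField("Do you hold ISO 27001 certification?")]
complete_form(form)
print(form[0].final, "| approved:", form[0].approved)
```

A chatbot covers only the first line of the loop; an agentic platform owns drafting, review, approval, and writing the answers back into the original file.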
Choosing the Right AI
When choosing AI, our recommendation is to bear these two things in mind:
- Always buy professional tools for professional contexts – and be sure that your data is never going to be used to train the system.
- Pick the right tool for the job.
Chatbots extend what computers could already do, essentially by being better ‘search’ tools, but they cannot perform tasks end-to-end in the way that agentic platforms like EA can.
It’s better to think of EA as hiring a human worker to do a task.
Need forms filling in? Well, how much EA you need is simply a function of how many forms you have, how long they are, and how quickly you need them back.
If you were doing this in the pre-AI era, you could estimate the human workload and hire accordingly. It’s a similar process for platforms like EA.
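As a worked example, here is a rough back-of-the-envelope sizing calculation. Every number below is an illustrative assumption, not an EA benchmark or price.

```python
# Back-of-the-envelope sizing: compare drafting answers by hand with
# reviewing AI-generated drafts. All figures are illustrative.
forms_per_month = 20           # how many forms you receive
questions_per_form = 150       # how long they are
mins_per_answer_by_hand = 6    # drafting each answer from scratch
mins_per_answer_reviewing = 1  # reviewing an AI-generated draft

by_hand_hours = forms_per_month * questions_per_form * mins_per_answer_by_hand / 60
reviewing_hours = forms_per_month * questions_per_form * mins_per_answer_reviewing / 60

print(f"Drafting by hand:    {by_hand_hours:.0f} hours/month")
print(f"Reviewing AI drafts: {reviewing_hours:.0f} hours/month")
```

Swap in your own volumes and timings; the point is that sizing an agentic platform follows the same arithmetic as sizing a human team.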