The Dark Side of AI #9: Employment Eclipse at Onclusive and ChatGPT's Scientific Slip
Having witnessed ChatGPT's beneficial impact on a child's quality of life, and the broader potential of AI, it is now time to address the darker facets of this technology. Beyond generating flawed scientific papers, AI is also facilitating massive layoffs: welcome to the Dark Side of AI #9.
Have you heard of Onclusive, a company born of a 2022 merger between a French, a British, and an American entity? A merger 100% funded by the Californian investment fund Symphony Technology Group (STG), which has $10 billion under management. No? Well, you may be hearing that name more frequently in the coming days and months!
This giant of press release writing is the perfect embodiment of everyone's fear: being replaced by AI. The company's CEO informed his staff that more than 210 French employees would be losing their jobs to AI! This abrupt dismissal represents 50% of the French workforce and almost 15% of the company's global headcount!
This initiative is simply a “trial” for the company to explore what AI makes possible. Should it succeed, the coming months will determine how many more employees are let go...
While certain studies portray a favorable impact of AI on employment, others, such as Goldman Sachs's, sound the alarm, estimating that 300 million jobs could be at stake...
Searching for sources, analyzing data, synthesizing it, and finally writing it all up: producing a scientific text is not the simplest of writing exercises. Fortunately, human ingenuity has developed AI to simplify the task! Well, that is, when AI works properly, or when you know how to use it!
French scientists have been caught red-handed misusing the tool… Some, for example, forgot to remove the “Regenerate Response” button text or their prompt sentences before submitting, while others apparently were not troubled when ChatGPT turned “breast cancer” into “breast peril”...
While the situation might raise a smile, the reality is rather sad. Neither the co-authors, the editors, the media, nor the reviewers of the prestigious journals that published these studies pointed out the errors!
So, can we still trust scientific studies in reputable publications if they include such errors? Couldn't other data also have been hallucinated by an AI?
More than ever, it remains important to think critically and follow quality sources. While we can only recommend that you follow HUMAN Protocol's Twitter and Discord, never forget to remain critical, even of us.
Legal Disclaimer
The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.