AI Has an Ethics Problem – Does Blockchain Have a Solution?

On the surface, Artificial Intelligence is an industry dedicated to the perfection of machines. On closer examination, it’s actually about the refinement of humans.

AI is rapidly approaching the point at which it surpasses all human intelligence combined, but along the way it hasn’t just inherited our greatest qualities – it’s also gained many of our worst behaviors.

While AI can now spit out haikus, knows the best roast dinner recipe, can parse code better than a Computing Studies grad, and produces elaborate images in seconds, it also has a few behavioral problems. Particularly when it comes to ethics.

In the quest to train AI models using the enormous datasets they require, developers have thrown everything into the mixer: open data and, controversially, copyrighted data that’s been scraped from the web.

As a result, AI isn’t fussy about whose homework it copies and thus the industry has found itself riven with accusations of plagiarism and IP infringement. It’s a controversy that threatens to stall AI innovation and to deprive creators of rightful royalties.

But there is a solution to this seemingly intractable challenge and it comes from an industry that’s now routinely mentioned in the same breath as AI – blockchain.

AI’s Ethics Problem

Clearview AI has been landed with multiple lawsuits for scraping billions of images from social media without users’ consent and using them in its facial recognition software, which was sold to law enforcement agencies.

Clearview isn’t alone: hundreds of AI companies are also in the firing line over allegations that they’ve trained models using copyrighted data – resulting in AI-generated content that bears a striking similarity to the original content.

Artists and creators have sued companies like OpenAI for using their copyrighted material to train models without permission.

For example, a group of artists filed a lawsuit against Stability AI, DeviantArt, and Midjourney for the use of their artworks in training data.

The music industry is also currently embroiled in multiple AI lawsuits including one filed in New York by a group of major record companies against Udio, a generative AI service.

Udio lets users create digital music files derived from text prompts or audio files. As Mishcon reports, “They allege that using these prompts caused Udio’s product to generate music files strongly resembling copyrighted recordings.

“For example, using the prompt ‘my tempting 1964 girl smokey sing hitsville soul pop’ and excerpting lyrics from the band The Temptations led to Udio generating a digital music file called ‘Sunshine Melody’ which would allegedly be instantly recognised as resembling the song ‘My Girl’.”

Sometimes, the results of AI copyright infringement are clear to see, such as when generative AIs have included Getty Images watermarks in supposedly original generated content.

But more often, the signs of plagiarism are far subtler, hanging over the industry like a smoke cloud that refuses to dissipate.

Suspicion of copyright infringement is almost as bad as the real thing, for it leaves innocent AI developers tasked with proving a negative: that their models haven’t infringed the law.

AI Ethics as a Service

AI developers, eyeing the mounting copyright lawsuits, are faced with a quandary: do they bury their heads in the sand and pray the problem will go away or do they address it head on?

In the early days of an industry, when standards hadn’t been codified and bootstrapping was the name of the game, the eagerness to feed any dataset into the machines, regardless of provenance, was understandable. But now that the industry has matured, this “move fast and break things” approach won’t cut it.

It’s clear that something has to change, and if the AI industry can’t get its house in order it will be forced to by judges handing down punitive fines and handsomely rewarding plaintiffs whose IP has been shamelessly stolen.

While there were few solutions capable of providing attribution for data in the early days, that’s no longer the case.

One sector that has been fast to foster a more ethical way of harvesting AI data is blockchain, where the power of open networks can be brought to bear in providing transparency into how data is used – and who gets paid.

As a result, we are now entering the era of AI Ethics as a Service.

The Rise of Onchain Attribution

For a glimpse of what the future of attributable AI looks like, one need only consider droppLink, a solution that supports ethical AI and the responsible development of models.

Just as global industries are moving away from dirty fossil fuels to clean energy sources, droppLink enables AI to transition from “dirty models” to clean datasets in which IP is acknowledged and creators recompensed.

One of the challenges in developing such a system is the logistics of paying the sheer number of creators a single dataset may contain.

With potentially hundreds of thousands of copyright holders in a publicly scraped dataset, current systems are simply incapable of automating the attribution process. 

To solve this, droppLink has developed a tokenized system to track and trace model activity.

Its two-sided marketplace allows IP owners to offer data to AI companies under specific commercial terms, with the attribution handled using smart contracts.
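droppLink’s actual contracts and marketplace terms aren’t public, but the core idea – record every use of a creator’s work on a tamper-evident ledger, then split royalties pro rata – can be sketched in a few lines. The class and identifiers below are hypothetical illustrations, not droppLink’s API:

```python
from collections import defaultdict

class AttributionLedger:
    """Toy ledger illustrating tokenized attribution: it records
    which works a model consumed and splits a royalty pool pro rata.
    In a real deployment these records would be written on-chain by
    a smart contract, making them auditable and tamper-evident."""

    def __init__(self):
        self.usage = defaultdict(int)  # work_id -> recorded uses

    def record_use(self, work_id: str, count: int = 1) -> None:
        # Each training-time access to a licensed work emits an event.
        self.usage[work_id] += count

    def settle(self, royalty_pool: float) -> dict:
        # Split the pool in proportion to recorded usage.
        total = sum(self.usage.values())
        if total == 0:
            return {}
        return {work: royalty_pool * n / total
                for work, n in self.usage.items()}

ledger = AttributionLedger()
ledger.record_use("artist_a/track_01", 3)
ledger.record_use("artist_b/photo_99", 1)
payouts = ledger.settle(100.0)
print(payouts)  # artist_a/track_01 -> 75.0, artist_b/photo_99 -> 25.0
```

The payout rule here is deliberately simple; a production system would also encode each IP owner’s commercial terms (flat fees, tiered rates, opt-outs) in the contract itself.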

It’s the sort of task that blockchain excels at, as exemplified by DePIN, another AI vertical where it’s currently proving its worth: GPU providers are rewarded for the compute they contribute to distributed AI networks.

Where Next for Artificial Intelligence?

While blockchain-based solutions such as droppLink demonstrate that it’s possible to maintain copyright without stifling innovation, it will take time for the industry to end its reliance on dirty data.

Like oil, the fossil fuel it’s often likened to, data is the lubricant that keeps the AI industry running, and for better AI models to be developed, it’s imperative that this flow of data isn’t reduced to a trickle by over-zealous regulation.

For this reason, the AI industry would do well to proactively adopt frameworks that will protect IP without disrupting its business model.

AI’s infringement of intellectual property rights is a significant concern, given the technology’s ability to replicate, modify, and create content.

Establishing compensation models where content creators are paid for the use of their works in training datasets seems the only way to address this problem.

It should be noted, though, that this is not just a technical challenge. Governments and private entities must work together to create frameworks that protect IP while fostering innovation.

The industry needs arbitration mechanisms to handle disputes involving AI and IP infringement efficiently and mediation services to resolve conflicts between AI developers and IP rights holders amicably.

Addressing AI infringement of IP requires a multifaceted approach that combines legal, technological, and collaborative efforts.

By updating laws, leveraging technology, and fostering cooperation, it should be possible to create a balanced environment that protects IP rights while ensuring AI can realize its full potential.

The day when artificial general intelligence (AGI) becomes a reality is closer than ever, and it’s a case of when, not if, AI surpasses humans in every meaningful metric.

We can’t stop the machines but we can at least teach them our better qualities, so that the models of tomorrow aren’t just all-knowing: they’re also ethical.

Abhishek Kumar Jha
