Open-source Software Licensing Offers a Way to Enforce Responsible AI Usage

Numerous strategies have been proposed to regulate artificial intelligence (AI) because of its capacity for both societal benefit and harm. For instance, the EU's AI Act imposes stricter regulations on systems categorized as general-purpose and generative AI, as well as those deemed to present limited, high, or unacceptable risk.

While this approach is innovative and ambitious in addressing potential drawbacks, leveraging existing tools may offer additional benefits. Software licensing, a widely recognized framework, could be customized to address the complexities associated with advanced AI systems.

Responsible AI licenses, such as OpenRAIL (Open Responsible AI Licenses), could play a pivotal role in addressing these challenges. As with open-source software, AI licensed under an OpenRAIL license can be publicly released by its developers under specific conditions. This enables anyone to use, modify, and redistribute the originally licensed AI.

However, what sets OpenRAIL apart is the inclusion of provisions promoting responsible use. These conditions mandate adherence to legal standards, require obtaining consent before impersonating individuals, and prohibit discriminatory practices.

In addition to the compulsory requirements, OpenRAIL licenses can be customized to include other conditions relevant to the particular technology. For instance, if an AI system is designed to classify apples, the developer might stipulate that it should never be used to categorize oranges, as such usage would be deemed irresponsible.

This approach is advantageous because numerous AI technologies are highly versatile, capable of serving various purposes. Predicting the potential misuse or exploitation of these technologies can be exceedingly challenging.

Thus, this model enables developers to promote open innovation while mitigating the risk of their concepts being utilized irresponsibly.

In contrast, proprietary licenses impose stricter limitations on the usage and modification of software. They aim to safeguard the interests of creators and investors and have played a pivotal role in the expansion of tech giants such as Microsoft, which capitalize on charging for access to their systems.

Given its extensive impact, AI arguably warrants a distinct and more nuanced strategy that fosters the openness essential for advancement. Presently, many major corporations operate proprietary AI systems that are closed off. However, this trend could shift, as evidenced by several instances of companies embracing an open-source approach.

For instance, Meta's generative AI system, Llama 2, and the image generator Stable Diffusion have both been released openly. Additionally, French AI startup Mistral, founded in 2023 and currently valued at US$2 billion (£1.6 billion), is poised to publicly release its latest model, which is rumored to rival the performance of GPT-4 (the model underpinning ChatGPT).

However, openness in AI development must be balanced with a profound sense of responsibility toward society, given the risks associated with AI. These risks include algorithms that discriminate against individuals, displace jobs, or even pose existential threats to humanity.
