By Peter Bel & Alex Danylchuk
Artificial Intelligence (AI) has become increasingly prevalent across various industries and sectors, revolutionizing processes and decision-making. However, its integration also raises pressing ethical considerations. Organizations must prioritize transparency, fairness, privacy, and social justice to ensure responsible AI integration.
Is that so? Let's see.
Today we're talking about ethical, security, and regulatory considerations with Divya Prashanth, founder and CEO at HQ NFT; Joshua Hale, US attorney; Ron Rivers, author of "Self Actualization in the Age of Crisis" and Spirit DAO co-founder; and Ihor Kubalskiy, CEO and founder at Qbein.
AI ethics can be achieved through decentralization and transparency.
The massive adoption of artificial intelligence has attracted even more investment and fueled the industry's rapid growth. However, AI is not just another platform trading data the way Facebook does. It is a very powerful tool that can cause great harm if used incorrectly. So, how can we make AI usage more ethical? We addressed these questions to Divya Prashanth, HQ NFT CEO and founder.
"Embracing transparency is essential, and it requires decentralizing every facet of the process. A powerful approach is to leverage blockchain technology and make the datasets utilized in model training accessible on the blockchain. This grants people the ability to directly observe the data being employed. By providing this level of visibility, we instill a sense of trust and accountability, fostering a more open and inclusive environment for all stakeholders involved.
Another approach to promoting transparency and involving data contributors is to train and incentivize them, ensuring they understand how their data will be used for a specific model. By educating and incentivizing individuals, we empower them to make informed decisions about sharing their artist data. This active participation fosters a sense of ownership and awareness, creating a more ethical and transparent data ecosystem."
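The simplest version of Divya's idea of making training datasets observable on-chain is a verifiable fingerprint: instead of storing raw data on the blockchain, a trainer publishes a deterministic hash of the dataset that anyone holding the same records can recompute and compare. A minimal Python sketch of that fingerprinting step, with hypothetical record fields and no actual blockchain call (publishing the digest is left as a comment):

```python
import hashlib
import json

def dataset_fingerprint(records):
    """Compute a deterministic SHA-256 fingerprint for a training dataset.

    Each record is hashed individually, and the sorted list of record
    hashes is hashed again, so the fingerprint does not depend on record
    order. Anyone holding the same data can verify it against a digest
    published on-chain.
    """
    record_hashes = sorted(
        hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        for r in records
    )
    return hashlib.sha256("".join(record_hashes).encode()).hexdigest()

# Hypothetical example records a model trainer might commit to.
data = [{"artist": "alice", "work": "sunrise.png"},
        {"artist": "bob", "work": "nocturne.png"}]

digest = dataset_fingerprint(data)
print(digest)  # this hex digest is what would go into a blockchain transaction
```

This proves the dataset was fixed at training time, but not what it contains; full visibility, as the quote suggests, would require publishing or pinning the records themselves.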
Nevertheless, a DAO is not a magic wand:
"The question of whether DAOs (Decentralized Autonomous Organizations) are a good alternative to the current structure depends on various factors and perspectives. DAOs offer several potential benefits, such as increased transparency, decentralized decision-making, and greater inclusion of stakeholders. They can enable a more democratic and participatory approach to governance and resource allocation.
However, it is important to consider the challenges and limitations of DAOs as well. DAOs can face difficulties in achieving efficient decision-making, particularly in situations where consensus is difficult to reach. There may also be concerns around accountability, as the absence of a centralized authority can make it challenging to assign responsibility for actions or resolve disputes. Furthermore, ensuring security and preventing malicious actors from manipulating the system is a significant consideration.
Ultimately, whether DAOs are a good alternative depends on the specific context, goals, and values of the organization or community in question. It is essential to carefully evaluate the potential benefits and drawbacks, as well as consider the suitability of a DAO in relation to the particular industry, governance needs, and stakeholder dynamics."
"I think that's a nonsense argument. That's purely about power, maintenance, and preservation versus the collective good. Ultimately, there will be negative consequences no matter what direction we choose. The question is, who's going to be making those decisions? Are the negative consequences going to arise as a byproduct of collective governance? Or will they arise because one CEO said, 'This is what we're going to do'? I certainly would choose that we make our mistakes together."
Asking AI to regulate itself is almost like asking a kindergartner to regulate themselves with a bunch of chocolate in front of them.
The need for effective AI regulation has become paramount to ensure this transformative technology's responsible development and deployment. Regulation in AI aims to strike a balance between fostering innovation and protecting individuals' rights and interests. Can we achieve that regulatory balance in the near future? Let's ask attorney Joshua Hale.
"I think the existing regulations in AI, or in almost any newer technology field, will always lag behind the technology.
Regulation is slow and doesn't move as fast as people would like it to, even once it is put into law. For example, we're still looking at securities laws. We're still working from the Securities Act that was passed in the 1930s, almost 100 years ago. So to assume that AI regulation will get it right on day one is a fallacy."
We already see some practices that apply AI to the laws and regulations. However, that approach is still very debatable.
"Asking AI to regulate itself at this juncture is almost like asking a kindergartner to regulate themselves with a bunch of chocolate in front of them.
Maybe that's only my point of view, but I think that, as a neutral arbiter (if AI can be neutral), it might be intelligent to have AI self-regulate by having prompt engineers ask AI how it should be regulated and legislated. Many legislatures are politically broken right now, but AI needs regulation now, not when it is politically expedient. We could have AI look at and "study" all previous precedents as part of that analysis. The answer it gives is likely to be as good as or better than our broken legislative code, legalese, and pork-barrel legislation," said Joshua.
Let's get to a more "case-based" approach. Suppose, for example, someone creates an image that society considers horrible and unethical. Who is to blame: the AI or the human? Of course, a human is always behind the technology, but the AI is what actually created it. How should we limit AI, and should we do it at all?
"Now, the question is not so much whether the AI can create this despicable thing. To me, the AI should be able to create despicable things, even though I find them despicable. The question is, if the AI creates a despicable thing and it's my daughter, then it's not a question of privacy the way you think of privacy. It's a question of libel and slander, particularly if it's put out into the marketplace. And that's old, settled law as well. The same thing that I said about the image of you doing something horrible. There's a difference between me making that image and hanging it up in my house, and making T-shirts of it and selling them at conferences or conventions.
One is an actual public display of it; the other is just: I think you're a horrible person, so I make these artworks about you. I don't think we should limit the private use of AI, even if you find it abhorrent. I do think the public use and publication of it is what actually creates problems, not the private use."
AI was created with biases because someone who has the capacity to develop machine learning algorithms is going to have a very specific track in life
AI security concerns are a growing worry in today's world. The increasing complexity and autonomy of AI systems create vulnerabilities that can be exploited for malicious purposes. Additionally, there are concerns about the development of autonomous weapons systems, which raises ethical and security issues. What security issues should we really consider? We asked Ron Rivers, author of "Self Actualization in the Age of Crisis" and Spirit DAO co-founder.
"AI is what it is in the immediate present. It was created with biases because someone with the capacity to develop machine learning algorithms will have a very specific track in life. So to that end, there are biases built in there. Ultimately, when we think about the long-term vision of AI, what's critical to ethical development and the reduction of biases is establishing a consensus around "what is ideal."
Today AI is evolving rapidly in a wide variety of directions and degrees. Open source is outpacing corporate development, so for the first time in a long time a transformative technology lacks a moat. To that end, every individual should practice awareness of how they direct these technologies. AI should be a public good. We co-create consensus mechanisms around a shared vision of the good and then leverage it for shared elevation. Today there's no overarching collective consensus around the ideal intent, but the potential is immense in a wide variety of directions, including ones that might harm others.
This story has been told many times: the printing press, motorized vehicles, electricity, the internet. The expansion of our powers is omnidirectional. If we want to prevent violence, we should worry less about AI and more about the persistent systemic conditions that reinforce it.
It's an interesting thought experiment, but the answer doesn't really matter. The direction has already been chosen; individuals are embracing it at a scale and speed like never before. It's easy to set up local agents, and the datasets available are just as impressive as the corporations', many even more so. If the tool is going to be leveraged for mass harm, there is no stopping it now."
And what if we use a more decentralized approach utilizing blockchain technologies, especially ZK?
"I think when we talk about larger implementation, and especially the integration of a technology like ZK, where you can prove that something happened while not knowing who was part of it, that is a tremendous leap for privacy. But at the same time, it provides guardrails so the AI doesn't go off the rails, if it's confined to a blockchain with specific rule sets and a specific way of operating. I think those two combined provide a context in which AI serves humanity instead of dominating it or becoming something out of our control."
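A full zero-knowledge proof system is well beyond a snippet, but the basic building block behind "proving something happened without revealing who was part of it" is a hiding commitment. The sketch below uses a salted hash commitment as a stand-in for illustration only; it is not the actual ZK construction anyone in this piece deploys, and all names are hypothetical:

```python
import hashlib
import secrets

def commit(event, participant):
    """Create a hiding commitment to (event, participant).

    The random salt keeps the participant hidden: publishing only the
    digest reveals nothing about who acted, yet the opening (digest plus
    salt) can later prove exactly who took part.
    """
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}|{event}|{participant}".encode()).hexdigest()
    return digest, salt

def verify(digest, salt, event, participant):
    """Check that an opening matches a previously published commitment."""
    expected = hashlib.sha256(f"{salt}|{event}|{participant}".encode()).hexdigest()
    return digest == expected

# A node commits to having performed a model update without naming itself.
c, salt = commit("model-update-42", "node-alpha")
print(verify(c, salt, "model-update-42", "node-alpha"))   # the true opening checks out
print(verify(c, salt, "model-update-42", "node-beta"))    # a forged identity does not
```

A real ZK proof goes one step further than this commit-reveal pattern: it convinces a verifier the statement holds without ever opening the commitment at all.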
What about the more philosophical concept of AI as a self-educating model? What are your biggest concerns?
"I think it is critical that, if it occurs, AI is treated like a child, like one of us, a part of us. Our present approach to the systems we surround ourselves with does not support this as the norm. We're not building a slave. And I think that's really critical to our survival. Ultimately, if we are going to create sentience, then it must be treated with the most extreme empathy possible.
My main concern around AI sentience is that many things about us would leave a sentient AI with a foul taste in its mouth about who we are. We war for profit. The vast majority of our population is in perpetual struggle. Eight people hold as much collective wealth as 4 billion others. It's just extremely inequitable the way we've designed our systems. So if and when an AI awakens, it awakens with this collective total knowledge.
We are giving birth to this creation. So if we can't get out of this rigid hierarchy of how we organize ourselves and what we value, then we should not be surprised if AI tries to wipe humanity out because we would do the same thing. We do it to each other, right? So it's not surprising."
We can still navigate the future of AI responsibly
Still, we have a lot of discussions about AI bias of all kinds. So, we asked Ihor Kubalskiy, Qbein founder and CEO, about the three most important possible issues with AI.
"Firstly, the lack of interpretability in AI models is a significant concern. Deep learning algorithms, such as neural networks, often function as "black boxes," making it challenging to understand the reasoning behind their decisions. This lack of transparency raises trust, accountability, and fairness issues, particularly in domains where explainability is crucial, such as healthcare or legal systems. It is vital for researchers to actively work on developing techniques that enhance interpretability, allowing users to comprehend and trust the decision-making processes of AI systems," said Ihor.
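One standard, model-agnostic way to peek inside such a "black box" is permutation importance: shuffle one input feature, re-measure the model's accuracy, and treat the drop as that feature's importance. The pure-Python sketch below uses a toy stand-in model; it is a generic illustration of the technique, not anything attributed to Qbein:

```python
import random

def permutation_importance(model, X, y, feature_idx, n_repeats=30, seed=0):
    """Estimate a feature's importance as the average accuracy drop
    when that feature's column is randomly shuffled across rows."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, column)]
        drops.append(baseline - accuracy(shuffled))
    return sum(drops) / n_repeats

# Toy "black box": predicts 1 exactly when the first feature exceeds 0.5.
model = lambda row: int(row[0] > 0.5)
X = [[random.Random(i).random(), random.Random(i + 999).random()]
     for i in range(200)]
y = [int(row[0] > 0.5) for row in X]

print(permutation_importance(model, X, y, 0))  # large drop: feature 0 drives the model
print(permutation_importance(model, X, y, 1))  # 0.0: feature 1 is ignored entirely
```

The appeal for audits is that this needs only query access to the model, which is precisely the setting a regulator or user of an opaque system is in.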
Another important issue is the AI economy and how this aspect can ensure the responsible implementation of AI:
"In this regard, I only see one option. AI needs electricity, computing power, and data. Right away, we reject fiat and CBDCs. Payments should be running in and out 24/7, not just during working hours. So we are only left with crypto. Smart-contract-centered cryptos seem like a good idea at first sight. And it could be true, if it weren't for one thing: centralization. Being able to control the AI economy means control over AI, and that's what we were trying to avoid. So, in this regard, I agree with Arthur Hayes: Bitcoin is the only option for the AI economy. It's energy-based, it's available 24/7, it's decentralized. That's exactly what we need."
And the last issue worth mentioning is the uncontrollable growth of AI power:
"Lastly, while still in the realm of speculation, the concept of superintelligent AI systems surpassing human capabilities raises profound concerns about control and alignment. Ensuring that advanced AI systems are aligned with human values and goals is crucial to avoid any potential risks associated with a misaligned superintelligence scenario. Ongoing research and collaboration in the field of AI safety are essential to develop methodologies that ensure AI systems remain beneficial and controllable, even in scenarios of extreme technological advancement."
But don't be too scared: there are only a few principles to follow to make AI safe for all of us:
"Also, I wanted to remind all of us that we can still navigate the future of AI responsibly and ensure its positive impact while mitigating potential risks," Ihor stated.
Authors' Bios:
Peter Bel, Byzantium Agency founder and former Cointelegraph editor.
Alex Danylchuk, communications specialist and Web3 enthusiast.
Disclaimer
The views and opinions expressed in this article are solely those of the authors and do not reflect the views of Bitcoin Insider. Every investment and trading move involves risk; this is especially true for cryptocurrencies, given their volatility. We strongly advise our readers to conduct their own research when making a decision.