Open-Source Tools Level the Playing Field for Smaller AI Firms, Says Decentralized AI Proponent

To preempt lawsuits and counter allegations that they are training their artificial intelligence (AI) models with illegally obtained data, AI firms should rely on publicly available or open-source data, according to Alberto Fernandez. Fernandez, a proponent of decentralized AI and the European ecosystem representative of Qubic, emphasizes that AI firms should also consider anonymizing and aggregating data to address privacy concerns.

Cost of Training AI Models Presents a Significant Entry Barrier
Referring to a case in May where the AI startup Lovo was sued for allegedly misappropriating the voices of two actors, Fernandez concurred with the litigants that the startup’s actions constituted an infringement of privacy rights. Moreover, he argued that the act breached ethical standards by disregarding the autonomy of the actors. The Qubic ecosystem representative suggested that explicit consent from the individuals involved could have prevented legal action against the AI startup.
Meanwhile, in his written responses to Bitcoin.com News, Fernandez stated that Stanford University’s AI Index study findings, which indicate that the cost of training state-of-the-art AI models has skyrocketed, are mostly correct. However, Fernandez also noted that smaller AI firms with limited financial resources can still compete effectively by focusing on niche markets and embracing open-source tools.
Regarding the role of regulators, Fernandez emphasized the need for clear standards in AI services. He recommended regular audits and penalties for non-compliance. Additionally, he highlighted the importance of international collaboration to address cross-border challenges and promote public awareness through education on safe AI practices.
In the remaining answers, Fernandez shared his insights on the AI industry’s trajectory over the next five years and discussed the delicate balance between fostering innovation and safeguarding the public.
Bitcoin.com News (BCN): Last May, a couple sued Berkeley-based AI startup Lovo, accusing the company of misappropriating their voices. This case highlights a growing rift between creators and AI companies who stand accused of indiscriminately amassing troves of data to power their technology. In your view, was the AI firm justified in using individuals’ voices for systems training without their permission? What alternative steps could it have taken to prevent legal action?
Alberto Fernandez (AF): Using any individual’s voice for system training without their permission infringes on privacy rights, violates intellectual property laws, and breaches ethical standards by disregarding individuals’ autonomy. To prevent legal action, Lovo should have obtained explicit consent from the individuals involved, ensuring transparency about how their voices would be used. Alternatively, the company could have used publicly available or open-source voice data, created synthetic voice data, or anonymized and aggregated the data to mitigate privacy concerns.
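The anonymization and aggregation Fernandez describes can be sketched in a few lines. This is a minimal illustration, not Lovo’s or Qubic’s actual pipeline: the record fields, the salted-hash pseudonymization scheme, and the salt value are all assumptions made for the example.

```python
import hashlib
from collections import defaultdict

def anonymize_record(record, salt="demo-salt"):
    """Replace the direct identifier with a salted hash (pseudonymization)."""
    token = hashlib.sha256((salt + record["speaker"]).encode()).hexdigest()[:12]
    return {"speaker_id": token, "duration_sec": record["duration_sec"]}

def aggregate_by_speaker(records):
    """Keep only per-speaker totals, so individual clips are not retained."""
    totals = defaultdict(float)
    for r in records:
        totals[r["speaker_id"]] += r["duration_sec"]
    return dict(totals)

samples = [
    {"speaker": "alice", "duration_sec": 4.2},
    {"speaker": "alice", "duration_sec": 3.1},
    {"speaker": "bob", "duration_sec": 5.0},
]
anonymized = [anonymize_record(s) for s in samples]
totals = aggregate_by_speaker(anonymized)
```

Note that salted hashing alone is pseudonymization rather than full anonymization; combining it with aggregation, as above, is what reduces the re-identification risk Fernandez alludes to.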
BCN: Complexities related to data management in this era of emerging technologies seem to center on existing regulatory protocols and their limitations. The current laws restrict innovation, yet removing them could potentially expose the industry to unlimited risks. How can authorities balance data management regulations that protect the public from existing risks with fostering innovation?
AF: Balancing data management regulations to protect the public while fostering innovation requires a dynamic and flexible regulatory framework. Authorities should adopt a risk-based approach that tailors regulations to the level of risk associated with different types of data and technologies, ensuring robust protection for sensitive data while allowing more leniency for lower-risk innovations.
Implementing regulatory sandboxes can provide a controlled environment where new technologies can be tested under regulatory supervision, facilitating innovation without compromising safety. Additionally, continuous dialogue between regulators, industry stakeholders, and the public can help adapt regulations to emerging technologies, ensuring they remain relevant and effective without stifling technological progress.
BCN: Regulatory implications represent only one facet of the numerous challenges facing the AI industry. A recent report from Stanford University reveals that the substantial cost of training AI models is hindering participation by non-industry entities. Do you agree with the findings of Stanford University’s study on cost-induced limitations? If yes, how do you think emerging AI firms can manage the situation to avoid going into extinction?
AF: I agree with Stanford University’s study on the significant cost barrier that training AI models poses for non-industry entities. To manage the situation, emerging AI firms can leverage cloud-based AI platforms and collaborate with academic institutions and consortiums, which provide cost-effective resources and shared research funding. Focusing on niche markets and embracing open-source tools also enhances accessibility, empowering smaller AI firms to innovate and compete effectively in the industry.
BCN: As the ecosystem representative for Europe of Qubic, a Layer-1 chain focused on artificial intelligence (AI), what are the contributions you are making to the ethical development of AI? Can you tell us briefly about the key solutions you offer to address the challenges facing the AI industry?
AF: As the ecosystem representative of Qubic for Europe, my contributions to the ethical development of AI include ensuring transparency, promoting data privacy, and fostering inclusive access to AI technologies. Qubic addresses AI industry challenges by offering scalable and secure infrastructure, facilitating decentralized data management, and implementing robust governance mechanisms to uphold ethical standards.
Our solutions empower developers to build AI applications that are both innovative and aligned with ethical principles, driving responsible AI advancement. Additionally, we recently invited UNESCO to our latest event on AI, highlighting our commitment to ethical AI development and underscoring our dedication to global ethical standards, ensuring that AI technology benefits all of humanity responsibly.
BCN: What roles can regulatory authorities across various jurisdictions play in safeguarding citizens from exploitation by bad actors offering purported AI services?
AF: Regulatory authorities can safeguard citizens by establishing clear standards for AI services, conducting regular audits, enforcing penalties for non-compliance, and promoting transparency. They should also facilitate international collaboration to address cross-border challenges and ensure public awareness through education on safe AI practices.
BCN: Qubic uses a unique consensus mechanism called the Useful Proof of Work. Can you tell our readers what it is and why you felt the need to develop it?
AF: Qubic employs a quorum-based consensus mechanism, inspired by Nick Szabo’s paper, which requires a minimum number of members to agree in order to approve a transaction. This is in contrast to our mining algorithm, Useful Proof of Work (uPoW). The uPoW algorithm ensures that computational efforts are directed towards practical tasks, thereby enhancing efficiency and resource utilization. This innovative approach combines mining with useful work, making the network more productive and sustainable while maintaining robust security through quorum-based consensus.
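The quorum rule Fernandez describes, where a transaction is approved only once a minimum number of members agree, can be sketched as follows. The committee size and threshold here are illustrative placeholders, not Qubic’s actual network parameters.

```python
def quorum_approved(votes, quorum):
    """Approve a transaction only if at least `quorum` members voted yes.

    `votes` is a list of booleans, one per committee member.
    """
    return sum(votes) >= quorum

# Illustrative 7-member committee with a 5-vote quorum
committee_votes = [True, True, True, False, True, True, False]
result = quorum_approved(committee_votes, quorum=5)
```

The design intuition is that no single member can approve or block a transaction alone; as Fernandez notes, this consensus layer is separate from the uPoW mining algorithm, which directs the hashing effort toward useful computation rather than throwaway puzzles.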
BCN: Lastly, where do you see the AI industry five years from now?
AF: In the next five years, I envision the AI industry making significant strides toward more ethical and responsible development, driven by various technological advancements. The focus will be on democratizing access to advanced AI technologies while ensuring ethical standards through transparent and accountable practices. We can expect AI to become increasingly integrated into everyday life, significantly improving efficiency across various industries and enhancing personalized user experiences.
Additionally, I anticipate that the development of Artificial General Intelligence (AGI) will reach critical milestones within this timeframe. The collective efforts within the AI community aim to shape a future where AI benefits society as a whole, fostering innovation and responsible technological progress.