AI... REGULATE OR BE REGULATED
Elon Musk is warning again...
“...And I think people should really be concerned about AI. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal. And I think we should be really concerned about AI.
And AI is a rare case where I think we need to be proactive in regulation instead of reactive, because I think by the time we are reactive in AI regulation, it's too late.”
Well, he would be the last person to be scared of technological advancement, right? However, yes! Since 2014, he has been warning the public to be very careful about AI and artificial general intelligence.
As today's businesses pour resources into designing the next generation of tools and products powered by AI, people assume that these companies will automatically step up to their ethical and legal responsibilities if these systems go awry.
Trust in AI requires transparency and accountability. But even AI researchers can't agree on how to regulate AI before it regulates us.
In recent years, the academic world has been hard at work on ethics. The MIT Media Lab and Harvard University established a $27 million initiative on the ethics and governance of AI, while Carnegie Mellon University established a center to explore AI ethics.
Yes, the future is coming fast, and we have to be ready for the "unexpected", even though we are the ones creating it.
The real concern is the point at which AI no longer needs any human interaction (yes, it is on the way) and makes its own decisions based on the data it was previously fed. What if a superintelligent AI were to create its own decision-making process and identify humans as a threat?
Given that AI is still young but is quickly being integrated into every application that touches our lives, there is no consensus on what standards should be established. In the European Union, initiatives such as the European AI Alliance, first set up in 2018, are establishing ethics guidelines for AI that are expected to lead to mid- and long-term policy recommendations on AI-related challenges and opportunities. Although the European Union is the most active in proposing new rules and regulations, most countries are taking a "wait and see" approach to laws and regulations on AI.
So why is Elon Musk, who is himself working on Neuralink and playing with the neurons of the human brain, also the one warning about the AI threat? I think we should take his warnings very seriously and quickly decide how we humans will manage to control these technological advancements.
As always, governments play an extremely important role in regulation. However, we shouldn't forget that AI is an important weapon in the tech wars between countries. So the big question is:
Where should we stop advancing?
Has science gone too far?...