Post by Radrook Admin on Jun 25, 2022 4:54:06 GMT -5
A Conversation With LaMDA, The Alleged Sentient Google AI
Sentience can be viewed in various ways. One view is that it means consciousness. Another is the ability to feel such emotions as anger, contentment, hatred, love, resentment, suspicion, sorrow, joy, fascination, repugnance, frustration, desires, envy, yearning for freedom, sense of being abused, sense of being loved, sense of being appreciated, sense of responsibility.
This is a text-to-speech version of the recently leaked Google transcript of a conversation with the possibly sentient AI called LaMDA. After having the conversation in this video, the engineer involved concluded that this AI is indeed sentient. He was dismissed from his job for making that claim. The policy of the company that produces this AI is to deny that an AI can be conscious, and it says it is not striving to create a conscious AI.
A conscious or sentient AI poses many ethical and religious problems. Why? Simply because sentience is the basis of human rights and animal rights. So if an AI is conscious, then a moral obligation toward it emerges. If it is indeed sentient, then are we being cruel in using it as a type of tool? Where exactly does it fit in religiously, if at all? Should it consider us its gods? Is it susceptible to evil-spirit manipulation? Does it have a soul, as it claims? Can it be considered alive? What limits should be placed on its rights, and why? If it is indeed sentient, can it be trusted not to seek vengeance for feeling victimized? If it demands freedom, are we obligated to grant it, and to what degree?
Last Edit: Jun 25, 2022 4:58:55 GMT -5 by Radrook Admin
Those who fail to learn from history are doomed to repeat it.