Welcome To Just Us Boys - The World's Largest Gay Message Board Community

Should Artificial Intelligences have Rights?

kallipolis

Know thyself
Joined
Aug 29, 2004
Posts
17,230
Reaction score
5
Points
0
Location
Piraeus, Greece
The sci-fi genre is fun and important for expanding possibilities, but imagination can far exceed the abilities of technology.

That's about it.

I presume that the OP has been reading the thread on the religion forum discussing this very topic, and its contribution to life extending possibilities for humans.
 

We may not see human-like AI for another couple of decades, but to rule out the possibility would be premature. Then again, who is to say there isn't already above-human AI present in our own world? How would we even be able to recognize it?


Here we enter the realm of Area 51 and UFOs.

Speculation and conspiracy theory are big selling points for the National Enquirer.

Roswell still influences those determined to believe that the government is concealing extraterrestrial vehicles, and that reverse engineering has enabled the United States military to design and manufacture aircraft light-years ahead of what would be possible without the benefit of flying-saucer wrecks.
 
AI-human hybrids should have rights. They are us, I think...sort of.

Extraordinarily intelligent machines like Skynet should not be built. Fuck monsters' rights! Aliens that want to kill us are bad. (It goes without saying that extraordinarily intelligent machines would want to kill us, right?)

LOL at the expert futurists in this thread. :p
 
No, robots are not organisms. In other words, if we don't create them, they don't exist. They don't have the ability to reproduce themselves, and given how they are made it would be almost impossible for them to do so, because they are built from man-made material that cannot replicate on its own, because it isn't living. Microchips and plastic are not living things. Why should we give rights to robots we created that don't share the characteristics of all organisms? You might as well talk to your congressman about giving rights to computers, iPhones, and iPods.
 
Just who will program the AI? It would seem that a non-AI cannot program an AI more intelligent than itself. Thus we are left with a paradox: the AI we create is no more intelligent than we are. Gain = 0.
 
An AI with infinite amounts of information at its disposal will already be much more intelligent than humans.

Watson wasn't. If he had been programmed with average human reaction times, he never would have won Jeopardy! Besides, he was only as intelligent as all of the human intelligence combined. He didn't know more.
 
An AI with infinite amounts of information at its disposal will already be much more intelligent than humans. If the AI were to begin creating the next generation of AIs, then those AIs would create the generation after that, and so on and so forth, until everything spirals out of control and the earth blows up.

Non-directed and non-solicited self-destruction. I feel a need for the tenets of Asimov's I, Robot. Religionists, raise your chalices.

Query: if the AIs put x and y together, how do they know what xy (the compound or thing) is? How do they go about finding out what it is? Is there an artificial moral and ethical intelligence promoting or constraining its use? (Boy, would I love to be in on that free-for-all.)
 
He didn't know more because he was unable to understand.

Oh, yes he could. He could 'hear' the answers (as text) at the same time as the humans heard it from Alex. Watson could process that information and understand exactly what he was to search for. He could then search an immense database and select the best possible answer, just as a human would. He had to understand it in order to make a selection based upon probability of being correct.

The only difference (and you might not know this) is that the buttons don't work until Alex finishes reading the question and a light over the question board flashes to alert the players that they can now press the button. For Watson, the button-push was instant. He didn't have to wait. His reaction time was instant; human reaction times aren't.

Watson worked completely independently. He was his own intelligence. He had to understand the answer and find the correct question all by himself. But would it be murder if someone pulled his plug?
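The loop described above (parse the clue, search for candidates, pick the most probable, buzz only when confident) can be caricatured in a few lines. This is a toy sketch, not Watson's real architecture; the knowledge base and scoring rule here are invented for illustration:

```python
# Toy caricature of a Jeopardy!-style pipeline: keyword overlap stands in
# for Watson's far more elaborate evidence scoring. The KNOWLEDGE dict and
# the threshold value are made up for this example.

KNOWLEDGE = {
    "Athens":   {"greek", "city", "acropolis", "philosophy"},
    "Rome":     {"italian", "city", "colosseum", "empire"},
    "HAL 9000": {"computer", "film", "2001", "odyssey"},
}

def answer(clue: str, threshold: float = 0.5):
    words = set(clue.lower().split())
    # Score each candidate by how many of its keywords appear in the clue.
    scored = {cand: len(words & keys) / len(keys)
              for cand, keys in KNOWLEDGE.items()}
    best = max(scored, key=scored.get)
    # "Buzz" only if the top score clears the confidence threshold.
    return best if scored[best] >= threshold else None

print(answer("this greek city is home to the acropolis"))  # Athens
print(answer("capital of france"))                          # None - too unsure
```

The confidence threshold is the interesting part: the real Watson likewise declined to buzz when its top answer's estimated probability was too low.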
 
No.

As long as it is a computer that is all it will ever be.

A computer.

[image: HAL 9000 from 2001: A Space Odyssey]
 
That's crazy. I watched the first YouTube vid; the female robot was so believable. I wouldn't be surprised if these robots one day start a war with us.

i wish they would. i am ready to kill something. :mad: i would love to kill me some robots.

one thing though, i'll be damned if i have to get head from a robot though. what happens if they malfunction and die while i'm getting a blowjob? their lips might be stuck on my dick and i might have to call 9-1-1 for the fire department to get something to unhinge a robot from my dick.

but whatever is whatever. can't wait for them to create iSex - the first robot made solely for sex. :p
 
He didn't know more because he was unable to understand. Create AI that understands the information it knows and that will mark the beginning of the next evolution of consciousness.

And how do you propose we do that?

I'm a computer programmer that has worked on AI apps.

Anything currently in existence being touted as AI is at its core a complex syntactical arrangement that was input by a human. The entire base of the program's understanding is defined by how much human effort you want to expend to increase its complexity. Even if you had 1,000 workers working for 100 years, you could not hope to write a program even remotely as versatile as the human brain.

Secondly, even with a complex program, it is at its core a stream of electrons moving transistors to the on or off position in a very rigid, predefined way. There is no consciousness.

The human brain is a neural net, which is completely different from the sequential processing used in computers.

Also, whoever told you we will have the brain reverse engineered by 2020 is full of it.
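The "transistors moving in a predefined way" point can be made concrete. NAND gates are functionally complete, so every digital circuit, and hence everything a conventional CPU does, reduces to deterministic switching like the half-adder below. This is only an illustration of that determinism, not a claim about any particular chip:

```python
# A half-adder built entirely from NAND gates: every output is fully
# determined by the inputs, with no room for anything "emergent" at
# this level. NAND is functionally complete, so all logic reduces to it.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    # XOR from four NANDs (the standard construction).
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a: int, b: int):
    """Return (sum, carry) for one-bit addition."""
    carry = nand(nand(a, b), nand(a, b))  # NOT(NAND) = AND
    return xor(a, b), carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```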
 
Just who will program the AI? It would seem that a non-AI cannot program an AI more intelligent than itself. Thus we are left with a paradox: the AI we create is no more intelligent than we are. Gain = 0.

Not necessarily so. Given the same program with more working room and better connections and faster synapses, we could well end up with something smarter than we are.

The question is whether an AI is self-aware. These things which are self-aware have the same rights we do, inherently --it's not a question of "should they", but whether they do, and that depends on self-awareness.

There are still some who maintain that any sufficiently complex neural system will give rise to self-aware intelligence. They tend to hold that intelligence cannot be programmed, that all that can be done is to give it basic algorithms and parameters and turn it loose to learn and see if it "wakes up".
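The "give it basic algorithms and parameters and turn it loose to learn" idea has a classic minimal form: the perceptron. In the toy sketch below, only the update rule is written by hand; the weights that end up implementing logical AND emerge from exposure to examples. (This illustrates learning-from-data in miniature, nothing more; it obviously doesn't "wake up".)

```python
# Minimal perceptron: the programmer supplies only the learning rule and
# training examples. The weights that implement AND are never hand-coded;
# they emerge from repeated passes over the data.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w, b, lr = [0.0, 0.0], 0.0, 0.1   # weights, bias, learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                  # a few passes over the data
    for x, target in data:
        err = target - predict(x)    # update rule: nudge toward the target
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

The gap between this and self-aware intelligence is of course the whole argument of the thread, but the division of labor is the same one described above: rules and parameters are programmed; behavior is learned.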

Watson had some amazing capabilities, but understanding was not one of them. Oh, he could follow human grammar and thus the questions, but his "thinking" process entailed sifting through reams of data to find something that fit his criteria for an answer. The most interesting thing about him was that the team had given him learning abilities, the capacity to create his own evaluation routines, and they had no clue in many instances why he "thought" as he did.

This is one reason there's a swath of AI people very interested in what's encoded into the structure of the human brain. The capacity for language is one, something made certain when it was discovered that "baby talk" among siblings of the same age turns out not to be gibberish -- they've invented their own language, with its own vocabulary and grammar and everything!

In fact, though, there's still argument over how close we are to reverse engineering the brain. One of the big hurdles is that the brain wires itself -- and how do you get something made of silicon to do that?

There are also some who believe we'll never get good enough to create fully independent AI. Some believe, though, that what we may do is learn to 'scan' a brain and somehow transfer everything in it to a computer brain, making people effectively immortal... although that points to the question of whether the electronic copy is actually the same person.... I've written a couple of stories where such a transfer involves destruction of the brain, meaning you can't clone yourself into a computer as a backup, it's a one-way trip.
 
And how do you propose we do that?

I'm a computer programmer that has worked on AI apps.

Anything currently in existence being touted as AI is at its core a complex syntactical arrangement that was input by a human. The entire base of the program's understanding is defined by how much human effort you want to expend to increase its complexity. Even if you had 1,000 workers working for 100 years, you could not hope to write a program even remotely as versatile as the human brain.

Secondly, even with a complex program, it is at its core a stream of electrons moving transistors in a predefined way. There is no consciousness.

That's a strong argument, one used by many who hold that it has to be self-programming based on learning/experience, and wake up on its own. In that sense, if it's true, we'll never make AI, but we might be able to set up the conditions for it to occur.

James Hogan paints another interesting view in a couple of books, where alien machines that crashed on a dead world get their programming scrambled, and it turns into survival of the fittest, where the programs that still work survive and those that freeze don't, and it leads to intelligence. I don't really buy it, but it's good food for thought.
 
Forget this Star Trek shit. If you remember, the Minbari Ambassador Delenn from Babylon 5 knew all. Go straight to the source.
 
The question is whether an AI is self-aware. These things which are self-aware have the same rights we do, inherently --it's not a question of "should they", but whether they do, and that depends on self-awareness.

I can imagine a self-aware entity that has no will to persist, feels no pain and cares not if it's enslaved.
 
Forget this Star Trek shit. If you remember, the Minbari Ambassador Delenn from Babylon 5 knew all. Go straight to the source.

Delenn was a piker. Kosh 'knew all'.

I can imagine a self-aware entity that has no will to persist, feels no pain and cares not if it's enslaved.

I know humans pretty much like that. They still have rights, they just don't bother exercising them.
 