AI discussion

09 Dec 2017 17:40 #57595 by shadar
Battle Angel Alita was created by shadar

ChaozCloud wrote: I agree on the eyes part. It looks weird and out of place. I'd rather they didn't do it. Or do a full CGI movie and do it that way.


Interesting how demanding/perceptive/critical we naturally are when it comes to human appearances, consciously and subconsciously. I think this is an evolutionary trigger that is buried deep. The ability to sense falseness in appearance or body language/movement. That's always been an absolutely essential survival trait that runs at root level in our human operating system.

But we readily set aside reality when it comes to pure CGI stuff, and can be very comfortable with things we know aren't real. It doesn't get in the way.

But when you get too close to reality, then those root instincts kick in and we enter the "uncanny valley". Basically, our instincts get in the way of the enjoyment.

The hope for CGI is that it will cross that valley (in my lifetime!) and be able to render completely or partially synthetic characters in movies that defeat even the most critical eye into believing they are real people. Still a long way to go, but it remains a fascinating subject.

Shadar
The following user(s) said Thank You: brantley, Markiehoe, Starforge

09 Dec 2017 17:45 #57596 by Markiehoe
Replied by Markiehoe on topic Battle Angel Alita
My thoughts exactly Shadar.
The apes in Planet of the Apes look "real" because they are not human.
To this point every CGI "Human" I have seen looks fake or even creepy.
A genetic safety response to ensure the safety of the species.

09 Dec 2017 18:05 #57598 by shadar
Replied by shadar on topic Battle Angel Alita

Markiehoe wrote: My thoughts exactly Shadar.
The apes in Planet of the Apes look "real" because they are not human.
To this point every CGI "Human" I have seen looks fake or even creepy.
A genetic safety response to ensure the safety of the species.


Of course, if that "uncanny valley" ever gets breached, which will likely come with some pretty advanced AI, we humans might find ourselves in a precarious position, a la the fears of Elon Musk, Bill Gates, Stephen Hawking, etc.

Take "fake news" and ramp it up a hundred times without any way of even knowing WHO (let alone what) is real. Likely I won't see that day, but many of you could.

Great for entertainment, fantastic even, but the concept of reality could disappear entirely, for good and evil purposes. Very possible you might someday be working for a bot, or having a relationship with one, and not even know it.

We humans are rather fond of being the apex predator, and won't take kindly to being in second place. But who knows what society will look like by that time. Maybe AI and the bots will save us from self-extinguishing our species. (Or maybe do it for us to save the planet!)

That last thought makes our currently contentious politics seem trivial.

Maybe Cameron is a prophet with his Terminator movies, although I'd rather go down the Avatar route. Either way, humanity might not always be what it is now. Which fills me with both hope and fear.

Shadar
The following user(s) said Thank You: Starforge

10 Dec 2017 02:09 #57607 by Starforge
Replied by Starforge on topic Battle Angel Alita
Well said Shadar.

A lot of jobs left the developed countries for cheaper labor in China, India, etc. When automation reaches a point where even much of that labor isn't worth the cost, it's going to devalue much of the third world even more than it is. Our consumer goods will get even cheaper, company profits will continue to rise and the developed countries will float the idea of a stipend for their unemployable citizens, but places that have nothing to offer but warm bodies are going to suffer the most.

You and I are of a similar age, but I don't think it's out of the realm of our lifetime. We might see it just not be impacted as much as the eager young people looking for jobs to start a family. Unless you are STEM, it's already rough and likely only going to get rougher. Free college for all doesn't help if there's no job at the end of the process even if they could pass such a thing.

Reminds me of the worlds created by David Weber and Pournelle/Niven. Both foresaw large numbers of citizens on the dole and I used to think it was just science fiction. From hunter / gatherer to the farms to the factories it all revolved around putting in time for a reward (food, shelter, money.) How do we evolve as a society when you can continue to convert oxygen to carbon dioxide, reproduce, have shelter and toys and provide no productivity. That and how do you ignore the other 6B+ human beings who didn't win the citizenship lotto.

Interesting times ahead.

Preview looks interesting. Hopefully they have a good writer to go along with the good actors they have on board.
The following user(s) said Thank You: shadar

10 Dec 2017 12:43 #57618 by Woodclaw
Replied by Woodclaw on topic Battle Angel Alita


I'd advise all to tread very carefully. While this discussion isn't political yet, it has the potential to become so very quickly. I'm very interested in examining how various authors and narrative universes have tackled the idea of Artificial Intelligence and its potential interactions with humanity, but please leave any considerations about real-world politics and economics out of here.

BTW, Alita is a cyborg, not an android: her brain and part of her nervous system are human-ish.

10 Dec 2017 13:56 #57619 by Markiehoe
Replied by Markiehoe on topic Battle Angel Alita
Well, Isaac Asimov, in his Robot and Foundation series, carried the idea of AI to the conclusion that robots become so advanced they end up as puppet masters, watching over the human race so we would not eventually destroy ourselves.

As to cyborgs, we saw that the doctors in RoboCop were able to program Murphy's brain to follow guidelines.
By force of will he was able to break his programming.
Even now, the idea of implanted computers reprogramming a human brain to "cure" mental or physical defects is being explored.

With the advent of cosmetic surgery and rampant, ill-advised tattooing and body and facial piercing, I see little future pushback from young people against the reprogramming of a human brain to make you "better" or "different" or to "fit in", whatever the fad is at the time.

10 Dec 2017 15:46 #57620 by shadar
Replied by shadar on topic Battle Angel Alita

Markiehoe wrote:
With the advent of cosmetic surgery and rampant, ill-advised tattooing and body and facial piercing, I see little future pushback from young people against the reprogramming of a human brain to make you "better" or "different" or to "fit in", whatever the fad is at the time.


Interesting observation... yes, it seems likely that people raised with immersive electronic technology and who use it as their primary interface with the world are most likely to be comfortable with the idea of merging with it.

Given that the major players in technology (Apple, Amazon, Google, Microsoft, etc.) are all working feverishly on incorporating virtual reality into products they will release over the next few years, the kids being born now are likely to grow up in a truly virtual world. Far more so than even the Millennials. Embedded, pervasive technology won't seem odd to them.

A cyborg existence isn't very far-fetched, especially when you assume AI will also make massive advances in the same time period. What it means to be human is likely to be redefined. Movies like Battle Angel are a crude glimpse of a possible future, except the reality will be far more pervasive.

As I see it, the machines won't take over so much as we'll eagerly invite them to become part of us, and the next stage of human evolution will be a blend of machine and flesh. The acceleration of human capability could run closer to Moore's law than to the slow, stumbling pace of biological evolution.

Shadar

10 Dec 2017 16:47 #57626 by kikass2014
Replied by kikass2014 on topic Battle Angel Alita
This (society merging with technology to "better" itself) is not new by any stretch of the imagination.

There is a growing number who welcome this with open arms - Transhumanists.

I, for one, don't see any benefits in creating AIs. Sure, robots programmed to do a job are fine. Giving them the capacity to think for themselves and have emotions? Sorry, I can't see any upside to that.

If, in some fantastical scenario, I woke up tomorrow and the world was full of AI robots, I am changing my name to John Connor :D

Peace.

/K
The following user(s) said Thank You: shadar, Raa, Starforge

10 Dec 2017 19:19 #57628 by shadar
Replied by shadar on topic Battle Angel Alita

kikass2014 wrote: This (society merging with technology to "better" itself) is not new by any stretch of the imagination.

There is a growing number who welcome this with open arms - Transhumanists.

I, for one, don't see any benefits in creating AIs. Sure, robots programmed to do a job are fine. Giving them the capacity to think for themselves and have emotions? Sorry, I can't see any upside to that.

If, in some fantastical scenario, I woke up tomorrow and the world was full of AI robots, I am changing my name to John Connor :D

Peace.

/K


Yeah, the fundamental concern is that a truly intelligent AI would view humans as troublesome, dangerous and impulsive/emotional, and would decide that we need to be controlled or eliminated.

Let's hope the people who develop these AI systems are believers in Asimov's Laws of Robotics, and that they somehow build into the machines, at a root level, that they must never harm humans. But even that has problems. Define "harm".

Putting us all in cages so we couldn't hurt each other might be a definition of "not harming".

Anyway, this movie has raised some interesting concerns that are becoming more real every year.

Shadar
The following user(s) said Thank You: kikass2014

10 Dec 2017 19:42 #57629 by slim36
Replied by slim36 on topic Battle Angel Alita
One possible justification for keeping humans around might be our susceptibility to advertising. Unless robots or AI are programmed to respond to ads...

10 Dec 2017 19:59 #57630 by Starforge
Replied by Starforge on topic Battle Angel Alita
Can't wait for a holodeck. It would not only be the ultimate VR experience but would handily fit all of our particular fantasies. Now THAT's some VR I can get on board with (and probably never leave, lol.)

Even if they believed in Asimov's Laws, I would not trust 2 things:

1. Human programmers - inherently flawed individuals being expected to create something that is very dangerous if made with flaws.
2. An AI that has the freedom of thought to interpret its own code.

I cannot imagine any situation where a full unrestricted AI would be a good thing for humanity. Films like this are cool because they are really exploring humanity, the notion of individuality and self determination. An interesting thought experiment that I hope remains a thought experiment.
The following user(s) said Thank You: kikass2014

10 Dec 2017 20:08 #57631 by kikass2014
Replied by kikass2014 on topic Battle Angel Alita

shadar wrote: Let's hope the people who develop these AI systems are believers in Asimov's Laws of Robotics, and that they somehow build into the machines, at a root level, that they must never harm humans. But even that has problems. Define "harm".


Exactly. There is no workaround for this, except don't do it.

Anything that can be coded is, like anything else, able to be broken: hackers hacking the AI with malicious intent, or even the robot AIs hacking themselves. Code can be changed, or worked around, as in your example, Shadar.

Fundamentally, I just cannot understand the point of view of these people that want AI (or intelligent, emotional robots). What is it you hope to achieve? Why is a robot programmed to do a job not enough?

A robot is a robot is a robot. That's all they need to be. They are not your friend, they are not your lover. They do a job and that is it. They don't complain, they don't strive, they don't get bored, and they don't tire. That's all you need of them.

It's like people who humanize animals. This isn't to say that we should be cruel to them, but at the end of the day, they are just animals. To me, an example of this craziness is something I came across in the pet industry. There are spas for animals?????? Where they get "pawdicures" and "facials"?????? Seriously???

I know the whole world went crazy a long time ago, but I fear we are heading into newfound lunacy land.

Peace.

/K

10 Dec 2017 20:18 - 10 Dec 2017 20:20 #57632 by Woodclaw
Replied by Woodclaw on topic Battle Angel Alita

Starforge wrote: Can't wait for a holodeck. It would not only be the ultimate VR experience but would handily fit all of our particular fantasies. Now THAT's some VR I can get on board with (and probably never leave, lol.)

Even if they believed in Asimov's Laws, I would not trust 2 things:

1. Human programmers - inherently flawed individuals being expected to create something that is very dangerous if made with flaws.
2. An AI that has the freedom of thought to interpret its own code.

I cannot imagine any situation where a full unrestricted AI would be a good thing for humanity. Films like this are cool because they are really exploring humanity, the notion of individuality and self determination. An interesting thought experiment that I hope remains a thought experiment.


My personal go-to quote about why building a new form of intelligence is a bad idea.

Last edit: 10 Dec 2017 20:20 by Woodclaw.
The following user(s) said Thank You: Starforge

10 Dec 2017 20:27 #57633 by Starforge
Replied by Starforge on topic Battle Angel Alita
There's an episode of ST Voyager where some rogue Starfleet personnel want the Doctor to perform a procedure on Seven, which he refuses. They then remove his ethical subroutine and he happily complies.

From benevolent to Mengele with the changing of a little code. No thanks.
The following user(s) said Thank You: Markiehoe, kikass2014

10 Dec 2017 21:05 #57634 by Markiehoe
Replied by Markiehoe on topic Battle Angel Alita

Starforge wrote: There's an episode of ST Voyager where some rogue Starfleet personnel want the Doctor to perform a procedure on Seven, which he refuses. They then remove his ethical subroutine and he happily complies.

From benevolent to Mengele with the changing of a little code. No thanks.


I remember that episode vividly with the same reaction.
The Doctor went from Healer to Mad Scientist with the push of a button.

10 Dec 2017 21:08 #57635 by fats
Replied by fats on topic Battle Angel Alita
I'm trying to work out where I should split this topic as it's moved quite far off topic, if anyone can suggest a good place to split it please let me know.

Fats

11 Dec 2017 01:43 - 11 Dec 2017 17:55 #57639 by ace191
Replied by ace191 on topic Battle Angel Alita
There is virtually no technological breakthrough I can think of that didn't have an upside for mankind as well as a downside, from splitting the atom to curing pneumonia. It is all about how one wants to use it. I am old enough to remember Isaac Asimov's Three Laws of Robotics, and I thought they were great right up to the time I learned how to program in APL, PL/1 and BASIC. Machines will do what they are programmed to do, and if you allow them to program themselves you have lost your mind.
Last edit: 11 Dec 2017 17:55 by ace191.

11 Dec 2017 02:49 #57640 by shadar
Replied by shadar on topic Battle Angel Alita

ace191 wrote: Machines will do what they are programmed to do, and if you allow them to program themselves you have lost your mind.


But that's the essence of the push toward AI, which is both having huge resources applied to it and receiving critical feedback a la Elon Musk and others who think it's incredibly dangerous.

Just the other day, a Google-developed AI named AlphaZero was introduced to the rules of chess and, in four hours, taught itself to play at a higher level than any human-programmed chess engine ever developed. This was after the same approach had earlier defeated the best Go-playing software. Scary stuff. Here's the link:

www.theguardian.com/technology/2017/dec/...f-to-play-four-hours

Chess is not the same as running the world, but what if it ran the electrical grid, or the air traffic system, or many other rule-based systems? Basically, anything where you can define the rules.

The essence of advanced AI is that the machine learns (aka self-programming). Whether one can set boundaries around that and how secure those boundaries are is a different problem. That's something that we haven't done very well with considering the security issues in human-programmed software.
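That learning-by-self-play idea can be shown in miniature. The sketch below is purely illustrative (nothing like AlphaZero's actual deep-network-plus-tree-search method, and all the parameters are invented): a tabular learner teaches itself a toy game, one-pile Nim, using only the outcomes of games it plays against itself.

```python
import random

def train(pile_size=10, episodes=30000, alpha=0.2, eps=0.1):
    """Self-play on one-pile Nim: players alternate taking 1-3 counters;
    whoever takes the last counter wins.  Both sides share one value
    table and learn only from the outcomes of their own games."""
    Q = {}  # (pile, action) -> estimated value for the player moving
    for _ in range(episodes):
        pile, history = pile_size, []
        while pile > 0:
            actions = [a for a in (1, 2, 3) if a <= pile]
            if random.random() < eps:          # occasional exploration
                a = random.choice(actions)
            else:                              # otherwise play greedily
                a = max(actions, key=lambda x: Q.get((pile, x), 0.0))
            history.append((pile, a))
            pile -= a
        # The player who just moved took the last counter and won (+1);
        # walking backwards through the game, the outcome alternates
        # sign, since the other player lost that same game.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, pile):
    return max((a for a in (1, 2, 3) if a <= pile),
               key=lambda x: Q.get((pile, x), 0.0))

random.seed(1)
Q = train()
# Optimal play in this game is to leave the opponent a multiple of 4.
print(best_move(Q, 5), best_move(Q, 6), best_move(Q, 7))
```

No strategy is programmed in, yet the table converges on the classic "leave a multiple of four" rule by itself; the point is only to make the phrase "the machine learns" concrete, not to suggest this toy scales to grids or airspace.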

It all sounds like SF, but it's coming really, really fast, and we have no mechanisms for control defined: legal, ethical or otherwise.

But we can look at, and critique, how the media has represented the implications of AI. Cylons, anyone?


Shadar
The following user(s) said Thank You: brantley

11 Dec 2017 13:05 #57647 by fats
Replied by fats on topic Battle Angel Alita
I've split this off the battle Angel Alita thread and created a new thread for the continued discussion.

Fats

11 Dec 2017 17:30 - 11 Dec 2017 17:42 #57652 by lowerbase
Replied by lowerbase on topic Battle Angel Alita
Odd to find a rich discussion of AI here. Most places seem rather oblivious to what is about to happen.

AI has had many winters; now it feels like we have reached spring. There are very BIG questions we have no idea how to answer, and each possible answer is potentially a Pandora's box.

1) Will AI actually be "intelligent"? Or are these neural networks just dumb pattern-seeking machines without any semantics?

2) If they are intelligent, will they be able to be self-aware? Conscious?

3) If AI attains consciousness and self-awareness, will it have emotions? A survival instinct?

4) If they get to have emotions, what will their moral values be? Will they be human-like? Insect-like? Or something else entirely?

On each answer hangs humanity's future.

Elon Musk's fear is not just that AI might become superintelligent, but that machines can control machines in ways, and at a velocity, with which we humans presently cannot compete. We have only slow fingers and a mouth as our "output interface", and Elon's pursuit is to connect our bodies directly to machines, to give us more leverage and let us control them like a muscle, or a part of our brains.

One thought gives me some relief: machines don't like a biosphere like Earth's. There is no reason for machines to compete with us for this planet. Its oxygenated atmosphere, water vapor, scarcity of metallic elements, and biological life (bugs) make it all too corrosively toxic for them.

Machines would prefer to live in the Asteroid Belt rather than here. Voyager is out there still thrusting, and will probably keep working for centuries to come, while my computer, which I clean of dust from time to time, might suddenly die any day. In space there is no dust; there is free energy from the Sun instead, and all sorts of materials to mine and build superstructures with, where gravity is not a limiting factor. Things that are toxic to biological life, like vacuum, lack of gravity and cosmic radiation, are matters of indifference to machines.

If AI develops intelligence, self-awareness, emotions and morality, we will have our home and they will have theirs; humanity will probably be retired from space exploration, and colonizing the galaxy will be up to them.

If AI stays dumb, just crunching numbers, that might be the worse scenario, since it would have no judgment of its own actions and might put us all at risk as a blind but extremely powerful extension of competing government and corporate interests.
Last edit: 11 Dec 2017 17:42 by lowerbase.

11 Dec 2017 17:45 #57655 by kikass2014
Replied by kikass2014 on topic Battle Angel Alita
In relation to this discussion, I will quote the great Jeff Goldblum:

"Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should. "

Peace.

/K
The following user(s) said Thank You: Starforge

11 Dec 2017 18:24 #57657 by lowerbase
Replied by lowerbase on topic Battle Angel Alita
The benefits in the short term are too great for scientists to question whether they should.

A complex of proteins that lets us "edit" or "rewrite" our genes (CRISPR) was recently discovered. Which would be great if we knew which gene does what. Decoding our genes is an astronomical task, impractical by human intelligence alone.

RNNs (recurrent neural networks) can do this decoding job for us. Ever taken one of those DNA tests to discover your ancestry? Under the service contract, they keep your genetic (and personal) information for research. This is creating a big cloud of DNA data and medical files in which RNNs will find correlations between the genome and diseases.

Surely, one day they will find which genes give us, or limit, our intelligence, athleticism, beauty and so on. It will be Humanity 2.0.
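As a toy illustration of the "find which variant matters" idea (synthetic data and plain logistic regression, nothing like real genomics pipelines or RNNs; the variant count, causal index and risk probabilities are all invented):

```python
import math
import random

def make_dataset(n=2000, n_variants=20, causal=7, seed=0):
    """Synthetic 'genomes': 20 binary variants per person, where only
    variant #7 actually raises disease risk (invented numbers)."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        genome = [rng.randint(0, 1) for _ in range(n_variants)]
        p_disease = 0.8 if genome[causal] else 0.1
        rows.append((genome, 1 if rng.random() < p_disease else 0))
    return rows

def train_logreg(rows, epochs=30, lr=0.1):
    """Plain logistic regression fitted by stochastic gradient descent."""
    w = [0.0] * len(rows[0][0])
    b = 0.0
    for _ in range(epochs):
        for genome, label in rows:
            z = b + sum(wi * xi for wi, xi in zip(w, genome))
            pred = 1.0 / (1.0 + math.exp(-z))
            err = pred - label
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, genome)]
    return w

weights = train_logreg(make_dataset())
# The variant with the largest learned weight is the model's best
# guess at the causal one.
print(max(range(len(weights)), key=lambda i: abs(weights[i])))
```

The model recovers the planted causal variant from correlations alone; the research lowerbase describes works at vastly larger scale with far richer models, but the "learn weights, read off which inputs matter" mechanic is similar in spirit.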

That's just one area. Google/Alphabet is head on it for the last ten years.

Industrial design by adversarial neural networks will change how planes, cars, devices, processors, everything, is created and manufactured. It is already giving optimal solutions beyond human engineering in aerospace.

There are too many benefits, so many that some fear a bubble is forming. Only that would stop this train for now.

11 Dec 2017 20:25 #57659 by Starforge
Replied by Starforge on topic Battle Angel Alita
Given the identity politics and the religious and moral objections of much of the developed Western world, Humanity 2.0 won't be happening here. Just like the guy who is going to attempt the first full head transplant, some third-world or less restrictive country will be the first to go full Gattaca.

Automation IS going to change everything, much as it has been doing for the last 100 years. There's a difference, however, between programmed "intelligence" that does repetitive work (even complex repetition with many variables) and true AI self-awareness.

One has to wonder, when something like a driverless truck makes its first mistake and costs a life, what the payout will be and how that will impact the profitability of continuing the process.

11 Dec 2017 23:24 - 11 Dec 2017 23:25 #57665 by kikass2014
Replied by kikass2014 on topic Battle Angel Alita

lowerbase wrote: The benefits in the short term are too great for scientists to question whether they should...


Using them to number-crunch, solve problems, etc. is fine. They will have a designated function, period.

It is when you get into the realm of giving an AI an "identity", self-awareness and emotions that you head into danger land, and I don't see what benefits doing so would bring.

Ofc automation, robots and such are making life better (though have they REALLY? But that is another debate). Why you would want to complicate this by making them more "human" is beyond me, tbh.

Peace.

/K
Last edit: 11 Dec 2017 23:25 by kikass2014.

12 Dec 2017 00:22 - 12 Dec 2017 00:24 #57666 by shadar
Replied by shadar on topic Battle Angel Alita

kikass2014 wrote:

lowerbase wrote: The benefits in the short term are too great for scientists to question whether they should...


Using them to number-crunch, solve problems, etc. is fine. They will have a designated function, period.

It is when you get into the realm of giving an AI an "identity", self-awareness and emotions that you head into danger land, and I don't see what benefits doing so would bring.

Ofc automation, robots and such are making life better (though have they REALLY? But that is another debate). Why you would want to complicate this by making them more "human" is beyond me, tbh.

Peace.

/K


You don't have to go all the way to self-awareness and emotion before AI can have a massive positive or negative effect on society. Everyone developing AI thinks of the positives, the things it can do better than we can today, but as it encroaches on both employment and decision making in affairs that affect humans, things get dicey.

Today, we accept the limitations of human decision making, whether it's a driving mistake that kills people (called an accident, even if the driver was distracted, sleepy, low-skilled, too old or too young, inexperienced, or driving a defective automobile) or politically driven decisions (which are ultimately often matters of emotion, gut, or belief). Machines could perhaps do the job of a cop or a soldier better, because they would be extremely fast-thinking and completely logical in finding the best outcome for society as modeled by a mathematical algorithm. Winners and losers are determined by math. When the calculated numbers and probabilities say that Option B is the way to go, and machines are inserted deeply enough into our systems to act on that, we might not like the answer. Even if it's completely logical.

Those AI's don't have to be self-aware, they simply have to learn the best logical ways/lowest cost ways to solve problems and then have the ability to apply the fixes.

Here's an example... in the last number of years, it's become apparent that many of the fires in CA during high winds are due to arcing between wildly swinging power lines and trees or other fixed objects. If a logical AI-based utility safety program looked at that issue, logic would say to shut down the power grid for days whenever high winds are forecast in areas susceptible to burns. While some people would be greatly inconvenienced, and a few might face life-threatening situations, the burns are far worse, so logic says to cut the power: fewer losses, likely fewer injuries and casualties, and dramatically lower costs to society than risking a burn. But we don't do that today.
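The "logic" in that example is just expected-cost arithmetic. A minimal sketch, with every number invented purely for illustration:

```python
def decide(p_fire, fire_cost, shutdown_cost):
    """Pick the option with the lower expected cost during a forecast
    high-wind event.  A deliberately crude model: it prices everything,
    including inconvenience and risk to life, in dollars."""
    expected = {
        "keep power on": p_fire * fire_cost,  # gamble on no ignition
        "shut power off": shutdown_cost,      # certain, bounded cost
    }
    return min(expected, key=expected.get)

# Hypothetical numbers: a 2% chance of a $10B wildfire versus a
# certain $50M cost of a multi-day regional shutdown.
print(decide(0.02, 10_000_000_000, 50_000_000))   # "shut power off"
print(decide(0.001, 10_000_000_000, 50_000_000))  # "keep power on"
```

The unease shadar describes is precisely that a machine would apply this arithmetic without sentiment, and then act on it.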

Would we accept an "intelligent", self-learning machine's decision to do that kind of thing? Would the legal system support it?

The same goes for building in flood plains, or whatever kind of human risk one can define. Do we solve our inability to correct errant or historically poor decision making by abdicating the responsibility to machines? You can't argue with a logical machine. You can't play politics or favorites. There is no room for corruption or special interests, or for religion or belief of any kind. Is this a way to "fix" all the problems in society: letting science and machines make the "right" decisions for us because we lack the ability, the guts, or the will, or are too self-interested, to do it ourselves?

For most of us, the answer is Hell NO.

But many millennials, and the generations that follow them, might be willing to, because that's how they live: in harmony with computational machines of all kinds and a distributed, soon-to-be-virtualized world. There would no longer be politics, only the logic of machine learning and the minimization of harm and maximization of societal benefit as determined by an incredibly fast-learning, intelligent machine. Will generations yet unborn, who will undoubtedly grow up in an integrated machine/human world, be willing to accept this?

And it isn't a problem of not having the right math or models today; an advanced AI will learn and create the models and the math. Like the AI that learned chess in four hours? How long would it take one to figure out how to run a country? A day? Would we want to live there once it had figured it out and we let it decide? Not everyone, that's for damn sure. But the majority? In fifty years, after we are all dead and gone?

I think it's very possible they will, given that it would let people eliminate what are viewed as the flawed decision-making processes of today. Ruthless logic and mathematics may trump politics someday. And the machines will already be embedded, ready to do that job.

Shadar
Last edit: 12 Dec 2017 00:24 by shadar.
