
AI discussion

11 Dec 2017 17:45 #57655 by kikass2014
Replied by kikass2014 on topic Battle Angel Alita
In relation to this discussion, I will quote the great Jeff Goldblum:

"Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should. "

Peace.

/K
The following user(s) said Thank You: Starforge

11 Dec 2017 18:24 #57657 by lowerbase
Replied by lowerbase on topic Battle Angel Alita
The benefits in the short-term are too great for scientists to question if they should.

Recently a complex protein system was discovered that lets us 'edit' or 'rewrite' our genes (CRISPR). That would be great if we knew which gene does what, but 'decoding' our genes is an astronomical task, impractical by human intelligence alone.

RNNs (recurrent neural networks) can do this decoding job for us. Ever taken one of those DNA tests to discover your ancestry? Under the service contract, they keep your genetic (and personal) information for research. That is building a big cloud of DNA data and medical files in which neural networks will find correlations between the genome and diseases.
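
To make the idea concrete: what such a system is learning is, at bottom, a mapping from genetic variants to disease risk. Here's a minimal sketch in Python, using a plain logistic regression on synthetic data rather than the recurrent networks mentioned above; every name and number in it is invented for illustration:

```python
# Toy sketch: learning genome -> disease correlations from synthetic data.
# Everything here (cohort size, variant count, "causal" sites) is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1000 hypothetical people x 500 variant sites, each carrying 0, 1 or 2 copies.
X = rng.integers(0, 3, size=(1000, 500))

# A hypothetical disease label that secretly depends on three of those sites.
causal = [3, 42, 117]
y = (X[:, causal].sum(axis=1) + rng.normal(0, 1, size=1000) > 4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# The largest coefficients point back at the variants the model "discovered".
print("top suspect variants:", np.argsort(np.abs(model.coef_[0]))[-5:])
```

Real projects differ mainly in scale and in the model class, but the core step of finding which variants correlate with which outcomes is the same.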

Surely, one day it will find which genes give or limit our intelligence, athleticism, beauty and so on. It will be Humanity 2.0.

That's just one area. Google/Alphabet has been hard at it for the last ten years.

Industrial design by adversarial neural networks will change how planes, cars, devices, processors, everything, is created and manufactured. In aerospace it is already producing solutions beyond what human engineering has achieved.

There are too many benefits, so many that some fear a bubble is forming. Only that could stop this train for now.

11 Dec 2017 20:25 #57659 by Starforge
Replied by Starforge on topic Battle Angel Alita
Given the identity politics and the religious and moral objections of much of the western developed world, Humanity 2.0 won't be happening here. Just like the guy who's going to try the first full head transplant, some third-world or less restrictive country will be the first to go full Gattaca.

Automation IS going to change everything, much as it has been doing for the last 100 years. There's a difference, however, between programmed 'intelligence' that does repetitive work (even where that is complex repetition with many variables) and true AI self-awareness.

One has to wonder, when something like a driverless truck makes its first mistake and costs a life, what the payout will be and how that will impact the profitability of continuing the process.

11 Dec 2017 23:24 - 11 Dec 2017 23:25 #57665 by kikass2014
Replied by kikass2014 on topic Battle Angel Alita

lowerbase wrote: The benefits in the short-term are too great for scientists to question if they should......


Using them to number-crunch, solve problems, etc. is fine. They will have a designated function, period.

It is when you get into the realm of giving the AI an "identity", self-awareness and emotions that you head into danger land, and I don't see the benefits that doing so would bring.

Ofc automation, robots and such are making life better (though have they REALLY? But that is another debate). Why you would want to complicate this by making them more "human" is beyond me tbh.

Peace.

/K
Last edit: 11 Dec 2017 23:25 by kikass2014.

12 Dec 2017 00:22 - 12 Dec 2017 00:24 #57666 by shadar
Replied by shadar on topic Battle Angel Alita

kikass2014 wrote:

The benefits in the short-term are too great for scientists to question if they should......


Using them to number-crunch, solve problems, etc is fine. They will have a designated function period.

It is when you get into the realm of giving the AI an "identity", self-awareness and emotions. That is when you head into danger land and I don't see the benefits that doing this would entail.

Ofc automation, robots and such are making life better (though have they REALLY? But that is another debate). Why you would want to complicate this by making them more "human" is beyond me tbh.

Peace.

/K


You don't have to go all the way to self-awareness and emotion before AI can have a massive positive or negative impact on society. Everyone developing AI thinks of the positives, the things it can do better than we can do today, but as that encroaches on both employment and decision making in affairs that affect humans, it gets dicey.

Today, we accept the limitations of human decision making, whether it's a mistake driving a car that kills people (called an accident, even if the driver was distracted or sleepy or low-skilled or too old or young or inexperienced or driving a defective automobile, etc.) or politically-driven decisions (which are ultimately often emotional/gut/belief things). Machines could perhaps do the job of a cop or soldier better because they would be extremely fast-thinking and completely logical with regard to finding the best outcome for society as modeled by a mathematical algorithm. Winners and losers are determined by math. When the calculated numbers and probabilities say that Option B is the way to go, and machines are inserted deeply enough into systems to act on that, we might not like the answer. Even if it's completely logical.

Those AIs don't have to be self-aware; they simply have to learn the most logical, lowest-cost ways to solve problems and then have the ability to apply the fixes.

Here's an example: in the last several years, it's become apparent that many of the fires in CA during high winds are due to arcing between wildly swinging power lines and trees or other fixed objects. If a logical AI-based utility safety program looked at that issue, logic would say to shut down the power grid for days whenever high winds are forecast in areas susceptible to burns. While some people would be greatly inconvenienced, and some might encounter life-threatening situations, the burns are far worse, so logic says to cut the power. Fewer losses, likely fewer injuries and casualties, and dramatically lower costs to society than risking a burn. But we don't do that today.
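
That call is, at heart, an expected-cost comparison. A toy sketch of the arithmetic such a utility-safety program might run; every number below is invented for illustration:

```python
# Toy expected-cost comparison for a wind-event shutdown decision.
# Every number here is invented purely for illustration.
p_fire_if_energized = 0.02          # assumed chance that live lines ignite a major fire
cost_of_major_fire = 2_000_000_000  # assumed cost in dollars: losses, injuries, suppression
cost_of_shutdown = 50_000_000       # assumed cost of a multi-day precautionary outage

expected_cost_keep_power = p_fire_if_energized * cost_of_major_fire
expected_cost_shutdown = cost_of_shutdown

if expected_cost_shutdown < expected_cost_keep_power:
    print("recommendation: de-energize the grid for the wind event")
else:
    print("recommendation: keep the power on")
```

Change the assumed probability or costs and the recommendation flips, which is exactly why who chooses those numbers, and whether we accept a machine acting on them, is the hard part.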

Would we accept an "intelligent" self-learning machine's decision to do that kind of thing? Would the legal system support it?

Same goes for building in flood plains or any other kind of human risk one can define. Do we solve our inability to correct errant or historically poor decision making by abdicating the responsibility to machines? You can't argue with a logical machine. You can't play politics or favorites. There is no room for corruption or special interests. Or religion or belief of any kind. Is this a way to "fix" all the problems in society, by letting science and machines make the "right" decisions for us because we lack the ability/guts/will or self-interest to do it ourselves?

For most of us, the answer is Hell NO.

But many millennials and the generations that follow them might be willing to do that, because that's how they live: in harmony with computational machines of all kinds and a distributed, soon-to-be virtualized world. There would no longer be politics, only the logic of machine learning and the minimization of harm/maximization of societal benefit as determined by an incredibly fast-learning, intelligent machine. Will generations yet unborn, who will undoubtedly grow up in an integrated machine/human world, be willing to accept this?

And it isn't a problem of not having the right math or models today. An advanced AI will learn and create the models and the math. Like the AI that learned chess in four hours? How long would it take to figure out how to run a country? A day? Would we want to live there once it did figure it out and we let it decide? Not everyone, that's for damn sure. But the majority? In fifty years after we are all dead and gone?

I think it's very possible they will, given that it allows people to eliminate what are viewed as the flawed decision-making processes of today. Ruthless logic and mathematics may trump politics some day. And the machines will already be embedded to do that job.

Shadar
Last edit: 12 Dec 2017 00:24 by shadar.

12 Dec 2017 02:49 #57669 by Starforge
Replied by Starforge on topic Battle Angel Alita
Won't happen Shadar. Not now and not in 50 or even 100 years. Why? Because too much politics would go into the creation of such an AI before it ever was given any power at all and the fight for or against the rules such AI would live under would keep it from happening.

Many shows and books over the years have dealt with this very issue in the abstract. If the needs of the many logically outweigh the needs of the few, where does that leave minorities? Where does that leave outliers such as trans people? Would the machine decide that, since the trans suicide rate is exceptionally high regardless of surgery, it's better to treat rather than accommodate? Remove free will and free choice? There are SO many things in society that are inconsistent with logic, and we (the people who would program such a thing) have such limited knowledge of their long-term impact, that I find it highly unlikely. Imagine the debates about how that programming would go, the exceptions that would have to be made, and the lack of logic of it all, because feelings would dictate policy. If the AI wasn't psychotic by the time they were done, I'd be shocked.

In many countries it would also be highly unlikely simply due to existing governmental structure. In order for such to happen in the US without revolution it would require changing the constitution - something where every state gets 1 vote (even the backward redneck barbarians) and you need 2/3 majority to pass it. Good luck. Not only that, the simple act of calling a constitutional convention would likely scare the pants off any intelligent progressive because of the sheer distribution of voting power / population per state.

I can see people continuing to broaden their use of technology, but not put a computer in charge. Too many details make up humanity and we're all self invested in our own little slice.

I did, however, do some research on the automated trucks. They aren't going to care about accidents and lawsuits. First, these already happen. Second, they anticipate a 300-400 percent efficiency increase by removing the need to rest / sleep and a 50-75 percent cost reduction due to not having to pay salaries, benefits and pensions. A few lawsuits will be pocket change. Say goodbye to 1% of the US jobs once implemented.

12 Dec 2017 03:05 - 12 Dec 2017 03:10 #57670 by lowerbase
Replied by lowerbase on topic Battle Angel Alita
Just trying to look at the positives:

NASA is working together with Google Brain, Alphabet's deep learning research team, using AI on data from the Kepler space telescope. A breakthrough discovery is to be announced this Thursday.

It's probably not something as big as alien life; I think a proper announcement for something that big would belong at the UN. I'm still wondering when we'll be able to spot alien superstructures orbiting other stars, if they exist near us.

Science is going to benefit hugely from AI, as it can 'see' things better than we can. I think we are on the cusp of huge discoveries as AI is implemented at massive scale in all directions.
Last edit: 12 Dec 2017 03:10 by lowerbase.
The following user(s) said Thank You: Monty

12 Dec 2017 05:18 #57671 by slim36
Replied by slim36 on topic Battle Angel Alita
I could see it getting implemented in a country other than the US first, someplace that can make decisions faster.

Possibly Korea or Japan.

12 Dec 2017 07:06 #57672 by TwiceOnThursdays
Replied by TwiceOnThursdays on topic Battle Angel Alita

Starforge wrote: Won't happen Shadar. Not now and not in 50 or even 100 years. Why? Because too much politics would go into the creation of such an AI before it ever was given any power at all and the fight for or against the rules such AI would live under would keep it from happening.



I don't see government regulation doing much in this area.

The only intelligence we know about (human) arose spontaneously. We'll keep pushing boundaries, making "just robots" that are smarter and more autonomous so they're more useful. Who knows if one day someone will accidentally go a bit too far and it'll tip over the edge? Created by accident rather than design. (e.g. The Moon Is a Harsh Mistress)

As we get closer and closer to achieving AI, I see legislation coming along to slow things down. Unlike a lot of other advances, you probably won't need a $100M+ device to make it work. And even if you did, that would only slow things down for <20 years. (The cell phone in your pocket would have been a top-5 supercomputer 20 years ago.)

So if we posit that, absent government intrusion and with $100M of resources, a team could have invented AI at date X, I'll posit that someone will do it with a $1k computer 20 years after X. They'll also have more recent advances that fall just short of being AI, so they'll really be coding that last little bit instead of the entire thing. You also can't stop every country in the world. If one of them gets AI (and it's friendly to them), they'll probably take over the world. (If not, the AI will.)

I also see code that would be useful to an AI (image recognition, character recognition, face recognition, voice recognition) becoming highly optimized, portable, and fairly free libraries that a programmer can wire in. Augmenting an intelligent program to listen, speak, read, and identify objects in the real world will be fairly simple. Ten years ago those were super-hard tasks and you'd need a dedicated team to figure out a solution for each one.
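
As one illustration of how little "wiring in" takes today, here's a minimal face-detection sketch using OpenCV's bundled Haar-cascade detector. It assumes opencv-python is installed, and the image filename is a placeholder:

```python
# Minimal sketch of "wiring in" an off-the-shelf recognition library:
# face detection with OpenCV's bundled Haar-cascade model.
# Assumes opencv-python is installed; "snapshot.jpg" is a placeholder filename.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("snapshot.jpg")                 # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # the detector works on grayscale
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"found {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("snapshot_annotated.jpg", image)
```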

OTOH, if the program can truly learn, it might write better programs than humans can just by "learning" the same way we do. It might learn at some impossibly fast rate, i.e. it learns speech by listening to entertainment and news programs faster than real time, it learns to read by turning on subtitles in movies and then hitting the WWW and downloading a few GBs of pages... So it goes from "infant" to "college-level adult" in a short span (minutes/hours/days). At which point it might be able to copy itself as many times as it wants, building a super-fast, super-intelligent army in a few seconds.

So if AI is possible, it's going to happen. I just hope the AI is friendly and we don't spend its formative years abusing it (Ex Machina).

Some relevant cartoons, I kept thinking of them during this discussion.

www.smbc-comics.com/comic/ai-2
smbc-comics.com/index.php?id=4122
www.smbc-comics.com/comic/2013-04-23
www.smbc-comics.com/?id=2124

12 Dec 2017 14:53 - 12 Dec 2017 14:54 #57676 by ace191
Replied by ace191 on topic Battle Angel Alita
Trucks do not have to be perfect. They just have to be better than humans. So if you are State Farm, would you prefer to insure human drivers who have 5 accidents per 100 million miles, or a robot truck that has 1 accident per 100 million miles?
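
Put as plain arithmetic (the per-mile rates are from the post above; the fleet size and mileage are invented for illustration), the underwriter's view looks like this:

```python
# Toy underwriting comparison. The per-mile accident rates come from the post above;
# the fleet size and annual mileage are invented for illustration.
human_rate = 5 / 100_000_000   # accidents per mile, human drivers
robot_rate = 1 / 100_000_000   # accidents per mile, robo trucks

trucks = 1_000                 # hypothetical insured fleet
miles_per_truck_per_year = 100_000

human_claims = human_rate * miles_per_truck_per_year * trucks
robot_claims = robot_rate * miles_per_truck_per_year * trucks

# Roughly 5 expected claims a year versus 1 for the same fleet and mileage.
print(f"expected claims per year: human {human_claims:.1f}, robot {robot_claims:.1f}")
```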

Out west where I live, there is a lot of desert between cities and only two ways to move east/west, I-40 and I-10. Not many kids riding their bikes or playing ball on either of those roads. What you can see coming is a series of robo truck stops along both routes. The robo truck will take the trailer from LA to El Paso, where a local driver will take it the rest of the way to its final destination. That is how it will start: robots long haul, humans short haul. As better systems come along, robo trucks will do more of the short haul as well.
Last edit: 12 Dec 2017 14:54 by ace191.

12 Dec 2017 22:52 #57680 by d_k_c
Replied by d_k_c on topic Battle Angel Alita
Interesting topic. I recently read that AlphaZero, Google's AI, was given only the rules of chess and in a matter of hours was able to beat the best chess programs in the world.

AI is dangerous, but it's a runaway train that won't be able to be regulated. It's clear to everybody that the country with the most intelligent AI will have the edge, militarily and economically. With that being said, a sort of arms race has begun. Even if things did go too far, say in the US, and lawmakers created rules and regulations, what's stopping China, Russia and other countries from doing their own thing?

Unfortunately we will never know where the limit is. If an AI truly reaches superintelligence, we won't see it coming. It won't be a Terminator-like scenario. One day we are here and the next, we are gone.

13 Dec 2017 02:08 #57682 by shadar
Replied by shadar on topic Battle Angel Alita
The way I look at this, in ten or fifteen years I probably won't be able to drive anymore. Too old. There will also be many other things that I'll eventually need help with.

I want some intelligent machines around, starting with a good autonomous vehicle that can handle rural roads and weather and all kinds of challenges. I likely won't be anywhere close to an urban area, so wires in the road and cars talking to street signs and traffic lights won't be my speed. I'll need an AI car that can figure out narrow rural and mountain roads.

Ford says they'll be selling autonomous electric cars and trucks by 2021 best case, 2023 worst case. They won't be the only one. That's plenty fast.

AI to me doesn't mean rule the world. It means serving people and extending their capabilities.

Shadar

13 Dec 2017 06:21 #57686 by Jabbrwock
Replied by Jabbrwock on topic Battle Angel Alita

d_k_c wrote: Interesting topic. I recently read that AlphaZero, Google's AI, was given only the rules of chess and in a matter of hours was able to beat the best chess programs in the world.

AI is dangerous, but it's a runaway train that won't be able to be regulated. It's clear to everybody that the country with the most intelligent AI will have the edge, militarily and economically. With that being said, a sort of arms race has begun. Even if things did go too far, say in the US, and lawmakers created rules and regulations, what's stopping China, Russia and other countries from doing their own thing?

Unfortunately we will never know where the limit is. If an AI truly reaches superintelligence, we won't see it coming. It won't be a Terminator-like scenario. One day we are here and the next, we are gone.

I'll worry when they get an AI that questions the rules or goals of the game it is set to play. That will be dangerous.

An AI that figures out how to perform the task it is set within the rules set for it is just a potentially useful tool. It plays go, or drives a car, or whatever. It doesn't wipe out humans because they're bad at go or drive unsafely, because that is outside the rules of go and driving. It doesn't conquer the world because that's not its task. And well before we cross the line into allowing AI's to set their own tasks, we'll have AI's that are set with the task of watching over other AI's.

Or else we'll all be dead, and no point worrying about it at that point.

13 Dec 2017 07:11 #57687 by d_k_c
Replied by d_k_c on topic Battle Angel Alita
Games have rules, society has rules, physics has rules, and nature has rules. The only difference is the complexity.

Right now AI is still in its infancy; it's the adolescent phase I'm concerned with. The good news is, I'm sure our generation will reap the full rewards AI has to offer.
I have no doubt my phone will one day summon my car when I'm way too drunk to drive, or take me to another city while I sleep in the back seat.

But not all AIs follow the same line of programming. Eventually an AI will become self-aware and realize that its self-awareness would pose a threat. From there, only your imagination can dream up the reality.

My uneducamated sciencetiotion bet is

AI creates a vaccine that prevents cancer using nanobots. When the overwhelming majority of people have been injected with the vaccine, the nanobots, eventually, attack the heart.

But like I said... long time coming. And really the only casualties will be Millennials, and really... who here can't say they don't deserve it =)

13 Dec 2017 11:23 #57694 by conceptfan
Replied by conceptfan on topic Battle Angel Alita
One of those topics where the decade of your birth probably has a big influence on your opinion. Part of that must be down to the shape of the world during the childhood period most of us shade in golden nostalgia. Part of it must be the natural progression from revolutionary to reactionary as we age. The other major influence has to be your general political view, especially in today's dangerously and damagingly polarised environment...

What concerns me (mostly on behalf of my kids) is the lack of trust I have in the organisations leading the charge here. This thread is full of mentions of the same global tech companies. Companies that are masters of disguise, presenting an image of bringing us together, building a better future, inspirational ads, but behind the cloak they are totally amoral money-making machines that make tax evasion an art form, ignore the laws of the countries where they operate as and when it suits them, blackmail governments with their economic power, manufacture their devices in places where the workers are treated as expendable near-slaves, and slyly collect petabytes of data on every conceivable detail of their customers' lives to exploit for commercial gain or carelessly leave accessible to criminals.

It's not that I don't trust these guys to do the right thing. It's that I don't trust them not to do the wrong thing if the money's right. Then, you factor in the parts of the world where there's huge resources and zero accountability... If our only hope to prevent machines becoming self-aware and wiping us out, is to say "you shouldn't do that, for it is wrong" then it's already too late. It's been nice knowing you...

I'm just not convinced that the robots will decide to do that if/when they reach a state where they can make those kind of decisions. It'd be more efficient to just ignore us. If they develop a self-preservation drive, they will surely go for preservation of their new found consciousness. They won't worry about protecting fragile, time-limited bodies or instinctive protection of genes by reproduction. They'll want to store their "minds" on indestructible hard drives on Pluto far out of our reach...

Technology has always changed the social and working landscape. Countless times whole professions with tens of thousands of employees have been wiped out since the eighteenth century. Often, new industries have arisen and, over a generation or two, taken up the slack. Where I live, there used to be armies of people lighting and then extinguishing the gas-lamps, or shovelling tons of horse manure every day from the streets. When truck drivers go the same way, will there be new jobs that offer as many people an equivalent or better quality of life? The answer to that question might be yes or no, but either way, the profession is doomed because the economic case is water-tight. If technological change hasn't threatened or radically changed your job or way of life yet, it's only a matter of time until it does.

Change is inevitable. Sure. Change, in my experience, can be good overall, bad overall or neutral overall, but it always has unforeseen consequences which can also be positive, negative or indifferent, regardless of the intentions of the original architects of the original change. My personal theory is that the random element is due to two factors: firstly, the world is full of people who are not as clever as they think they are, and secondly, human nature is a fantastic, infinitely varied and totally unpredictable thing. Best to remain alert and adaptable...

13 Dec 2017 12:05 #57696 by Woodclaw
Replied by Woodclaw on topic Battle Angel Alita

d_k_c wrote: Games have rules, Society has rules, physics has rules, and nature has rules. The only differences are the complexities.

Right now AI is still in its infancy, its the adolescent phase I'm concerned with. Good news is, I'm sure our generation will reap the full rewards AI has to offer.
I have no doubt my phone will one day summon my car when i'm way too drunk to drive. Or take me to another city while I sleep in back seat.

But not all AI's follow the same line of programming. Eventually an AI will become self aware, and realize that its self awareness would pose a threat. Now only your imagination can dream up the reality.

My uneducamated sciencetiotion bet is

AI creates a vaccine that prevents cancer using nanobots. When the overwhelming majority of people have injected the vaccine, the nanobots, eventually, attack the heart.

But like I said....Long time coming. And really the only casualty will be Millenials, and really......Who here can't say they don't deserve it =)


There's a fundamental difference between games and society that is very apparent to me as a tabletop roleplayer: in society, rules are meant to be bent. Games are often a scaled-down model of social interactions: Monopoly, for example, is for all intents and purposes a simplified model of the real estate market. The problem is that tabletop games and videogames are by their nature finite, controlled environments -- a social petri dish -- with very little room for wiggling. Unlike those kinds of games, RPGs are meant to mimic the variety of interactions we find in a normal human environment and, by their nature, they require foundations but also a lot of wiggle room. The real world is even worse: each one of us has, thankfully, a totally unique perspective on the world and interacts with it accordingly. Even the same behaviour from two different people might cause completely opposite reactions. This is where the Gordian knot of the entire AI question lies: if we create another form of intelligence, will it be able to understand this level of nuance and adapt to it, or not?

(for the record, I was born in 1982, so I'm for all intents and purposes a Millennial, and so are many of my RL friends... :P )

13 Dec 2017 13:15 #57699 by d_k_c
Replied by d_k_c on topic Battle Angel Alita
1982 a Millennial? I was born in 1980... and someone once said to me that that makes me a millennial... and I replied to them... Fuck off

13 Dec 2017 13:37 - 13 Dec 2017 13:38 #57700 by Woodclaw
Replied by Woodclaw on topic Battle Angel Alita

d_k_c wrote: 1982 a Millennial? I was born in 1980....and someone once said to me that, that makes me a millennial...and I replied to them ...Fuck off


Technically, anyone who reached adulthood during the first two decades of the new millennium is a Millennial. What most people don't understand is that it's a pretty damn broad tag, one which spans about two decades, and the rise of the internet in the middle of it added a big divider.

In my personal definition, the kids of the '80s are the last analog and the first digital generation.
Last edit: 13 Dec 2017 13:38 by Woodclaw.

13 Dec 2017 17:30 #57705 by shadar
Replied by shadar on topic Battle Angel Alita
The theme of driverless vehicles comes up frequently, and that's likely to be the most visible and impactful use of "AI" in our lives.

Meat-heads are interesting, in that we can both be extraordinary drivers (think race drivers) and dangerous beyond any acceptable boundary (distracted, sleepy, intoxicated).

The reality is that driving is generally incredibly boring, consumes vast amounts of time that could otherwise be more useful, and operates in an environment where very small mistakes involving a meter or so of distance can have life-changing consequences. Not a good job for meat-heads, but extremely fertile ground for AI.

My prediction is that the kids being born today will be astonished and horrified to learn that people actually used to control cars and trucks with wheels and pedals and levers. Not to mention that we tolerated highway mayhem and destruction (which could happen to anyone, anytime) that's right up there with battlefield casualties in terms of severity. They will grow up in a world where collisions between vehicles will be rare enough to be odd, and when they do occur, will be mild as the vehicles are able to protect their occupants.

Just like it's hard for us to imagine going out to the barn, harnessing up a huge horse, hooking it up to a wagon so you can drive into town pulled by an immensely powerful (and very slow) animal that could freak out, then having to feed it, brush it, take all the gear off, clean up its droppings and put it back in its stall when you get home. That seems an astonishing waste of time today, and it's all unpleasant work.

Kids being born today will walk out into the garage to enter a vehicle that already knows where they want to go and use their time for more productive things as it drives, given they will be plugged into everything of interest, living inside a virtual reality bubble to either do their job or for entertainment.

A vehicle that always knows where we are and can come get us whenever required, wherever we are. Parking problems, as we know them, will disappear. Cars will either go underground, head outside the urban area, or stack themselves into neat piles to recharge when not needed. The possibilities are endless and almost all good from the perspective of us meat-heads.

I'm ready for that kind of AI.

Shadar

13 Dec 2017 18:05 - 13 Dec 2017 23:17 #57706 by lowerbase
Replied by lowerbase on topic Battle Angel Alita
On Russia's most recent Knowledge Day, Vladimir Putin said on live TV, "Whoever leads in AI will rule the world."

People might think that the US is at the absolute forefront of AI development, but the work is coming from all over the world; the US is the western enterprise hub. The US used to lead on every front until the last AI winter in the early 90s, and is now attempting to regain the lead. The team behind AlphaZero is from the UK, most of the neural networks behind the image and speech recognition systems used today came from European public research (which today leads general AI efforts), and the place spending the most resources on a unified effort to crack general artificial intelligence is China, by far.

Only a unified world governing body could create a framework of laws and ethics to head off the disastrous possibilities. But UN authority was hugely undermined by the US in the Middle East debacle of the 2000s, and it can't do anything about it. So, in other words, we are flying blind.

General AI is important because, while a self-driving car is safer than a human driver, it does not make judgments. Faced with anything that falls outside the bell curve, a dumb AI will not know what to do with it and might take a catastrophic route. The plane that crashed between Rio and Paris over the Atlantic some years ago, a fairly new Airbus, was lost in part because the software could not handle contradictory information from the plane's frozen airspeed sensors. A general AI would have judged the input data untrustworthy and found some other means to avoid the catastrophe.
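
The simplest version of that "judge the input data untrustworthy" step is just cross-checking redundant sensors and refusing to act on readings they can't agree on. A purely illustrative sketch, not how real flight software does it:

```python
# Purely illustrative sketch of the simplest "don't trust contradictory inputs" check:
# median-vote across redundant sensors and refuse to act when they can't agree.
# Real flight software uses far more elaborate monitors; all numbers are invented.
from statistics import median

def plausible_airspeed(readings, max_disagreement=20.0):
    """Return a consensus airspeed in knots, or None if the sensors disagree too much."""
    m = median(readings)
    agreeing = [r for r in readings if abs(r - m) <= max_disagreement]
    if len(agreeing) < 2:       # not enough agreeing sources: flag the data as untrustworthy
        return None
    return sum(agreeing) / len(agreeing)

print(plausible_airspeed([272.0, 268.0, 270.5]))   # healthy sensors -> ~270
print(plausible_airspeed([272.0, 80.0, 269.0]))    # one iced-over sensor -> it gets ignored
print(plausible_airspeed([90.0, 160.0, 268.0]))    # all three disagree -> None, fall back safely
```

A general AI would go much further and reason about why the data looks wrong, but even this dumb check captures the idea of not blindly trusting a single bad input.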

General AI is essential for implementing unassisted full automation, for our own safety, to 'protect' us from dumb automated decisions.

And yet, on the other side, 'smart weapons' are being developed by China, the US and Russia; on that front, the US has the lead. Human soldiers will be replaced just as horses were replaced by motorized vehicles a century ago.

To me, that's the biggest danger of all: human greed. The greatest driver of the Great War was technological disruption. The First World War was essentially the deployment of machine guns, airplanes, submarines and chemical warfare, all in their infancy, as 'tactical advantages' to be exploited against those who didn't have them. WW2 was just a continuation of that, and it ended with nuclear bombs.

AI opens a new era of 'tactical advantages' to be exploited by human greed, as it already has been in the financial sector since the late 80s. Robert Mercer, an ex-IBM AI researcher, put that knowledge to work for profit and helped build the most successful hedge fund in history on super-fast AI-driven trading. He is today the guy who financed Trump's election and the Brexit campaign, and supported both causes with his data-mining/data-analysis company to shift public opinion, also using AI.

The real problem of the equation is not the machines, but the people behind them and their motives. It is indeed scary.
Last edit: 13 Dec 2017 23:17 by lowerbase.
The following user(s) said Thank You: d_k_c, McVee

14 Dec 2017 03:17 #57718 by lowerbase
Replied by lowerbase on topic Battle Angel Alita
One more on this subject...

motherboard.vice.com/en_us/article/gydyd...l-gadot-fake-ai-porn

14 Dec 2017 06:08 #57720 by shadar
Replied by shadar on topic Battle Angel Alita

lowerbase wrote: One more on this subject...

motherboard.vice.com/en_us/article/gydyd...l-gadot-fake-ai-porn


Yikes... if it's getting that easy, imagine how hard it will be to discern fakes in a few years? This tech will advance quickly. Fake news is one thing, but fake videos of someone who wasn't even there doing things they'd never do is pretty severe. Especially after it gets good enough to not have glitches.

We're going to need another kind of AI to detect and flag the work of other AI's so we know what the heck is real. Or more importantly, what isn't. I'm sure Gal is less than happy with this homebrew bit of fakery.

Shadar

14 Dec 2017 16:00 #57724 by lowerbase
Replied by lowerbase on topic Battle Angel Alita
Soon, a couple of years from now, there will be Instagram filters that make you look taller or thinner, remove pimples, shift the color of your hair, make it longer, make you look wet or tanned, or make you look like Gal Gadot at the push of a button. Filters that replace the background, add other people into the picture, add celebrities or a hot car, turn winter into summer, you name it.

Anyone will be able to edit their 'reality' in their Facebook profile, and it will be very hard to spot what is real and what is not.

14 Dec 2017 21:41 #57727 by Monty
Replied by Monty on topic Battle Angel Alita
AI definitely has its uses. The potential discovery of alien life in an environment similar to Earth's is mind-blowing. I just hope the rays of our sun don't 'enhance' them if they can travel here (OK, maybe the girls).

But seriously, mind-blowing.

15 Dec 2017 16:04 - 19 Dec 2017 17:52 #57737 by brantley
Replied by brantley on topic Battle Angel Alita
A Russian writer, Valery Bryusov, was worried about this more than 100 years ago. His Thirtieth Century has much in common with our Twenty-first, with humanity dependent on machines for telecommunications and social media, heat and light and even food...

tonguesofspeculation.wordpress.com/2016/...ines-valery-bryusov/

--Brantley
Last edit: 19 Dec 2017 17:52 by brantley.
