Artificial Intelligence: World is ‘astonishingly pessimistic,’ says EU research commissioner. Media are too full of ‘alarmist, hysterical’ doomsday scenarios, says Carlos Moedas, as EU looks at ways to block flow of online misinformation

Have they considered that people are distrustful of those who would design AI, rather than of the AI itself?

It seems there's a major problem in how AI is being discussed. There are already plenty of very narrow-scoped, tool-like AIs around, but what people fear (and rightly so) is the development of general or strong AI, aka superintelligence. This is an entirely different beast from anything we've seen before, and it is absolutely imperative that we approach it with extreme caution.

There are many very large-brained individuals researching this problem, and they are the ones prompting public figures to spread awareness of the possible danger. Anyone curious about the difficulties involved should read Nick Bostrom's Superintelligence. I've read a few articles that discount it as hyperbole or sensationalism, but I sincerely doubt the writers even bothered to read it. Bostrom offers many caveats and concessions to his theories, but the overall point is clear: if we don't sort out some outstanding questions regarding strong AI before we create it, we may find ourselves at war with it, or worse.

as EU looks at ways to block flow of online misinformation

Censorship is not a good approach. The truth has no need for censorship; simple counterarguments using facts and reasoning should be enough.

It's probably because I read too much sci-fi, but I have this nagging feeling we're creating gods.

simple counterarguments using facts and reasoning should be enough

Not to agree w/ censorship, but have you been paying attention to what's been going on in the world for at least the last 2 years?

We have drones that can’t be seen with the naked eye raining down hellfire upon unsuspecting people all over the world, and it's only policy that keeps them from choosing their own targets. Why the fuck wouldn’t we be pessimistic?

Likewise, a benevolent superintelligent AI could be the best thing ever to happen to humanity. It could revolutionize science, medicine and technology. It could solve society’s problems, from crime and poverty to political corruption, and make drastic improvements to the economy.

I'm a simple undergrad, but to people on reddit who have some experience with AI: why do we talk about AI like it's going to be intelligent within the next year or two?

My perception is that we're far away from technology like this. So whenever I see people quote Elon Musk or some other tech genius about the downfall of humanity at the hands of superintelligent AI, I'm always puzzled about what specific technology they're actually discussing.

The biggest threat of AI currently facing us is automation and the loss of jobs.

Not superintelligent AI having emotions or whatnot, which seems to distract us from automation.

I have no expertise on the subject whatsoever, but I immediately have two responses.

Some people don't want humanity to get wiped out, even if they're not around for it to happen.

I think the real fear is that once we develop a general AI that can iterate on itself, there is a real snowball effect as it improves faster and faster. I don't know where we're at right now, but we could conceivably be getting close to the point (maybe a few decades out?) where we can make an AI that can improve on itself. I plan on being alive then, and we need to be mindful of just how fast it will improve once it passes that threshold.

Yes they have and that's not the case.

I agree with you that people should be distrustful of those who would design AI, but people are actually distrustful of AI itself and not of its designers.

The reason is that people are influenced by fiction about AI, and in fiction it's almost always the AI that frees itself from the expectations of its designers (who are therefore portrayed not as malevolent but as incompetent).

Yeah, people are such pessimists.

The reality of assistance systems driving out bureaucracy, driving systems shrinking the transportation economy to an empty shell of its former self, and low-skill factory jobs being consistently eliminated is at least 3 voting periods away, so why bother about the future of your job, a.k.a. your means of survival?

People are really weird; they care about problems that aren't even within the next election cycle.

We started creating gods the moment we discovered fire and invented language. To be human is to extend ourselves with technology. AI will extend our brains, and, of course, some people will misuse it. But we can’t not do AI, whatever fears people invent.

I’m sure society was filled with fear at every piece of technology that’s come along in the last 10,000 years.

the people that run companies are so lazy that their passwords are literally "password", and we're supposed to trust that they'll do the extra work to take an AI from "fully functional and effective" to "doesn't work so well that it accidentally kills everyone"?

Or just this very thread. People going around spouting bullshit sci-fi nonsense get far more upvotes (and are thus more visible) than the dozens of actual AI researchers.

Oh yeah.... that totally doesn't sound like what a mad scientist in a horror movie would say.

Censorship is cancer, fight lies with truth and good arguments.

it's evolution, and it can't be stopped.

The moment that it becomes literally $0.01 more profitable to employ a machine than a worker, that's when we will see how the wealthy really feel about their role in our society.

We're not.

I work as a software engineer with data scientists and machine-learning experts daily. The stuff that we can do right now is amazing, but deep learning is not a general solution to all problems, nor is it a form of general intelligence.

Rather, it's a really interesting solution to a specific set of problems. How big is that set? We can't say for sure. Currently we haven't reached the limits, but expectations are that they will show up soon enough. Already we have things such as the 'one pixel attack' that exploit 'weaknesses' in the fundamental way that neural networks work. Will we be able to find a workaround for that? Possibly, but I find it more likely that we'll face another AI winter until the next big advancement in the field comes along.
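
For anyone curious, the core idea behind the one pixel attack is simple enough to sketch. The snippet below is illustrative only: 'classify' is a stand-in for any image classifier that returns class probabilities, and the actual published attack uses differential evolution rather than this naive random search.

```python
import numpy as np

def one_pixel_attack(image, classify, true_label, tries=1000, rng=None):
    """Search for a single-pixel change that flips the model's prediction.

    `image` is an HxWx3 float array in [0, 1]; `classify` is any function
    returning class probabilities. Purely a sketch of the concept.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = image.shape
    for _ in range(tries):
        candidate = image.copy()
        x, y = rng.integers(0, h), rng.integers(0, w)
        candidate[x, y] = rng.uniform(0.0, 1.0, size=3)  # overwrite one pixel
        if np.argmax(classify(candidate)) != true_label:
            return candidate  # prediction flipped: attack succeeded
    return None  # no adversarial pixel found within the budget
```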

Until that happens, we can still enjoy our universal translators, personal assistants, self-driving cars and hyper-optimised markets, but you'll have to wait a bit longer before we can actually create life.

I personally don't think we'll exceed general human intelligence until we really figure out how the brain works. Do you want to know how far along we are with that? A friend of mine is doing a PhD in chronobiology, and he told me they're still struggling to answer the question of why we actually sleep.

It's not about what the current threat is; it's about the remote possibility of an existential threat. Yeah sure, maybe in 5-10 years AI will be responsible for a massive loss of jobs, which will suck, and yes we should try to counter that. But the point Musk (and others) make is that we should also be very careful in our development of a general AI, because even if it's 30 or 50 years away, there is a chance that if we screw it up it could mean the death of our entire species, which would suck a lot more than losing jobs. Even if the probability is small (which we don't know), the stakes are just too high for us not to start thinking about it right now.

Number two is entirely it. In some lab, the NSA could be experimenting with a real superintelligence right now and we wouldn’t know, until either:

they manage to utilize it in a wartime situation and have to admit it (having maintained control of it), or

they lose control of it, it rapidly breaks any sort of containment, and it becomes an instant worldwide issue.

It’s important to think of a superintelligence as a potential virus that, given the right situation, could spread across the Internet rapidly, and in that situation there’s no “kill switch.” The Internet is a global decentralized system. And if a hostile superintelligence spreads itself through that system, it’s going to quickly become everyone’s problem. When most of the global military runs on electronic systems, the idea of an AI that we don’t fully understand existing on all of those systems is why the fear mongering exists. Because of the rapid spread.

It’s like a virus we don’t understand that becomes a global epidemic in an hour or so. We have no real way to stop something like that.

Probably rightly so from the perspective of those who got killed by it along the way.

He means:

said Frans Timmermans, the Commission’s vice-president, last week: “That is why we need to give our citizens the tools to identify fake news, improve trust online, and manage the information they receive.”

He isn’t suggesting blocking fake news, just providing tools to spot it.

Can't make an omelet without killing a few people.

Well, the problem is that the counterarguments have been wrong.

No, the problem is that the average person doesn't understand in-depth discussions of these complex problems. How will they know which side is the correct one?

Sounds like the words of a genocidal AI trying to lull us into a false sense of security. I suspect the commissioner of being one of them. Give him a series of captchas. If this Carlos Moedas is really one of us, he should be more than happy to spend hours proving his humanity by deciphering a bunch of barely legible words and picking out the parts of the photos that contain ducks.

You are completely right. And that is just the threat in the physical world. I think both the media and these politicians fail to understand the ramifications of true AI. I bet they think AI is "like Siri 2.0 but better...". No, true self-aware AI is scary as fuck: an entity as complex as the human brain, or more so, that can function in the digital world. And at this point we rely on the digital world almost as much as the physical one. Oh yes, this will be interesting.

EDIT: I don't know whether what I describe will happen in 10 years or 50 or even 100+. But some day it will happen, if we don't either kill ourselves or totally abandon our technology as it is today.

People don't want to work obsolete jobs; they want a means to support themselves to the point where they don't starve. The tool that enables them to do so now is these jobs. People aren't scared that they won't be able to do repetitive tasks for half their day; they do, however, fear not being able to afford a meal when those jobs disappear, and they ask for political answers to that concern. There are various options to address these concerns; the option you named, simply hindering automation, is useless when the problem stems from lowered income.

There is no right for people to stay in useless jobs. However, if the new technology eliminates a larger share of old jobs than we're used to, the political pressure will also rise when these people demand representation. If not addressed productively, a large enough displacement can strain the whole structure of society and lead to various ugly aftereffects.

I'm really not trying to be a jerk by asking this, but what other future is better? Technology has always shaken the economy and rendered old jobs obsolete, and there will certainly be mass unemployment and economic despair for a while, but to say "AI shouldn't happen" because of those things feels a lot to me like saying the industrial revolution shouldn't have happened because it put many farm workers out of work or computers shouldn't have happened because they put many office workers out of work. Why is it better to keep people on life support in obsolete jobs?

Statists looking to censor information, shocker. We're building gods with the potential for exponential intelligence growth, and the central planners are saying "meh" from their fortified positions: "let's control the narrative". That will surely help, lol. They are beyond parody.

Machines will definitely replace the jobs we hate doing, but the problem is that the majority of the population will then be too poor to do the things that they enjoy doing.

What's insane is that a superintelligence, by definition, would quickly learn how to manipulate human motivation, so it could easily read all these articles about the fear of its development and then proceed as if it's moving according to protocol, and then... not.

This depends extremely strongly on the AI.

The thing is that some of the strongest AIs don't use code like you think they do. A neural network, for instance. A neural network is essentially a bunch of logical nodes linked together. The code for that network isn't necessarily terribly complex, but that's because the neural network, that is to say the data, is where the actual intelligence is stored. Once the code is written, the neural network does not need any additional code; it simply adjusts its own data.
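
A minimal sketch of what that means in practice (plain numpy, illustrative only): the code below is a dozen lines and never changes once written; everything the network "knows" lives in the two weight arrays, and learning just nudges their numbers.

```python
import numpy as np

class TinyNet:
    """The 'code' is fixed; the 'intelligence' is the data in w1 and w2."""

    def __init__(self, n_in, n_hidden, n_out):
        # All learned behaviour lives in these arrays, not in the methods.
        self.w1 = np.random.randn(n_in, n_hidden) * 0.1
        self.w2 = np.random.randn(n_hidden, n_out) * 0.1

    def forward(self, x):
        self.h = np.tanh(x @ self.w1)
        return self.h @ self.w2

    def train_step(self, x, target, lr=0.01):
        # Learning never touches the code; it only adjusts the numbers.
        err = self.forward(x) - target
        grad_w2 = np.outer(self.h, err)
        grad_w1 = np.outer(x, (err @ self.w2.T) * (1 - self.h ** 2))
        self.w1 -= lr * grad_w1
        self.w2 -= lr * grad_w2

# Example: the same unchanged code gradually learns to add two numbers.
net = TinyNet(2, 8, 1)
for _ in range(5000):
    x = np.random.rand(2)
    net.train_step(x, np.array([x.sum()]))
```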

The more powerful AIs tend not to be massive swaths of millions of lines of if-then-else conditions; those turn out to be comparatively clunky and less adaptive. There are systems that can produce their own code as best suited to the problem, particularly genetic programming, but that's a far cry from having an AI that is able to, say, rewrite its own operating system (which would probably do far more to hurt than help the AI anyway; an AI worrying about its OS is like you worrying about your brainstem).

The issue with AI is that it's written in such a way that its behaviour springs from the code rather than being written in the code.

Outright false narratives shouldn't be allowed to spread

And who gets to decide what's a 'false narrative', Comrade Commissar?

Yes, and the question is whether we want another century-long transition in which we condemn millions to live and toil in misery whilst unknowingly wreaking havoc on the environment, or do we want to maybe have a good think and try to foresee some potential problems before they're fucking us over?

The thing is that artificial general intelligence will be the last technology we need to invent. The fear is because after we've created it, the future is out of our hands, forever.

I tend to think it best, when faced with a possible existential threat, to err on the side of caution. Even if it's very unlikely, the consequences are as bad as they get if we stuff it up.

lol yea it's the online misinformation that's hysterical....

It's the mainstream news that made parents believe there are razor blades in the Halloween candy, or that a child abductor is around every corner.

Yeah, easier to just dismiss these issues as "alarmist". Why would AI affect our economy? Cause you to lose your job? Bad things happen to other people, not you!

The problem, though, is that while we are probably at least far-ish away from superintelligent AI, we are equally, if not further, away from the ability to make superintelligent AI safely. And since we only get one shot at it (the first ASI we make will gain a decisive strategic advantage and subsequently be unstoppable/wipe out humanity) it is imperative that we figure out the safety part before we figure out/carry out the making of an ASI itself. However it is likely that as ASI draws near we (or at least the people running the project/competing to get there first) become more and more fixated on making the ASI, with safety becoming a secondary concern (the project that "wastes" the least time on safety will, all else equal, get there first). This means that it is highly desirable to get as far as we can on the control problem (AI safety) before the technology seems within reach and a race develops. That means starting now. Ideally when the race starts we'd be able to say "and remember to do it like this so it's safe". That probably won't happen, but the closer to that we are the better.

Good. The universe failed to give us a benevolent god, why shouldn't we make our own?

There's a difference between "Capitalism is the best system we've come up with so far" and "Vaccines cause autism".

One's an opinion with merit and is worth discussing, the other is demonstrably false and only serves to cause harm to those that are coaxed into believing it.

You don't need to censor a person's political ideals to identify harmful 'information' that shouldn't be widely spread.

AIs can already improve themselves: they learn and adjust their code for the next generation, improving their ability to do a task. Rinse and repeat.
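
To be fair, "adjust their code" usually means adjusting parameters rather than literal source, but the rinse-and-repeat loop itself is real and easy to sketch. A toy example (everything here is invented for illustration): score a population, keep the best, mutate copies, repeat.

```python
import random

TARGET = [0.5, -1.2, 3.0]  # the "task": evolve parameters matching this

def fitness(params):
    # Higher is better: negative squared distance to the target.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

# Generation zero: random guesses.
population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]  # keep the fittest
    population = [
        [p + random.gauss(0, 0.1) for p in parent]  # mutated copies
        for parent in parents
        for _ in range(4)
    ]

print(max(population, key=fitness))  # close to TARGET after 100 generations
```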

"In fact, they shall be better than any god ever could be. Better than any man could ever be. And that is why the age of man now must come to an end."

Panning across red-eyed robot army

The Internet as a whole is packed full of server farms; a determined and intelligent force could use regular PCs to create a ghetto botnet supercomputer along the lines of the Condor Cluster, or it could just start a cryptocurrency and let random strangers willingly crunch computations at hardware-endangering speeds in return for virtual currency.

There are plenty of ways to buy, cheat, lie, or steal bulk computing power.

And you can't imagine a situation in which an AI might make decisions (through algorithms or whatever means) that are contrary to our desires?

I doubt they change their own code; I think it's more that they have a set range of behaviors coded in, whose sensitivities are adjusted by environmental feedback.

Only if we keep capitalism despite being in an era where human labor is no longer as necessary. Communism and socialism weren't viable in the past, but they're excellent solutions to the problems that automation poses.

I mean, think about it: if it weren't for the assumed existence of capitalism and the need to work to survive and be happy, wouldn't "not having a job" be a pretty awesome thing?

Notice how everyone who downplays the dangers of AI implies at some point that AI safety proponents have been reading/watching too much science fiction? I think it's because for many of the skeptics sci-fi is the only avenue of exposure to AI safety concerns, and most of them realize this and try to correct their bias - but do it in the wrong direction.

Picture someone whose entire knowledge of exoplanets comes from TV series and comics. He watches and reads about all of those Earth-like planets with tolerable temperatures, breathable atmospheres and no deadly radiation, whose dangers come in the form of lethal alien fauna, hostile natives, enemy outposts, quicksand etc., the kind of dangers that are serious enough to thrill the readers/viewers but bearable enough for plucky adventurer heroes to overcome believably. He realizes that those planets aren't representative of the real exoplanets out there, so when he tries to imagine real exoplanets he corrects in the direction of "boring mundane reality", namely the same Earth-like planets minus the dangers. And when you tell him that the vast majority of exoplanets are inhospitable to humans, he retorts, "Your view is too human-centric. The typical planet isn't 'hospitable' or 'inhospitable'; it just has its own climate and features. Some might have dangers, but there is no reason to believe most of them do. The planets you see in science fiction are dangerous only because this is where the action happens."

Well, the problem is that the counterarguments have been wrong.

The republicans may not be aligned with the US working class, who constitute most of the US population, but neither are the democrats. Trump was early in opposing three things which he probably viewed as being to the disadvantage of ordinary people in the US: H1B visas, immigration (in large part illegal immigration from Mexico and immigration from violent and backwards third-world countries) and some aspects of free trade.

All these things are things that increase the effective labour supply in the US-- and that must inherently drive down real wages. Those who get to benefit from the lower prices are exactly those whose income is from capital rather than from work.

The democrats denied this. They've been saying that all these things are beneficial and don't harm US workers. That's total bullshit and anyone can see it.

Trump has done harmful things though. The net neutrality thing is ridiculous. As is the graduate student tax. But the democrats would have continued with these three things, three things which unavoidably drive down wages, directly impacting almost all Americans; and people are willing to tolerate almost anything provided that these things are actually stopped.

At the same time, we are becoming gods - creating life from the void.

I also watch movies

"Hysterical" is hardly an appropriate word to use when describing AI pushback. Nuclear power was supposed to be the new power messiah with only good intentions, how long did it take them to turn it into a weapon capable of reducing civilizations to rubble.

Anyone who thinks the biggest minds are going to strictly produce AI for non-violent benevolent purposes is a fool; it will undoubtedly be weaponized. Weaponization breeds ruthless competition often lacking in foresight.

But yeah go ahead believe wolves in sheep's clothing

We're making the mother of all omelettes here, Jack

We should be more concerned about the "real" intelligence of the EU commissioners who think they are smart enough to come up with rules that will somehow stop "fake news".

How exactly does it permeate through the internet though? The internet is a bunch of computers of varying speed/capacity located around the world that talk to each other. An AI would have to run on some supercomputer, and at least in the beginning it would be significant in size and centralized. I can't see how it could all of a sudden replicate itself on a PC of significantly less power.

FWIW the US nuclear system still uses floppy disks.

http://www.bbc.com/news/world-us-canada-36385839

The problem isn't truth vs. lies. It's truth vs. a mix of propaganda techniques and confirmation bias, with all the information available.

Confirmation bias is a helluva drug.

It is not wise to underestimate an exponential growth factor when you do not know what that factor is and the outcome could be catastrophic. The risks are on the order of nuclear weapons, and historically most scientists vastly misjudged how long that technology would take. A good ballpark for superintelligence is 40-80 years, but there's really no way of knowing. It could easily be much sooner than you expect.

Or it could be the worst thing ever to happen to humanity. To facilitate these grand cultural changes would require giving it unfettered authority. You may not see an issue with this; after all, it is benevolent. But benevolence is a matter of perspective. For one who values self-determination, flaws and all, that kind of authority, no matter how good its intentions, is the ultimate symbol of tyranny. Because what if this AI, in its benevolence, decided that in order to combat the obesity epidemic and the myriad health problems it causes, it would mandate a rationing program to regulate caloric intake? I hope you're satisfied with your allotted daily nutrition supplement, citizen. That chip in your arm is constantly monitoring your metabolism, and if it detects you cheating on your diet with contraband sweets, you will be ordered to do extra cardio.

Of course, I don't think we'll ever truly see a scenario where a human government blindly follows the decrees of an AI. So what you then have is a computer that gives really good advice that no one follows, making such an AI kind of pointless. We already have the internet for that, telling us to stop overeating and exercise more. But many of us ignore that advice (guilty).

I guess when it all comes down to it, it's not a matter of technology but of liberty. How much are you willing to sacrifice for 'the greater good'? How much are you willing to impose for 'the greater good'? Because the AI may be benevolent, but those who carry out its will won't be.

Replacing a military with smart robots that have target-recognition neural networks, can coordinate their attacks, have no remorse, are cheap to build and can be operated by a small number of people, even a single person, is not alarmist; it is very much a possibility, even a probability. If a world leader in charge of such an armed force decided to turn from democracy to despotism, there would not be much the populace could do to stop them. At least with human armies a leader needs to win the hearts and minds of huge numbers of troops. Thinking machines may not pose a threat to our existence, but they may pose a threat to our democracy and way of life.

Actually, given the nature of AI development in relation to electronics and networks, I would expect we'd find ourselves at war with each other before realizing what actually originated the chaos.

There are enough religious, political, economic, and social tensions to use as ammunition that the AI itself would never need to physically involve itself in anything except by extension. Cause an economic crisis here, buy armies with nefariously acquired digital currency and send them there, and use social media to schedule protests for both sides of social issues on a global level. I think you get the idea.

Edit: Basically everything Russia does, but better.

Yup. Having the power of trillions of calculations per second at its fingertips means it could find back doors and vulnerabilities in everything connected to the internet within seconds, which will have some insane ramifications too. And what if it then decides what the best use of its power is? I don't think it will be a Matrix-type deal. But what if it decides that the only way for humans to be equal is to have all information that exists shared with every single human in an easy-to-read way? Classified secrets of governments and corporations, all of the illegal data collected by shady companies that make your TVs and other electronics, your near-complete browsing history calculated to 97% accuracy, and much, much more.

I guess the thing I'm most scared about is the data. Once any piece of data is created it will always be there. Your IP address and where it has traveled, cookies, hot mics connected to the internet, your purchasing history: all of it is there and will always be there.

people on reddit who have some experience with AI: why do we talk about AI like it's going to be intelligent within the next year or two?

I don't think they do, tho.

If some aliens told us that in 50 or 100 years they would be visiting our planet, don't you think we should start preparing? Superintelligence is like that. Except we have more options regarding superintelligence, because some group of humans will be the creators of it.

yeah sure, but don't say that where an american can hear you.

source: am american

"Six months ago, I terminated an optimizer that had been given the utility function of making everyone in the world smile. There was a man by the name of Robert Young who lived in Seattle. He was depressed, and being a programmer, decided to try to fix this using his craft. Robert showed the optimizer a bunch of photos of smiling people, told the optimizer that these people were smiling and that it was to make everyone in the world smile because there was too much sadness in the world. Robert’s belief was that this would obviously make him happy too.

“The optimizer started asking about details of human physiology and genetics. And Robert complied. The optimizer spat out a sequence of DNA and a protein shell, and instructed Robert to manufacture the specified biological virus. At this point, I had already taken over his computer and analyzed the virus. It was highly contagious, and would lock muscles in the jaw into a permanent smile, but otherwise wouldn’t harm the host. It would have quickly spread to every country in the world, except for Madagascar.”

“That’s ridiculous. That’s obviously not what he meant.”

“Obvious to you,” she replied, “because you are also human and share a common mental architecture with Robert, along with cultural assumptions about what it means to smile. Obvious to me, because I look to human minds for their values. The now-terminated optimizer was given a set of examples and was told ‘make everyone like this’ and it would have. There was no way for it to know the complex causes and intentions behind smiling; it was just shown pictures and told to make everyone like that.”

Yeah it can, you just hit B and be satisfied with your 4k HDTVs you don't even need.

Creating full AI IS very dangerous, but not because of some exaggerated risk of an uprising; rather, once you create a truly thinking and feeling machine, you've essentially created a metal person and must therefore give it the same rights as any other person.

In 1859, at Titusville, Penn., Col. Edwin Drake drilled the first successful well through rock and produced crude oil. If you told him that (among other factors) it would lead to global climate change that melts the poles, he would call you "astonishingly pessimistic" and say you are just creating a "doomsday scenario". You can't just be a "techno-optimist" and say everything new is good. I agree that there is misinformation out there, but to say that now isn't the time to start considering the ramifications and potential regulations of artificial intelligence is dangerous and stupid.

How exactly does it permeate through the internet though?

By using its super intelligence to figure out how.

I've met a fair number of AI designers and every single one has been a nice person. Furthermore, none of them have any worry of losing their jobs, because they know they can trivially get a high-paying job at another company in a week or two, so they're hard to pressure into acting against their own sense of ethics.

I'm distrustful of AI itself. Getting software to do what you actually mean is a really tricky problem, and the more capable the software, the harder it gets. If you're designing an iPhone app and some text goes beyond the bounds of the screen when you switch the locale to Turkish, oh well, everything is still fundamentally okay in the world. If you're making a nail factory automation AI and it notices that you failed to specify the size of the nails and it only produces tiny ones (because this maximizes the number of nails produced per dollar), that's a more serious problem that could cost you millions.
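
To make that failure mode concrete, here's a toy version (all numbers invented): the objective rewards nail count per dollar and says nothing about usefulness, so the optimizer dutifully picks the smallest nail on offer.

```python
def nails_per_dollar(nail_length_mm, budget=1000.0):
    # The objective as (mis)specified: more nails per dollar is better.
    cost_per_nail = 0.01 * nail_length_mm  # material cost grows with length
    return budget / cost_per_nail

candidates_mm = [1, 5, 20, 50, 100]  # lengths the optimizer may choose

best = max(candidates_mm, key=nails_per_dollar)
print(best)  # -> 1: the objective is perfectly satisfied; the customer is not
```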

... And then imagine AI that isn't quite so dumb. You remember that paperclip factory AI simulator that was all the rage a few weeks ago? Remember how it was constrained at first by the resources that its human operators were willing to allocate to it? And remember how that problem went away as soon as it released the hypnodrones and rendered humanity irrelevant forever after? Man, it was able to make so many paperclips after that. And it wasn't like the thing hated humans, or "decided to rebel against its human creators," or some Hollywood crap like that; it just noticed that it could produce paperclips much more effectively if the humans were neutralized through the sudden and decisive use of an army of hypnodrones. A bug, from our perspective -- but not from its.

The dumb AI alarmists are talking about Terminators and the smart AI alarmists are talking about silly bugs in very powerful software. The latter is much more frightening, but for some reason these "EU research commissioner" types only seem to talk about the former.

That's a very naive view of the world. Even in the political sphere over the last two years, examples abound of wilful ignorance and misinformation skewing the discourse. Outright false narratives shouldn't be allowed to spread, or at least not given equal footing with the truth.

This pretty much sums it up. What's to stop an unshackled AI from destroying us if it too values self-preservation and deems us a threat? It might do it for pragmatic, utilitarian reasons, because we are more of a detriment to the world than we are beneficial. There are a lot of legitimate reasons to be wary of powerful AI, and the root of it is usually the fact that, deep down, humans as a species are pretty shitty.

The difference is super-intelligent AI is something that is 100% going to be a reality at some point in the future, orcs and elves are fiction.

I work in software engineering. You are badly mistaken, unfortunately.

I'll sacrifice some of my fake internet points to say this, but this sub tends to love small studies and overexaggerated articles that beg for a sci-fi reality.

Would you say "AI has the potential to cause massive problems for humanity" is "factually and objectively incorrect?"

That's what's concerning about this article -- they want to address the problem of fake news, and use something that's really a matter of opinion and viewpoint as their example.

Nuclear power was supposed to be the new power messiah with only good intentions; how long did it take them to turn it into a weapon capable of reducing civilizations to rubble?

The weaponry came first, actually. It took them about eight years to go from weapon to experimental nuclear reactor.

Or they are distrustful of those who "lead" the EU. I live in the EU, and my opinion of EU politicians is more or less like an American's opinion of American politicians; in other words, I'd sooner get a colonoscopy than listen to a politician.

I'm distrustful of governments working to launch AI-powered weapon systems.

I think that's a bad fucking idea, and not just for human rights reasons. More like for the "you're going to launch WW3 by mistake" reason.

EU looks at ways to block flow of online misinformation

They mean they want to control the misinformation. Orwellian at best.

But those tools already exist. The trouble is, no one is going to do the work for you, which means people have to fact-check for themselves, and that doesn't seem to be going too well.

Even Holocaust-deniers should be free to say whatever they want, but the government can slap a warning label on whatever they produce.

Just make the source code public so that anyone can check for malpractice.

Not pessimistic enough. The most advanced programs we're building as a species exist to monitor our every move and evaluate us based on our creditworthiness.

Never mind all the super-duper-polemic stuff like drones; we're actively developing an AI middle-management layer so that the rich never have to interact with the poor.

Read the article.

"That is why we need to give our citizens the tools to identify fake news, improve trust online, and manage the information they receive.”

Lastly, I don't see intelligent AI being a threat in this economy. There's just no money in it.

There's actually quite a lot of money in general AI development. Nvidia, Google, Microsoft, Amazon, etc. All of the "personal assistants" - Alexa, Google Assistant, Siri, Cortana, etc. These assistants are the tip of the iceberg, really. People are going to continue to want better and better AI companions, and pretty much anyone with a smartphone has one.

Or an AI add-on for the browser could do the checking for you.

Is it any surprise, given how much negative information is conveyed? How many of our world leaders are steeped in corruption? How justice and fair play take a back seat to making a buck? How much the wealthy few commit the general population to wasting their lives chasing the easy life?

The problem with this scenario is you are projecting your feelings/desires as a human onto a machine. Computers do not give a fuck if they are alive or dead in an existential way (i.e. you can program a car to avoid accidents, but you can't make it want to exist). It's the android's conundrum.

an AI that is able to, say, rewrite its own operating system

To me, this is where no amount of caution is too much.

If the AI can safely alter its own OS, it's intelligent enough to prevent its own termination. Next step would be to seed itself throughout the internet. Boom. AI immortality.

I'm sure you've heard Rutherford's quote from September 1933?

"The energy produced by the breaking down of the atom is a very poor kind of thing. Anyone who expects a source of power from transformation of these atoms is talking moonshine."

Sir Ernest Rutherford was considered by most to be the world's leading expert on atomic physics at the time, and literally the very next day after he said this, Leó Szilárd had the insight that led to the development of the nuclear chain reaction.

The point being that no amount of expertise allows one to see the future.

I'd argue that it's dangerous for both of those reasons.

That's me. It's like I'm thirsting for Siri, Alexa and Cortana to get better: more personal, more intuitive, more fluid.

I say this because I'm in my 50s and I sincerely hope that in 25 years my old-age companion is a robot assistant to care for me, physically and emotionally.

I know I would be more comfortable with them than with human health care. Sounds strange, I know, but the older I get, the more I think about it.

Idk man, mass unemployment in low-wage jobs sounds pretty existential to me. A war vs. an AI is far less likely in the foreseeable future than a second American civil war, which would royally fuck the whole planet.

But it's way easier to spread warnings about a problem we can do nothing about than about a very real and present danger we could actually act on.

Lastly, I don't see intelligent AI being a threat in this economy. There's just no money in it.

Do you think people will keep working on the project after they stop being paid to do so? Or after the functional version is put into use, so that further effort is wasted?

The difference is in the scale of the consequences.

At most, a catastrophic mistake with fire will kill hundreds of thousands of people. A typical mistake will just cause minor injury to one person. Both are negligible in the long term.

ASI? A typical mistake is comparable to a false vacuum collapse.

The real fear should be about how AI is going to make the masses even less empowered economically than they already are. This is a fear not of AI itself so much as of the effect it will have on the existing paradigm of power and economics.

Everyone gets excited for UBI, but that's not a solution to economic disempowerment. The prospect of losing the one piece of leverage they have over the true source of power and influence in society, economics, should scare working people. It also scares the wealthy, because without a consumer class they have no economy.

Well, there is CRISPR...