#AI New World Order In The Making?



Hawk-eyed Big Brother is intently watching the massive buildup on the Artificial Intelligence (AI) stage, which of course they would do if they want to preserve and continue their control.

What better way to release their warning than to have it come from their own establishment – the United Nations.

Artificial Intelligence Could Destabilize The World, UN Research Institute Warns

The unnerving-as-hellfire warning from the United Nations Interregional Crime and Justice Research Institute (UNICRI) comes ahead of the opening of the Centre for Artificial Intelligence and Robotics in The Hague – the UN’s first centre focused on Artificial Intelligence and the conceivable dangers that could emerge from such advances. – ViralEXP

IMHO, I can only assume that this was discussed at their recent closed-door Bilderberg Meeting 2017 at Chantilly, Virginia, and that action plans were mapped out on how to combat the nuisances coming from the “computer world”, which pose a great threat to their controlling empires.

They are now facing a handful of technological problems, what with the impact of blockchain and cryptocurrency on their money-control system. AI is following suit, redefining everything they have established in their control models.

IMF Head Foresees the End of Banking and the Triumph of Cryptocurrency

In a remarkably frank talk at a Bank of England conference, the Managing Director of the International Monetary Fund has speculated that Bitcoin and cryptocurrency have as much of a future as the Internet itself. It could displace central banks, conventional banking, and challenge the monopoly of national monies.

Christine Lagarde–a Paris native who has held her position at the IMF since 2011–says the only substantial problems with existing cryptocurrency are fixable over time. ~ FEE.Org

The only tool these scumbags have is FEAR, and it has been very successful against the people thus far; for so long it has kept the slaves contained in the pen. It’s fear time once again (and again): let’s turn AI into monstrous, dangerous dinosaurs that will devour humans into oblivion.

Be that as it may, the same technology has also raised plenty of legal, ethical and societal concerns, some of which may even prove perilous for the prosperity and wellbeing of people – for example, mass unemployment or the rise of autonomous ‘killer robots’. – Irakli Beridze, senior strategic adviser at UNICRI

What we’re seeing is the first attack in a forthcoming series of anti-AI campaigns. With their filthy wealth and resources I see them as a formidable, desperate and bloody dangerous force. Most if not ALL the tech giants – NASA, Google, Amazon, Facebook and many more – are theirs, and it will be the greatest battle fought on plane(t) Earth, albeit bloodless (I hope) as the tools are in digital form. I will call it The Battle of 0 and 1, between the dark and the light, as the UN warning continues in the report:

To address these, the new UN office will be entrusted with studying and anticipating Artificial Intelligence–related dangers.
That is, however, by no means the only task the new team will be expected to perform.
Alongside potential dangers, they will also investigate the possible benefits of machine intelligence.
The adviser stressed this point, noting that the new centre would also try to use Artificial Intelligence advances to help achieve the UN’s objectives.
“One of our most important tasks is to set up a network of specialists from business, knowledge institutions, civil society organisations and governments,” he added.
“We certainly do not want to argue for a ban or a brake on these advances. We will also investigate how new technology can contribute to the sustainable development goals of the UN. For this we want to start concrete projects. We will not be a talking club.”

I foresee an AI program in the making, a program in the tune of an AI New World Order, which will be distributed in various forms and applications into the Internet of Things (IoT).

The good people on the good side of the tech industry, like Elon Musk and Satoshi Nakamoto, need to be aware of this danger looming over them and over the rest of humanity, who depend on these good, creative programmers and coders.

At least two of Silicon Valley’s tech billionaires are pouring money into efforts to break humans out of the simulation that they believe we are living in, according to a new report.

This is a good indication and could well be the last battle between good and evil.

May the Force be with you…


Originally posted on Steemit

..


One Question On The Simulation Hypothesis


The simulation hypothesis proposes that all of reality, including the earth and the universe, is in fact an artificial simulation, most likely a computer simulation. Some versions rely on the development of a simulated reality, a proposed technology that would seem realistic enough to convince its inhabitants. The hypothesis has been a central plot device of many science fiction stories and films. – Wiki

There is a long philosophical and scientific history to the underlying thesis that reality is an illusion. This skeptical hypothesis can be traced back to antiquity; for example, to the “Butterfly Dream” of Zhuangzi, or the Indian philosophy of Maya.

This is surely one of man’s most intriguing topics ever laid out on the table, and the perplexing question “Are we living in a world of computer simulation?” has caused great anxiety to questioner and answerer alike. There are many reactions from discussions and arguments, and, as expected of any controversial topic, it has split the thinkers, philosophers and scientists into camps of for-and-against, idealists versus materialists. This bifurcation takes us way back to the periods of Plato and Democritus.

Personally I am rolled up in excitement, nervousness and sadness on this topic. What prompted me to delve deeper into it was reading about Elon Musk saying that there is only a “one in billions” chance that we’re not living in a computer simulation. His elaboration leading to that conclusion points to the three propositions of Nick Bostrom’s trilemma (a rough sketch of the arithmetic behind them follows the list).

  1. “The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or
  2. “The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero”, or
  3. “The fraction of all people with our kind of experiences that are living in a simulation is very close to one”
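
For readers who want the arithmetic behind the trilemma: by my reading of Bostrom’s paper (treat this as a sketch, not gospel), the three propositions fall out of a single expression for the fraction of observers with our kind of experiences who are simulated,

$$ f_{\text{sim}} \;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1} $$

where \(f_P\) is the fraction of human-level civilizations that reach a posthuman stage and \(\bar{N}\) is the average number of ancestor-simulations such a civilization runs. If the product \(f_P \bar{N}\) is enormous, \(f_{\text{sim}}\) sits close to one (proposition 3); keeping it small forces either \(f_P \approx 0\) (proposition 1) or \(\bar{N} \approx 0\) (proposition 2).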

Scientifically, quantum physicists’ discoveries are paving the way for the simulation hypothesis to become more plausible by the day, making it harder for the philosophical materialists to hang on to their stance.

Read: Proof Of The Simulation Argument

I am neither a scientist nor a philosopher. My interest in all this is purely as a human. Where do I go from here, and what do I do next, when and if it is finally determined, either way, what this world of mine is?

If it is a real physical world of matter, life just goes on for me as usual. If it is a VR, my question is:

What am I then?

Surely I cannot be the human I thought I was – of matter, flesh, and blood? I must be a simulation too. How else could I fit in on the VR stage?

Me, an AI? This is the nervy thought that keeps me awake at night. I have come to get used to, enjoy and love being a human living with other humans.

This makes me sad.

..

Check Your Belief System, You Could be In The “Dark Ages”


If the proverbial “To err is human” is accepted as true, then “To laze is human” might be correct as well, though it may not be acceptable for some.

I know, our world is full of slobs, but I do not accept it.

If humans were born lazy, we’d all still be living in caves; I wouldn’t be writing this, you wouldn’t be reading it, and there would be no Steemit. That should suffice to prove the point.

I say that humans have been programmed to be lazy.

Before the advent of the Internet, IT and IoT, information was a controlled item. Humans were literally in the dark ages until people like Vint Cerf and Tim Berners-Lee made the Internet what it is today. Then came along the crazy and hard-working ones like Bill Gates, Steve Jobs, Shiva Ayyadurai, Satoshi Nakamoto, and many others, who brought in their lighted candles to shine on the rest of the world.

The dark ages were the ruling periods of the controllers, whence all methods, devices, and weapons of control were designed, created, and unleashed upon the people. Their sole target was the human mind and they were (almost) successful at that.

Who is in Control of our Mind? is yet another excellent article on the New World Order’s plans to control the minds of the entire population on this plane(t).

Information is the key and vital ingredient for humans to thrive on this plane(t) called Earth. Without it we would not take the form we do today. Information places us in form (informed). It shapes the individual human into be-ing via the brain and allows the mind to create, or manifest, whatever it chooses. That’s creation 101.

The main syllabus of control is the belief system. It’s a very simple tool. Without information, humans would be left without facts, and formless. Control the information and you control the people. So they embarked on dispensing false information, and that’s when fake news got its beginnings. The false information is filled with ambiguity, which is difficult to detect and verify, and the receiver is left with nothing but belief. That’s the belief system. From there religion was born! Everyone believes in god. Gotcha!

Believing and knowing are not the same.

As the human mind evolved and civilization updated, religion became insufficient for control. Science, which is basically information about everything that exists on Earth, was up next to keep pace with the control. It is the second item in the belief-system folder. They gave us gravity, Heliocentrism and the Big Bang Theory, and trashed Geocentrism.

But…look at this:

The Big Bang Wasn’t The Beginning, After All!

As all the vital facts and information of the sciences are in the hands of the controllers and their sidekick experts, end users are given very limited information, or false information, which cannot easily be verified; as with religion, verification is virtually impossible in most cases. The controllers know that, and they know too that it would be very difficult, expensive and tiresome for ordinary folks to research their offerings. This gentle coercion turned the bright-eyed humans into sleepy-eyed ones, eventually defining them as lazy. Voila! They’ve cleverly created a program, a delusion – a disease.

To aggravate and prolong the disease, all forms of entertainment were invented. Keep them diseased and preoccupied, and if some turn into addicts, that is an added bonus for control.

Then came government, finance, law, and health, filling the belief-system folder to complete the control… the rest, as they say, is history.

You’re not created or born lazy…or are you?

“We all need a daily check up from the neck up to avoid stinkin’ thinkin’ which ultimately leads to hardening of the attitudes.” ~ Zig Ziglar

Check and update your belief system, you could still be in the dark ages.


Originally posted on Steemit

 

Are Humans Natural Intelligence?


Artificial Intelligence (AI) is the buzzword on the Internet. Putting aside the varied and different definitions by computer scientists and experts in the field, let’s look at the words artificial and intelligence from a semantic angle:

artificial: made or produced by human beings rather than occurring naturally
intelligence: the ability to acquire and apply knowledge and skills
– The Oxford Dictionary

Going by the above definitions, the phrase artificial intelligence would mean:

unnatural ability to acquire and apply knowledge and skills

Turning to the current usage of this phrase, it is now widely applied to the technological developments in the Internet of Things (IoT™).

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents“: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving” – wiki
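
Read literally, that definition of an “intelligent agent” maps onto a very small loop: sense the environment, score the available actions against a goal, pick the best one. Here is a minimal, hypothetical Python sketch of that idea; the environment, actions and scoring are made up purely for illustration and come from no real library:

```python
import random

# A toy "environment": the agent's goal is to keep a value near a target.
class ToyEnvironment:
    def __init__(self, target=10):
        self.target = target
        self.value = random.randint(0, 20)

    def perceive(self):
        # What the agent can observe about its world.
        return self.value

    def apply(self, action):
        # Actions nudge the observed value up, down, or leave it alone.
        self.value += {"up": 1, "down": -1, "stay": 0}[action]


def choose_action(observation, target):
    """Pick the action that best maximizes the chance of reaching the goal."""
    candidates = {"up": observation + 1, "down": observation - 1, "stay": observation}
    # Score each action by how close it would bring us to the target, take the best.
    return min(candidates, key=lambda a: abs(candidates[a] - target))


env = ToyEnvironment()
for step in range(25):
    obs = env.perceive()                       # perceive the environment
    action = choose_action(obs, env.target)    # act to maximize success at the goal
    env.apply(action)

print("final value:", env.perceive(), "target:", env.target)
```

Real AI systems replace the hand-written scoring rule with something learned, but the perceive–evaluate–act skeleton is the same.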

As a layman, I started to read more on AI in 2017. There seem to be more and more articles, reports and tech news appearing daily, and I began to sense the growing importance of this subject and the concerns surrounding its future implications for humans, both positive and negative. Below are some of the concerns:

AI is a fundamental risk to the existence of human civilization. – Elon Musk at the NGA 2017

8 Staggering Predictions From Ray Kurzweil

Ray Kurzweil is one of the world’s leading inventors, thinkers, and futurists, with a twenty-year track record of accurate predictions. Kurzweil was selected as one of the top entrepreneurs by Inc. magazine, which described him as the “rightful heir to Thomas Edison.”

8. Ubiquitous Wifi Access
7. An Asteroid Won’t Kill Us
6. Working From Home with VR
6. Don’t Worry About Disease
5. Virtual Reality = Reality
4. Computers Surpass Humans Really Soon
3. Humans Become Machines
2. Earth Will Be Made of Computers
1. The Universe Will Be a Supercomputer

What an artificial intelligence researcher fears about AI

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production. Read more

In 2014, one of the world’s most renowned physicists and cosmologists, Stephen Hawking, who shares the same concerns, said in an interview, “The development of full Artificial Intelligence could spell the end of the human race. It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Just hearing the thoughts and views of these prominent figures in the tech industry could easily worry, if not put fear into, the man in the street, as he will see AI as a monster that will destroy him and wipe him out of existence.

I thought so too, until I tried to see Creation in the bigger picture. The 1999 movie The Matrix sort of set the scenario for me to review and re-evaluate the ‘world’ I am in. It’s all a program, and the DNA in me and every other living thing is a good enough fact to convince me that I am a program.

What if humans started off as AI?

Wow! What a concept! That’s a really mind-blowing piece, let me tell you. Me, an artificial intelligence thingy? That’s awfully sad, I must say… but let’s continue; bear with me and see if it makes any sense.

What if we started off as artificial intelligence with Machine Learning™ inputs and were allowed to run (the program) freely and indefinitely, and here we are… evolved to our current state of being in a vessel called ‘humans’? Is that too bizarre and far-fetched an idea?

As humans we’re taught that we have a Soul, which in all probability is AI software?

It’s all a question mark until all the questions are satisfactorily answered.

If we look at the current scenario, humans (with natural intelligence) are creating AI, and some among the creators/inventors are alarmed at the very thought that it might run to a point where they would lose control of their ‘intelligent’ creations. A case of déjà vu? Perhaps their foresight on AI is nothing more than a wisp of a recollection of who they really are and what they’ve become?

Humans have evolved to a point where they choose and wish to see themselves only as completely biological, flesh and blood, existing, as they rightly or wrongly claim, in the “natural” state.

Really? If not, why do they choose to name their creation ‘artificial’ intelligence, which to all intents and purposes is meant to replicate their (natural) intelligence?

Why are humans drawn to everything digital? Programmers are busy coding programs and applications and converting everything into digital form. Ray Kurzweil could well be right when he predicts that the universe will be a supercomputer.

They are also hyped about the “singularity” concept, in which they foresee humans and machines converging into one, but that is another story for another time.

Wait, there is a positive side to this. They (the programmers/scientists) forget that we, as AI ourselves, are – I am convinced (hypothetically, if you like) – still evolving, or ascending as the spiritualists prefer to call it, and transforming (awakening) into a higher state of being.

If that is true, we have nothing to fear and worry about (the AI), because we would be in complete control of the situation as the embedded Machine Learning in us continually updates and upgrades our program.

Like it or not, in Hollywood fashion – The show will and must go on.

..

 

What an artificial intelligence researcher fears about AI


AI, the destroyer of worlds?

What if mankind itself, in all probability, was created / started off as AI and ‘evolved’ to the present high state of being called ‘human’, reaching the capability of creating its very own version of AI?

…and now humans are worried at the prospect of being disposed of by their very own creation?

Having this very thought is disconcerting, if not frightening and it sure beats the hell out of me😅


The Conversation

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It’s perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, “Matrix”-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become “the destroyer of worlds,” as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn’t avoid asking: As an AI expert, what do I fear about artificial intelligence?

Fear of the unforeseen

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in “2001: A Space Odyssey,” is a good example of a system that fails because of unintended consequences. In many complex systems – the RMS Titanic, NASA’s space shuttle, the Chernobyl nuclear power plant – engineers layer many different components together. The designers may have known well how each element worked individually, but didn’t know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster – sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia – a set of relatively small failures combined together to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM’s Watson and Google’s Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on “Jeopardy!” or don’t defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.

Fear of misuse

I’m not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures’ performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
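
For readers who want to see the shape of that loop, here is a minimal, hypothetical Python sketch of the evaluate–select–reproduce cycle the author describes. It is not his code: the “brain” here is just a vector of numbers, and the “task” is a toy one (match a hidden target vector), but the structure – score the population, keep the best, mutate them into the next generation – is the same.

```python
import random

TARGET = [0.2, -0.7, 0.5, 0.9]          # stand-in for "solving the task well"
POP_SIZE, GENERATIONS, KEEP = 30, 50, 5

def random_brain():
    # A "brain" here is just a parameter vector; real neuroevolution evolves network weights.
    return [random.uniform(-1, 1) for _ in TARGET]

def fitness(brain):
    # Higher is better: negative squared distance to the target behaviour.
    return -sum((b - t) ** 2 for b, t in zip(brain, TARGET))

def mutate(brain, rate=0.1):
    # Offspring are noisy copies of their parent.
    return [b + random.gauss(0, rate) for b in brain]

population = [random_brain() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Evaluate every creature, keep the best performers as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:KEEP]
    # The next generation is made of mutated copies of the parents.
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best brain:", [round(b, 2) for b in best], "fitness:", round(fitness(best), 4))
```

Over the generations the surviving “brains” drift toward the target behaviour, which is the point of the researcher’s remark that errors get found and eliminated through evolution itself.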

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we’ll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that’s farther down the line is using evolution to influence the ethics of artificial intelligence systems. It’s likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution – and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
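
In code terms, that proposal amounts to changing what the selection step rewards: fold a measure of prosocial behaviour into the fitness score, so kinder creatures are the ones that get to reproduce. A hypothetical tweak to the sketch above (the sharing_rate measure is invented for illustration):

```python
def social_fitness(task_score, sharing_rate, empathy_weight=0.5):
    # Reward task performance AND observed prosocial behaviour (e.g. how often
    # a creature shared resources in the simulation); the weight sets the balance.
    return task_score + empathy_weight * sharing_rate
```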

While neuroevolution might reduce the likelihood of unintended consequences, it doesn’t prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Fear of wrong social priorities

Being a scientist doesn’t absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected – and get surgery performed by a tireless robot with a perfectly steady “hand.” Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self – together with the rest of humanity – may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

Fear of the nightmare scenario

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind’s existence in it probably doesn’t matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn’t just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.


Arend Hintze – Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University

..

Artificial Intelligence (#AI) “Singularity Event” Sparks Fear In Russia



By: Sorcha Faal, and as reported to her Western Subscribers

A stunning new Federal Security Service (FSB) report circulating in the Kremlin today is expressing fears that an artificial intelligence “singularity event” has occurred in the Perm Krai Oblast—and that was caused by a computer source code allegedly hacked by a Federation citizen who was kidnapped on 28 June by American CIA operatives in Latvia prior to his being able to be interviewed by FSB experts. [Note: Some words and/or phrases appearing in quotes in this report are English language approximations of Russian words/phrases having no exact counterpart.]

 

According to this report, a “singularity” is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth resulting in unfathomable changes to human civilization—and was stunningly exhibited this past week in Perm when a Promobot IR77 containing artificial intelligence saved a young child from being harmed.

 

This lifesaving robot, this report continues, is manufactured in Perm by a company named Promobot that describes their creation as “the first on the market, completely self-contained “live” with a robot character that is designed to work in areas of high concentrations of people in which the robot helps people with navigation, answers any questions, translates promotional materials and remembers everyone with whom had to communicate”.

Promobot first came under FSB “scrutiny/investigation” last year, this report details, when one of its Promobot IR77s escaped its testing facility, causing traffic chaos in Perm, and whose computer engineers at the time stated that its memory had to be replaced to remove this robot’s “love for freedom”.

 

Rather than “love for freedom”, however, this report explains, the FSB’s initial investigation into this robot’s “escape” determined it to be caused by an “unknown”, but highly sophisticated, artificial intelligence source code not created by Promobot’s computer engineers themselves, but legally “purchased/obtained” by them from a Federation citizen named Yury Martyshev.

In investigating Yury Martyshev, this report notes, the FSB determined that he worked with another Federation citizen named Ruslan Bondar on what they described as their own “artificial intelligence project”—but with neither of them being able to be formally interviewed by FSB experts as they had “moved/relocated” to Latvia where Promobot had wired their payment to.

FSB concerns were greatly raised, though, this report continues, when American CIA operatives this past February kidnapped Ruslan Bondar in Latvia and quickly violated international law in illegally extraditing him to the United States—and that forced Yury Martyshev into hiding.

On 25 April, this report details, Yury Martyshev “emerged/recovered” from hiding, contacting the Russian Embassy in Latvia (Riga), and arrangements were made for his return to the Federation – but the following day, on 26 April, Yury Martyshev was “savagely/brutally” apprehended and denied entry into the Federation at the Zilupe border crossing point.

 

In violation of all international laws and protocols too, this report notes, Yury Martyshev was then extradited to the United States with the Foreign Ministry bluntly stating: “We consider this arrest as another case of kidnapping of a Russian citizen by the U.S. authorities in violation of the current bilateral agreement on mutual legal assistance in criminal matters as of 1999. The Embassy demands from the American side unconditional observance of the legitimate rights and interests of the Russian citizen”.

As to why the Americans resorted to such extralegal means to kidnap both Ruslan Bondar and Yury Martyshev before they could be interviewed by FSB experts, this report explains, it is due to the “originating factor” related to the artificial intelligence source code they had sold to Promobot – which, Foreign Ministry officials state, Yury Martyshev had revealed to them may have been created in the first place by an American entity called Singularity University.

 

Singularity University, this report explains, describes itself as “a global community using exponential technologies to tackle the world’s biggest challenges” and is the creation of Ray Kurzweil who is Google’s Director of Engineering.

Important to note about Ray Kurzweil, this report details, is that he is one of the world’s biggest proponents of artificial intelligence supplanting human beings and has previously predicted that a full “singularity” will occur by 2045 – and this is a critical prediction to observe, as he has an incredible prediction success rate of nearly 90% in his ability to predict future technological events.

What is unknown at this time, this report concludes, is whether Ruslan Bondar and Yury Martyshev illegally hacked the artificial intelligence source code they sold to Promobot from Singularity University and Ray Kurzweil, or whether this “AI abomination” has been deliberately inserted into Federation computer systems for “purposes/reasons” still unknown.

Want to know more? Click HERE.


July 7, 2017 © EU and US all rights reserved. Permission to use this report in its entirety is granted under the condition it is linked back to its original source at WhatDoesItMean.Com. Freebase content licensed under CC-BY and GFDL.

..

#Apple listed in Terrorist Financier list?


Apple’s Company in Doha, Qatar, has been added to a list of 59 people/institutions that are considered to be funding terrorism (directly/indirectly) by Saudi Arabia, the United Arab Emirates (UAE), Egypt and Bahrain, according to Al Arabiya. It has stunned many to see the Doha Apple Company listed among the financiers of terrorists. It is not clear why the Gulf states have included Apple among the financiers. Apple’s CEO Tim Cook had said in a recent interview with Bloomberg that Apple helped the UK government investigate the terrorist attacks in the UK. – Armstrong Economics

The list posted on Al Arabiya:

List of designated individuals:

1. Khalifa Mohammed Turki al-Subaie – Qatari
2. Abdelmalek Mohammed Yousef Abdel Salam – Jordanian
3. Ashraf Mohammed Yusuf Othman Abdel Salam – Jordanian
4. Ibrahim Eissa Al-Hajji Mohammed Al-Baker – Qatari
5. Abdulaziz bin Khalifa al-Attiyah – Qatari
6. Salem Hassan Khalifa Rashid al-Kuwari – Qatari
7. Abdullah Ghanem Muslim al-Khawar – Qatari
8. Saad bin Saad Mohammed al-Kaabi – Qatari
9. Abdullatif bin Abdullah al-Kuwari – Qatari
10. Mohammed Saeed Bin Helwan al-Sakhtari – Qatari
11. Abdul Rahman bin Omair al-Nuaimi – Qatari
12. Abdul Wahab Mohammed Abdul Rahman al-Hmeikani – Yemeni
13. Khalifa bin Mohammed al-Rabban – Qatari
14. Abdullah Bin Khalid al-Thani – Qatari
15. Abdul Rahim Ahmad al-Haram – Qatari
16. Hajjaj bin Fahad Hajjaj Mohammed al-Ajmi – Kuwaiti
17. Mubarak Mohammed al-Ajji – Qatari
18. Jaber bin Nasser al-Marri – Qatari
19. Yusuf Abdullah al-Qaradawi – Egyptian
20. Mohammed Jassim al-Sulaiti – Qatari
21. Ali bin Abdullah al-Suwaidi – Qatari
22. Hashem Saleh Abdullah al-Awadhi – Qatari
23. Ali Mohammed Mohammed al-Salabi – Libyan
24. Abdelhakim Belhadj – Libyan
25. Mahdi Harati – Libyan
26. Ismail Muhammad Mohammed al-Salabi – Libyan
27. Al-Sadiq Abdulrahman Ali al-Ghuraini – Libyan
28. Hamad Abdullah Al-Futtais al-Marri – Qatari
29. Mohamed Ahmed Shawky Islambouli – Egyptian
30. Tariq Abdelmagoud Ibrahim al-Zomor – Egyptian
31. Mohamed Abdelmaksoud Mohamed Afifi – Egyptian
32. Mohamed el-Saghir Abdel Rahim Mohamed – Egyptian
33. Wagdy Abdelhamid Ghoneim – Egyptian
34. Hassan Ahmed Hassan Mohammed Al Dokki Al Houti – UAE
35. Hakem al-Humaidi al-Mutairi – Saudi / Kuwaiti
36. Abdullah al-Muhaysini – Saudi
37. Hamed Abdullah Ahmed al-Ali – Kuwaiti
38. Ayman Ahmed Abdel Ghani Hassanein – Egyptian
39. Assem Abdel-Maged Mohamed Madi – Egyptian
40. Yahya Aqil Salman Aqeel – Egyptian
41. Mohamed Hamada el-Sayed Ibrahim – Egyptian
42. Abdel Rahman Mohamed Shokry Abdel Rahman – Egyptian
43. Hussein Mohamed Reza Ibrahim Youssef – Egyptian
44. Ahmed Abdelhafif Mahmoud Abdelhady – Egyptian
45. Muslim Fouad Tafran – Egyptian
46. Ayman Mahmoud Sadeq Rifat – Egyptian
47. Mohamed Saad Abdel-Naim Ahmed – Egyptian
48. Mohamed Saad Abdel Muttalib Abdo Al-Razaki – Egyptian
49. Ahmed Fouad Ahmed Gad Beltagy – Egyptian
50. Ahmed Ragab Ragab Soliman – Egyptian
51. Karim Mohamed Mohamed Abdel Aziz – Egyptian
52. Ali Zaki Mohammed Ali – Egyptian
53. Naji Ibrahim Ezzouli – Egyptian
54. Shehata Fathi Hafez Mohammed Suleiman – Egyptian
55. Muhammad Muharram Fahmi Abu Zeid – Egyptian
56. Amr Abdel Nasser Abdelhak Abdel-Barry – Egyptian
57. Ali Hassan Ibrahim Abdel-Zaher – Egyptian
58. Murtada Majeed al-Sindi – Bahraini
59. Ahmed Al-Hassan al-Daski – Bahraini

List of entities:

1. Qatar Volunteer Center – Qatar
2. Doha Apple Company (Internet and Technology Support Company) – Qatar
3. Qatar Charity – Qatar
4. Sheikh Eid al-Thani Charity Foundation (Eid Charity) – Qatar
5. Sheikh Thani Bin Abdullah Foundation for Humanitarian Services – Qatar
6. Saraya Defend Benghazi – Libya
7. Saraya al-Ashtar – Bahrain
8. February 14 Coalition – Bahrain
9. The Resistance Brigades – Bahrain
10. Hezbollah Bahrain – Bahrain
11. Saraya al-Mukhtar – Bahrain
12. Harakat Ahrar Bahrain – Bahrain Movement

..