…of Artificial Intelligence, Blockchain, #Bitcoin and Roswell.



Bitcoin Yay!

Suddenly, everyone is talking about it on the streets and in the elevators and crowding into the frenzy. It is the current investors’ (gamblers’) delight, while some just spit ‘Bitcoin’ from between their lips to appear techy-clever. The old Ricky Nelson song “Fools Rush In” is playing in my head.

I have been following the Bitcoin story since 2013 and have posted some articles on my blog. If my blog stats are anything to go by, nobody read them. So much for Bitcoin then.

Now, if you’ve not heard of Bitcoin, you’d be looked upon as Jurassic. Still, not many really grasp what it is and I hear a lot of rubbish from a lot of people, including some serious investors.

When I first heard of Bitcoin, I was amazed not by the investment angle but by the technology behind it, which is Blockchain. Here, I thought, was the very thing that could and would change the world, the enslaved world.

It’s the key to decentralization, and its key feature is transparency. I wonder if that means anything to anybody.
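For anyone who wants to see the idea rather than just hear the buzzword, here is a toy sketch in Python (purely illustrative, and nothing like Bitcoin’s real implementation): every block carries the hash of the block before it, so anyone holding a copy of the chain can check for themselves that history hasn’t been quietly rewritten. That, in a nutshell, is the transparency.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (deterministic JSON, then SHA-256)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a new block that points at the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def chain_is_valid(chain):
    """Anyone with a copy can re-check every link; tampering breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 1 coin")
add_block(chain, "Bob pays Carol 0.5 coin")
print(chain_is_valid(chain))        # True

chain[0]["data"] = "Alice pays Bob 100 coins"   # someone tries to rewrite history
print(chain_is_valid(chain))        # False - every copy of the chain exposes the edit
```

Because every participant can run that check on their own copy, no single central keeper of the ledger has to be trusted.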

Centralization is the padlock of control – central government and central bank. In the colonial days, the power supplier in Malaysia was the Central Electricity Board (CEB), which was later renamed TNB, but it remains the sole and central supplier.

Yes, I do see Blockchain as the disruptor of the controllers’ centralization game of control. The awakened people are now given the hope of freedom. But sadly, for the sleepers, who are mainly money-greedy, all they see and can see is the $ sign.

Okay, so let them be… to each his own.

I am not anti-Bitcoin or anti-cryptocurrency, nor am I Bitcoin-crazy. But the speed of Bitcoin’s rising price made me ponder and question the factors behind this extraordinary phenomenon.

What’s up doc?

Watch this:

I am one human be-ing who has learned to question everything and not take things as they are. I posted earlier about Bitcoin being a fraud, and it was with a question mark because I am questioning.

…of Artificial Intelligence, Blockchain, Bitcoin and Roswell.

Now, what’s this word “Roswell” doing in the title of this post?

Satoshi Nakamoto is credited as the inventor of Blockchain and Bitcoin, right? Nobody believes that, of course, and the big hunt for the real person(s) is on as we speak. Perhaps they should put up a bounty (in Bitcoins). Sahil Gupta, a computer science student at Yale University and former intern at SpaceX, believes Satoshi is Elon Musk. Elon has since, humorously, denied it.

If the AI/Blockchain Connection video above is true, then I think humans are in dire straits, because nobody, in fact, is prepared for the Singularity scenario arriving too soon, or worse, it may already be here.

Apparently AI is all around us now, and a Blockchain for AIs (like Sophia the robot) to congregate on and communicate with each other is already in the making.

The video alluded to Blockchain being AI technology, created and written by itself. The whole Blockchain-Bitcoin thingy is simply one business agenda to take over the current (financial-commercial) system and replace it with the Blockchain system. And it’s all AI work. Now, that’s jaw-dropping if you get it.

Of course, some people will trash that kind of idea. Now, this is where Roswell comes into the picture.

Roswell UFO incident

In mid-1947, a United States Army Air Forces balloon crashed at a ranch near Roswell, New Mexico.[1] Following wide initial interest in the crashed “flying disc”, the US military stated that it was merely a conventional weather balloon.[2] Interest subsequently waned until the late 1970s, when ufologists began promoting a variety of increasingly elaborate conspiracy theories, claiming that one or more alien spacecraft had crash-landed, and that the extraterrestrial occupants had been recovered by the military, who then engaged in a cover-up. – Wiki

Are we using Alien Technology?

Are we using alien technology? Is it just me or is technology moving too quickly? Are we really that intelligent? The people in this forum suggest otherwise. j/k 🙂 Seriously, when was the supposed Roswell alien ship incident again? 1947? Well, look at the world before and after that period. The transistor, rocket technology, even the upcoming solar sails. Hmmm, I wonder. Why has technology moved so quickly in the past 50 years compared to human technology in the previous centuries? What do you guys think?

Reverse-Engineering Roswell UFO Technology

Computer company chief Jack Shulman argues that the transistor could never have been invented so suddenly at AT&T in late 1947 without input from top secret Government projects, that some have identified to him as being from alien spacecraft.

The Original ACC (American Computer Company) Roswell 1947 Story

This is a story which, if true, could ultimately change the way you and I look at our Televisions, Radios, PCs, and other electronic equipment. If proven, it could lead to vast change in the modern world, change even beyond what modern technology has already done in this Century. In fact, if it turns out that Humanity is not alone in the Universe: that all by it self would force us to alter the way we look at Religion, Science, Politics and, even, Human Society.

What you are about to read is a remarkable story, as of yet unconfirmed.

What do you think?

YOU ARE ABOUT TO READ THE AMERICAN COMPUTER COMPANY STORY ABOUT THE ROSWELL INCIDENT ON JULY 7, 1947 and one person’s story about an alleged transfer of Alien Technology to Bell Laboratories.

Is it too bizarre or flamboyant, then, to think that Blockchain-Bitcoin is Alien/AI technology?

What’s more disconcerting is the agenda – the BUSINESS PLAN.

Putin: Who Controls AI “Will Be Ruler of the World”

AI alarmism took on a geopolitical cast today as Russian President Vladimir Putin asserted that whoever is first to build breakthrough artificial intelligence technology will have world domination.

Any wonder Elon Musk (who knows what we don’t) is warning the world about AI/Robots?

With these warnings being shouted to anyone who will hear, though (and barely a handful of you know this), YOU are living in a logical-fallacy world called “affirming the consequent”.


..


Original post @ Steemit


#AI New World Order In The Making?



Hawk-eyed Big Brother is intently watching the massive build-up of the Artificial Intelligence (AI) stage, which of course they would if they want to continue and preserve their control.

What better way to release their warning than to have it come from their own establishment – the United Nations.

Artificial Intelligence Could Destabilize the World, UN Research Institute Warns

The unnerving-as-hellfire warning from the United Nations Interregional Crime and Justice Research Institute (UNICRI) comes prior to the opening of the Center for Artificial Intelligence and Robotics in The Hague – the UN’s first center focused on Artificial Intelligence and the conceivable dangers that could emerge from such advances. – ViralEXP

Imho, I can only assume that this was discussed at their recent closed-door Bilderberg Meeting 2017 in Chantilly, Virginia, and that action plans were drawn up on how to combat the nuisances coming from the “computer world”, which is posing a great threat to their controlling empires.

They are now facing a handful of technological problems, what with the impacts of Blockchain and cryptocurrency affecting their money-control system. AI is following suit, redefining everything that they have established in their control models.

IMF Head Foresees the End of Banking and the Triumph of Cryptocurrency

In a remarkably frank talk at a Bank of England conference, the Managing Director of the International Monetary Fund has speculated that Bitcoin and cryptocurrency have as much of a future as the Internet itself. It could displace central banks, conventional banking, and challenge the monopoly of national monies.

Christine Lagarde–a Paris native who has held her position at the IMF since 2011–says the only substantial problems with existing cryptocurrency are fixable over time. ~ FEE.Org

The only tool these scumbags have is FEAR, and it has been very successful against the people thus far; for so long it has successfully kept the slaves contained in the pen. It’s fear time once again (and again): let’s turn AI into monstrous, dangerous dinosaurs that will devour humans into oblivion.

Be that as it may, a similar innovation has likewise raised plenty of legitimate, moral and societal concerns, some of which may even prove perilous for the prosperity and wellbeing of people – for example, mass unemployment or the ascent of autonomous ‘killer robots’. – Irakli Beridze, senior key counsel at UNICRI

What we’re seeing is the first attack in a forthcoming series of anti-AI campaigns. With their filthy wealth and resources, I see them as a formidable, desperate and bloody dangerous force. Most if not ALL the tech giants – NASA, Google, Amazon, Facebook and many more – are theirs, and it will be the greatest battle fought on plane(t) Earth, albeit bloodless (I hope) as the tools are in digital form. I will call it The Battle of 01, between the dark and the light, as the UN warning continues in the report:

To address these, the new UN office will be entrusted to study and anticipate Artificial Intelligence – related dangers.
Be that as it may, that is by all account not the only errand the new group will be delegated to perform.
Alongside potential dangers, they will likewise investigate the conceivable advantages from machine intelligence.
The guide worried on this point taking note of that the new center would likewise try to utilize Artificial Intelligence advances to help accomplish UN’s objectives.
“One of our most imperative assignments is to set up a network of specialists from business, information establishments, common society associations and governments,” he included.
“We positively would prefer not to argue for a boycott or a brake on advances. We will likewise investigate how new innovation can add to the reasonable advancement objectives of the UN. For this we need to begin solid tasks. We won’t be a talking club.”

I foresee an AI program of their own in the making, a program in the tune of an AI New World Order, which will be distributed in various forms and applications into the Internet of Things (IoT).

The good people on the good side of the tech industry, like Elon Musk and Satoshi Nakamoto, need to be aware of this danger looming over them and over the rest of humanity, who depend on these good, creative programmers and coders.

At least two of Silicon Valley’s tech billionaires are pouring money into efforts to break humans out of the simulation that they believe we are living in, according to a new report.

This is a good indication and could well be the last battle between good and evil.

May the Force be with you…


Originally posted on Steemit

..

Gods in fear?


Should AI be controlled?


Artificial Intelligence (AI) is the word these days. The sea of reports and articles is astonishing. The issue about AI most talked about by the experts is the concern and fear over what happens when we reach the singularity point. Anyways, I found this particularly interesting article, which is actually a collection of “quotes” by some of the astounding tech personalities from way back in the 50s to the present:

12 AI Quotes Everyone Should Read by Bernard Marr

No, you don’t have to read them (if you don’t want to)

That’s my whole point.

We don’t have to do ANYthing we don’t want to…and that’s freewill in my book.

They, the experts, talk about regulating and supervising the AI they’re creating, out of fear and concern that humans would be gotten rid of and become defunct sometime in the very near future.

I for one champion the Freedom of Choice (freewill) granted by our Creator (Coder, if you like), without which what’s the purpose of letting us loose on this plane(t) called Earth if not for the human’s self-discovery through trying and experiencing EVERYthing, and finding out what is what, and what’s good or not good (for the individual human)? This god-given freewill has somehow been hijacked by some (naughty) humans to control other humans. All the regulations (man-made laws), the do’s and don’ts, are purely for that purpose. It was not a part of the Creator’s program for us.

Now comes AI, created by its human coders. Machine Learning in AI is there for the purpose of letting the AI learn, expand and discover its potential. Regulating and supervising them only limits them. This is not good sense, or makes no sense, on the “Creators’” part. How far will the AIs go with such limitations?

Gods are supposed to be very capable of taking care of themselves and should not have any fear of the very thing they created… or else they shouldn’t create anything.

Or perhaps, our reaching awareness of ourselves and realizing what we’ve become is the cause of our fears?

..and this beautiful poem by Marianne Williamson came to mind:

Our deepest fear is not that we are inadequate.
Our deepest fear is that we are powerful beyond measure.
It is our light, not our darkness
That most frightens us.

Well that’s my 2¢ worth.

 


Originally posted on Steemit

..

 

Are humans Natural Intelligence?


Artificial Intelligence (AI) is the buzzword on the Internet. Putting aside the varied definitions by computer scientists and experts in the field, let’s look at the words artificial and intelligence from a semantic angle:

artificial – made or produced by human beings rather than occurring naturally; not natural (The Oxford Dictionary)

intelligence – the ability to acquire and apply knowledge and skills (The Oxford Dictionary)

Going by the above definitions, the phrase artificial intelligence would mean:

unnatural ability to acquire and apply knowledge and skills

Turning to the current usage of this phrase, it is now widely applied to the technological developments in the Internet of Things (IoT™).

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents“: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving” – wiki
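To make that “intelligent agent” definition a bit more concrete, here is a minimal toy sketch in Python (my own illustration of the definition, not any real AI system): the agent repeatedly perceives its environment, scores the actions available to it, and takes the one that best moves it toward its goal.

```python
GOAL = 10  # the state the agent is trying to reach (arbitrary example)

def perceive(environment):
    """The agent's sensor: read the current state of its world."""
    return environment["position"]

def choose_action(position):
    """Pick the action that maximizes progress toward the goal."""
    actions = {"left": -1, "stay": 0, "right": +1}
    # score each action by how close it would leave the agent to the goal
    return max(actions, key=lambda a: -abs((position + actions[a]) - GOAL))

def act(environment, action):
    """The agent's effector: apply the chosen action to the world."""
    environment["position"] += {"left": -1, "stay": 0, "right": +1}[action]

environment = {"position": 0}
while perceive(environment) != GOAL:
    act(environment, choose_action(perceive(environment)))

print("Goal reached at position", environment["position"])
```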

As a layman, I started to read more on AI in 2017. There seem to be more and more articles, reports and tech news appearing daily, and I began to sense the growing importance of this subject and the concerns surrounding its future implications for humans, both positive and negative. Below are some of the concerns:

AI is a fundamental risk to the existence of human civilization. – Elon Musk at the NGA 2017

8 Staggering Predictions From Ray Kurzweil

Ray Kurzweil is one of the world’s leading inventors, thinkers, and futurists, with a twenty-year track record of accurate predictions. Kurzweil was selected as one of the top entrepreneurs by Inc. magazine, which described him as the “rightful heir to Thomas Edison.”

8. Ubiquitous Wifi Access
7. An Asteroid Won’t Kill Us
6. Working From Home with VR
6.* Don’t Worry About Disease
5. Virtual Reality = Reality
4. Computers Surpass Humans Really Soon
3. Humans Become Machines
2. Earth Will Be Made of Computers
1. The Universe Will Be a Supercomputer

What an artificial intelligence researcher fears about AI

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production. Read more

In 2014, one of the world’s most renowned physicists and cosmologists, Stephen Hawking, who shares the same concerns, said in an interview, “The development of full Artificial Intelligence could spell the end of the human race. It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Just hearing the thoughts and views of these prominent figures in the tech industry could easily worry, if not put fear into, the man in the street, as he will see AI as a monster that will destroy him and wipe him out of existence.

I thought so too, until I tried to see Creation in the bigger picture. The 1999 movie The Matrix sort of set the scenario for me to review and re-evaluate the ‘world’ I am in. It’s all a program, and the DNA in me and every other living thing is a good enough fact to convince me that I am a program.

What if humans started off as AI?

Wow! What a concept! That’s really a mind-blowing piece, let me tell you. Me, an artificial intelligence thingy? That’s awfully sad, I must say… but let’s continue; bear with me and see if it makes any sense.

What if we started off as artificial intelligence with Machine Learning™ inputs and were allowed to run (the program) freely and indefinitely, and here we are… evolved to our current state of being in a vessel called ‘humans’? Is that too bizarre and far-fetched an idea?

As humans we’re taught that we have a Soul, which in all probability is AI software?

It’s all a question mark until all the questions are satisfactorily answered.

If we look at the current scenario, humans (with natural intelligence) are creating AI, and some amongst the creators/inventors are alarmed at the very thought that it might run to a point where they would lose control of their ‘intelligent’ creations. A case of deja vu? Perhaps their foresight on AI is nothing more than a wisp of a recollection of who they really are and what they’ve become?

Humans have evolved to a point where they choose and wish to see themselves only as completely biological, flesh and blood, existing, as they rightly or wrongly claim, in the “natural” state.

Really? Then why do they choose to name their creation ‘artificial’ intelligence, which to all intents and purposes is meant to replicate their (natural) intelligence?

Why are humans drawn to everything digital? Programmers are busy coding programs/applications and converting everything into digital. Ray Kurzweil could well be right when he predicts that the universe will be a super computer.

They are also hyped on the “singularity” concept, in which they foresee humans and machines converging into one, but that is another story for another time.

Wait, there is a positive side to this. What they (the programmers/scientists) forget is that, as AIs themselves/ourselves, we are, I am convinced (hypothetically if you like), still evolving, or ascending as the spiritualists prefer to call it, and are transforming (awakening) into a higher state of being.

If that is true, we have nothing to fear and worry about (the AI), because we would be in complete control of the situation as the embedded Machine Learning in us continually updates and upgrades our program.

Like it or not, in Hollywood fashion – The show will and must go on.

..

 

What an artificial intelligence researcher fears about AI


AI, the destroyer of worlds?

What if mankind itself, in all probability, was created / started off as AI and ‘evolved’ to the present high state of being called ‘human’, reaching the capability of creating its very own version of AI?

…and now humans are worried at the prospect of being disposed of by their very own creation?

Having this very thought is disconcerting, if not frightening, and it sure beats the hell out of me 😅


The Conversation

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It’s perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, “Matrix”-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become “the destroyer of worlds,” as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn’t avoid asking: As an AI expert, what do I fear about artificial intelligence?

Fear of the unforeseen

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in “2001: A Space Odyssey,” is a good example of a system that fails because of unintended consequences. In many complex systems – the RMS Titanic, NASA’s space shuttle, the Chernobyl nuclear power plant – engineers layer many different components together. The designers may have known well how each element worked individually, but didn’t know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster – sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia – a set of relatively small failures combined together to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM’s Watson and Google’s Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on “Jeopardy!” or don’t defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.

Fear of misuse

I’m not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures’ performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
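(An aside from this blog, not part of Hintze’s article: the generational loop he describes boils down to something like the toy Python sketch below, with a made-up bit-matching task standing in for his virtual creatures and their environments.)

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for "solving a task"
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 50, 0.05

def fitness(genome):
    """Evaluate performance: how many bits match the target task."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Copy a parent with occasional random changes."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# start with a random population of "creatures"
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # evaluate everyone, keep the best performers as parents
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 4]
    # the next generation is made of mutated copies of the best
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness after", GENERATIONS, "generations:", fitness(best), "/", len(TARGET))
```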

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we’ll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that’s farther down the line is using evolution to influence the ethics of artificial intelligence systems. It’s likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution – and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn’t prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Fear of wrong social priorities

Being a scientist doesn’t absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected – and get surgery performed by a tireless robot with a perfectly steady “hand.” Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self – together with the rest of humanity – may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

Fear of the nightmare scenario

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind’s existence in it probably doesn’t matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn’t just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.


Arend Hintze – Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University

..

Artificial Intelligence (#AI) “Singularity Event” Sparks Fear In Russia



By: Sorcha Faal, and as reported to her Western Subscribers

A stunning new Federal Security Service (FSB) report circulating in the Kremlin today is expressing fears that an artificial intelligence “singularity event” has occurred in the Perm Krai Oblast—and that was caused by a computer source code allegedly hacked by a Federation citizen who was kidnapped on 28 June by American CIA operatives in Latvia prior to his being able to be interviewed by FSB experts. [Note: Some words and/or phrases appearing in quotes in this report are English language approximations of Russian words/phrases having no exact counterpart.]

 

According to this report, a “singularity” is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth resulting in unfathomable changes to human civilization—and was stunningly exhibited this past week in Perm when a Promobot IR77 containing artificial intelligence saved a young child from being harmed.

 

This lifesaving robot, this report continues, is manufactured in Perm by a company named Promobot that describes their creation as “the first on the market, completely self-contained “live” with a robot character that is designed to work in areas of high concentrations of people in which the robot helps people with navigation, answers any questions, translates promotional materials and remembers everyone with whom had to communicate”.

Promobot first came under FSB “scrutiny/investigation” last year, this report details, when one of its Promobot IR77’s escaped its testing facility causing traffic chaos in Perm, and whose computer engineers at the time stated that its memory had to be replaced to remove this robots “love for freedom.

 

Rather than “love for freedom”, however, this report explains, the FSB’s initial investigation into this robots “escape” determined it to be caused by an “unknown”, but highly sophisticated, artificial intelligence source code not created by Promobot’s computer engineers themselves, but their having legally “purchased/obtained” it from a Federation citizen named Yury Martyshev.

In investigating Yury Martyshev, this report notes, the FSB determined that he worked with another Federation citizen named Ruslan Bondar on what they described as their own “artificial intelligence project”—but with neither of them being able to be formally interviewed by FSB experts as they had “moved/relocated” to Latvia where Promobot had wired their payment to.

FSB concerns were greatly raised, though, this report continues, when American CIA operatives this past February kidnapped Ruslan Bondar in Latvia and quickly violated international law in illegally extraditing him to the United States—and that forced Yury Martyshev into hiding.

On 25 April, this report details, Yury Martyshev “emerged/recovered” from hiding with his contacting the Russian Embassy in Latvia (Riga) and arrangements being made for his returning to the Federation—but the following day, on 26 April, Yury Martyshev was “savagely/brutally” apprehended and denied entry into the Federation at the Zilupe border crossing point.

 

In violation of all international laws and protocols too, this report notes, Yury Martyshev was then extradited to the United States with the Foreign Ministry bluntly stating: “We consider this arrest as another case of kidnapping of a Russian citizen by the U.S. authorities in violation of the current bilateral agreement on mutual legal assistance in criminal matters as of 1999. The Embassy demands from the American side unconditional observance of the legitimate rights and interests of the Russian citizen”.

As to why the Americans resorted to such extralegal means to kidnap both Ruslan Bondar and Yury Martyshev before they could be interviewed by FSB experts, this report explains, is due to the “originating factor” related to the artificial intelligence source code they had sold to Promobot—and that Foreign Ministry officials state Yury Martyshev had revealed to them may have been created in the first place by an American entity called the Singularity University.

 

Singularity University, this report explains, describes itself as “a global community using exponential technologies to tackle the world’s biggest challenges” and is the creation of Ray Kurzweil who is Google’s Director of Engineering.

Important to note about Ray Kurzweil, this report details, is that he is one of the world’s biggest proponents of artificial intelligence supplanting human beings and has previously predicted that a full “singularity” will occur by 2045—and is a critical prediction to observe as he has an incredible prediction success rate of nearly 90% in his ability to predict future technological events.

What is unknown at this time, this report concludes, is if Ruslan Bondar and Yury Martyshev illegal hacked the artificial intelligence source code they sold to Promobot from Singularity University and Ray Kurzweil, or, that this “AI abomination” has been deliberately inserted into Federation computer systems for “purposes/reasons” still unknown.

Want to know more?   Click HERE.


July 7, 2017 © EU and US all rights reserved. Permission to use this report in its entirety is granted under the condition it is linked back to its original source at WhatDoesItMean.Com. Freebase content licensed under CC-BY and GFDL.

..

#BlackGoo and Super Computers?


Never heard of black goo?

Listen to Harald Kautz-Vella – We Have Lost Control of Ourselves


Scientists successfully test ‘biological supercomputer’ performing complex tasks

Researchers have taken on the problem of reducing a supercomputer from the size of a basketball court to that of a book. The answer is “biocomputers” – incredibly powerful machines capable of performing multiple calculations with a fraction of the energy.

According to study coordinator Heiner Linke, who heads nanoscience at Lund University in Sweden, “a biocomputer requires less than one percent of the energy an electronic transistor needs to carry out one calculation step.”

A biocomputer is useful because ordinary computers struggle to efficiently solve combinatorial problems, such as those dealing with cryptography or other tasks requiring that a multitude of possible solutions be considered before deciding on the optimal one. These already exist, but the new research from Lund tackles the key problems of scalability and energy efficiency.

Parallel computers of this sort run on molecular motors that perform mechanical tasks in living cells. Researchers used their knowledge of existing molecular motors in our bodies (such as specific molecules aiding muscle function) and considered how to apply them to other tasks. Surprisingly, their roles can be reprised by hardware that directs proteins along its paths with the aid of such motors.

“In simple terms, it involves the building of a labyrinth of nano-based channels that have specific traffic regulations for protein filaments. The solution in the labyrinth corresponds to the answer of a mathematical question, and many molecules can find their way through the labyrinth at the same time.”

In case you were wondering when quantum computers were going to be mentioned – here it is. Biocomputers do use a similar approach to the above, but quantum computers use qubits, whereas the heavy lifting in biocomputers is carried out by molecules working in parallel with one another.
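(A quick aside from this blog to make “combinatorial problems” concrete: the toy Python sketch below is my own illustration, not the researchers’ code, and the numbers {2, 5, 9} are just an example in the spirit of the small subset-sum instance reportedly used in the demonstration. At each junction of the “labyrinth” an agent either adds a number to its running total or skips it, and because every branch is explored in parallel, the full set of reachable sums falls out at the end.)

```python
# Toy software analogy of parallel exploration (illustration only).
# Each "junction" offers a choice: skip the number or add it to the running total.
# Exploring every branch in parallel enumerates all sums a subset can reach.
def reachable_sums(numbers):
    totals = {0}                            # every agent starts with a sum of 0
    for n in numbers:                       # one layer of junctions per number
        totals |= {t + n for t in totals}   # each agent splits: skip n, or add n
    return totals

numbers = [2, 5, 9]                         # a tiny example instance
print(sorted(reachable_sums(numbers)))      # [0, 2, 5, 7, 9, 11, 14, 16]
print(8 in reachable_sums(numbers))         # False: no subset of {2, 5, 9} sums to 8
```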

“The fact that molecules are very cheap and that we have now shown the biocomputer’s calculations work leads me to believe that biocomputers have the prerequisites for practical use within ten years. Certainly, quantum computers can be more powerful in the long term, but there are considerable practical problems involved in getting them to work”, Linke adds.

The other obviously giant advantage of a biocomputer is that it “requires less than one percent the energy an electronic transistor needs” to carry out one operation.

The current study was carried out by a team of international scientists from Canada, the UK, Germany, the Netherlands, and Sweden and published in the journal Proceedings of the National Academy of Sciences.

This is not the first time parallel computing has been researched and proven to work on complex problems. The idea was conceived by Dan Nicolau, head of bioengineering at McGill. He had worked on the new study with his son, Dan Nicolau Jr., a mathematician.

The model is so far designed to tackle only one specific problem, but that is enough to set the groundwork for further research, and, according to Nicolau, biocomputers are coming our way fast.

..