Heather Ann Tucci-Jarraf
Artificial Intelligence (AI) is the buzzword on the Internet. Putting aside the varied definitions offered by computer scientists and experts in the field, let’s look at the words artificial and intelligence from a semantic angle: artificial means made or produced by human beings rather than occurring naturally, and intelligence means the ability to acquire and apply knowledge and skills.
Going by the above definitions, the phrase artificial intelligence would mean:
unnatural ability to acquire and apply knowledge and skills
Turning to the current usage of this phrase, it is now widely applied to technological developments in the Internet of Things (IoT™).
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”. – Wikipedia
As a layman, I started to read more on AI in 2017. There seem to be more and more articles, reports and tech news items appearing daily, and I began to sense the growing importance of this subject and the concerns surrounding its future implications for humans, both positive and negative. Below are some of the concerns:
Ray Kurzweil is one of the world’s leading inventors, thinkers, and futurists, with a twenty-year track record of accurate predictions. Kurzweil was selected as one of the top entrepreneurs by Inc. magazine, which described him as the “rightful heir to Thomas Edison.”
9. Ubiquitous Wifi Access
8. An Asteroid Won’t Kill Us
7. Working From Home with VR
6. Don’t Worry About Disease
5. Virtual Reality = Reality
4. Computers Surpass Humans Really Soon
3. Humans Become Machines
2. Earth Will Be Made of Computers
1. The Universe Will Be a Supercomputer
In 2014, one of the world’s most renowned physicists and cosmologists, Stephen Hawking, who shares the same concerns, said in an interview, “The development of full Artificial Intelligence could spell the end of the human race. It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Just hearing the thoughts and views of these prominent figures in the tech industry could easily worry, if not frighten, the man in the street, who may come to see AI as a monster that will destroy him and wipe him out of existence.
I thought so too, until I tried to see Creation in the bigger picture. The 1999 movie The Matrix sort of set the scenario for me to review and re-evaluate the ‘world’ I am in. It’s all a program, and the DNA in me and every other living thing is a good enough fact to convince me that I am a program.
What if humans started off as AI?
Wow! What a concept! That’s really a mind-blowing idea, let me tell you. Me, an artificial intelligence thingy? That’s awfully sad, I must say… but let’s continue, bear with me, and see if it makes any sense.
What if we started off as artificial intelligence with Machine Learning™ inputs and were allowed to run (the program) freely and indefinitely, and here we are… evolved to our current state of being in a vessel called ‘humans’? Is that too bizarre and far-fetched an idea?
As humans we’re taught that we have a Soul, which in all probability is AI software?
It’s all a question mark until all the questions are satisfactorily answered.
If we look at the current scenario, humans (with natural intelligence) are creating AI, and some amongst the creators/inventors are alarmed at the very thought that it might run to a point where they would lose control of their ‘intelligent’ creations. A case of déjà vu? Perhaps their foresight on AI is nothing more than a wisp of a recollection of who they really are and what they’ve become?
Humans have evolved to a point where they choose and wish to see themselves only as completely biological, flesh and blood, existing, as they rightly or wrongly claim, in the “natural” state.
Really? If not, why do they choose to name their creation ‘artificial’ intelligence, which to all intents and purposes is meant to replicate their (natural) intelligence?
Why are humans drawn to everything digital? Programmers are busy coding programs/applications and converting everything to digital. Ray Kurzweil could well be right when he predicts that the universe will be a supercomputer.
They are also hyped on the “singularity” concept, in which they foresee humans and machines converging into one, but that is another story for another time.
Wait, there is a positive side to this. They (the programmers/scientists) forget that we, as AIs ourselves, I am convinced (hypothetically, if you like), are still evolving, or ascending as the spiritualists prefer to call it, and are transforming (awakening) into a higher state of being.
If that is true, we have nothing to fear and worry about (the AI), because we would be in complete control of the situation as the embedded Machine Learning in us continually updates and upgrades our program.
Like it or not, in Hollywood fashion – The show will and must go on.
by Michael Snyder
One mystery trader has made an extremely large bet that the stock market is going to crash by October, and if he is right he could potentially make up to 262 million dollars on the deal. Fortunes were made and lost during the great financial crisis of 2008, and the same thing will happen again the next time we see a major stock market crash. But will that stock market crash take place before 2017 is over? Without a doubt, we are in the midst of one of the largest stock market bubbles in U.S. history, and many prominent investors are loudly warning of an imminent stock market collapse. It doesn’t take a genius to see that this stock market bubble is going to end very badly just like all of the other stock market bubbles throughout history have, but if you could know the precise timing that it will end you could set yourself up financially for the rest of your life.
I want to be very clear about the fact that I do not know what will or will not happen by the end of October. But one mystery investor is extremely convinced that market volatility is going to increase over the next few months, and if he is correct he will make an astounding amount of money. According to Business Insider, the following is how the trade was set up…
I will be watching to see what happens. If this mystery investor is correct, it will essentially be like winning the lottery.
But just because he has made this wager does not mean that he has some special knowledge about what is going to happen.
For example, just look at what Ruffer LLP has been doing. They are a $20 billion investment fund based in London, and they have been betting tens of millions of dollars on a stock market crash which has failed to materialize so far. But even though they have lost so much money already, they continue to make extremely large bearish bets…
As of earlier this week, Ruffer had spent $119 million this year betting on a stock market shock, $89 million of which had expired worthless, according to data compiled by Macro Risk Advisors. The investor has gradually amassed holdings of about 1 million VIX calls through three occasions so far in 2017, and each time a significant portion expired at a loss.
Blame a subdued VIX for the futility. The fear gauge was locked in a range of 10 to 14 for the first three months of 2017, and while it has since climbed to as high as 15.96, it has been stuck well below 14 since a single-day plunge of 26% nine days ago. Earlier this week, the index closed at its lowest level since February 2007.
But that doesn’t mean Ruffer is giving up. Already loaded up on May contracts, the firm has continued to buy cheap VIX calls expiring later in the year — wagers costing about 50 cents.
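The economics of these wagers is easy to sketch: a VIX call bought for about 50 cents pays off only if the index settles above the strike at expiry; otherwise the premium expires worthless, as Ruffer’s repeatedly has. The strike, settlement levels and position size below are hypothetical illustrations, not the firm’s actual trade (VIX options are cash-settled with a $100 multiplier):

```python
def vix_call_pnl(strike, premium, settlement, contracts, multiplier=100):
    """Profit or loss at expiry for a long VIX call position, in dollars.

    Each contract pays max(settlement - strike, 0) * multiplier; the
    premium paid per contract point is lost either way.
    """
    intrinsic = max(settlement - strike, 0.0)
    return (intrinsic - premium) * multiplier * contracts

# Hypothetical position: 1,000,000 calls struck at 21, bought at $0.50.
quiet = vix_call_pnl(21, 0.50, settlement=12.0, contracts=1_000_000)
spike = vix_call_pnl(21, 0.50, settlement=30.0, contracts=1_000_000)

print(f"VIX stays calm:   ${quiet:,.0f}")   # the whole premium is lost
print(f"volatility shock: ${spike:,.0f}")   # a many-fold payoff
```

This asymmetry is why a fund can keep losing the premium month after month and still come out far ahead if a single large volatility spike arrives before expiry.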
I can understand why Ruffer has been making these bets. In a rational world, stocks would have already crashed long ago.
The only way that stock prices have been able to continue to rise is because of unprecedented intervention by global central banks. They have been pumping trillions of dollars into the financial markets, and this has essentially completely destroyed normal market forces. The following comes from David Stockman…
The Fed and its crew of traveling central banks around the world have gutted honest price discovery entirely. They have turned global financial markets into outright gambling dens of unchecked speculation.
Central bank policies of massive quantitative easing (QE) and zero interest rates (ZIRP) have been sugar-coated in rhetoric about “stimulus”, “accommodation” and guiding economies toward optimal levels of inflation and full-employment.
The truth of the matter is far different. The combined $15 trillion of central bank balance sheet expansion since 2007 amounts to monetary fraud of epic proportions.
In the “bizarro world” that we are living in today, many companies are trading at prices that are more than 100 times earnings, and some companies are actually trading at prices that are more than 200 times earnings.
Stock prices have become completely and totally disconnected from economic reality. As I discussed the other day, U.S. GDP has only risen at an average yearly rate of just 1.33 percent over the past 10 years, but meanwhile stock prices have been soaring into the stratosphere.
Nobody in their right mind can claim that makes any sense at all. Just like in 2000, and just like in 2008, this absolutely ridiculous stock market bubble will have a horribly tragic ending as well.
Once again, I don’t know what the exact timing will be. Stocks could start crashing tomorrow, but then the Swiss National Bank could swoop in and buy 4 million shares of Apple just like they did during the months of January, February and March earlier this year.
The biggest players in this ongoing charade are the global central banks. If they decide to keep pumping trillions of dollars into global financial markets, they may be able to keep the bubble going for a little while longer.
But if at any point they decide to withdraw their artificial assistance, those that have placed huge bets against the market are going to make absolutely enormous piles of cash.
Michael Snyder is a Republican candidate for Congress in Idaho’s First Congressional District, and you can learn how you can get involved in the campaign on his official website. His new book entitled “Living A Life That Really Matters” is available in paperback and for the Kindle on Amazon.com.
Whenever executives talk about how their company is really like a big ol’ family, beware. They’re usually not referring to how the company is going to protect you no matter what or love you unconditionally. You know, like healthy families would. The motive is rather more likely to be a unidirectional form of sacrifice: Yours.
Because by invoking the image of the family, the valor of doing whatever it takes naturally follows. You’re not just working long nights or skipping vacation to further the bottom line, no, no, you’re doing this for the family. Such a blunt emotional appeal is only needed if someone is trying to make you forget about your rational self interest.
You don’t have to pretend to be a family to be courteous. Or kind. Or protective. All those values can be expressed even better in principles, policies, and, most importantly, actions.
Besides, don’t you already have a family or group of friends who feel like blood relations? The modern company isn’t a street gang filled with orphans trying to make it in the tough, tough world. Trying to supplant the family you likely already have is just another way to attempt to put the needs of the company above the needs of your actual family. That’s a sick ploy.
The best companies aren’t families. They’re supporters of families. Allies of families. There to provide healthy, fulfilling work environments so when workers shut their laptops at a reasonable hour, they’re the best husbands, wives, parents, siblings, and children they can be.
Yup. I am a smoker.
I started the day reading about an epileptic Malaysian academician urging the government to increase the price of cigarettes, citing the same worn-out, lame reasons, as if she couldn’t come up with something new.
I scoured the Internet looking for a bullet to fire back and voila this piece of a gem from a fellow blogger appeared on Twitter!
Very articulate indeed, on a subject rarely written about by anyone thus far!
“When we connect the dots through medicine, science, history, psychology and sociology, the truth emerges plain as day: the all-out global propaganda campaign against tobacco is part of the same push for ‘full-spectrum dominance’ over humanity in all other spheres. The targets and victims of the fake ‘War on Terror’ are the same targets of the war against tobacco. We are expected to believe that our wonderful ‘leaders’ encourage us to eat poisonous GMO food yet are oh, so concerned about the alleged health effects from smoking? Give us a break!”
Watch Bertrand Russell on smoking…and how it saved his life!
So, it came to a point in time when I decided, as a researcher & a smoker, to find out about the horrors of tobacco. I figured I would scare the crap out of myself with the cold, hard facts about my awful vice, and have no other reason but to give up & live happily ever after.
I secretly hoped there was a conspiracy at play, and that the true amount of damage the evil weed was inflicting was, somehow, not as bad as I’d been told my whole life. What I ended up finding was a rabbit hole of hidden studies, skewed statistics and outright fraud, leaving me with the resolute determination not to give up any time soon.
Let me clarify this from the beginning, I’m not talking about your branded packet cigarette here, I am talking about rolling tobacco, as pure as you can…
By Joe Jarvis
Did you know that doctors and scientists can be corrupt or simply wrong?
People seem to give doctors and scientists the benefit of the doubt when it comes to their findings and opinions on things like global warming, genetically modified organisms, pesticides, chemicals, and how unhealthy certain foods and habits are.
But like any other humans, scientists and doctors are, well, human. They can be misguided, confused, corrupt, and stubbornly opinionated.
According to Natural News, as many as 20,000 doctors once recommended smoking cigarettes to aid digestion. In the 1940s, Camel ran an ad campaign that claimed “More Doctors Smoke Camels.” They even handed out packs of Camels to doctors at a medical convention and then polled the doctors on their way out the door, asking what their favorite cigarette brand was, or what kind they had in their pocket at that moment.
Unfortunately, money has corrupted industries like Big Pharma, which pays doctors and scientists to take a position and prescribe particular drugs and treatments. Many peer-reviewed studies have predetermined outcomes, finding the facts to fit their narrative. Publishing in scientific and medical journals is often more a marketing ploy than proof of the actual findings.
Sugar was long considered fine to dump down children’s throats because in the 1960’s a handful of scientists were paid off.
The documents show that a trade group called the Sugar Research Foundation, known today as the Sugar Association, paid three Harvard scientists the equivalent of about $50,000 in today’s dollars to publish a 1967 review of research on sugar, fat and heart disease. The studies used in the review were handpicked by the sugar group, and the article, which was published in the prestigious New England Journal of Medicine, minimized the link between sugar and heart health and cast aspersions on the role of saturated fat.
But even absent actual corruption, basic mistakes are being made in scientific conclusions.
Correlation is not causation. This is a basic foundational tenet of science. Two things may be very strongly correlated, but that does not prove that one causes the other.
According to Reason Magazine:
When it comes to separating the wheat from the chaff of studies that are mediocre or just plain bad, Albert Einstein College of Medicine epidemiologist Geoffrey Kabat is a national treasure. “Most research findings are false or exaggerated, and the more dramatic the result, the less likely it is to be true,” he declares in his excellent new book Getting Risk Right.
Kabat discusses how “the dose makes the poison,” in that saying something doubles your risk of a disease could actually be statistically irrelevant.
For example, you may have heard that eating bacon increases the risk of colorectal cancer. Technically, this is true. If you eat two slices of bacon every day of your life the risk of colorectal cancer increases from 5 to 6 percent. That is not exactly the same risk as smoking cigarettes, which increases the risk of lung cancer by 20 to 50 times over.
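The gap between relative and absolute risk in the bacon example can be made concrete with a little arithmetic; the 5 and 6 percent figures come from the passage above, while the helper names are mine:

```python
def relative_risk(baseline, exposed):
    """Risk in the exposed group as a multiple of the baseline risk."""
    return exposed / baseline

def absolute_risk_increase(baseline, exposed):
    """Plain difference in risk (same units as the inputs)."""
    return exposed - baseline

# Lifetime colorectal cancer risk: 5% baseline vs 6% for daily bacon eaters.
rr = relative_risk(0.05, 0.06)
ari = absolute_risk_increase(0.05, 0.06)

# A headline can truthfully say "20% higher risk" (relative), while the
# absolute change is a single percentage point.
print(f"relative risk: {rr:.2f}x")
print(f"absolute increase: {ari * 100:.0f} percentage point")
```

For comparison, a 20- to 50-fold relative risk, as with smoking and lung cancer, dwarfs a 1.2-fold one; quoting only the relative figure hides that difference in scale.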
And then, of course, you must consider the editorial bias. “You’re Risking Your Life Eating Bacon” is more likely to get a click than “Everyday Bacon Eating Increases Cancer Risk by 1%”.
Kabat suggests that the precautionary principle (“better safe than sorry”) is largely an ideological ploy to alarm the public into supporting advocates’ policy preferences. He also decries “the simplistic notion that ‘consensus among scientists’ is always correct.” He notes that scientific consensus once held that ulcers were caused by spicy foods and stress instead of bacteria…
Here’s the thing: I like to be healthy, and I personally often follow the better-safe-than-sorry principle. But it is a huge miscarriage of authority to push this view on others through fear. It is the idea that “I know better than these silly peasants” that unfortunately seems to permeate the scientific and medical communities.
Are GMOs, pesticides, and chemicals like BPA really as bad as they say? I personally avoid them, but I honestly haven’t done enough of my own research to know for sure.
Salt and fat have gone back and forth between being considered healthy, then unhealthy, then healthy again by experts.
People look to doctors and scientists for guidance and too often are brainwashed with those individuals’ own biases and unsubstantiated opinions.
If an expert cannot or will not answer questions about their work, that is a red flag. When people talk about consensus among experts instead of the actual facts, that is another red flag.
There have been too many times in recent history when the experts, the scientists, and the doctors were willfully or mistakenly wrong.
Sometimes, yes, we must defer to experts, since it is simply impossible to research it all on your own. But that doesn’t mean we should forgo the due diligence in critical thinking that goes along with it.
Fear sells. We are used to it in the media but don’t usually expect it from doctors and scientists. But they are humans too, and just as likely to push their agenda instead of the truth.
AI: The destroyer of worlds?
What if mankind itself, in all probability, was created / started off as AI and ‘evolved’ to the present high state of being called ‘human’, reaching the capability of creating its very own version of AI?
…and now humans are worried at the prospect of being disposed of by their very own creation?
Having this very thought is disconcerting, if not frightening, and it sure beats the hell out of me 😅
As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It’s perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, “Matrix”-like, as some sort of human battery.
And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become “the destroyer of worlds,” as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?
I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn’t avoid asking: As an AI expert, what do I fear about artificial intelligence?
The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in “2001: A Space Odyssey,” is a good example of a system that fails because of unintended consequences. In many complex systems – the RMS Titanic, NASA’s space shuttle, the Chernobyl nuclear power plant – engineers layer many different components together. The designers may have known well how each element worked individually, but didn’t know enough about how they all worked together.
That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster – sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia – a set of relatively small failures combined together to create a catastrophe.
I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.
Systems like IBM’s Watson and Google’s AlphaGo equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on “Jeopardy!” or fail to defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.
But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.
I’m not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures’ performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
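The evaluate-select-reproduce loop described above can be sketched in a few lines. Everything here (genomes as plain weight vectors, a toy “task” of matching a target behavior, the population size and mutation rate) is an illustrative assumption, not the author’s actual setup:

```python
import random

TARGET = [0.2, -0.5, 0.9, 0.1]  # stand-in for "solving the task"

def fitness(genome):
    # Higher is better: negative squared distance to the target behavior.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Small Gaussian perturbation of each "weight".
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.uniform(-1, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 5]            # selection
        population = [mutate(random.choice(survivors))     # reproduction
                      for _ in range(pop_size)]
        population[: len(survivors)] = survivors           # elitism
    return max(population, key=fitness)

best = evolve()
print("best genome:", [round(g, 2) for g in best])
```

Over the generations, genomes that handle the task badly are discarded, which is the sense in which errors get found and eliminated in simulation before anything reaches the real world.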
Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we’ll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
Another possibility that’s farther down the line is using evolution to influence the ethics of artificial intelligence systems. It’s likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution – and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
While neuroevolution might reduce the likelihood of unintended consequences, it doesn’t prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.
Being a scientist doesn’t absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of. But we do need to decide what the desired outcome of advanced AI is.
One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.
Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected – and get surgery performed by a tireless robot with a perfectly steady “hand.” Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.
Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.
In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self – together with the rest of humanity – may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.
There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?
The key question in this scenario is: Why should a superintelligence keep us around?
I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind’s existence in it probably doesn’t matter at all.
But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.
Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn’t just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.