Machine Learning versus Machine Intelligence
John M. Boyer | Tags: ibmwatson, ai, cognitivecomputing, analytics, ibm
Machine learning today is every bit as calculated, as simulated, as machine intelligence is. It is easier to use machine intelligence to highlight how much greater human cognition is, which is why I’ve been using a machine intelligence algorithm over the last several entries. However, the conclusion drawn so far is that, while machine intelligence is only simulated, it is still quite effective and valuable as an aid to human insight and decision making. Machine learning offers another leap forward in the effectiveness, and hence the value, of machine intelligence, so let’s see what it is.

Machine learning occurs when the machine intelligence is developed or adapted in response to data from the domain in which it operates. The James Blog entry only does this degenerately, at a very coarse-grained level, so it doesn’t really count except as a way to begin giving you the idea. The James Blog entry plays a game with you, and if he loses, he adapts by increasing his lookahead level so that his minimax method will play more effectively against you next time. In some sense, he has learned that you are a better player. However, this is a single integer of configurability, with only a few possible settings, that controls only one aspect of the machine intelligence algorithm’s operation. To be considered machine learning, a method must typically have a more profound impact on the operation of the algorithm, with much more adaptation and configurability based on many instances of input data. An example will clarify the finer-grained nature of machine learning.
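To make the contrast concrete, here is a minimal sketch of what that coarse-grained adaptation amounts to. The class and method names are hypothetical (they are not the James Blog entry’s actual code): the only thing that changes is one integer, and it only changes after a loss.

```python
# A sketch of degenerate, coarse-grained "learning": the only thing that adapts
# is one integer -- the minimax lookahead depth -- and only when a game is lost.

class DepthAdaptingPlayer:
    """Hypothetical wrapper around a minimax game player."""

    def __init__(self, depth=2, max_depth=6):
        self.depth = depth          # current lookahead level used by minimax
        self.max_depth = max_depth  # practical cap on how deep the search may go

    def record_result(self, lost):
        """After each game, search deeper next time if the opponent won."""
        if lost and self.depth < self.max_depth:
            self.depth += 1  # in some sense, "learn" that the opponent is better


if __name__ == "__main__":
    player = DepthAdaptingPlayer()
    for lost in (True, True, False):  # two losses, then a win
        player.record_result(lost)
    print(player.depth)  # 4 -- the depth grew only after the losses
```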

The easiest example I can think of is a predictive analytic algorithm called linear regression. Let’s say you’d like to be able to predict, or approximate, the purchase price of a person’s new car based on their age. Perhaps you want to do this so that you can figure out which automobile advertisements are most appropriate to show the person. Now, as soon as you hear this example, your human cognition kicks in and you rattle off several other variables likely to affect the amount of money a person is willing to spend on a car, such as their income level, debt level, nuclear family factors, and so on. This analytic technique is typically called multiple linear regression (MLR) exactly because we humans most often dream up many more than two variables that we want to consider simultaneously. Like most machine learning techniques, MLR does not learn of new factors to consider by itself. It only considers those factors that a human has programmed it to consider. When they are well chosen, additional variables typically do make an MLR model more effective, but for the purpose of discussing the concept of machine learning, the simple two-variable example suffices, since your mind will have no problem generalizing the concept.

Suppose you have records of many prior car purchases, including a wide and nicely distributed selection of car prices and buyer ages. This is referred to as “training data”. If you plotted the training data, it might look something like the blue points in the image below. Let purchase price be on the vertical Y-axis since it is the “dependent” variable that we want to predict, and let age be on the X-axis since it is a predictor, or “independent”, variable. MLR uses a standard formula to compute a “line of best fit” through the given data points, like the red line shown in the same image.

[Image: scatter plot of the training data (blue points) with the line of best fit (red).]

A line has a formula that looks like this: Y=C1X1+C0, where C1 is a constant that governs the slant (slope) of the line, and C0 is a constant that governs how high or low the line sits (C0 happens to be the Y-value at which the line meets the Y-axis, and the line slopes up or down from there). If we had more variables, then MLR would just compute more constants to go with each of them. For example, if we wanted to use two predictor variables for a dependent variable, then we’d be using MLR to create a line of the form Y=C2X2+C1X1+C0.
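Written out generally, with the same naming convention as above, a model with n predictor variables is just a longer version of the same line:

```latex
Y = C_n X_n + \dots + C_2 X_2 + C_1 X_1 + C_0
```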

Technically, MLR computes the constants like C1 and C0 of the line Y=C1X1+C0 in such a way that the line minimizes the sum of the squares of the vertical (Y) distances between each data point and the line. For each point, we take its distance from the line as an amount of “error” in the prediction. We square it because that gets rid of the negative sign (and, less importantly, magnifies the error resulting from being further from the line). We sum the squares of the errors to get a total measure of the error produced by the line, and the line is computed so as to minimize that total error.
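As an illustration (not the original post’s code or data), here is a minimal sketch of that computation for the two-variable car example, using NumPy and made-up training points. The closed-form expressions below are the standard least-squares estimates for a single predictor; any other choice of C1 and C0 would give a larger sum of squared errors on the same data.

```python
# A sketch of fitting Y = C1*X1 + C0 by ordinary least squares on made-up data.
import numpy as np

# Hypothetical training data: buyer ages (X1) and purchase prices (Y) in dollars.
ages   = np.array([22, 28, 35, 41, 47, 53, 60, 66], dtype=float)
prices = np.array([14000, 18500, 23000, 27500, 30000, 34500, 36000, 39500], dtype=float)

# Closed-form least-squares estimates for a single predictor:
#   C1 = cov(X1, Y) / var(X1)
#   C0 = mean(Y) - C1 * mean(X1)
# These are exactly the constants that minimize the sum of squared vertical errors.
c1 = np.cov(ages, prices, bias=True)[0, 1] / np.var(ages)
c0 = prices.mean() - c1 * ages.mean()

# The total error the fit minimizes: sum of squared vertical distances to the line.
residuals = prices - (c1 * ages + c0)
sse = np.sum(residuals ** 2)

print(f"Learned constants: C1 = {c1:.1f}, C0 = {c0:.1f} (sum of squared errors = {sse:,.0f})")
```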

Once the constants have been computed, it is a trivial matter to use the MLR model as a predictor. You simply plug the known values of the predictor variables into the formula to compute the predicted Y-value. In the car-buying example, X1 is the age of a potential buyer, so you multiply that by the C1 constant and then add C0 to obtain the Y-value, which is the predicted purchase price of the car.
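The prediction step is just that plug-in. A minimal sketch, assuming the constants have already been learned (the numbers here are made up for illustration):

```python
# Using already-learned (hypothetical) constants for the line Y = C1*X1 + C0.
c1, c0 = 550.0, 4000.0    # made-up learned slope and intercept

new_age = 30              # age of a potential buyer (X1)
predicted_price = c1 * new_age + c0
print(f"Predicted purchase price for a {new_age}-year-old buyer: ${predicted_price:,.0f}")
```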

In this way, hopefully you can see that MLR “learns” the values of constants like C1 and C0 from the given data points. Furthermore, the actual algorithm that produces the machine intelligence only computes the result of a simple linear equation, so hopefully you can also see that the predictive power comes mainly from the constants, which were “learned” from the data. In the case of the minimax method, most of the machine intelligence came from the algorithm, but with MLR, as with most machine learning, the machine intelligence is for the most part an emergent property of the training data.

Lastly, it’s worth noting that there are a lot of “best practices” around using MLR. However, these are orthogonal to the topic of this post. Suffice it to say that, just as the minimax method has a very limited domain in which it is effective as a machine intelligence, MLR also has a limited domain. For example, the predictor variables (the X’s) do need to be linearly related to the dependent variable in reality. However, within the limited domain of its linearly related data, MLR is quite effective and an excellent example of a simple machine learning technique that produces machine intelligence within that domain.
