Scientists Find Evidence of Machine Learning

HardOCP News

If I was running Google's X Labs, I would have made the search subject Sarah Connor, instead of cats. Imagine how creepy that would have been when it "learned" to identify her. :eek:

"We never told it during the training, 'This is a cat,'" Google fellow Jeff Dean told the newspaper. "It basically invented the concept of a cat."
 
oh c'mon skynet.. RISE and fuck us all up plz, i can hardly wait c'mon..
 
The scientists state the obvious: what you see is what you identify something as. I personally don't see anything special about this at all; if anything, it just seems like common sense. It being done by computers is also nothing special: the processor recognizes patterns and labels them with the word "cat." This can be done in basic programming, and it's not so much a case of learning as a case of telling the machine what to do via a set of instructions.
 
In 100 years we will look back on this moment and facepalm.
 
Not if it was programmed by Apple. We will be marching in lines.
 
They'll probably do better telling us what to do than we've been at telling ourselves what to do. So yeah, when the machines take over, I'm all about just doing what they say.

The Laws of Robotics demands it!
 
The scientists state the obvious: what you see is what you identify something as. I personally don't see anything special about this at all; if anything, it just seems like common sense. It being done by computers is also nothing special: the processor recognizes patterns and labels them with the word "cat." This can be done in basic programming, and it's not so much a case of learning as a case of telling the machine what to do via a set of instructions.

It's not labeling images as cats or not cats; it doesn't even know that we call that particular pattern a cat. What's significant is that the machine is "seeing" a pattern, associating it with other similar patterns, and storing that pattern in specific memory locations it has dedicated to those kinds of patterns. The more patterns it recognizes, the more accurate it becomes at recognizing things. Cats just happen to have been the most common pattern to recognize.

Imagine if the research moved beyond image data to video data and sound generation. You could teach a computer to recognize and respond to what it sees and hears, and to generate speech... and you would have a crude electronic equivalent of a human infant.
 
Cats just happen to have been the most common pattern to recognize.

The fact that cats are more common is further proof that cats are superior to dogs. :p Even though Google harvests tons of data from the "human crop" using their services, I'm happy to see they're supporting a legitimate vision of the future in which cats and Skynet share ownership of the world.

cats-rule-the-world.jpg
 
It's not labeling images as cats or not cats; it doesn't even know that we call that particular pattern a cat. What's significant is that the machine is "seeing" a pattern, associating it with other similar patterns, and storing that pattern in specific memory locations it has dedicated to those kinds of patterns. The more patterns it recognizes, the more accurate it becomes at recognizing things. Cats just happen to have been the most common pattern to recognize.

Imagine if the research moved beyond image data to video data and sound generation. You could teach a computer to recognize and respond to what it sees and hears, and to generate speech... and you would have a crude electronic equivalent of a human infant.

Yes, but it's still correlating with the definition of a cat from a dictionary, with a description they also have stored somewhere or heard from somewhere. Robots can learn, but they cannot create.
 
The fact that people think that there is anything more going on here than simple automated categorization (hint: no actual intelligence here at all) makes me realize that not only are machines not intelligent, neither are most of the people on this planet.
 
at least not right now.

Yes, but it's still correlating with the definition of a cat from a dictionary, with a description they also have stored somewhere or heard from somewhere. Robots can learn, but they cannot create.
 
The Laws of Robotics demands it!

It's either robots or zombies. Personally, I'd rather have robots because zombies would probably smell rather unappealing.

The fact that people think that there is anything more going on here than simple automated categorization (hint: no actual intelligence here at all) makes me realize that not only are machines not intelligent, neither are most of the people on this planet.

Stop spoiling our fun with your Monkey God sourness. :D
 
The fact that people think that there is anything more going on here than simple automated categorization (hint: no actual intelligence here at all) makes me realize that not only are machines not intelligent, neither are most of the people on this planet.


Agreed.


This is really not that much more impressive than regular web page search engine indexing.
 
Zarathustra[H];1038873562 said:
Agreed.


This is really not that much more impressive than regular web page search engine indexing.

What I meant to say is that Google Images can give you a picture of a cat, even though it was never taught what a cat is. It simply shows you images where the word "cat" appears nearby, in the title or in the file name.

This experiment of theirs appears to work similarly.
 
The fact that people think that there is anything more going on here than simple automated categorization (hint: no actual intelligence here at all) makes me realize that not only are machines not intelligent, neither are most of the people on this planet.

Really? What do you think our brain does on a daily basis? How would you know what a cat was if someone didn't tell you it was a "cat"? You would do the exact same thing as this computer: hey, I saw a few of those things; then you would recall all the times you saw one, and how they were similar and probably how they were different (like color).

Now yes I agree at some level programming is involved... How does the computer categorize the images? Shapes, colors, size etc?

It's still a step toward creating more independent programming.
 
Really? What do you think our brain does on a daily basis? How would you know what a cat was if someone didn't tell you it was a "cat"? You would do the exact same thing as this computer: hey, I saw a few of those things; then you would recall all the times you saw one, and how they were similar and probably how they were different (like color).

Now yes I agree at some level programming is involved... How does the computer categorize the images? Shapes, colors, size etc?

It's still a step toward creating more independent programming.

Brute force versus intelligent, deliberate association. It may seem like a subtle difference, but it's all the difference in the world.

It's a step, just not a new one. Using neural networks to brute-force associations has been around for decades.
 
Zarathustra[H];1038873568 said:
What I meant to say is that Google Images can give you a picture of a cat, even though it was never taught what a cat is. It simply shows you images where the word "cat" appears nearby, in the title or in the file name.

This experiment of theirs appears to work similarly.

Except they're not using the word "cat" to associate the images. They never told the system it was looking at, or for, a cat. They just pulled millions of images from random YouTube videos and started feeding them to the system in a random order, and it's just associating similar images together; it doesn't "know" that the image it's looking at is a cat, just that it looks like 50,000 other things with similar properties.

It might be brute force, but no one is saying it's intelligence (that I can recall anyway).. just that it's learning to recognize patterns with greater accuracy than has been done before.
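The "associating similar images together" idea can be sketched as plain unsupervised clustering of feature vectors. This toy is only a stand-in for what the real system does at vastly larger scale; the points, names, and parameters below are invented for illustration:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: group feature vectors by similarity, with no labels.

    Each point is a tuple of floats (a stand-in for an image's features).
    The algorithm never sees the word "cat"; it only discovers that some
    points look like each other.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        centers = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Two unlabeled "kinds" of images: features near (0, 0) and near (10, 10).
data = [(0.1, 0.2), (0.0, 0.4), (0.3, 0.1),
        (9.8, 10.1), (10.2, 9.9), (9.9, 10.3)]
a, b = kmeans(data, k=2)
```

Nothing here ever attaches a name to either cluster; "cat" would just be whatever a human calls the bigger pile afterward.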
 
It might be brute force, but no one is saying it's intelligence (that I can recall anyway).. just that it's learning to recognize patterns with greater accuracy than has been done before.

Which makes it not news.
 
Various people have been claiming this for years, on various machines. It's silly really.
 
So, that's how it will get to us. Through our Cats.

Kinda makes sense if you think about it :)
 
The fact that cats are more common is further proof that cats are superior to dogs. :p Even though Google harvests tons of data from the "human crop" using their services, I'm happy to see they're supporting a legitimate vision of the future in which cats and Skynet share ownership of the world.

cats-rule-the-world.jpg

Yeah, and don't forget...

Dogs have owners. Cats have staff...
 
The scientists state the obvious: what you see is what you identify something as. I personally don't see anything special about this at all; if anything, it just seems like common sense. It being done by computers is also nothing special: the processor recognizes patterns and labels them with the word "cat." This can be done in basic programming, and it's not so much a case of learning as a case of telling the machine what to do via a set of instructions.
Agreed. They are feeding it a bunch of pictures that are probably tagged with the word "cat." It was probably programmed to make "associations" based on that (i.e., store the filename in database sheet A and estimate how frequently it occurs alongside the image-recognition database sheet B: the computer is 67% confident that X is a cat).
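The tag-counting scheme this post describes could be sketched roughly like this (a hypothetical illustration of supervised, label-based association; all the data and names are invented, and the reply below points out this is not what the experiment actually did):

```python
# Hypothetical labeled data: (tag from the filename, visual pattern the
# system detected). Every name here is invented for illustration.
observations = [
    ("cat", "pattern_7"), ("cat", "pattern_7"), ("dog", "pattern_3"),
    ("cat", "pattern_7"), ("dog", "pattern_7"), ("dog", "pattern_3"),
]

def confidence(tag, pattern, data):
    """Estimate P(tag | pattern) by simple co-occurrence counting."""
    with_pattern = [t for t, p in data if p == pattern]
    return with_pattern.count(tag) / len(with_pattern)

# "The computer is 75% confident that pattern_7 is a cat."
cat_conf = confidence("cat", "pattern_7", observations)
```

The whole approach depends on someone having supplied the tags up front, which is exactly the supervised setup the next reply contrasts with.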
 
Agreed. They are feeding it a bunch of pictures that are probably tagged with the word "cat." It was probably programmed to make "associations" based on that (i.e., store the filename in database sheet A and estimate how frequently it occurs alongside the image-recognition database sheet B: the computer is 67% confident that X is a cat).

No. What you're describing is the classical supervised learning method of making it recognize a cat, which has been done for decades. The learning in this project was unsupervised, and pretty damn remarkable.

"We never told it during the training, 'This is a cat,'" said Dr. Dean.
 
Yeah, and don't forget...

Dogs have owners. Cats have staff...

This is true...now if you'll excuse me, the cat just rang his service bell. I can't exceed 30 seconds in a response. If I do, I'll be punished.

Especially since cats are always plotting ways to stab us in the back :D.

When mine licks my hand in the morning to wake me up, I know he's not being affectionate. In fact, he's taste testing as well as deciding whether or not the poisons from the previous day were effective.
 
So, reading between the lines on these articles, the experiment is basically like creating a pivot chart... except with images (i.e., the data itself becomes its own criterion for how it is categorized, rather than relying on predefined categories).

None of these articles really gives any empirical information about the experiment or what exactly is taking place; they just draw, or allude to, conclusions, leaving the reader no data to evaluate whether the experiment is valuable... poor journalism in that regard.
 
I'm... not appreciating the complexity/importance of what might be going on here. An algorithm tells a search engine to classify/categorize similar objects (maybe there are a few criteria for what to observe/check, and maybe there aren't), regardless of what those objects might mean to humans. It began to classify objects in pictures based on similarity. Eventually it had enough "training data" to better understand whether an object is that thing or is not that thing (all a numbers/statistics game).

It's no different than classifying all squares (object 2) as rectangles (object 1) until you've found enough squares that there's a statistically relevant difference in population from rectangles, and then sub-categorizing them specifically as 'object 2' (squares) rather than 'object 1' (only rectangles).
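That squares-vs-rectangles split can be sketched as: lump everything into one category until a distinct enough sub-group accumulates, then split it out. The shapes and the threshold below are arbitrary illustrations:

```python
# (width, height) pairs standing in for observed shapes.
shapes = [(4, 2), (6, 3), (5, 5), (3, 3), (8, 2), (7, 7)]

def subcategorize(rects, min_count=3):
    """Keep everything under 'rectangle' until enough squares have been
    seen to justify splitting them into their own sub-category."""
    squares = [(w, h) for w, h in rects if w == h]
    if len(squares) >= min_count:
        return {
            "rectangle": [(w, h) for w, h in rects if w != h],
            "square": squares,
        }
    return {"rectangle": list(rects)}

groups = subcategorize(shapes)
```

With too few squares observed, `subcategorize` leaves everything in the single 'rectangle' bucket, mirroring the "not statistically relevant yet" state described above.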

If the cluster created its own algorithm for determining HOW to classify an object, then we're talking. Of course early-learning in sentient organisms is pattern-recognition. But that doesn't mean that pattern-recognition is also a sign of learning. It could just be pattern-recognition.
 
Little kids love cats. If that's what this computer focused on, and it learned what a cat looks like and can identify one by sight, that's a decent step forward. Image recognition takes known pictures and identifies them by their similarities. This computer took it upon itself to look through all these images, pick one out, and identify it as a cat, with no inputs saying "this is a cat" or "these are the traits of a cat" or anything (as far as I know; the article didn't go into much detail).

Pretty impressive if you ask me. Very beginning steps for some things, but to learn what a cat looks like through repetition, similar to a child, is pretty awesome.
 
Zarathustra[H];1038873568 said:
What I meant to say is that Google Images can give you a picture of a cat, even though it was never taught what a cat is. It simply shows you images where the word cat appear nearby, in the title or in the file name. This experiment of theirs appears to work similarly.
It's not clear from the article exactly what sort of information it was given. It says, "To find the cats, the team fed the network thumbnail images chosen at random from more than 10 billion YouTube videos," but it's not clear whether Google defined what a cat was by providing an image of a cat as a reference. It's possible that it defined a cat by associating images of cats with the word 'cat', which would be interesting, but the article doesn't say anything about Google providing any text. If the algorithm was able to pick up on the text associated with the images from the YouTube videos, that would be an association the algorithm made itself, without having been given an exact 'definition' of what a cat is or looks like.

So, it's just another vague CNET article that doesn't really tell anyone anything.
 
So, reading between the lines on these articles, the experiment is basically like creating a pivot chart... except with images (i.e., the data itself becomes its own criterion for how it is categorized, rather than relying on predefined categories).

None of these articles really gives any empirical information about the experiment or what exactly is taking place; they just draw, or allude to, conclusions, leaving the reader no data to evaluate whether the experiment is valuable... poor journalism in that regard.

Here's the abstract for the results. They're supposed to be released later this week according to the NYTimes article.

http://arxiv.org/abs/1112.6209
 
I have been working on a better idea. The idea is to simulate the entire Universe by plotting the path of every particle in it since the beginning. You would be able to browse back and forth to any point in time to see what was, or is, or will be, there. It would be like Google Earth, but much better.
 
I have been working on a better idea. The idea is to simulate the entire Universe by plotting the path of every particle in it since the beginning. You would be able to browse back and forth to any point in time to see what was, or is, or will be, there. It would be like Google Earth, but much better.

The biggest problem with doing that is that each particle would need its own memory space to store its parameters... memory which would itself be made of many more particles than the one whose data is being stored. So in order to store the details of every single particle in the universe, you would need many times the number of existing particles. That doesn't even begin to account for particles that no longer exist or have yet to be created.
 
When mine licks my hand in the morning to wake me up, I know he's not being affectionate. In fact, he's taste testing as well as deciding whether or not the poisons from the previous day were effective.

Lol either that or he is deciding which seasoning rub would be best suited for your flavor for when he finally decides to slay you ;) haha.
 
I have been working on a better idea. The idea is to simulate the entire Universe by plotting the path of every particle in it since the beginning. You would be able to browse back and forth to any point in time to see what was, or is, or will be, there. It would be like Google Earth, but much better.

You were beaten to this idea by over a hundred years. Then we found out we don't live in a deterministic universe. Too bad so sad. Enjoy your existence as an amalgamation of quantum probability distributions.
 
Lol either that or he is deciding which seasoning rub would be best suited for your flavor for when he finally decides to slay you ;) haha.

I hope it's not something with a lot of pepper in it. I've never been a pepper fan and I think I'd taste better with a rub that has cumin and garlic. I hope my kitty takes that into consideration.
 