DooKey
[H]F Junkie
- Joined: Apr 25, 2001
- Messages: 12,447
Back in May of this year Google announced AutoML, an AI that is able to design better AIs. AutoML has now built a new AI called NASNet, an object recognition system that outperforms its human-designed counterparts, and the methods used to develop it point toward the future of advanced AI. However, reading about this kind of technology is starting to worry me. Do we really want AI to improve itself so much that it decides it doesn't need us anymore?
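For anyone curious what "an AI designing AI" actually looks like in practice, the core idea is neural architecture search: one program proposes candidate network designs, each candidate gets scored, and the best design wins. Google's version used a reinforcement-learning controller, but here's a stripped-down toy sketch of the same loop using plain random search. The search space, scoring function, and names are all made up for illustration; the real evaluation step would train each candidate network and measure validation accuracy.

```python
import random

# Toy neural architecture search (NAS) loop, the general idea behind
# AutoML/NASNet: propose candidate designs, score them, keep the best.
# Everything below is illustrative, not Google's actual search space.

SEARCH_SPACE = {
    "num_layers": [2, 4, 6, 8],
    "filters": [32, 64, 128],
    "kernel_size": [3, 5],
    "skip_connections": [True, False],
}

def sample_architecture(rng: random.Random) -> dict:
    """Propose a candidate architecture by sampling each design choice."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch: dict, rng: random.Random) -> float:
    """Placeholder for training the candidate and measuring validation
    accuracy; here it's just a noisy heuristic so the script runs."""
    score = 0.5
    score += 0.02 * arch["num_layers"]
    score += 0.0005 * arch["filters"]
    score += 0.05 if arch["skip_connections"] else 0.0
    return min(score + rng.uniform(-0.05, 0.05), 1.0)

def search(trials: int = 20, seed: int = 0):
    """Random-search NAS: keep the best-scoring candidate seen so far."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch, rng)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = search()
    print(f"best architecture: {arch} (score {score:.3f})")
```

The unsettling part is that the loop itself is simple; the power comes from throwing enormous compute at the evaluation step, which is exactly what Google can do.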
Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?