A 'consciousness conductor' synchronizes and connects mouse brain areas

Jagger100

Supreme [H]ardness
Joined
Oct 31, 2004
Messages
7,631
That's inaccurate. "AI" as we use it (commercially) is a finite set of knowledge that can be interpreted and added upon, similar to any programming language. There are also many forms of AI, some straying far from where any meaningful results may be gained and perhaps closer to artificial consciousness. We also aren't getting dumber by any means. A smart person can do some research and get themselves to the forefront of "AI" in its implementation or theory. They have a good idea and the field gets pushed forward. Trying to define the force behind developing AI as human skill is almost individualistic. Now, the point where actual AI can be used to push the field will be a cool point indeed, but that is fairly different from the "AI" used today. I would also like to point out that the capability of "AI" as we use it is determined by computational ability, which is still absolutely on the uptrend.

Imagine the computational ability available in the future if quantum computing reaches a viable point. Or can we never get to that point because human skill is on the downtrend?
I was pondering the Elon Musk doomsday scenario, not the actual state of the art.
 

Blakestr

[H]ard|Gawd
Joined
Aug 11, 2004
Messages
1,803
AI is a wall. The better it gets, the dumber we get.

Then the dumber AI gets.

It depends if we can get AI smart enough to improve itself before we get too dumb. I don't think we can get there, we'll get too dumb first. So we're safe from an AI take over.

The reason we don't see a ton of extraterrestrial life isn't because they get wiped out by their own technology, they just get perpetually trapped in an "idiocracy" phase.

I think this is important to remember, but I also think you aren't allowing for the outliers that push the boundaries. We are learning more about the brain, and while our concept of "free will" is starting to erode, we understand much more about what "pushes OUR buttons." An AI won't fatigue or want to change careers; it isn't motivated by neurotransmitters triggering various emotional responses, or simply, "eh, I'd rather watch Netflix." ... I think the only things missing would be creativity and intuition, but then, given enough computing power, you can brute-force creativity and intuition if you have a computer that can simulate billions of potentialities and a human to guide those things.

Example - Imagine how an AI would invent a helicopter, having zero reference for a helicopter. So you feed in the total knowledge of physics and wind velocity, and you also maybe give it data on various flying birds and insects, and also planes. Of course include marine life, since water, like air, is a fluid (but nothing about a helicopter). The computer is going to come up with billions of designs, looking at the various fins of, say, fish, calculating those different fin shapes' effects on wind resistance and lift... eventually you will get something better than a helicopter, but the key will be learning how to encapsulate what the computer "values" as optimal.
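That kind of brute-force sweep over a design space can be sketched as plain random search: sample candidate parameters, score each one with a fitness function, keep the best. Everything below is illustrative; `lift_score` is a made-up stand-in for the physics simulation the post imagines, not a real aerodynamic model.

```python
import random

def lift_score(span, chord, twist):
    # Hypothetical stand-in for the physics simulation: rewards a mid-range
    # aspect ratio and moderate blade twist. A real system would run CFD here.
    aspect = span / chord
    return -((aspect - 7.0) ** 2) - ((twist - 12.0) ** 2) / 50.0

def random_search(n_designs=100_000, seed=0):
    # Sample designs at random, score each one, keep the best seen so far.
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_designs):
        design = (rng.uniform(1, 10),    # span in metres
                  rng.uniform(0.1, 2),   # chord in metres
                  rng.uniform(0, 30))    # twist in degrees
        score = lift_score(*design)
        if score > best_score:
            best, best_score = design, score
    return best, best_score

best, score = random_search()
```

The "key" the post mentions lives entirely in `lift_score`: whatever that function rewards is what the search will converge on, good or bad.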

Remember, the main benefit is iteration time. Yeah, the computer might take over the world during your coffee break, but it will also go down an entire line of thinking, and you'll come back to your desk to 10,000 new engineering sketches and simulations.
 

RamboZombie

Weaksauce
Joined
Jul 11, 2018
Messages
123
Wonder what the computer comes up with once we have supplied it with a paper plane, a rubber band, and a used Playboy magazine. My thesis is that it won't do it well, because once we are there, we will be too stupid to use it.
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
UltraTaco



"This chapter studies one of AI’s most challenging topics—the possibility of self-consciousness and self-aware robots (or AI systems). Since Turing Test to the latest AI development, we have never ceased to explore human intelligence, and the creation of intelligent systems and robotics. With the advancement of technology, almost all core AI functionalities including machine learning, computer vision, data mining, natural language processing, and agent ontology have been examined immensely with significant success and applied to our daily activities. Many AI scientists believe it is the time to explore this ultimate question—robot consciousness. This chapter begins with consciousness concepts and machine consciousness in neuroscience disciplines brief literature review to current AI and machine learning R&D. Next, we explore machine consciousness typical approach—the Good Old-Fashioned Artificial Consciousness (GOFAC) which consists of five major components: (1) Functionalism; (2) Information integration; (3) Embodiment; (4) Enaction, and (5) Cognitive mechanisms. Lastly, we conclude AI and machine consciousness study, outstanding issues; problems to approach in order to design and build a true self-consciousness and self-awareness robot."

https://books.google.com/books?id=N...lf-consciousness&pg=PA347#v=onepage&q&f=false
 

cdabc123

2[H]4U
Joined
Jun 21, 2016
Messages
2,883
UltraTaco

"Is consciousness a continuous stream of percepts or is it discrete, occurring only at certain moments in time?"

https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(20)30170-4

well you see that is a tricky question to answer without a more exact definition of consciousness. By most definitions I have see there is little to no time element involved meaning it is nether a moment of experience nor a collection of such. I would say its similar to a thought or experience where its existence lies in the past and you are recalling and possibly building upon the manifestation of consciousness.

If they are addressing a single moment of thought where you are recalling or altering a conscious ideology I could see that being defined as a single instance in time but I do not believe that is the entirety of what makes up consciousness.

I disagree with them trying to tie in consciousness to perceptions and using visual dylays to make their point, as firstmost I belive that is closer to qualia then conciousness (I do subscribe to being able to percive a moment of qualia and that may fit into the consciousness definition). Next just because such a delay exists does not mean you perceive such especially in the case of vision. a "moment" within the mind is composed of many ms worth of actions where a large variaty of systems are all functioning.
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
Friendship with an AI Companion | Lex Fridman Podcast #121



 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
How to Give A.I. a Pinch of Consciousness
A.I. researchers are turning to neuroscience to build smarter, more powerful neural networks

"In 1998, an engineer in Sony’s computer science lab in Japan filmed a lost-looking robot moving trepidatiously around an enclosure. The robot was tasked with two objectives: avoid obstacles and find objects in the pen. It was able to do so because of its ability to learn the contours of the enclosure and the locations of the sought-after objects.
But whenever the robot encountered an obstacle it didn’t expect, something interesting happened: Its cognitive processes momentarily became chaotic. The robot was grappling with new, unexpected data that didn’t match its predictions about the enclosure. The researchers who set up the experiment argued that the robot’s “self-consciousness” arose in this moment of incoherence. Rather than carrying on as usual, it had to turn its attention inward, so to speak, to decide how to deal with the conflict.
This idea about self-consciousness — that it asserts itself in specific contexts, such as when we are confronted with information that forces us to reassess our environment and then make an executive decision about what to do next — is an old one, dating back to the work of the German philosopher Martin Heidegger in the early 20th century. Now, A.I. researchers are increasingly influenced by neuroscience and are investigating whether neural networks can and should achieve the same higher levels of cognition that occur in the human brain.
Far from the “stupid” robots of today, which don’t have any real understanding of where they are or what they experience, the hope is that a level of awareness analogous to consciousness in humans could make future A.I.s much more intelligent. They could learn by themselves, for example, how to select and focus on data in order to acquire new skills that they assimilate and go on to perform with ease. But giving machines the power to think like this also brings with it risks — and ethical uncertainties.
“I don’t design consciousness,” says Jun Tani, PhD, co-designer of the 1998 experiment and now a professor in the Cognitive Neurorobotics Research Unit at the Okinawa Institute of Technology. He tells OneZero that to describe what his robots experience as “consciousness” is to use a metaphor. That is, the bots aren’t actually cogitating in a way we would recognize, they’re just exhibiting behavior that is structurally similar. And yet he is fascinated by parallels between machine minds and human minds. So much so that he has tried simulating the neural responses associated with autism via a robot.
“Research on consciousness is still considered somewhat taboo in A.I.”
One of the world’s foremost A.I. experts, Yoshua Bengio, founder of Mila, the Quebec Artificial Intelligence Institute, is likewise fascinated by consciousness in A.I. He uses the analogy of driving to describe the switch between conscious and unconscious actions.
“It starts by conscious control when you learn how to drive and then, after some practice, most of the work is done at an unconscious level and you can have a conversation while driving,” he explains via email.
That higher, attentive level of processing is not always necessary — or even desirable — but it seems to be crucial for humans to learn new skills or adapt to unexpected challenges. A.I. systems and robots could potentially avoid the stupidity that currently plagues them if only they could gain the same ability to prioritize, focus, and resolve a problem.
Inspired in part by what we think we know about human consciousness, Bengio and his colleagues have spent several years working on the principle of “attention mechanisms” for A.I. systems. These systems are able to learn what data is relevant and therefore what to focus on in order to complete a given task.
“Research on consciousness,” Bengio adds, “is still considered somewhat taboo in A.I.” Because consciousness is such a difficult phenomenon to understand, even for neuroscientists, it has mostly been discussed by philosophers until now, he says.
Knowledge about the human brain and the human experience of consciousness is increasingly relevant to the pursuit of more advanced systems and has already led to some fascinating crossovers. Take, for example, the work by Newton Howard, PhD, professor of computational neurosciences and neurosurgery at the University of Oxford. He and colleagues have designed an operating system inspired by the human brain.
“When it’s deployed, it’s like a child. It’s eager to learn.”
Rather than rely on one approach to solving problems, it can choose the best data processing technique for the task in question — a bit like how different parts of the brain handle different sorts of information.
He’s also experimenting with a system that can gather data from various sensors and sources in order to automatically build knowledge on various topics. “When it’s deployed, it’s like a child,” he says. “It’s eager to learn.”
All of this work, loosely inspired by what we know about human brains, may push the boundaries of what A.I. can accomplish today. And yet some argue it might not get us much closer to a truly conscious machine mind that has a sense of a self, a detached “soul” that inhabits its body (or chipset), with free will to boot.
The philosopher Daniel Dennett, who has spent much of his life thinking about what consciousness is and is not, argues that we won’t see machines develop this level of consciousness anytime soon — not even within 50 years. He and others have pointed out that the A.I.s we are able to build today seem to have no semblance of the reflective thinking or awareness that we assume are crucial for consciousness.
It’s in the search for a system that does possess these attributes, though, that a profound crossover between neuroscience and A.I. research might happen. At the moment, consciousness remains one of the great mysteries of science. No one knows to what activity in the brain it is tied, exactly, though scientists are gradually working out that certain neural connections seem to be associated with it. Some researchers have found oscillations in brain activity that appear to be related to specific states of consciousness — signatures, if you like, of wakefulness.
By replicating such activity in a machine, we could perhaps enable it to experience conscious thought, suggests Camilo Miguel Signorelli, a research assistant in computer science at the University of Oxford.
He mentions the liquid “wetware” brain of the robot in Ex Machina, a gel-based container of neural activity. “I had to get away from circuitry, I needed something that could arrange and rearrange on a molecular level,” explains Oscar Isaac’s character, who has created a conscious cyborg.
“The risk of mistakenly creating suffering in a conscious machine is something that we need to avoid.”
“That would be an ideal system for an experiment,” says Signorelli, since a fluid, highly plastic brain might be configured to experience consciousness-forming neural oscillations — akin to the waves of activity we see in human brains.
This, it must be said, is highly speculative. And yet it raises the question of whether completely different hardware might be necessary for consciousness (as we experience it) to arise in a machine. Even if we do one day successfully confirm the presence of consciousness in a computer, Signorelli says that we will probably have no real power over it.
“Probably we will get another animal, humanlike consciousness but we can’t control this consciousness,” he says.
As some have argued, that could make such an A.I. dangerous and unpredictable. But a conscious machine that proves to be harmless could still raise ethical quandaries. What if it felt pain, despair, or a terrible state of confusion?
“The risk of mistakenly creating suffering in a conscious machine is something that we need to avoid,” says Andrea Luppi, a PhD student at the University of Cambridge who studies human brain activity and consciousness.
It may be a long time before we really need to grapple with this sort of issue. But A.I. research is increasingly drawing on neuroscience and ideas about consciousness in the pursuit of more powerful systems. That’s happening now. What sort of agent this will help us create in the future is, like the emergence of consciousness itself, tantalizingly difficult to predict."

https://onezero.medium.com/how-to-give-a-i-a-pinch-of-consciousness-c70707d62b88#_=_
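The "attention mechanisms" Bengio describes in the article boil down to weighting inputs by learned relevance: each query scores the available data and focuses on what matters for the task. A minimal NumPy sketch of scaled dot-product attention (the shapes and random data here are arbitrary, for illustration only):

```python
import numpy as np

def attention(queries, keys, values):
    # Scaled dot-product attention: each query weights the values by its
    # similarity to the keys, i.e. it learns what to focus on.
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ values, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4))   # 2 queries of dimension 4
k = rng.normal(size=(5, 4))   # 5 keys
v = rng.normal(size=(5, 4))   # 5 values
out, w = attention(q, k, v)   # each row of w is a focus distribution over the 5 inputs
```

Each row of `w` sums to 1, so the output is a learned, soft selection over the inputs rather than a hard choice.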
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
Ah, this is taco101
Whenever obstacles in life appear, I freeze and wobble. Perhaps having a backup support system, so to speak, isn't such a bad idea after all.

Taco is excited!
"Here are 12 generated essays from GPT-3 using The Guardian's prompt, at various temperature settings. Remember, GPT-3's task is to be as formulaic as possible. I'm amused at how many of them begin by quoting Elon Musk and Bill Gates." -- https://twitter.com/JanelleCShane/status/1303802582876512257
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716




"An interval-valued utility theory for decision making with Dempster-Shafer belief functions





The main goal of this paper is to describe an axiomatic utility theory for Dempster-Shafer belief function lotteries. The axiomatic framework used is analogous to von Neumann-Morgenstern's utility theory for probabilistic lotteries as described by Luce and Raiffa. Unlike the probabilistic case, our axiomatic framework leads to interval-valued utilities, and therefore, to a partial (incomplete) preference order on the set of all belief function lotteries. If the belief function reference lotteries we use are Bayesian belief functions, then our representation theorem coincides with Jaffray's representation theorem for his linear utility theory for belief functions. We illustrate our representation theorem using some examples discussed in the literature, and we propose a simple model for assessing utilities based on an interval-valued pessimism index representing a decision-maker's attitude to ambiguity and indeterminacy. Finally, we compare our decision theory with those proposed by Jaffray, Smets, Dubois et al., Giang and Shenoy, and Shafer."





https://techxplore.com/news/2020-09-artificial-intelligence-expert-theory-decision-making.html
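As a minimal illustration of the machinery the abstract builds on: a Dempster-Shafer mass function assigns weight to *sets* of outcomes, and the belief/plausibility pair [Bel(A), Pl(A)] is the interval bounding the probability of an event, which is where the paper's interval-valued (rather than point-valued) utilities come from. The mass values below are made up for the sketch.

```python
def belief_plausibility(masses, event):
    # Bel(A) sums the mass committed entirely to subsets of A; Pl(A) sums the
    # mass of every focal set that merely intersects A. Together they bound
    # the probability of A: Bel(A) <= P(A) <= Pl(A).
    event = set(event)
    bel = sum(m for focal, m in masses.items() if set(focal) <= event)
    pl = sum(m for focal, m in masses.items() if set(focal) & event)
    return bel, pl

# Toy mass function on the frame {'a', 'b', 'c'}: 0.3 of the evidence is
# ambiguous between all three outcomes.
masses = {('a',): 0.5, ('b',): 0.2, ('a', 'b', 'c'): 0.3}
bel, pl = belief_plausibility(masses, {'a'})  # interval [0.5, 0.8] for 'a'
```

Because the interval has nonzero width whenever evidence is ambiguous, two lotteries can end up incomparable, which is the "partial preference order" the abstract mentions.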
 
Joined
May 25, 2020
Messages
20
(Also Sprach) Zarathustra[H] makes you think-
-------------

Fascinating thread, and your comments are some of the most interesting. If we create an artificial consciousness, the ethical complications are monumental and tough to answer.
- If an AI "person" exists and is simply "unplugged," who suffers? He/she continues exactly where he/she was interrupted when power returns. Is "life, liberty..." diminished?
- Animals not granted "personage"? 'K, I'm a dog lover, but... some dogs are little different than some babies in front of a mirror. MOST dogs can take a single look at a human face and react to anger, welcome, distress, etc. Did you happen to hear or read about the dog helping on a drug bust a few days ago, who bit the perp and held on while being stabbed nine times? Lived through it, they arrested the perp, and it was the dog's second time being stabbed. Just what IS intelligence?
- You CAN'T put the genie back in the jar. We can't just say "No." We have to face the issues and answer them. AI is here to stay.
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
https://bdtechtalks.com/2020/09/21/gpt-3-economy-business-model/

Commercial artificial intelligence
"Ideally, OpenAI would have made GPT-3 available to the public. But we live in the era of commercial AI, and AI labs like OpenAI rely on the deep pockets of wealthy tech companies and VC firms to fund their research. This puts them under strain to create profitable businesses that can generate a return on investment and secure future funding."
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
"AI devs created a lean, mean, GPT-3-beating machine that uses 99.9% fewer parameters" -- https://thenextweb.com/neural/2020/...ting-machine-that-uses-99-9-fewer-parameters/

" We have shown that it is possible to achieve fewshot performance similar to GPT-3 on SuperGLUE with LMs that have three orders of magnitude fewer parameters. This is achieved using PET, a method that reformulates tasks as cloze questions and trains an ensemble of models for different reformulations. We have proposed a simple yet effective modification to PET that enables us to use it for tasks that require predicting multiple tokens. In extensive experiments, we have identified several factors responsible for the strong performance of PET combined with pretrained ALBERT: the possibility to concurrently use multiple patterns for transforming examples into cloze questions, the ability to compensate for patterns that are difficult to understand, the usage of labeled data to perform parameter updates, and the underlying LM itself. To enable comparisons with our work, we make our dataset of few-shot training examples publicly available."
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
"Decoding Nonconscious Thought Representations during Successful Thought Suppression

Controlling our thoughts is central to mental well-being, and its failure is at the crux of a number of mental disorders. Paradoxically, behavioral evidence shows that thought suppression often fails. Despite the broad importance of understanding the mechanisms of thought control, little is known about the fate of neural representations of suppressed thoughts. Using fMRI, we investigated the brain areas involved in controlling visual thoughts and tracked suppressed thought representations using multivoxel pattern analysis. Participants were asked to either visualize a vegetable/fruit or suppress any visual thoughts about those objects. Surprisingly, the content (object identity) of successfully suppressed thoughts was still decodable in visual areas with algorithms trained on imagery. This suggests that visual representations of suppressed thoughts are still present despite reports that they are not. Thought generation was associated with the left hemisphere, whereas thought suppression was associated with right hemisphere engagement. Furthermore, general linear model analyses showed that subjective success in thought suppression was correlated with engagement of executive areas, whereas thought-suppression failure was associated with engagement of visual and memory-related areas. These results suggest that the content of suppressed thoughts exists hidden from awareness, seemingly without an individual's knowledge, providing a compelling reason why thought suppression is so ineffective. These data inform models of unconscious thought production and could be used to develop new treatment approaches to disorders involving maladaptive thoughts."

https://www.mitpressjournals.org/doi/abs/10.1162/jocn_a_01617
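The multivoxel pattern analysis in the abstract, training a decoder on imagery trials and then testing whether object identity is still decodable from suppression trials, can be sketched on synthetic data. The voxel count, signal strengths, and nearest-centroid decoder below are all illustrative choices, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 50  # made-up voxel count

# Fixed "activity prototypes" for the two object categories.
proto = {"vegetable": rng.normal(size=n_voxels),
         "fruit": rng.normal(size=n_voxels)}

def trials(obj, n, signal):
    # n noisy trials carrying the category pattern at a given strength.
    return proto[obj] * signal + rng.normal(size=(n, n_voxels))

def nearest_centroid(train_X, train_y, test_X):
    # Minimal decoder: assign each test pattern to the closest class mean.
    centroids = {c: train_X[train_y == c].mean(axis=0) for c in set(train_y)}
    return np.array([min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                     for x in test_X])

# Train on strong "imagery" signal, test on weaker "suppression" signal.
X_img = np.vstack([trials("vegetable", 40, 1.0), trials("fruit", 40, 1.0)])
y_img = np.array(["vegetable"] * 40 + ["fruit"] * 40)
X_sup = np.vstack([trials("vegetable", 40, 0.5), trials("fruit", 40, 0.5)])

pred = nearest_centroid(X_img, y_img, X_sup)
accuracy = (pred == y_img).mean()  # identity decodable well above 50% chance
```

The point mirrors the study's logic: a decoder trained on one condition transfers to the other only if the same content representation is still present there.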
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
"Humans have tended to believe that we are the only species to possess certain traits, behaviors, or abilities, especially with regard to cognition. Occasionally, we extend such traits to primates or other mammals—species with which we share fundamental brain similarities. Over time, more and more of these supposed pillars of human exceptionalism have fallen. Nieder et al. now argue that the relationship between consciousness and a standard cerebral cortex is another fallen pillar (see the Perspective by Herculano-Houzel). Specifically, carrion crows show a neuronal response in the palliative end brain during the performance of a task that correlates with their perception of a stimulus. Such activity might be a broad marker for consciousness.
Subjective experiences that can be consciously accessed and reported are associated with the cerebral cortex. Whether sensory consciousness can also arise from differently organized brains that lack a layered cerebral cortex, such as the bird brain, remains unknown. We show that single-neuron responses in the pallial endbrain of crows performing a visual detection task correlate with the birds’ perception about stimulus presence or absence and argue that this is an empirical marker of avian consciousness. Neuronal activity follows a temporal two-stage process in which the first activity component mainly reflects physical stimulus intensity, whereas the later component predicts the crows’ perceptual reports. These results suggest that the neural foundations that allow sensory consciousness arose either before the emergence of mammals or independently in at least the avian lineage and do not necessarily require a cerebral cortex."

https://www.statnews.com/2020/09/24/crows-possess-higher-intelligence-long-thought-primarily-human/
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
Zarathustra[H] UltraTaco

"Formation of Neural Circuits in an Expanded Version of Darwin's Theory: Effects of DNAs in Extra Dimensions and within the Earth's Core on Neural Networks


Aim: In this paper, inspiring Darwin's theory, we propose a model which connects evolutions of neural circuits with evolutions of cosmos. In this model, in the beginning, there are some closed strings which decay into two groups of open strings.

Methods: First group couple to our universe from one side and produce matters like some genes of DNAs and couple to an anti-universe from another side with opposite sign and create anti-matters like some anti-genes of anti-DNAs. Second group couple to the star and planet's cores like the earth's core from one side and produce anti-matters like stringy black anti-DNA and couple to outer layers of stars and planets like the earth from other side and produce matters like some genes of DNAs on the earth. Each DNA or anti-DNA contains some genetic circuits which act like the circuits of receiver or sender of radio waves. To transfer waves of these circuits, some neurons emerge which some of them are related to genetic circuits of anti-DNAs in anti-universe, and some are related to genetic circuits of stringy black anti-DNA within the earth's core. A collection of these neural circuits forms the little brain on the heart at first and main brain after some time.

Results: To examine the model, we remove effects of matters in outer layers of earth in the conditions of microgravity and consider radiated signals of neural circuits in a chick embryo. We observe that in microgravity, more signals are emitted by neural circuits respect to normal conditions. This is a signature of exchanged waves between neural circuits and structures within the earth's core.

Conclusion: These communications help some animals to predict the time and place of an earthquake."


https://pubmed.ncbi.nlm.nih.gov/31850135/
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
"
We are conscious beings: Somehow, the activity of our brains and nervous systems gives rise to states in which we have subjective experiences; and when we are in such states, we are aware of specific content. Researchers are only beginning to develop an understanding of the neural mechanisms underlying these phenomena. Much of the research in the last few decades has focussed on discerning the neural correlates of consciousness, using neuroimaging methods. Correlates, however, are not causes: to draw inferences about the processes that generate consciousness, rather than accompanying it, one must manipulate brain activity and examine the effects this has on conscious states and contents. One way to do so is to use brain stimulation techniques, such as transcranial magnetic stimulation (TMS). In this review, we survey the consciousness literature with a special emphasis on TMS studies. We begin by examining what is known about the neural substrates of states of consciousness – the kinds of brain activity that determine whether a person is awake, asleep, or suffering from a disorder of consciousness. We then delve into the contents of consciousness, by examining the literature on perceptual awareness. Throughout, we highlight current controversies and promising avenues for further research.
"


https://www.tandfonline.com/doi/full/10.1080/03036758.2020.1840405

full-Text: https://www.researchgate.net/public...isms_underlying_conscious_states_and_contents
 

THRESHIN

2[H]4U
Joined
Sep 29, 2002
Messages
3,230
AI is an interesting topic to most of us, but I have to say I find all the fear about it short-sighted. I won't say that some sort of Skynet scenario isn't possible, because we just don't know. But that's my point: this is all an unknown.

Since no conscious AI has ever been developed (as far as we know), any 'evidence' of why we should not develop AI is based on works of fiction. Just think about that for a moment. Some of us would ban research based on a science fiction book or movie created by someone who knows nothing about the research. Rightfully so, since many of the past books were written long before any of this was possible.

I think we should be cautious, but at the same time not react only in fear. That has never worked out well in the past.
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716

Artificial Intuition in Tech Journalism on AI: Imagining the Human Subject​


https://research.stmarys.ac.uk/id/eprint/4485/


----


Deep fuzzy model for non-linear effective connectivity estimation in the intuition of consciousness correlates​


"Highlights


This study establishes a Deep Fuzzy structure to model the Multivariate Autoregressive (DF-MVAR) used in the Granger Causality.

First – order TSK fuzzy rules are the cores of the network and in combination with stacked structure guarantee interpretability in all layers.

DF_MVAR as a nonlinear MVAR was applied to two nonlinear synthetic time series and compared with linear Granger.

DF-MVAR was performed to detect the connectivity networks of EEG in consciousness states.

Results demonstrated the superiority of DF-MVAR in comparison with linear Granger Causality in the detection of the effective connectivity.

Abstract​

The brain connectivity, as a promising technique to explore brain networks during resting-states or cognitive tasks, has been employed remarkably in recent years. The aim of this study is to propose a new approach to improve the Granger causality as one of the fundamental methods for calculation of brain effective connectivity. To this end, we utilized a deep fuzzy structure to model the multivariate autoregressive used in the Granger causality. The proposed model benefits from the hierarchical stacked structure where first – order TSK fuzzy rules are the cores of the network. In the first layer of the stacked structure, the antecedents of the fuzzy rules are extracted from the fuzzy clustering of the input space. For subsequent layers, due to the input perturbation which is caused by the previous layer output, a shuffling approach is adopted. To assess our proposed model, we applied it to two nonlinear synthetic time series and compared it with linear Granger. Results revealed that our model is superior in the detection of effective connectivity. We additionally exploit the pioneer model for one of the controversial concepts in cognitive neuroscience in recent years: the neural correlates of visual consciousness. We applied our method to detect connectivity networks of EEG in consciousness states. Our results demonstrated that the proposed nonlinear connectivity estimator was capable of detecting novel correlates: significant differences have been observed among different states of consciousness, not only in presence of attention, as the linear method detected it, but also in absence of attention."
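For reference, the linear Granger causality that DF-MVAR aims to improve on can be sketched in a few lines: x "Granger-causes" y if adding x's past values to an autoregressive model of y shrinks the prediction error. The synthetic pair below is constructed so that x drives y with a one-step delay; the lag order and coefficients are arbitrary illustration choices.

```python
import numpy as np

def ar_residual_var(y, predictors, lag=2):
    # Least-squares fit of y[t] on an intercept plus `lag` past values of
    # each predictor series; returns the residual variance.
    T = len(y)
    cols = [p[lag - k - 1:T - k - 1] for p in predictors for k in range(lag)]
    X = np.column_stack([np.ones(T - lag)] + cols)
    target = y[lag:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

def granger_gain(x, y, lag=2):
    # x "Granger-causes" y if adding x's past shrinks y's prediction error.
    restricted = ar_residual_var(y, [y], lag)
    full = ar_residual_var(y, [y, x], lag)
    return np.log(restricted / full)  # > 0 means x's past helps predict y

# Synthetic pair in which x drives y with a one-step delay.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = np.zeros(1000)
for t in range(1, 1000):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

gain_xy = granger_gain(x, y)  # clearly positive: x predicts y
gain_yx = granger_gain(y, x)  # near zero: y does not predict x
```

The abstract's criticism is that this formulation is linear; the DF-MVAR model replaces the least-squares autoregression with a stacked fuzzy-rule network to capture nonlinear dependencies.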


----

Neural Correlates of Dual Decision Processes: A Network-Based Meta-analysis​


"It is well-received that human decision mechanism involves two processes: intuition and deliberation, which is also known as faster system 1 and slower system 2. A large volume of research has used this mechanism to interpret human decision behavior and the activation of associated bran regions in different scenarios. Recently, a trend of brain image research is to focus not on the role of individual brain areas but on the network of area connectivity. The purpose of this research is hence to explore how different brain regions are connected when these different decision processes are activated. In particular, we conduct a meta-analysis to build new knowledge on existing published primary research to construct neural networks associated with these dual processes. The social network analysis is used for this meta-analysis and results will be reported."
 
Last edited:

THRESHIN

2[H]4U
Joined
Sep 29, 2002
Messages
3,230

Artificial Intuition in Tech Journalism on AI: Imagining the Human Subject​


https://research.stmarys.ac.uk/id/eprint/4485/


----


Deep fuzzy model for non-linear effective connectivity estimation in the intuition of consciousness correlates​


"Highlights


This study establishes a Deep Fuzzy structure to model the Multivariate Autoregressive (DF-MVAR) used in the Granger Causality.

First-order TSK fuzzy rules form the cores of the network and, combined with the stacked structure, guarantee interpretability in all layers.

DF-MVAR, as a nonlinear MVAR, was applied to two nonlinear synthetic time series and compared with linear Granger causality.

DF-MVAR was applied to detect the connectivity networks of EEG in different consciousness states.

Results demonstrated the superiority of DF-MVAR over linear Granger causality in detecting effective connectivity.

Abstract​

Brain connectivity has been widely employed in recent years as a promising technique for exploring brain networks during resting states or cognitive tasks. The aim of this study is to propose a new approach that improves Granger causality, one of the fundamental methods for calculating brain effective connectivity. To this end, we use a deep fuzzy structure to model the multivariate autoregressive (MVAR) process used in Granger causality. The proposed model benefits from a hierarchical stacked structure in which first-order TSK fuzzy rules are the cores of the network. In the first layer of the stacked structure, the antecedents of the fuzzy rules are extracted by fuzzy clustering of the input space. For subsequent layers, a shuffling approach is adopted to handle the input perturbation caused by the previous layer's output. To assess the proposed model, we applied it to two nonlinear synthetic time series and compared it with linear Granger causality. Results revealed that our model is superior in detecting effective connectivity. We additionally apply the model to one of the most debated concepts in recent cognitive neuroscience: the neural correlates of visual consciousness. We used our method to detect EEG connectivity networks across consciousness states. Our results demonstrated that the proposed nonlinear connectivity estimator was capable of detecting novel correlates: significant differences were observed among different states of consciousness not only in the presence of attention, as the linear method also detected, but also in its absence."
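For anyone wanting a feel for the baseline this paper improves on: linear Granger causality is just "does adding x's past to y's own past reduce y's prediction error?" A rough numpy sketch (the lag order, simulation coefficients, and log-variance-ratio statistic here are illustrative choices, not the paper's actual DF-MVAR method):

```python
import numpy as np

def _resid_var(past, target):
    """Least-squares fit of target from stacked lagged values; returns residual variance."""
    coef, *_ = np.linalg.lstsq(past, target, rcond=None)
    return (target - past @ coef).var()

def granger_xy(x, y, p=2):
    """Log ratio of restricted (y's own lags) to full (y + x lags) residual variance.
    Values well above zero suggest x Granger-causes y."""
    T = len(y)
    own  = np.array([y[t - p:t] for t in range(p, T)])                  # y's own lags
    full = np.array([np.r_[y[t - p:t], x[t - p:t]] for t in range(p, T)])
    target = y[p:]
    return np.log(_resid_var(own, target) / _resid_var(full, target))

# Toy system: x drives y, but not vice versa
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger_xy(x, y))   # large: x helps predict y
print(granger_xy(y, x))   # near zero: y does not help predict x
```

The paper's point is that this linear least-squares predictor misses nonlinear coupling; their DF-MVAR swaps it for a stacked TSK fuzzy model while keeping the same predict-with/without comparison.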


----

Neural Correlates of Dual Decision Processes: A Network-Based Meta-analysis​


"It is widely accepted that the human decision mechanism involves two processes, intuition and deliberation, also known as the faster System 1 and the slower System 2. A large volume of research has used this mechanism to interpret human decision behavior and the activation of associated brain regions in different scenarios. A recent trend in brain imaging research is to focus not on the role of individual brain areas but on the network of connectivity between areas. The purpose of this research is hence to explore how different brain regions are connected when these different decision processes are activated. In particular, we conduct a meta-analysis that builds on existing published primary research to construct the neural networks associated with these dual processes. Social network analysis is used for this meta-analysis, and results will be reported."
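The network-based meta-analysis idea is simpler than it sounds: treat each primary study's reported activations as a node set, count co-reported region pairs across studies, and keep frequent pairs as edges. A toy sketch (the region labels and study sets are made up for illustration; the threshold of 2 is arbitrary):

```python
from itertools import combinations
from collections import Counter

# Each primary study reports the brain regions it found active (hypothetical labels)
studies = [
    {"vmPFC", "amygdala", "insula"},     # intuition-flavored task
    {"dlPFC", "parietal", "ACC"},        # deliberation-flavored task
    {"vmPFC", "amygdala", "ACC"},
    {"dlPFC", "parietal", "vmPFC"},
]

# Count how often each pair of regions is reported together
co = Counter()
for regions in studies:
    for pair in combinations(sorted(regions), 2):
        co[pair] += 1

# Keep pairs co-reported in at least 2 studies as meta-analytic network edges
edges = {pair for pair, n in co.items() if n >= 2}

# Degree: how connected each region is in the resulting network
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(edges)
print(degree.most_common())
```

Social network analysis then runs centrality and community measures over exactly this kind of edge set.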


I understood very little of that but it sounds awesome
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
I understood very little of that but it sounds awesome
"Foundations of human consciousness: Imaging the twilight zone" -- https://www.jneurosci.org/content/early/2020/12/22/JNEUROSCI.0775-20.2020

"Trying to understand the biological basis of human consciousness is currently one of the greatest challenges of neuroscience. While the loss and return of consciousness regulated by anesthetic drugs and physiological sleep are employed as model systems in experimental studies on consciousness, previous research results have been confounded by drug effects, by confusing behavioral “unresponsiveness” and internally generated consciousness, and by comparing brain activity levels across states that differ in several other respects than only consciousness. Here, we present carefully designed studies that overcome many previous confounders and for the first time reveal the neural mechanisms underlying human consciousness and its disconnection from behavioral responsiveness, both during anesthesia and during normal sleep, and in the same study subjects."
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
GoldenTiger UltraTaco THRESHIN



"The discoveries of the cognitive sciences can tell us a great deal in our quest to build artificial intelligence with the flexibility and generality of the human mind. Machines need not replicate the human mind, but a thorough understanding of the human mind may lead to major advances in AI.
In our view, the path forward should start with focused research on how to implement the core frameworks of human knowledge: time, space, causality, and basic knowledge of physical objects and humans and their interactions. These should be embedded into an architecture that can be freely extended to every kind of knowledge, keeping always in mind the central tenets of abstraction, compositionality, and tracking of individuals. We also need to develop powerful reasoning techniques that can deal with knowledge that is complex, uncertain, and incomplete and that can freely work both top-down and bottom-up, and to connect these to perception, manipulation, and language, in order to build rich cognitive models of the world. The keystone will be to construct a kind of human-inspired learning system that leverages all the knowledge and cognitive abilities that the AI has; that incorporates what it learns into its prior knowledge; and that, like a child, voraciously learns from every possible source of information: interacting with the world, interacting with people, reading, watching videos, even being explicitly taught.
It's a tall order, but it's what has to be done."

https://cacm.acm.org/magazines/2021/1/249452-insights-for-ai-from-the-human-mind/fulltext
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716
"Artificial Intelligence agents are required to learn from their surroundings and to reason about the knowledge that has been learned in order to make decisions. While state-of-the-art learning from data typically uses sub-symbolic distributed representations, reasoning is normally useful at a higher level of abstraction with the use of a first-order logic language for knowledge representation. As a result, attempts at combining symbolic AI and neural computation into neural-symbolic systems have been on the increase. In this paper, we present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning through the introduction of a many-valued, end-to-end differentiable first-order logic called Real Logic as a representation language for deep learning. We show that LTN provides a uniform language for the specification and the computation of several AI tasks such as data clustering, multi-label classification, relational learning, query answering, semi-supervised learning, regression and embedding learning. We implement and illustrate each of the above tasks with a number of simple explanatory examples using TensorFlow 2.
Keywords: Neurosymbolic AI, Deep Learning and Reasoning, Many-valued Logic."
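The core trick behind "many-valued, differentiable logic" is replacing True/False with truth degrees in [0, 1] and the connectives with smooth functions. A bare-bones sketch of product-style fuzzy connectives (this is my own illustration of the idea, not the actual LTN library's API, and LTN uses tensors/grounded predicates rather than scalars):

```python
# Truth degrees are floats in [0, 1]; every operator is differentiable,
# so logical formulas can serve as loss terms for gradient descent.
def AND(a, b):
    return a * b                     # product t-norm

def OR(a, b):
    return a + b - a * b             # probabilistic sum (dual of the product t-norm)

def NOT(a):
    return 1.0 - a                   # standard negation

def IMPLIES(a, b):
    return OR(NOT(a), b)             # material implication via the connectives above

def FORALL(vals):
    return sum(vals) / len(vals)     # soft "for all": mean truth over instances

# Evaluate "forall x: smoker(x) -> at_risk(x)" over fuzzy truth values
smoker  = [0.9, 0.1, 0.8]
at_risk = [0.8, 0.2, 0.9]
print(FORALL([IMPLIES(s, r) for s, r in zip(smoker, at_risk)]))
```

Because the formula's truth value is a differentiable function of the predicate outputs, "make this axiom true" becomes "maximize this scalar", which is how such systems unify logic constraints with neural training.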
 

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
7,716

Newfound brain structure explains why some birds are so smart—and maybe even self-aware


"A neural correlate of sensory consciousness in a corvid bird:
Humans have tended to believe that we are the only species to possess certain traits, behaviors, or abilities, especially with regard to cognition. Occasionally, we extend such traits to primates or other mammals—species with which we share fundamental brain similarities. Over time, more and more of these supposed pillars of human exceptionalism have fallen. Nieder et al. now argue that the relationship between consciousness and a standard cerebral cortex is another fallen pillar (see the Perspective by Herculano-Houzel). Specifically, carrion crows show a neuronal response in the pallial endbrain during the performance of a task that correlates with their perception of a stimulus. Such activity might be a broad marker for consciousness.
Science, this issue p. 1626; see also p. 1567
Abstract
Subjective experiences that can be consciously accessed and reported are associated with the cerebral cortex. Whether sensory consciousness can also arise from differently organized brains that lack a layered cerebral cortex, such as the bird brain, remains unknown. We show that single-neuron responses in the pallial endbrain of crows performing a visual detection task correlate with the birds’ perception about stimulus presence or absence and argue that this is an empirical marker of avian consciousness. Neuronal activity follows a temporal two-stage process in which the first activity component mainly reflects physical stimulus intensity, whereas the later component predicts the crows’ perceptual reports. These results suggest that the neural foundations that allow sensory consciousness arose either before the emergence of mammals or independently in at least the avian lineage and do not necessarily require a cerebral cortex."
"A cortex-like canonical circuit in the avian forebrain
Mammals can be very smart. They also have a brain with a cortex. It has thus often been assumed that the advanced cognitive skills of mammals are closely related to the evolution of the cerebral cortex. However, birds can also be very smart, and several bird species show amazing cognitive abilities. Although birds lack a cerebral cortex, they do have a pallium, and this is considered to be analogous, if not homologous, to the cerebral cortex. An outstanding feature of the mammalian cortex is its layered architecture. In a detailed anatomical study of the bird pallium, Stacho et al. describe a similarly layered architecture. Despite the nuclear organization of the bird pallium, it has a cyto-architectonic organization that is reminiscent of the mammalian cortex."
 
Top