Windows 11’s AI-powered Copilot (and its Bing-powered ads) enters public preview

Did you submit the form for access, or did they just add it on their own?

Porting over to it from OpenAI takes like 5 minutes if you've done any work there. It's just a new API endpoint and key, and a couple very, very minor code changes. Same SDK.
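For anyone curious what that looks like in practice, here is a minimal sketch with the openai Python package (v1-style client); the endpoint, key, API version, and deployment name below are placeholders I made up for illustration, not anything from Microsoft's docs:

# Before: stock OpenAI
from openai import OpenAI
client = OpenAI(api_key="sk-...")
resp = client.chat.completions.create(model="gpt-4", messages=[{"role": "user", "content": "hi"}])

# After: same SDK pointed at an Azure OpenAI resource (placeholder values)
from openai import AzureOpenAI
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # your Azure resource URL
    api_key="AZURE-KEY",
    api_version="2023-07-01-preview",
)
resp = client.chat.completions.create(
    model="my-gpt4-deployment",  # Azure takes your deployment name, not the model name
    messages=[{"role": "user", "content": "hi"}],
)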
Both: I requested it, but I think I jumped the gun and they were adding it anyway.
 
The frustration compounds yet again. First, the smaller issue: wrapping this new Copilot so closely around the advertising and data-mining features of Bing, as opposed to letting it be its own thing, is never acceptable. I saw one of my Win10 copies of Edge offering it, which would be fine in and of itself, but installing it as part of the OS - both Win11 and apparently Win10 to come - with users somehow having little control even over that, is a major issue. Apart from some of the other issues, I could understand MS wanting to get an "AI" integrated, just like they did with Cortana in the past. Honestly, I'm kind of surprised they aren't going with that name and branding instead of the more generic "Copilot". I'm also a bit surprised that Microsoft decided to just license OpenAI connections on a massive scale for this, as opposed to developing something themselves, which brings me to what I think is the biggest issue in the grand scheme of things.

MS Copilot depends entirely on, and is powered by, an external, remote-hosted, software-as-a-service large language model from "OpenAI" (the name of the company is hypocritical bunk)! ChatGPT / GPT-4, DALL-E and their other services are proprietary models, with proprietary training data and the like - the only 'open' element is an API to connect to their services! As development has continued, free usage has been increasingly restricted to encourage a subscription or some other sort of purchase, and sanitization (some say censorship) adds ever-tighter restrictions to ensure that "objectionable" content or requests will be declined by the model and that it is harder than it used to be to "jailbreak" through properly structured queries. All of OpenAI's models and tech are proprietary and doled out to (often paying) petitioners, while all user input and responses can be used to further refine the projects, which gives them a massive early-mover advantage in terms of massive volumes of training data.

MS Copilot, embedded in Windows or otherwise, seems to be proving two of my core fears about the trajectory of AI development true to a frustrating degree. First, I have always objected to the proprietary software-as-a-service model for myriad reasons, both practical and philosophical, just a tiny few of which I noted in my criticism above. Second, this creates the conditions for a handful of "successful" services to become central to everyone's AI projects and therefore to hold an inordinate amount of control, if their APIs and integrations are powering the vast majority of AI-using services. Microsoft, one of the biggest computing companies in the world, which has sometimes thrown money down the toilet in order to chase trends or push their new thing, has decided not to create or host their own AI model, but instead to just license OpenAI's services to power Copilot. Putting aside that in some ways this may be a smart business decision for MS, that does not mean it's a step in the right direction for users, their privacy, or control. It exemplifies one of my fears of "LLM / AI" futures where a handful of megacorps' proprietary services "power" everything. Sure, you can create whatever widget you want with the API they give you, but the brains behind it are on another server, using a proprietary model and training data; not to mention you will likely pay for the privilege of these "integrations"!

If even Microsoft finds it too difficult, costly, etc. to develop and host their own LLM that they can fully control, that does not bode well for the average user, who will have little choice, finding an increasing number of applications, sites, and services to be at the behest of whatever particular AI integration is used. AI will be primarily in the control of, and for the benefit of, a handful of corporate interests unless we start making changes ASAP! The alternative is favoring models that are open source and self-hostable, with open and exchangeable training data formats, all under licenses that prevent forks that "steal from the commons" into proprietary lockdown. Stable Diffusion is one of the few "top tier" AI models that is open source, and it has led to amazing benefits and development in its field. Up until recently it was uncontested, but now OpenAI's proprietary alternative DALL-E's latest edition has gotten a lot of attention and is seen as an equal, diverting a lot of casual usage and training data through things like MS Bing's Create, which of course is wrapped into the data-mining ecosystems of MS and OpenAI alike. In many other use cases, however, from general text use with GPT-4 to voice generation from ElevenLabs, the proprietary SaaS models are seen as the best around. There ARE some FOSS, self-hostable alternatives out there, like LLaMA, OpenVoiceOS/Neon, and others, that are doing some pretty neat stuff, but they will remain hobbyist and other limited use cases if they can't compare favorably with the big-name SaaS types. The highly technical, the ideologically minded on the value of open LLMs, privacy seekers, and other such users may put up with a model that is more complex or difficult to configure, but most users are not going to care about this as much as it "just working", especially if they can't see the harms being done to their privacy and other issues. Unless, of course, there is a major failure or incident, at which point: "Who could have known something like this could happen?!"

Ultimately though, if we want a future where the benefits of LLMs / AI are ubiquitously available, leaving it up to "the market" will not be sufficient. There are a handful of policies that could help, such as mandating that, in general, government or publicly funded projects using AI must be open in their models, training data, and the like, so that we all benefit from what is purchased or created with our tax money, as well as greater transparency so that issues that arise can be more easily assessed. There's a lot more to it (I haven't even gotten into the economic impacts in many different areas, or dealing with the loud foolishness of "anti-AI artists" on social media pushing for poorly understood faux-protections that will give well-heeled megacorps even more control), but we're going to need to start making better choices about the role AI / LLMs will have in our lives, lest other powerful vested interests shape it to serve their preferences and the predictable inequity continue to compound in myriad ways.
 
If you want to use it for business data to run a business, or to test products before launch, or anything else that touches the real world - I think I would want that to come to a screeching halt.
to make a clear example:
https://www.windsystemsmag.com/ai-applications-in-wind-energy-systems/

Should this be banned? If something like a windmill, which has many sensors and failures recorded in a nice database, had an ML model created from that data, and we tested whether it can predict and avoid issues, is that automatically bad?

If we have a long history of collecting data before drilling an oil, mineral, etc. spot in the ground, along with the results to match that data, should we not create an ML model from it, run it on the list of sites that were rejected because the preliminary data was judged not good enough, and test the best-scored sites to see if there is any value there? Is that bad?

This should not be allowed:
https://carbonrobotics.com/

A laser "herbicide/pesticide" using an ML model instead of chemicals? Do we close the door just because it is AI, despite all the potential it presents for the future of soil and nature?

Predicting the rest of the current word and the next words you are about to type on a bad typing platform like a phone? That has been everywhere for a while now.

This sounds a bit like what people said about PCs in the 70s and early 80s (a lot of their predictions were true, but how bad it would be was quite exaggerated).
 
MS Copilot depends entirely on, and is powered by, an external, remote-hosted, software-as-a-service large language model from "OpenAI"!

It's not hosted externally - they explicitly guarantee that they own the infrastructure that Azure OpenAI runs on. It intentionally has zero connection to any OpenAI services.

OpenAI cannot interact with or see anything you do on Microsoft's version of it. This is a literal selling point of the service.
 
There ARE some FOSS, self-hostable alternatives out there, like LLaMA, OpenVoiceOS/Neon, and others, that are doing some pretty neat stuff, but they will remain hobbyist and other limited use cases if they can't compare favorably with the big-name SaaS types.

Llama 2 70B is pretty close to GPT-4, and it is quite probable that something at least twice as good will run locally on a cheap smartphone in 10 years (there are a lot of signs that with better training data you can do a lot with models that are not that large), if the pace of inference performance keeps up for just a little while. The transformer paper is only 6 years old, and the tech only went mainstream with the Turing launch.

The giant hosted ones will still be better, but your average run-on-a-phone affair will probably be good enough for agents that talk to hotel agents, basic tax/law/health questions, translation, and personal assistants. The parts of the world where the people in charge (politically, culturally, in media, etc.) can afford to pay nice humans to do those things will have a lot of resistance (like the English nobility had toward electricity and central heating - when you have human servants, what's the point?), but not so much in the rest of the world. In Nigeria or India, if competing with Pixar/Disney animation becomes possible because of generative AI by 2050, why would they not do it? In Africa, if a cheap smart toilet or watch makes it possible to detect cancer in advance, will an establishment be able to block them the way it will in the western world (where the people in charge have access to superb scanning machines and healthcare staff and do not see the point)?
 
Do we close the door just because it is AI, despite all the potential it presents for the future of soil and nature?

There is no doubt AI can discover optimal solutions that humans can't currently find on their own, based on its ability to process very large amounts of data.

I support Machine Learning sifting through massive amounts of data and finding potential solutions for humans to then verify using static traditional scientific, engineering and statistical models, but yes. No matter how positive the outcome, the negatives of AI are orders of magnitude worse, and it must be stopped at any cost.
 
We're going to need to start making better choices about the role AI / LLMs will have in our lives, lest other powerful vested interests shape it to serve their preferences and the predictable inequity continue to compound in myriad ways.


I want a solution similar to what Eliezer Yudkowsky proposes: a global ban on any AI research by treaty, with no exception for governments or militaries; destruction of all existing AI models; and strict tracking of all GPUs manufactured, so that if anyone tries to create a datacenter where training of newer and better AI models can be undertaken, it can be destroyed via air strikes, with a commitment from the international community to do whatever it takes militarily to destroy them without exception or delay.

While I think Eliezer Yudkowsky's doomsday scenario of "ending all life on earth" is a little far-fetched, I do agree with his proposed solution. AI will bring about change to the world that most people do not want; it will ruin lives, and it will harm people. It will upend economies and destroy nations. We cannot allow that kind of upheaval because some people think "AI is cool" or "a neat way to solve a problem". That kind of callous pushing for someone's vision of the future, with utter disregard for its impacts, needs to be resisted and stopped by any means necessary.

I think stopping AI needs to be a global priority at an even greater level than stopping global warming or preventing nuclear war. There needs to be a no-compromises, no-exceptions, complete and total end to the field of AI research and the destruction of all progress made to date, regardless of the neat little projects it can accomplish.
 
I think stopping AI needs to be a global priority at an even greater level than stopping global warming or preventing nuclear war.
Everyone has been ignoring all the warnings about AI for the past 40-55 years; we've seen how this ends...
 
I support Machine Learning sifting through massive amounts of data and finding potential solutions for humans to then verify using static traditional scientific, engineering and statistical models, but yes. No matter how positive the outcome, the negatives of AI are orders of magnitude worse, and it must be stopped at any cost.
Maybe I just don't understand what you mean - you support the exact thing you say must be stopped at any cost in the very next sentence?

For example, you support the AlphaFold project, which made it possible to predict the folding of almost every protein instead of it taking humans a 3-4 year PhD for each:
https://www.cnet.com/science/biolog...ure-of-nearly-every-protein-known-to-science/

But at the same time you want to stop AlphaFold from helping medical research predict protein folding?
 
It's not hosted externally - they explicitly guarantee that they own the infrastructure that Azure OpenAI runs on. It intentionally has zero connection to any OpenAI services.

OpenAI cannot interact with or see anything you do on Microsoft's version of it. This is a literal selling point of the service.
The issue is that YOU cannot, nor can you even see the code, or the model, or the complete set of training data, etc. Yes, MS has licensed the tech and is running it on their servers. Still, they are paying OpenAI for access to and hosting of a local instance. Now, I can't say for certain whether there's any amount of data sharing at any level between OpenAI and Microsoft; the specifics of that come down to the licensing agreement between them. There may be any number of arrangements in this regard, much the same as a company that licenses a Microsoft or Google product at an enterprise level may have an instance of it hosted on hardware it controls, but that doesn't necessarily mean no metric, telemetry, usage, or other data is shared back with MS or Google. However, for all the users who are affected by it, the issues aren't erased simply because whatever telemetry or data isn't being instantly handed to OpenAI because it's on an Azure server somewhere; it's enough of an issue for it to be Microsoft's, and for MS to have licensed access to OpenAI's tech as a service. It still contributes to both the proprietary nature and the centralization of AI technologies, and two megacorps deciding who's doing the hosting of which instances isn't going to help with any of the concerns I referenced.
 
Maybe I just don't understand what you mean - you support the exact thing you say must be stopped at any cost in the very next sentence?

At least as I understand it (but I am not an expert in the field):

Machine learning is a very useful tool for sifting through large amounts of data and flagging potential patterns of interest.

AI on the other hand is a technology that analyzes data in real time and makes real world decisions based on it.


Using machine learning to discover a pattern in data that scientists and engineers can then follow up on, using static models and actual human decision-making processes, is fine. It is even a positive, as long as the scientists and engineers properly scrutinize the results and run their own static tests to verify that output.

AI is more dangerous. It is tasked with making real time decisions that are not fully vetted by humans. Think driving an autonomous car.

If AI methods are utilized for business, scientific, and engineering decisions, where you just enter a query, it spits out an answer, and you simply accept that answer, then it is dangerous.

It comes down to the difference between Research and Development. Research highlights potentials. Development makes them real. Research can be highly aided by machine learning models, probably to positive effect. But then those discovered potentials need to be understood, and manually developed and verified by humans the old-fashioned way.

As long as you never trust a recommendation or decision by an automated system, then I am fine with it. It is the decision-making and the recommendations that are the problem. Even underlying data provided by AI can be dangerous if a human is making the final decision but just trusting the data as-is, as you never know what the AI model put into it or what it used as part of its analysis.

Picture it like sifting through a junk yard looking for treasure. By all means, run a machine learning algorithm on the junkyard. Have it highlight areas of the junkyard that look different (shinier or something) and flag them for you, so you can then go look for yourself and see if they contain the treasure you are looking for. Don't let the AI tell you what is treasure and what is garbage, just trusting it blindly.

Machine learning is a great pattern finding technology that can save lots of time. It's what you do once you have found the patterns that really matters.

An example from pharmaceuticals could be using Machine Learning to scan for chemicals that potentially could have an interesting impact on the human body based on some selected criteria. Once Machine Learning has found those patterns, however, it is incumbent upon scientists to manually analyze them. Is it real? Is it a false positive? What does it do? Why does it work? These are all things that need to be determined by scientists manually. Then comes development: structuring clinical trials, analyzing validation test results, etc. This all also has to be done manually by engineers and statisticians.
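As a rough illustration of that division of labor, here is a tiny Python sketch (the names and threshold are made up); the model only ranks candidates for follow-up, and a human owns every actual decision:

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float  # model-assigned "interestingness", not a decision

def flag_for_review(candidates, threshold=0.9):
    # Return the candidates the model ranks highly, for manual analysis only.
    return [c for c in candidates if c.score >= threshold]

shortlist = flag_for_review([Candidate("compound-A", 0.97), Candidate("compound-B", 0.42)])
for c in shortlist:
    # A scientist, not the model, decides what (if anything) happens next.
    print(f"queue {c.name} for manual analysis (score={c.score})")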

You can never let an AI make a recommendation or make a decision. It can be trusted to find things and present them for further analysis, like a research assistant pointing out to the PhD researcher, "hey look at this neat thing I found, what do you make of it?" The researcher then both needs to be competent enough to understand how to analyze it themselves and wise enough to not trust the model blindly.

I believe that Machine Learning - as I have described above - can be of great value to society, and that it is AI that is potentially harmful. That said, if we cannot adequately distinguish between the two, shut it all down. The risks outweigh the rewards.
 
Machine learning is a very useful tool for sifting through large amounts of data and flagging potential patterns of interest.

AI on the other hand is a technology that analyzes data in real time and makes real world decisions based on it.
I think we are getting close to the issue. AI is just the fancy name we give to ML black boxes at the moment; we no longer tend to use it for scripted AI a la NPCs and the like as we did in the past. AI is a vague term people use to describe whatever computers have not yet done well. No one in the past would have argued about whether playing chess counted as AI, and nobody hesitated to call the NPCs in Half-Life 2 an AI advancement; now they do. Banning AI could end up banning the future of computing in general for that reason.

Banning AI is currently a way of saying: ban the training and use of machine-learned black boxes, and reduce computer applications to purely pre-made, fully human-written algorithms - no more learning from data.

The robot that kills the bad insects and burns the bad fungus on plants but leaves the good ones alone is AI tech that analyzes data in real time and makes real-world decisions based on it - and so is predicting a protein fold with AlphaFold.

AI is more dangerous. It is tasked with making real time decisions that are not fully vetted by humans. Think driving an autonomous car.

Which is very dangerous when driven by humans; in 200 years it may be hard for people of the future to believe how many people died each year from unassisted driving (and it will feel no stranger than heavily assisted airplane piloting). And think of the laser robot killing insects on plants as an exact example of what you are proposing: a newly created world government with the force to destroy the units already in use in the field and stop production of the next ones.

There are times when the decision is low-impact enough to let the AI make it - a car being driven one day, a low-energy laser on a crop today - while the big decisions will not be made by the AI (deciding whether the cars are good enough, looking at the field results and stopping their usage or not). No one is disagreeing here.

You can never let an AI make a recommendation

That is what ML systems do: they score predictions of outcomes, i.e. what probability, according to the data, they assign to something being one thing or another (a literal recommendation). You never let that be the only source of recommendation for anything important that has not been proven by a large sample of very similar cases where the AI's recommendation was shown to be good - everyone agrees on that, and no one would think to do otherwise.
 
Now, I can't say for certain whether there's any amount of data sharing at any level between OpenAI and Microsoft; the specifics of that come down to the licensing agreement between them.

The model is definitely black-boxy, but the AOAI service itself does make those guarantees that nothing is shared with OpenAI, or even with other Microsoft products/models.

For other confidentiality scenarios, they provide a way to go further yet https://learn.microsoft.com/en-us/legal/cognitive-services/openai/limited-access

This is why the service is super enticing, they want those government and healthcare bux.

If you think they're lying, then I guess you're stuck hosting whatever yourself.
 
The model is definitely black-boxy, but the AOAI service itself does make those guarantees that nothing is shared with OpenAI, or even with other Microsoft products/models.

For other confidentiality scenarios, they provide a way to go further yet https://learn.microsoft.com/en-us/legal/cognitive-services/openai/limited-access
The black-boxiness of the model, the training data, and everything else from OpenAI is really the primary issue. The other stuff is secondary, though still important, but the primary issue is that it means a centralization and lockdown of the tech on which "everything" relies, regardless of who has been deigned worthy of hosting instances and on what terms.

I have no doubt they're angling for the HIPAA-compliant and the myriad government-contract-compatible bux, so yeah, I'm sure in those instances the minimal amount of information is shared with OpenAI. In other circumstances I'm sure there are versions where, depending on your contract, more or less info will be shared, but it's obvious with things like the version embedded in Copilot, be it in Edge or Win11/10, or the DALL-E 3 variant available as part of Bing's "Create", that it is very transparently linked to the MS data-mining and advertising arena; you agree to be contacted and cede your data by using it, and "credits" for access on your account to generate things come from MS Rewards / Bing etc. All of these things would be less of a problem if any number of people could be hosting openly licensed, equally capable DALL-E 3 instances the way they do for Stable Diffusion, but right now the only way to access DALL-E 3 is either to be an OpenAI Plus subscriber and/or to go through partners like this MS/Bing Create setup, and ultimately the tech itself is designed to be black-boxy, partnered, paid, and distributed out at their behest.

Edit: That MS document also highlights another issue, in that you have to pay, subscribe, onboard, and the like in order to APPLY to modify content-restriction parameters. While some of this is likely PR butt-covering (and likely part of the terms under which MS Azure licenses the tech from OpenAI), it is certainly a concern - just like with the versions hosted by OpenAI - that there are a lot of restrictions against "objectionable" content built into the filters by default. Atop the other concerns of black-boxiness above, the degree to which MS (or OpenAI; I'd actually be curious to know how much autonomy MS has to make these changes, or whether they have to contact OpenAI to ask for either the ability or the technical content necessary for a less encumbered or unencumbered model or different training data) will decide who and which use cases are worthy of having those modifications granted only adds to the issue. Now, I'm sure that some government contractor, big pharma outfit, or tech megacorp with significant SLAs is likely to have its work deemed legitimate and granted, but if we've seen anything so far, there is concern that a standard user, even a paying one, or a small instance operator may not be so lucky - especially when both the defaults are developed by, and the hosting is contracted to, megacorps with control of the proprietary "defaults", who are incentivized to avoid a PR snafu, or at least to make sure any exception is convincingly given a patina of legitimacy by social stature and investment.

Ultimately the real issue is that you have to "ask" at all, and that you can't just go elsewhere and host your own equally capable, transparently developed version if you don't like the restrictions someone else puts on theirs. Mixed with centralized, proprietary development and control, the issues only compound.
 
I'm all for an 'AI' handling medical billing and insurance. I don't care how many of those people have to find new work; nothing ever gets billed correctly lol. Weathermen might be the next one in line imo.

Sad that what we use this for is advertising, cuz there just isn't enough of that in our daily lives.
 
Weathermen might be the next one in line imo.
This could move fast on stuff that runs on a simple desktop:
https://www.nature.com/articles/d41586-023-03552-y
https://www.ft.com/content/ca5d655f-d684-4dec-8daa-1c58b0674be1

AI outperforms conventional weather forecasting methods for first time

Google DeepMind’s model beat world’s leading system in 90% of metrics used and took only a fraction of the time

As an example of a successful forecast, DeepMind scientists mentioned Hurricane Lee in the north Atlantic in September. “GraphCast was able to predict correctly that Lee would make landfall in Nova Scotia nine days before it happened, in comparison with only six days for traditional approaches,” said Rémi Lam, lead author of the Science paper. “That gave people three more days to prepare for its arrival.”
GraphCast produces a 10-day forecast within a minute on a single Google TPU v4 cloud computer.
 
This is probably a dumb question, but has anyone looked into using Windows Server to avoid the nonsense? Or did Microsoft poison that version too?
 
So I decided to google something about the newest Disney movie Wish, and Google was being a b-hole by not loading (which for some reason happens more often than I'd like on my computer; ad block doing it, maybe?), so instead I tried Bing, and this is what I got: "Asha is the main character of Wish, an upcoming animated film by Netflix". By Netflix... AI making everything "better"... anyways, just thought I'd share, and I would have gone DuckDuckGo but Bing was faster to type :(
 
So instead I tried Bing, and this is what I got: "Asha is the main character of Wish, an upcoming animated film by Netflix".
Microsoft integrating "AI" into Bing is why I stopped using it.
 
I'm gonna tell it to delete System32 itself and see what happens
This reminds me when there was no way to disable the shutter sound on the camera in Samsung's Galaxy devices. Then someone told Bixby to do it and it actually worked! 🙃
 
I would have gone DuckDuckGo but Bing was faster to type
Just a tip: DDG can also be accessed via duck.com (they bought the domain from Google a number of years ago).

Even easier would be setting up a custom search for the address bar.
 
For some reason it never updated on my current computer, but on my new setup it has it.

Like in Edge. That's incredible value for $0, and I am unable to think of a single downside to typing Windows+C for easy access to Copilot and just not using it when you don't want to. What is the argument against having this for free?
 
For some reason it never updated on my current computer, but on my new setup it has it.
Yeah, my home desktop doesn't have it either, but my work laptop does. Both are 11 Pro running 22H2. I still think they are doing a random deployment, or possibly it is tied to Edge usage in some way; not sure.
 
I saw it on my work-only ThinkPad X1, which has Win 11 IoT Enterprise.
I hid the icon from the tray.
 
I am in a Microsoft presentation on this rollout right now, going over the security blah blah blahs, identity security, and persona profiles for Copilot.
It seems pretty solid; from a government security aspect they have covered all the bases.

Edit:
Somebody had the pair to straight-up call out Microsoft (in their own meeting) for the money grab on this, as they make a lot of it sound like it depends on features in E5/A5 licenses with Purview security tagging, but it does work just fine with good old file permissions.
 
Last edited:
I was setting up a new PC for a client today. They use Edge, so I was importing all their bookmarks and passwords into Edge when Microsoft's new 'Copilot' fired up, took up 25% of the browser window, and prompted me to log into a Microsoft account. In typical Microsoft underhanded fashion, no matter how many times I closed the requester asking me to sign into a Microsoft account, it just popped straight back up! Further compounding my frustration was the fact that the requester took focus and prevented me from closing Edge.

In the end, the only way to avoid Microsoft's insistence that I sign into a Microshaft account, was to bring up Task Manager and forcibly close Edge.

Bend over Microsoft, I've got an outstanding place for that Microshaft account.
 
Microsoft's new 'Copilot' fired up, took up 25% of the browser window, and prompted me to log into a Microsoft account.
Go to Settings, Sidebar, click on App specific settings, and disable everything in there except Copilot. For some reason, you can't hide the Sidebar if you disable Copilot. You can kill it with the registry by creating a DWORD called HubsSidebarEnabled at HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge and setting the value to 0. The sidebar still pops up if you open your favorites, but it should be empty.
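If you would rather script that registry change than click through regedit, here is a small sketch using Python's winreg module (Windows-only, run elevated); it just writes the HubsSidebarEnabled policy value described above:

import winreg  # Windows-only standard library module

key_path = r"SOFTWARE\Policies\Microsoft\Edge"
# Create the policy key if it does not exist, then set HubsSidebarEnabled = 0.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "HubsSidebarEnabled", 0, winreg.REG_DWORD, 0)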
 
I am quite surprised by how well it works (30-entry context length, speed) versus not so long ago; they are giving away something that costs them like $10 a month for a moderate user, $20-40 for a big user, for free just like that, with what seems like absolutely zero downside.
 
The demo they gave us yesterday was pretty bang-on. It first uses resources local to the user: email, SharePoint, OneDrive, Dropbox, etc. Only after searching local data (which they were very clear is not used for training) does it reach out to online resources. They also said that the Copilot LLM confers with multiple LLMs - so theirs, and ChatGPT, and possibly a few others - where Copilot works as a collective filter for the others, processing their output for delivery to the user.
The demos they did of its integration into Excel, PowerPoint, and Dynamics were impressive.

“Can you help me build a presentation going over last month's figures and how they relate to our expected figures based on the last 4 years of data”

Some question like that, and it built the Excel sheets, pulled the data from Dynamics, used past presentations as a template to build the PowerPoint, and had the whole thing outlined in a minute or so, so they just needed to do fine-tuning; it even started generating some content-specific AI art based on a library of stock photos. It took 2-3 hours of prep work and boiled it down to something that took maybe a minute and a half. Available to E3/A3 license users and up, for a yet-to-be-announced price, coming in the first half of 2024.

They were very clear that it first searches local data the user has access to, whether they know they have access or not, and that if they have any permissions over shares in the system it will search those as well, so they recommend a full permission and security review as part of their implementation pre-stage, for the low price of $5000++.
 