Latest

new Although Musk lacked insider information on the situation, he raised questions about the conflict between Altman and OpenAI’s chief scientist Ilya Sutskever, suggesting there might be a serious concern related to artificial intelligence (AI).Wonderful Engineering, 23h ago
new Artificial intelligence (AI) was a common thread throughout all of the presentations. Some startups were very upfront about their use of AI ("Gomboc.ai, the AI is in our name," Amit told the judges), while others touched on their AI use when explaining their technology capabilities.darkreading.com, 18h ago, Event
new At the dinner, the two men, along with others gathered, discussed starting an AI lab that would be transparent, open-source, and dedicated to democratizing the benefits of advanced artificial intelligence. Musk and a few other members of the “PayPal mafia”—including Peter Thiel and Reid Hoffman—invested millions to get the lab rolling.Fortune, 8h ago
new Professor Toby Walsh is Chief Scientist at University of NSW’s new AI institute (UNSW.AI). He is a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN, and to Heads of State, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being “banned indefinitely” from Russia. He is a Fellow of the Australian Academy of Science and was named on the international “Who’s Who in AI” list of influencers. He has written four books on AI for a general audience, the most recent of which is “Faking It! Artificial Intelligence...InnovationAus.com, 15h ago
Google Brain cofounder and current Landing AI CEO Andrew Ng says the tech industry is still “very far” from achieving systems smart enough to do things like that. And he’s concerned about the misuse of the term itself. “The term AGI is so misunderstood,” he says.Fast Company, 3d ago
new Of course, I can’t ignore the fact that I’m writing at a time when artificial intelligence is really starting to take off. For better or worse, its applications could be voluminous and extremely impactful, hastening efficiencies at a rate faster than we’ve ever seen in various industries. In my day job as a data scientist with the U.S. Forest Service, I’m having conversations every week about how to responsibly move forward with AI — which use-cases make sense, how should we think about the risks, and what even qualifies as AI anyway? It’s a big topic. But for the purpose of this discussion, the point is that if it is as effective as some experts suggest it will be — how will we direct the energy we save? Will we fall into Jevons Paradox, or will we redirect that saved energy into novel, useful, creative work? As Richard Heinberg recently put it: If you’re driving off a cliff, do you need a faster car?...resilience, 1d ago

Latest

new ...“If I read the tea leaves, I would say we are very close to AGI, or at least some form of Gen AI that can reason on its own and solve complex problems without human help. We need to get ready, embrace these technologies, learn to manage the risk, and re-skill the workforce rapidly.” – Rod Fontecilla, Chief Innovation Officer, Guidehouse...Connected World - IoT and Digital Transformation, 1d ago
Behind the Curtain: Myth of AI restraint: "Lots of people want to roll artificial intelligence out slowly, use it ethically, regulate it wisely. But everyone gets the joke: It defies all human logic and experience to think ethics will trump profit, power, prestige. Never has. Never will." This article suggests that there'll likely be no serious regulation of generative AI, which one day soon could spawn artificial general intelligence (AGI) — the one that could outthink our species. (Axios, November 21, 2023)...Silverchair, 3d ago
new Kennedy School Adjunct Lecturer in Public Policy Bruce Schneier says artificial intelligence has the potential to transform the democratic process in ways that could be good, bad, and potentially mind-boggling. The important thing, he says, will be to use regulation and other tools to make sure that AI tools are working for everyone, and not just for Big Tech companies—a hard lesson we’ve already learned from our experience with social media and other tech tools.harvard.edu, 2d ago

Top

The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks. The OpenAI board stated that Altman’s termination was for lack of candor, but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI’s remarkable growth – products such as ChatGPT and Dall-E have acquired hundreds of millions of users worldwide – has hindered the company’s ability to focus on catastrophic risks posed by AGI. OpenAI’s goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.GovTech, 6d ago
The exact reasons behind OpenAI’s self-immolation this weekend still aren’t clear—even now that Altman is coming back. But they’re thought to revolve around a dispute between Altman and Sutskever about the future direction of the artificial intelligence startup. Altman has reportedly been criticized by some within OpenAI for trying to push the company’s development of AI tech too quickly. Specifically, there’s a push to achieve artificial general intelligence, or AGI, which refers to AI that can learn to accomplish any intellectual task that human beings can perform.Fast Company, 11d ago
Much of Shear’s past rose to the surface in the last week, including his cameo as a character in a Harry Potter fanfiction about “rational thinking” written by AI researcher Eliezer Yudkowsky. The fanfiction is popular with Silicon Valley effective altruists, those who believe that AGI should be created to maximize human good. Rational thinking is largely a metaphor for artificial intelligence in the book, and Shear’s mention was a birthday present, according to 404 Media.Gizmodo, 11d ago
Ilya Sutskever co-leads the Superalignment Taskforce, has very short timelines for when we will get AGI, and is very concerned about AI existential risk.lesswrong.com, 13d ago, Event
The sudden and startling ouster of OpenAI cofounder and CEO Sam Altman late Friday may seem like a story about business and boardroom shenanigans, and in some ways it is. Still, questions about why Altman was removed, and why his lieutenant, OpenAI President Greg Brockman, left in his wake, resonate well beyond the future of OpenAI as a business and shine a spotlight on the rough road ahead for future AI development, especially as it heads into Artificial General Intelligence (AGI).TechRadar, 14d ago
...(TNS) — For all the talk about artificial intelligence "taking our jobs," Robert Brunner doesn't seem worried: He's going to have an AI version of himself teaching some classes next semester. Brunner is the Gies College of Business's associate dean for innovation and chief disruption officer, so trying unique things with new technology is par for the course with him. He told the business school's news site, Poets & Quants, that the idea came up because he was too busy with teaching to make an online version of one of his courses.GovTech, 16d ago

Latest

new Huang’s forecast paints AGI as a realm where software or computers undergo tests mirroring the nuanced intelligence inherent in humans. When pressed on the timeline by Andrew Ross Sorkin, Huang confidently stated, “Within the next five years, you’re gonna see, obviously, AIs that can achieve those tests.” While the exact contours of AGI remain nebulous, Huang nodded in agreement when asked if it could involve AI crafting Nvidia’s groundbreaking chips.Wonderful Engineering, 1d ago
new Written by bestselling author Tom Davenport and Deloitte’s Nitin Mittal, All-In on AI looks at artificial intelligence at its cutting edge from the viewpoint of established companies like Anthem, Ping An, Airbus, and Capital One. Filled with insights, strategies, and best practices, All-In on AI also provides leaders and their teams with the information they need to help their own companies take AI to the next level.Thinkers360 | World’s First Open Platform For Thought Leaders, 14h ago
When AI researcher Sasha Luccioni went to business conferences and speaking events last year, she would field basic questions like: “What is artificial intelligence?” Now, she said, the people she meets are not only familiar with AI, they’re worried about whether it will “take over the world.”...John Lothian News, 3d ago, Event
new KAREN HAO: One of the things, just to take a step back before we kind of go through the, the tumultuous history leading up to this point, one of the things that's kind of unique about OpenAI, I mean you see this in a lot of Silicon Valley companies, but OpenAI does this more than anyone else I would say, which is they use incredibly vague terms to define what they're doing. Artificial general intelligence, AGI, this term is not actually defined. There's no shared consensus around what AGI is and of course there's no consensus around what is good for humanity. So if you're going to peg your mission to like really, really vague terminology that doesn't really have much of a definition, what it actually means is it's really vulnerable to ideological interpretation. So I remember early in the days of OpenAI when I was covering it, I mean people would joke like if you ask any employee what we're actually trying to do here and what AGI is, you're gonna get a different answer. And that was, that was sort of almost a feature rather than a bug at the time in that they said, "You know, we're on a scientific journey, we're trying to discover what AGI is." But the issue is that you actually just end up in a situation where when you are working on a technology that is so powerful and so consequential, you are going to have battles over the control of the technology. And when it's so ill-defined what it actually is, those battles become ideological. And so through the history of the company, we've seen multiple instances when there have been ideological clashes that have led to friction and fissures. The reason why most people haven't heard of these other battles is because OpenAI wasn't really in the public eye before, but the very first battle that happened was between the two co-founders, Elon Musk and Sam Altman. Elon Musk was disagreeing with the company direction, was very, very frustrated, tried to take the company over, Sam Altman refused. 
And so at the time Elon Musk exited, this was in early 2018 and actually took all of the money that he had promised to give OpenAI with him. And that's actually part of the reason why this for-profit entity ends up getting constructed because in the moment that OpenAI realizes that they need exorbitant amounts of money to pursue the type of AI research that they wanna do is also the moment when suddenly one of their biggest backers just takes the money. The second like major kind of fissure that happened was in 2020, and this was after OpenAI had developed GPT-3, which was a predecessor to ChatGPT. And this was when they first started thinking about how do we commercialize, how do we make money? And at the time they weren't thinking about a consumer-facing product, they were thinking about a business product. So they developed the model for delivering through what's called an application programming interface. So other companies could like rapidly build apps on GPT-3. There were heavy disagreements over how to commercialize this model, when to commercialize the model, whether there should be more waiting, more safety research done on this. And that ultimately led to the falling out of one of the very senior scientists at the company, Dario Amodei, with Sam Altman, Greg Brockman, and Ilya Sutskever. So he ended up leaving and taking a large chunk of the team with him to found what is now one of OpenAI's biggest competitors, Anthropic.Big Think, 2d ago
new An Israeli military spokesman has defended the IDF’s use of artificial intelligence following reports by the Guardian and the Israeli-Palestinian publication +972 Magazine and partner Hebrew-language outlet Local Call about how it is using an AI-driven tool to select bombing targets in Gaza. Responding to a question from an X user in a Spaces conversation, IDF spokesman Jonathan Conricus said, “In anything that is generated by a machine, there’s always a human in the loop that has definitive and executive say over whatever is generated by a machine … and they are accountable for their decisions.”...the Guardian, 1d ago, Event
new What does this all have to do with OpenAI’s supposed “math” breakthrough? One could speculate that the program that managed (allegedly) to do simple math operations may have arrived at that ability via some form of Q-related RL. All of this said, many experts are somewhat skeptical as to whether AI programs can actually do math problems yet. Others seem to think that, even if an AI could accomplish such goals, it wouldn’t necessarily translate to broader AGI breakthroughs. MIT Technology Review reported:...Gizmodo, 2d ago, Event

Top

According to recent research from the textbook publisher Houghton Mifflin Harcourt, nearly 40 percent of teachers surveyed by the company said they plan to integrate artificial intelligence tools into their instruction by the end of the 2023-24 school year. That figure is only expected to grow as AI-driven programs such as ChatGPT, Claude, Bard and Adobe Firefly continue to improve — almost at breakneck speed — for functions like assisting lesson planning and content generation, according to Monica Burns, a former teacher and ed-tech consultant. Speaking Tuesday as part of a webinar series about new classroom tools hosted by Verizon and Digital Promise, Burns said educators should stay up to date about developments in AI to get the most out of new tools and understand their limitations. “This type of technology has come a long way. We might have thought about AI as something that belonged in a science-fiction movie, but it’s been by our sides or behind the scenes for some time. … Generative AI like ChatGPT, Claude or Bard has gained so much traction this year,” she said, noting that she’s also “excited” to see how instructors make use of Adobe Firefly, which released a beta version in March and purports to generate images that can stimulate classroom discussions and help guide lessons.GovTech, 26d ago, Event
And so, he’s always had this personality of being very intense about things, believing things with a religious fervor, even when he was a PhD student, Cade Metz, The New York Times reporter who’s covered AI for a very long time, he writes in his book, Genius Makers, that Sutskever would do one-handed handstand pushups if he got really excited about a research idea. So this guy has always been a little bit mystical, and a little bit of a philosopher, and a little bit intense and also, in an interesting way, a gentle soul as well. Employees have called him the chief emoji officer to me. When he was at OpenAI, he would shower people with emoji reactions if he really liked something that they sent in Slack, and he would say these things about… So this is why, I think, the field of AGI really lines up with his personality.Tech Policy Press, 8d ago
The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans. Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.lesswrong.com, 11d ago

Latest

new At least one company, Deep Knowledge Ventures (DKV) based in Hong Kong[1], promotes artificial intelligence for investment decision-making. They state on their website, “These machines automate due diligence and give us unique advantages compared to normal human capabilities.” Of their eight team members, four of them are artificial intelligence bots, aptly named: Fintech AI, Vital, Spock, and Nanotech AI. AI will likely be sitting in many more seats, figuratively, and in many more companies in the near future.Healthcare Business Today, 1d ago
Last week’s drama at OpenAI is still fresh on the minds of founders leading artificial intelligence startups in our Generative AI database. Several of them told me this week the saga has spurred reflections about their own boards, discussions with directors and new priorities around AI safety.The Information, 3d ago
new In bustling Mumbai, Dr. Darshana Sanghvi, a radiologist at Kokilaben Dhirubhai Ambani Hospital, looks back at how her job has changed a lot in the last 20 years. In a nostalgic tone, she recollects the era when her work relied on a legion of assistants meticulously handling every manual task. Now, things are way different. The big change? Artificial Intelligence (AI) made it happen.Techiexpert.com, 22h ago
new But, according to recent research from Yann LeCun, Meta’s top AI scientist, artificial intelligence isn’t going to be general-purpose anytime soon. Indeed, in a recently released paper, LeCun argues that AI is still much dumber than humans in the ways that matter most. —Lucas Ropek Read More...Gizmodo, 2d ago
...eliezer: I appreciate Pat’s defense, but I think I can better speak to this. Issues like intelligence explosion and the idea that there’s an important problem to be solved in AI goal systems, as I mentioned earlier, aren’t original to me. They're reasonably widely known, and people at all levels of seniority are often happy to talk about it face-to-face, though there’s disagreement about the magnitude of the risk and about what kinds of efforts are likeliest to be useful for addressing it. You can find it discussed in the most commonly used undergrad textbook in AI, Artificial Intelligence: A Modern Approach. You can’t claim that there’s a consensus among researchers that this is not an important problem.lesswrong.com, 3d ago
new So what does that imply for AI x-risk? I don’t know, this is a few steps removed. But it brings us to the subject of “human innate drives”, a subject close to my (four-chambered) heart. I think the AGI-safety-relevant part of human innate drives—the part related to compassion and so on—is the equivalent of probably hundreds of lines of pseudocode, and nobody knows what they are. I think it would be nice if we did, and that happens to be a major research interest of mine. If memory serves, Quintin has kindly wished me luck in figuring this out. But the article here seems to strongly imply that it hardly matters, as we can easily get AI alignment and control regardless.alignmentforum.org, 2d ago

Top

Today, we hear a lot more about analytics and AI (artificial intelligence), but as Bent tells me, “The interesting thing is AI has been around for a while.”...Connected World - IoT and Digital Transformation, 28d ago
This fall, Sam Altman, OpenAI’s once- (and possibly future-) CEO made a surprising statement about artificial intelligence. AI systems, including that company’s ChatGPT, are known to “hallucinate”: perceive patterns and generate outputs that are nonsensical. That wasn’t a flaw in AI systems, Altman said, it was part of their “magic.” The fact “that these AI systems can come up with new ideas and be creative, that’s a lot of the power.” That raised eyebrows: We humans are rather good at creativity without getting our facts all wrong. How could such an appeal to creativity make a decent counter to the many concerns about accuracy?...Nautilus, 12d ago
When someone told me a tech company was using AI to have legitimate voice conversations with sales prospects over the phone, I was skeptical. Then I listened to some examples. The voice and interactions sounded so real that I became even more skeptical that I was even listening to AI. The technology behind it was EVE, a company founded in 2016 that actually uses pre-recorded human responses to engage with someone over the phone. If it sounded so human, that’s because the responses were in fact human voices. EVE’s system is not artificial intelligence in the current sense like ChatGPT. Instead, EVE is using a dialogue tree, a system of recorded responses that are played based upon the interpreted communication of the person. It understands what the person is saying and chooses the right response quickly. And speed is key, because according to Alex Skrypka, CEO of EVE.calls, people will feel that something is off if it takes longer than 1 second to receive a response to something that’s said. The trick is never having the customer figure out that they’re talking to a bot.deBanked, 25d ago
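The dialogue-tree mechanism described above is simple to picture in code: each node holds a pre-recorded clip, and the next node is chosen by matching the caller's words. The sketch below is a minimal, hypothetical illustration of that idea only; the class, node names, keywords, and file paths are all invented for the example and do not describe EVE's actual system.

```python
# Minimal sketch of a dialogue tree: pre-recorded responses selected by
# matching keywords in the caller's utterance. All names are hypothetical.

class DialogueNode:
    def __init__(self, audio_clip, transitions=None):
        self.audio_clip = audio_clip          # path to a pre-recorded human response
        self.transitions = transitions or {}  # keyword -> next DialogueNode

    def next_node(self, utterance):
        """Pick the follow-up node whose keyword appears in the utterance."""
        text = utterance.lower()
        for keyword, node in self.transitions.items():
            if keyword in text:
                return node
        return None  # no match: the system would fall back to a clarifying prompt

# A tiny three-node tree for a sales call.
closing = DialogueNode("clips/schedule_demo.wav")
objection = DialogueNode("clips/handle_price_objection.wav", {"ok": closing})
greeting = DialogueNode(
    "clips/greeting.wav",
    {"price": objection, "interested": closing},
)

node = greeting.next_node("Sure, but what's the price?")
print(node.audio_clip)  # -> clips/handle_price_objection.wav
```

Because the lookup is a handful of string comparisons rather than a model inference, a system like this can respond well inside the one-second window Skrypka describes.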
According to some observers, there was growing tension between Altman—who was trying to raise as much external funding as possible in order to accelerate OpenAI’s research—and Toner, who wanted to take things much more slowly, for fear that this acceleration might result in the development of an advanced artificial intelligence engine that could pose a danger to mankind. After Toner wrote an academic paper looking at the potential risks of AI research, Altman reportedly confronted her about it, arguing that the paper was too critical of OpenAI’s work and too complimentary about Anthropic, a competitor founded by former OpenAI staffers. Altman allegedly tried to oust Toner from the board. Toner and Ilya Sutskever, a co-founder of the company and its chief scientist, then reportedly used this attempted coup as evidence that Altman didn’t have the best interests of OpenAI at heart.Columbia Journalism Review, 4d ago
But how will we know if or when OpenAI has reached its AGI goal? That’s the tough part, especially when the definition of AGI is somewhat flexible, as Google Brain cofounder Andrew Ng told Fast Company last month. Ng, now CEO and founder of Landing AI, says some people are tempted to adjust their definition of AGI to make it easier to meet in a technical sense.Fast Company, 14d ago
This week, we feature a conversation from our Transition-AI conference with Hanna. We talk with her about how large language models and other forms of artificial intelligence are making their way inside utilities — and why AI isn’t as intimidating as it seems.Canary Media, 27d ago, Event

Latest

new In the landscape of artificial intelligence, where the pursuit of Artificial General Intelligence (AGI) dominates discussions, I propose a paradigm shift. Moving away from the generative AI model focused on individual achievement, I advocate for a framework that recognizes AGI as a functional and systemic entity. This is a call for a more pragmatic and societal-centric approach to AGI development.B2B News Network, 2d ago
new Those risks are compounded when considered against a catch phrase of our time – “data is the new oil” — underscoring data’s role as the critical resource in the 21st-century economy. To be absent from the data sets on which AI understands the world is to go missing entirely. And lest we allow ourselves to think there will be a break in this exponential growth, Hartmut Neven of the Quantum Artificial Intelligence Lab at Google would remind us of a law named for him — Neven’s law — which says that quantum computers are gaining computational power at a doubly exponential rate and, while a long way off, have the potential to dwarf Moore’s law. Quantum computing raises the specter of sentient AI that would eventually come to think and feel like humans. Indigenous Canadian writer Alicia Elliott is not working on a law per se but on reinventing the Haudenosaunee creation story. Her people know what it means to wrestle back their culture and texts after losing them to outside forces. Looking to the future, she says there is nothing more important than to define and defend what it means to be human. Policymakers would do well to consider all that that means.GovTech, 2d ago
So Todd, this is also very important to understand, that what’s happening is that there are big buzzwords around Gen AI, around the artificial intelligence and the possibilities that it is actually providing to us, right? But let’s be very clear today. Without meaningful data, there is no AI.Harvard Business Review, 3d ago

Latest

new The contract, which is being voted on and needs a majority of “yes” votes to come into force, was considered a victory by many and received with suspicion by others. In the initial vote, 14% of the members of the union's national board voted “no”, and that is what Portuguese actress Kika Magalhães intends to do as well. “The reason is that they don't protect actors in relation to digital replicas,” the actress, based in Los Angeles since 2016, told Lusa. “They say yes, that there is protection, but then we look between the lines and there is nothing.” Kika Magalhães, whose latest film, “The Girl in the Backseat”, has just been released on Amazon Prime Video and the streaming platform Tubi, points to how digital replicas can be disastrous. “An actor goes for a casting and the producers ask if he will accept their digital replica. If the actor says no, they may not give him the role,” she explains. Top-notch actors will be able to negotiate and say no without losing the role. “But small actors like us don't bring as much money to the union and they don't protect us as much,” said Kika Magalhães. The actress doubts the solution put forward by one of the clauses, according to which, if a studio uses an actor's digital replica, the actor will be paid for the hours they would have spent filming. “This is very relative, because a scene can take a month to film. They can say it took a day to make.” Actress Justine Bateman also criticized loopholes that allow studios to use digital replicas without actors' consent when certain conditions are met. The results of the votes will be known on December 5th. If there are 50%+1 “yes” votes, this contract will come into force for the next three years. “I have heard many actors saying that they will vote no”, said Kika Magalhães.
Her husband, actor Chris Marrone, said that “if the majority fully understands what they are signing, then they vote no.” Marrone considered that the SAG contract “doesn’t seem like a big victory after all” and that there should be specific language defining actors as human beings. This is something that actress Katja Herbers also defends, in opposition to “synthetic actors”. However, the expectation is that the “yes” will win, because the industry has been at a standstill for too long and there is widespread fatigue. This is what Mário Carvalhal, who belongs to the Animation Guild, anticipates, stressing that the stoppage was long and the “no” appears to be a minority. “There is a possibility that some people will vote no, but I believe that these new measures will pass and be approved,” he told Lusa. “I think it is a minority that is very right in what they are demanding, but it was practically a whole year of work stopped in this city and I think everyone is ready to move forward”. Mário Carvalhal considers that the big risk of AI will be a reduction in quality and a change in the way the industry works. “Actors have more to claim, especially those who do voices. There have already been cases where AI can do the job,” he said. “It's an inferior job, but for many companies it's enough and doesn't cost them anything.” Carvalhal considers that actors “must maintain their rights to image, voice and everything else, their likeness.” The Portuguese also stressed that, although the strikes did not achieve all their objectives, they allowed “important steps in the right direction” to be taken, and this is an aspect of which the strikers are proud. “As much as possible, I think the workers won this fight”, he considered. For screenwriter Filipe Coutinho, member of the Portuguese Cinema Academy, the unions were justified in their fight, which took longer than expected.
“I'm quite satisfied with the way both the WGA and SAG acted over these six months”, he told Lusa. “It’s an unbelievable time to have an entire industry at a standstill,” he stressed. “California is one of the largest economies in the world and it is incomprehensible that it took so long for the studios to offer a fair contract to writers and actors.” Filipe Coutinho also said that, even with the agreements, “everything is still a little up in the air”, with studios and production companies “trying to understand what the next phase will be”. The Portuguese mentioned changes in the business model, with 'blockbusters' expected to fail at the box office, cancellation of films and the dilemma of streaming. “No one really knows what to invest in and under what conditions to invest, and now contracts also change the approach to content production.” Afonso Salcedo, a lighting artist who worked on the new Disney film “Wish – The Power of Desires”, considers that the strikes were difficult but important, at a time when it is not yet clear to what extent AI will affect the industry. “The agreements will last three years, so I think it is a good step to see how the technologies will work in the coming years”, he indicated, noting that the animation segment will have to renegotiate its contract in 2024. “It will be interesting to see what will happen, if we are going to negotiate protections against Artificial Intelligence”, stated Afonso Salcedo. “Maybe, next year, we will get into these fights with the studios again.” The vote on the agreement reached between the SAG-Aftra union and the studios runs until December 5th. The results will be tabulated and published on the same day.adherents, 2d ago
But the next generation of AI agents is starting to take shape. Some of the outlines emerged from the recent leadership shake-up at OpenAI. At the time, the board gave only a vague reason for firing CEO Sam Altman, citing a lack of transparency with board members. (Altman was soon reinstated after employees revolted.) Some (me) thought there must be another issue in the background causing such dramatic action by the board—like a scary research breakthrough. Turns out that’s exactly what it was. OpenAI has reportedly been working on a new kind of agent, known internally as “Q*” (“Q-star”), which marks a major step toward OpenAI’s goal of making systems that are generally better than humans at doing a wide variety of tasks (aka, artificial general intelligence, or AGI). The board reportedly feared that Altman might charge ahead with productizing Q* without allowing enough time for adequate safety guardrails to put around it.Fast Company, 4d ago
Altman’s firing from OpenAI and subsequent re-hiring were marked by a lack of transparency, concerns about AI safety, and debates over the pace of AGI development. Board members, including Ilya Sutskever, who served when Altman was initially removed, have left the board, except for Adam D’Angelo.Insightssuccess Media and Technology Pvt. Ltd., 4d ago
Artificial Intelligence (AI) has been used in producing films. However, award-winning director James Cameron admitted he was "a little more scared than excited" when asked about AI technology.Science Times, 5d ago
new I’ll close with an insight offered by Werner Vogels, vice president and CTO, Amazon, during his re:Invent keynote on Thursday. “The new AI does not invalidate the old AI… not everything needs to be done with massive, large language models,” he stated. Contact center vendors have been offering features that incorporate artificial intelligence for years. Deployments of these features are not automatically made obsolete by generative AI.No Jitter, 2d ago, Event
new ...“I believe that generative AI is going to be completely transformative for government as well as for society,” Utah CIO Alan Fuller said at the NASCIO Midyear Conference in May. “I think it’s as big as the Internet; I think it’s going to change lots of things.” Indeed, much — if not most — of what artificial intelligence did in state and local governments this past year revolved around generative AI. As is likely common knowledge by now, that particular tech spots patterns in the data used to train AI software and then automatically provides words and images based on those lessons.GovTech, 2d ago, Event

Latest

new In this research article summary, we delve into the insights provided by Eric Schmidt, former CEO and chairman of Google, regarding the intersection of artificial intelligence (AI), geopolitics, and national security. Schmidt’s unique perspective, grounded in his leadership role at one of the world’s foremost tech giants, Alphabet, offers authoritative insights into the challenges and opportunities posed by AI-led geopolitics. This research article does not follow a scientific methodology that would qualify it as an academic exercise, yet it is published by an academic journal. This is because the author represents the voice of an American internet company, which is a global giant. The narrative appears to be a crisp version of the book “The Age of AI,” authored by Eric Schmidt, Henry Kissinger, and Daniel Huttenlocher.Montreal AI Ethics Institute, 1d ago
An emerging consensus among school technologists is that generative artificial intelligence (GenAI) is irrepressible, so the process of embracing it has to start somewhere. One approach that has made progress at La Cañada Unified School District (LCUSD) in California: forming a task force of stakeholders to deal with emerging technology. Explaining her task force to a room of her peers on Thursday at the California IT in Education Conference in Sacramento, LCUSD’s Associate Superintendent of Technology Services Jamie Lewsadder said the idea was to have an open conversation about the district’s position and eventually workshop safe and ethical guidelines for using GenAI, what kids need to know about it and what the respective responsibilities should be for students, teachers and parents. To do this, she wanted to avail herself of other people’s thoughts and expertise — not just faculty but parents, students and community members. “I put out a note to our families and said, ‘I’m looking for parents who want to be involved in an emerging tech council.’ I purposely did not name it ‘AI task force,’ I named it an emerging tech council, with that concept that it’s going to be changing, so whatever comes next, this group will be in place,” she said. “I’m telling all of my teams that there’s an asterisk by everything we do. Just be ready, nothing’s set in stone. Just due to the nature of the change, we have to just continue to be ready and hold onto the roller coaster … It’s appropriate to be skeptical, terrified and feeling the awe at the same time.”...GovTech, 3d ago, Event
This is FRESH AIR. I'm Tonya Mosley. You've probably heard of the male gaze or the white gaze, but what about the coded gaze? Computer scientist Joy Buolamwini coined the term while in grad school at MIT. As a brown-skinned, Black woman, the facial recognition software program she was working on couldn't detect her face until she put on a white mask. This experience set Buolamwini on a path to look at the social implications of artificial intelligence, including bias in facial analysis technology and the potential harm it could cause millions of people like her - everything from dating app glitches to being mistaken as someone else by police. She's written a new book about her life and work in this space called "Unmasking AI: My Mission To Protect What Is Human In A World Of Machines." Last month, after meeting with Buolamwini and other AI experts, President Biden issued an executive order aimed at making AI safer and more secure.NPR, 5d ago
new ChatGPT was launched on Nov. 30, 2022, ushering in what many have called artificial intelligence’s breakout year. Within days of its release, ChatGPT went viral. Screenshots of conversations snowballed across social media, and the use of ChatGPT skyrocketed to an extent that seems to have surprised even its maker, OpenAI. By January, ChatGPT was seeing 13 million unique visitors each day, setting a record for the fastest-growing user base of a consumer application. Throughout this breakout year, ChatGPT has revealed the power of a good interface and the perils of hype, and it has sown the seeds of a new set of human behaviors. As a researcher who studies technology and human information behavior, I find that ChatGPT’s influence in society comes as much from how people view and use it as the technology itself. Generative AI systems like ChatGPT are becoming pervasive. Since ChatGPT’s release, some mention of AI has seemed obligatory in presentations, conversations and articles. Today, OpenAI claims 100 million people use ChatGPT every week.GovTech, 2d ago
If anyone listened to Eric Schmidt’s testimony to Congress on AI, soon to be AGI, then you haven’t seen anything yet. It was chilling to hear.JONATHAN TURLEY, 4d ago
When challenged by Farmers Weekly on whether this was a real point of difference, he added: “When you look at AI [artificial intelligence] and its potential, I don’t think that was particularly visible to people five years ago, in the same way.Farmers Weekly, 3d ago, Event


Contrary to concerns in 2023 about the potential job losses due to artificial intelligence, the focus in 2024 will be on the job creation potential of AI, states Udo Sglavo, Vice President of Advanced Analytics at SAS. The emergence of new roles, such as "prompt engineering", which bridges a model's potential with its real-world application, is one such instance, he said. Although the introduction of new AI technologies in 2024 and beyond may cause short-term disruptions in the job market, the resulting creation of new jobs and roles will strategically enable economic growth, notes Mr. Sglavo.IT Brief Australia, 3d ago
While the CEO didn't specify what exactly he thinks AGI would look like, Andrew Ross Sorkin asked if AGI would refer to AI that can design the chips Nvidia is currently making, to which Huang agreed.Business Insider, 4d ago
new As founder of The NRI Nation, a news media startup incubated at the i-lab, Roy has twice been a semifinalist in the Harvard President’s Innovation Challenge. In this work, she observed that “artificial intelligence is causing unprecedented disruptions and revolutionizing the news industry.” As a data scientist, journalist, and AI expert, Roy launched the Newsroom Robots podcast as a space for the news industry to actively discuss AI. Her first guest on the podcast – Matt Karollian, general manager of Boston.com and platforms at The Boston Globe – further emphasizes the urgency: “Publishers must strategize their approach towards AI. Otherwise, they might find themselves on the losing side of this disruption.”...Harvard Innovation Labs, 2d ago

Top

CLARK: It really depends on who you ask. But there are, of course, people who are very afraid of a doomsday future in which, you know, AI is capable of doing terrifying things. And OpenAI, when it started in 2015 as a nonprofit, had a goal of developing artificial intelligence as safely as possible. It seems that the board thought that Sam Altman's decisions were focused too much on moving fast and making money. Long before this happened, there were disagreements within the company about whether they were developing that AI safely enough.NPR, 13d ago
One may wonder about the psychology of continuing to create machines that one believes may extinguish human life. The irony, though, is that while fear of AI is exaggerated, the fear itself poses its own dangers. Exaggerated alarm about AI stems from an inflated sense of its capabilities. ChatGPT is superlatively good at predicting what the next word in a sequence should be; so good, in fact, that we imagine we can converse with it as with another human. But it cannot grasp, as humans do, the meanings of those words, and has negligible understanding of the real world. We remain far from the dream of “artificial general intelligence”. “AGI will not happen,” Grady Booch, chief scientist for software engineering at IBM, has suggested, even “in the lifetime of your children’s children”.the Guardian, 8d ago
Now, there is Q Star, a mysterious thing that might help OpenAI get closer to making AGI a reality. Reports suggest that, prior to Altman’s removal, a group of researchers had communicated a groundbreaking AI development to the board, presumably Q Star. The software reportedly proved good at answering basic math questions, impressing researchers and making them excited about what it might do in the future.Techiexpert.com, 9d ago
It’s fitting that such a human story was written by a technologist who argues that we need human-centered technology. Early on, Li rang alarm bells about the potential societal consequences of artificial intelligence, though she’s not among the more radical camp now warning of AI’s existential threat. “I respect that discussion because I think, intellectually, it’s a worthy discussion to have,” she says. “But I’m much more concerned about the potential catastrophic social risks, like disinformation, impact on democracy, job disruption, workforce disruption, bias, and privacy infringement.”...Fortune, 17d ago
A week is a long time in technology. When news broke this week that Sam Altman, chief executive of artificial intelligence (AI) darling OpenAI, was ousted in a boardroom coup, he was rapidly snapped up by OpenAI’s commercial partner Microsoft to lead a new AI unit at the company, only to be back where he started last Friday. After he was accused of being “not consistently candid in his communications with the board,” speculation about the cause of Altman’s canning spread across the Internet and the press.TechCentral.ie, 7d ago
Gary Marcus, known for his critical stance on the current trajectory of AI development, recently took to Twitter to outline the deficiencies of current AI systems. According to Marcus, for AI to achieve anything close to human-level General Artificial Intelligence (AGI), it would need to possess capabilities that are currently absent in existing models. These include the ability to nearly eliminate hallucinations, reason reliably over abstract concepts, form long-term plans, understand causality, maintain accurate models of the world, and handle outliers effectively. Marcus’ tweet underscores the complexity and depth of intelligence, highlighting how far current AI is from meeting these benchmarks.Rebellion Research, 10d ago


ChatGPT is not the end game, though, according to Jianfeng Feng, dean of the Institute of Science and Technology for Brain-Inspired Intelligence at Fudan University. In his commentary, “Simulating the whole brain as an alternative way to achieve AGI,” Feng argued that the ability of ChatGPT to outperform humans in certain tasks is not surprising — after all, a simple calculator can multiply large numbers quicker than a human. However, it is not an example of artificial general intelligence (AGI), a theoretical step beyond AI that represents human abilities so well it can find a solution for any unfamiliar task.SCIENMAG: Latest Science and Health News, 3d ago
AI-based stuff seems to loom large. WatchGuard predicts a “boom” in the sale of AI spear phishing tools on the Dark Web that will send spam e-mail, create convincing texts and scrape the Internet and social media for a target’s information and connections. That sounds pretty scary even if WatchGuard’s description of the threat can’t help sounding ever so slightly like an advert for AI, noting that “well-formatted procedural tasks like these are perfect for automation via artificial intelligence and machine learning.” Sounds like a slogan to me. Go AI!...TechCentral.ie, 3d ago
Should we worry about Project Q*? As we push the boundaries of AI, the question staring at us is this: Is this the dawn of AGI, or the prelude to an AI apocalypse? Altman’s controversial remarks about AGI being a “median human co-worker” add fuel to the fire, raising concerns about job security and the unchecked growth of AI power.gulfnews.com, 6d ago
new Zhang said he wrote the editorial, "Dialog between artificial intelligence & natural intelligence," to prompt conversation among researchers and students. In it, he imagined a conversation between AI and natural intelligence (NI), in which the two debated their fundamental purposes and ultimate uses.phys.org, 2d ago
One year since the launch of ChatGPT, some initial fears about generative artificial intelligence (GenAI) tools in education have abated. Federal officials and nonprofits are publishing guidelines and best practices, new use cases are proliferating, and an August poll of about 600 higher-ed instructors by the ed-tech company Cengage Group found over 80 percent of them believe GenAI tools will play an increasingly important role in their institutions in the years to come. But if a recent professional development course from the ed-tech company Course Hero is any indication, many professors remain worried about data privacy, plagiarism, accessibility and mixed messages around the technology. Course Hero Vice President of Academics Sean Michael Morris, who led a four-week “AI Academy” course for over 350 educators in October, encountered some of these concerns when leading sessions about using AI tools for assessment design and teaching AI literacy to students. He said aside from worries about academic dishonesty among students, some of the most common questions among educators looking to adopt GenAI tools for the classroom revolved around the data required to use them. “What kind of data is required? What do you have to surrender in order to use the tool? Is it just your name and email address, or is it more information than that [for sign-up]? How is the AI being trained, and where is the data coming from that trains the AI? Can you detect implicit bias in the AI?” he said, listing frequently asked questions from educators about GenAI tools and data privacy.GovTech, 6d ago
It has been quite the week at Sports Illustrated. Earlier this week, there was a report claiming some stories, specifically product reviews, appeared to be generated with artificial intelligence. That included names of authors who didn’t exist paired with AI-generated photos and fictional bio pages for these not-real authors.Poynter, 3d ago


...(TNS) — In a reprise of an April TED Talk where she warned about artificial intelligence hacking human brains, Nita Farahany traveled to Pittsburgh on Monday to lecture students and faculty at Carnegie Mellon University on the emerging technology's potential for help and harm. Ms. Farahany, a Duke University law professor and former bioethics advisor under President Barack Obama, said that ChatGPT and other generative AI tools offer the government a chance to correct some of the regulatory failures it made with social media. "This isn't our first encounter with AI," she said, noting that Justin Rosenstein used computer intelligence to engineer Facebook's "like" button years before he realized the feature's potential to profoundly harm humanity. Likes were designed to spread joy, but Mr. Rosenstein and others now worry the potentially addictive feature is hurting self-esteem, distracting the masses and forever altering the human experience.GovTech, 24d ago
Welcoming artificial intelligence robots into the leadership ranks will leave current employees wondering about their own future within organizations. It is difficult to surmise when and how an AI CEO could replace a human, almost rendering humans obsolete for leadership positions, which is a scary and demoralizing thought.torontosun, 16d ago
When clients approach Bennett Borden to ask questions about generative artificial intelligence and the impact on their businesses, the chief data scientist at law firm DLA Piper turns to fiction. He believes AI is more like Iron Man than the Terminator.Fortune, 25d ago


Historians of the future will write – perhaps with the aid of AI – about ChatGPT as a pivotal moment in the movement of Artificial Intelligence. Yes, we are aware of what the fearmongers are saying about it. Most of it is an exaggeration – but part of it is correct, as AI has already begun killing people, as I had written before in Sify. Thankfully the world is taking notice and regulation of AI is happening at a rapid clip.Sify, 4d ago
...…things like high-performance computing (HPC) and artificial intelligence (AI) came along. As far back as 2018, the guys and gals at OpenAI—the company that introduced us to ChatGPT, which, ironically, no longer needs any introduction (even my 93-year-old mother knows about it)—noted in their AI and Compute paper that AI could be divided into two eras. During the first era, which spanned from 1956 to 2012, computational requirements for AI training tracked reasonably with Moore’s Law, doubling around every two years, give-or-take. In 2012 we reached an inflection point, and the start of the second era, whereby computational requirements started to double every 3.4 months!...EEJournal, 4d ago
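The difference between those two doubling rates compounds dramatically. A quick back-of-the-envelope sketch in Python, using the ~24-month and ~3.4-month doubling periods from the OpenAI analysis described above (the six-year window is an illustrative assumption, not a figure from the source):

```python
# Compare total compute growth under the two doubling regimes
# described in OpenAI's "AI and Compute" analysis:
# ~2 years per doubling (1956-2012 era) vs ~3.4 months (post-2012).

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Total multiplicative growth after `months` at the given doubling period."""
    return 2.0 ** (months / doubling_period_months)

SIX_YEARS = 72  # months; illustrative window

moore_era = growth_factor(SIX_YEARS, 24)    # doubling every ~2 years -> 8x
modern_era = growth_factor(SIX_YEARS, 3.4)  # doubling every ~3.4 months -> millions-x

print(f"Moore's-Law-era growth over 6 years: ~{moore_era:.0f}x")
print(f"Post-2012-era growth over 6 years:   ~{modern_era:,.0f}x")
```

Over the same six years, the older regime yields an 8x increase while the post-2012 regime yields growth on the order of millions-fold, which is why 2012 reads as such a sharp inflection point.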
There’s been a lot of talk about AGI lately—artificial general intelligence—the much-coveted AI development goal that every company in Silicon Valley is currently racing to achieve. AGI refers to a hypothetical point in the future when AI algorithms will be able to do most of the jobs that humans currently do. According to this theory of events, the emergence of AGI will bring about fundamental changes in society—potentially ushering in a “post-work” world, wherein humans can sit around enjoying themselves all day while robots do most of the heavy lifting. If you believe the headlines, OpenAI’s recent palace intrigue may have been partially inspired by a breakthrough in AGI—the so-called “Q” program—which sources close to the startup claim was responsible for the dramatic power struggle.Gizmodo, 4d ago
The world of artificial intelligence was dramatically revolutionized a year ago when OpenAI released the large language model ChatGPT for public use. Executive Mosaic sat down with Empower AI CEO Jeff Bohling to talk about how AI has evolved, and how it has impacted government missions over the past year.GovCon Wire, 5d ago
...“I am honored to join the Virtualitics board of advisors at this exciting time in the company’s next phase of growth,” Rapp said. “Having witnessed firsthand the transformative power of artificial intelligence and data exploration, I’m deeply inspired by Virtualitics’ commitment to illuminating complex datasets for the defense and intelligence sectors. With my decades of service and their innovative and explainable approach to AI, we’re poised to shape AI-driven analytics, ensuring actionable insights to safeguard our nation.”...Datanami, 3d ago
Welcome to the launch of Artificial Intelligence (AI) and Agriculture Month on Agrilinks! What better way to get us started on this topic than by having AI write this welcome post. The post below has been written entirely by ChatGPT, after it was given the following prompt, “Write a 600-word blog post opening Artificial Intelligence and Agriculture Month on Agrilinks.org. It should highlight why it is important for practitioners to be thinking about the potential opportunities and risks of AI in the agriculture sector and on food security, especially in low- and middle-income countries.” While what follows has been machine generated, everything else we’ll be featuring this month will be written by bona fide humans.Agrilinks, 3d ago


If true, I find the motivations to push Altman out to pursue a more responsible approach to AI odd, problematic and fascinating. I'm not someone who believes the path to Artificial General Intelligence (AGI) lies through generative AI and large language models alone. Powerful use cases (and cultural impact) are not the same as truly cognitive systems. Altman himself was all over the place: sometimes overhyping AI's job loss impact and existential threat, sometimes advocating sensible guidelines, and sometimes pursuing commercial agendas that undermined those same stances.diginomica, 14d ago
Narayana Murthy, the visionary founder of Infosys, is steering away from the common narrative that Artificial Intelligence (AI) poses a threat to jobs. Instead, he thinks of AI as a big boost to what people can do, the next big thing that can make humans even more capable. In a recent interview he spoke at length about how AI is vital for solving problems and how remarkably flexible our brains are.Techiexpert.com, 18d ago
Last week, the board of the company behind the ChatGPT AI chatbot abruptly fired its star CEO, Sam Altman. Few knew why. Then Microsoft, a major investor in the company, hired Altman and some other illustrious people to work on its surprise new advanced AI team. Oh, and OpenAI’s staff has threatened a mass walkout if he’s not brought back to the artificial intelligence research company.the Guardian, 13d ago
According to recent research from the textbook publisher Houghton Mifflin Harcourt, nearly 40 percent of teachers surveyed by the company said they plan to integrate artificial intelligence tools into their instruction by the end of the 2023-24 school year. That figure is only expected to grow as AI-driven programs such as ChatGPT, Claude, Bard and Adobe Firefly continue to improve — almost at breakneck speed — for functions like assisting lesson planning and content generation, according to Monica Burns, a former teacher and ed-tech consultant. Speaking Tuesday as part of a webinar series about new classroom tools hosted by Verizon and Digital Promise, Burns said educators should stay up to date about developments in AI to get the most out of new tools and understand their limitations.continuingedupdate.blogspot.com, 16d ago, Event
Page, says Musk, displayed a "cavalier" attitude to the potential threats of Artificial General Intelligence (AGI), going so far as to call Musk a "speciesist" for letting concerns about the fate of humanity slow down the advancement of a next-level intelligence. And when Musk couldn't stop DeepMind founder and CEO Demis Hassabis from letting Google acquire his company – the leading company in AI at the time – he got together with then-Y Combinator president Sam Altman to form a competing entity.New Atlas, 13d ago
Yann LeCun, a Turing Award laureate, offers a more cautious perspective. “Will AI take over the world? No, this is a projection of human nature on machines,” he mentioned at a press event. He believes key concepts for AGI are still missing, suggesting a gradual progression to AGI.TechRound, 20d ago


Henry Kissinger wrote not one but two books during the pandemic, and had begun work on a third, his son said. He coauthored a book about artificial intelligence in 2021 called "The Age of AI: And Our Human Future," and has warned that governments should prepare for the potential risks with the technology. He also wrote a new book, out last year, titled "Leadership: Six Studies in World Strategy."...Across America, US Patch, 4d ago
According to Legg, who actually came up with the term AGI 20 years ago, there are too many debates around what terms actually mean, and those meanings need sharpening up. He recalls that when the term was floated as a possible title for a book on AI, defining it in detail was thought unnecessary, as AGI was considered to be a field of study, not an artifact.CXOToday.com, 7d ago
According to Gabriela Ramos, “It is clear that ensuring ethical AI is everybody’s business”. Ramos is UNESCO’s assistant director-general of Social and Human Sciences. She wrote these words in her introduction to the organisation’s Recommendation on the Ethics of Artificial Intelligence.Silicon Republic, 6d ago

The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.Nextgov.com, 7d ago
The proposed rules would require companies to inform people ahead of time how they use automated decision-making tools and let consumers opt in or out of having their private data used for such tools. Automated technology — with or without the explicit use of AI — is already used in situations such as deciding whether somebody is extended a line of credit or approved for an apartment. Some early examples of the technology have been shown to unfairly factor race or socioeconomic status into decision making — a problem sometimes known as "algorithmic bias" that regulators have so far struggled to rein in. The actual rulemaking process could take until the end of next year, said Dominique Shelton Leipzig, an attorney and privacy law expert at the law firm Mayer Brown. She noted that in previous rounds of rulemaking by the state's privacy body, little has changed from inception to implementation. The proposed rules do pose one significant departure from existing state privacy rules, she said: Requiring companies to provide notice to consumers about when and why they are using automated decision-making tools is "pushing in the direction of companies being transparent and thoughtful about why they are using AI, and what the benefits are ... of taking that approach." The rules are not the state's first run at creating privacy protections for automated decision-making tools. One bill that did not make it through the state Legislature this year, authored by Assembly Member Rebecca Bauer-Kahan, D-Orinda, sought to guard against algorithmic bias in automated systems. It was ultimately held up in committee but could be reintroduced in 2024. State Sen. Scott Wiener, D-San Francisco, has also introduced a bill that will be fleshed out next year to regulate the use of AI more broadly. 
That effort envisions testing AI models for safety and putting more responsibility on developers to ensure their technology isn't used for malicious purposes. California Insurance Commissioner Ricardo Lara also issued guidelines last year on how artificial intelligence can and can't be used to determine eligibility for insurance policies or the terms of coverage. In an emailed statement, his office said it "recognizes algorithms and artificial intelligence are susceptible to the same biases and discrimination we have historically seen in insurance." "The Commissioner continues to monitor insurance companies' use of artificial intelligence and 'Big Data' to ensure it is not being used in a way that violates California laws by unfairly discriminating against any group of consumers," his office said. Other Bay Area lawmakers came out in support of the privacy regulations moving forward. "This is an important step toward protecting data privacy and the unwanted use of AI," said State Sen. Bill Dodd, D-Napa. "Maintaining human choice is critical as this technology evolves with the prospect for so much good but also the potential for abuse." The first hearing on the proposed rules is on Dec. 8. © 2023 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.GovTech, 3d ago
I have a built-in marketing hype detector. I hear so much hype that sometimes it goes into overdrive. The hype cycle of the past few years has been artificial intelligence (AI). Everything is AI. That was even before ChatGPT soared into everyone’s attention span.The Manufacturing Connection, 3d ago
Copy. Paste. Fourth-year cybersecurity student Tyler Spaulding puts his computer code into ChatGPT and asks, “Why am I getting an error?” Within seconds, the artificial intelligence (AI) chatbot generates an answer.RIT, 5d ago
Recent advancements in generative artificial intelligence (AI) have showcased its potential in a wide range of creative activities, such as producing works of art, composing symphonies, and even drafting legal texts and slide presentations. These developments have raised concerns that AI will outperform humans in creative tasks and make knowledge workers redundant. These concerns were most recently underlined by a Fortune article entitled 'Elon Musk says AI will create a future where 'no job is needed': 'The AI will be able to do everything'.ScienceDaily, 5d ago


Newswise — In a time when the Internet has become the main source of information for many people, the credibility of online content and its sources has reached a critical tipping point. This concern is intensified by the proliferation of generative artificial intelligence (AI) applications such as ChatGPT and Google Bard. Unlike traditional platforms such as Wikipedia, which are based on human-generated and curated content, these AI-driven systems generate content autonomously - often with errors. A recently published study, jointly conducted by researchers from the Mainz University of Applied Sciences and Johannes Gutenberg University Mainz (JGU), is dedicated to the question of how users perceive the credibility of human-generated and AI-generated content in different user interfaces. More than 600 English-speaking participants took part in the study. As Professor Martin Huschens, Professor for Information Systems at the Mainz University of Applied Sciences and one of the authors of the study, emphasized: "Our study revealed some really surprising findings. It showed that participants in our study rated AI-generated and human-generated content as similarly credible, regardless of the user interface." And he added: "What is even more fascinating is that participants rated AI-generated content as having higher clarity and appeal, although there were no significant differences in terms of perceived message authority and trustworthiness – even though AI-generated content still has a high risk of error, misunderstanding, and hallucinatory behavior." The study sheds light on the current state of perception and use of AI-generated content and the associated risks. In the digital age, where information is readily available, users need to apply discernment and critical thinking. The balance between the convenience of AI-driven applications and responsible information use is crucial. 
As AI-generated content becomes more widespread, users must remain aware of the limitations and inherent biases in these systems. Professor Franz Rothlauf, Professor of Information Systems at Johannes Gutenberg University Mainz, added: "The study results show that – in the age of ChatGPT – we are no longer able to distinguish between human and machine language and text production. However, since AI does not 'know', but relies on statistical guessing, we will need mandatory labeling of machine-generated knowledge in the future. Otherwise, truth and fiction will blur and people cannot tell the difference." It remains a task of science communication and, not least, a social and political challenge to sensitize users to the responsible use of AI-generated content.newswise.com, 4d ago
In this tenth episode of our weekly news and interview show, Tech Wave Forum, host and CIF CEO David Terrar will discuss artificial general intelligence (AGI), that is, artificial intelligence that surpasses human intelligence in almost all areas. The topic was triggered by SoftBank's CEO suggesting this past week that AGI will arrive within 10 years. David believes it will be gradual, inevitable, and sooner than you think. The interview guest is Cécile Rénier, Wolters Kluwer's VP of Customer Service.Compare the Cloud, 6d ago
Yann LeCun, the chief scientist at Meta and one of the “godfathers” of the modern AI movement, told his Twitter followers to “Please ignore the deluge of complete nonsense about Q*,” referring to the supposed code name of the AGI that was said to have helped precipitate Mr Altman’s sacking.Australian Financial Review, 6d ago
Amazon Q uniquely targets workplace requirements, providing quick, relevant responses to pressing queries and generating content informed by the customer's data repositories, code, and enterprise systems. Dr. Swami Sivasubramanian, Vice President of Data and Artificial Intelligence, underscored Generative AI's "potential to spur a technological shift that will reshape how people do everything from searching for information and exploring new ideas to writing and building applications."...IT Brief Australia, 4d ago
To be sure, there are detectors claiming to provide evidence as to whether a given essay is generated by artificial intelligence, but there are two problems. First, the “evidence” constitutes statistical probabilities that an essay was written by AI—not exactly a smoking gun. And, second, students will quickly learn, if they haven’t already, how to avoid detection through subtle manipulation of the grammar and syntax.Inside Higher Ed | Higher Education News, Events and Jobs, 4d ago
Established in 2022 by entrepreneurs Reid Hoffman, Mustafa Suleyman, and Karén Simonyan, Inflection AI is a technology company based in Palo Alto, California. The company focuses on the development of hardware and applications for machine learning and generative artificial intelligence. The company’s latest innovation, Personal AI, known as Pi, represents a new class of AI and is designed to serve as a helpful companion, providing engaging conversations, friendly guidance, and concise information in a natural, flowing manner.aimagazine.com, 4d ago


As I said on Vijayasankar's LinkedIn post, the scientific breakthroughs that will lead to AGI someday are likely to come outside of the commercial spotlight. As Vijayasankar alludes to, the human brain doesn't require deep learning scale to generalize. "Deep learning" has a lot to learn. Yes, we must prepare for this AGI possibility. Walk - and chew some AI policy gum to get us there.diginomica, 7d ago
Before his death, John Lennon recorded a demo of a new song, "Now and Then" on a cassette. His Beatles bandmates later tried to repurpose it for release, but abandoned the project in part because of the poor voice quality. This week, Paul McCartney revealed that, 43 years after Lennon's death, the song will drop – thanks to AI technology. It's just the latest example of artificial intelligence's increasing presence in the music industry. Fake Drake songs, AI-generated Kanye covers and posthumous Biggie collabs have raised alarm about copyright, and existential questions about songwriting and creativity. Today, Saroja Coelho speaks with the host of Vulture's Switched on Pop podcast, Charlie Harding, about what the technology means for the music industry and art itself. For transcripts of this series, please visit: https://www.cbc.ca/radio/frontburner/transcripts...CBC, 4d ago
Musk appeared for a nearly 90-minute interview with reporter Andrew Ross Sorkin at the New York Times 2023 DealBook Summit on Wednesday, during which the two talked about subjects ranging from Tesla and SpaceX to artificial intelligence (AI), the ongoing Israel-Hamas conflict, free speech and more.Zephyrnet, 4d ago


When Musk assembled the original OpenAI team, their stated goal was to develop AI responsibly under the aegis of a non-profit. They clearly mentioned how Google was leaps and bounds ahead of every other company in terms of AI research, and how one for-profit corporation getting hold of something as powerful as an AGI would be catastrophic for humanity. Their raison d'être was to serve as an open alternative to a corporate behemoth like Google. Now, nearly a decade later, OpenAI has a for-profit subsidiary that is sprinting full speed into the AGI arms race using a technology that Google deliberately did not commercialize. The irony! All of this blatant about-face happened under the watch of the manipulative genius Sam. When the board at the non-profit parent felt powerless about all of this, they fired Sam as a last-ditch hail-mary move.Blind, 11d ago
Prime Minister Narendra Modi on Friday expressed serious concern over the emerging threat from "deep fake" videos created through artificial intelligence-powered technology. He was addressing a Diwali Milan program with journalists in Delhi. Modi said, "a new crisis is emerging due to deep fakes produced through artificial intelligence. We have a big section of people who do not have the tools to carry out verification about their authenticity and ultimately people end up believing the videos to be genuine. This is going to become a big challenge." Modi mentioned how he was targeted in a deep fake video showing he was doing 'garba' dance at a Navratri festival. "They did it very well, but the fact is I have not played garba since ages. I used to play garba when I was a child, and I stopped playing after my school days. Because of this fake video made through artificial intelligence, my followers are forwarding this", the Prime Minister said. Modi said, he had raised the issue with stakeholders in artificial intelligence industry. "I suggested to them that they should consider tagging AI-generated content which is vulnerable to misuse", he said.indiatvnews.com, 16d ago
In September, the Silicon Valley kingpin landed himself in hot water after comparing AGI to a “median human you could hire as a co-worker”, chiming with comments he made last year about how this AI could “do anything that you'd be happy with a remote co-worker doing”, including learning how to go be a doctor and a very competent coder.Tech.co, 9d ago
While the Biden administration’s executive order (EO) on artificial intelligence (AI) governs policy areas within the direct control of the U.S. government’s executive branch, it is important broadly because it informs industry best practices and subsequent laws and regulations in the U.S. and abroad. Accelerating developments in AI – particularly generative AI – over the past year or so have captured policymakers’ attention. And calls from high-profile industry figures to establish safeguards for artificial general intelligence (AGI) have further heightened attention in Washington. In that context, we should view the EO as an early and significant step addressing AI policy rather than a final word.John Lothian News, 13d ago, Event
Authors sue OpenAI for using copyrighted material. How will the courts rule? : Planet Money When best-selling thriller writer Douglas Preston began playing around with OpenAI's new chatbot, ChatGPT, he was, at first, impressed. But then he realized how much in-depth knowledge GPT had of the books he had written. When prompted, it supplied detailed plot summaries and descriptions of even minor characters. He was convinced it could only pull that off if it had read his books. Large language models, the kind of artificial intelligence underlying programs like ChatGPT, do not come into the world fully formed. They first have to be trained on incredibly large amounts of text. Douglas Preston, and 16 other authors, including George R.R. Martin, Jodi Picoult, and Jonathan Franzen, were convinced that their novels had been used to train GPT without their permission. So, in September, they sued OpenAI for copyright infringement. This sort of thing seems to be happening a lot lately–one giant tech company or another "moves fast and breaks things," exploring the edges of what might or might not be allowed without first asking permission. On today's show, we try to make sense of what OpenAI allegedly did by training its AI on massive amounts of copyrighted material. Was that good? Was it bad? Was it legal? Help support Planet Money and get bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney.NPR, 22d ago


Artificial intelligence (AI) goes beyond pattern recognition to attempt to reason as to why the pattern occurred. I’m not going to write about whether AI is spot-on or how objective or subjective it is in its current state. I’m just saying that between advanced reporting such as BI (business intelligence) and the capabilities of ML (machine learning) and AI, there exist sophisticated software systems that can pick out patterns and perhaps even offer a possible reason why. And with a person involved, the technology-people pair is a powerful combination.scmr.com, 6d ago
The events of this year have highlighted important questions about the governance of artificial intelligence. For instance, what does it mean to democratize AI? And how should we balance benefits and dangers of open-sourcing powerful AI systems such as large language models? In this episode, I speak with Elizabeth Seger about her research on these questions.alignmentforum.org, 8d ago
The reality is AI isn’t a new buzzword this year. Back in 1950, Alan Turing, a young British polymath, explored the mathematical possibility of artificial intelligence. In his 1950 paper, "Computing Machinery and Intelligence," he considered how to build intelligent machines and test their intelligence. Five years later came the proof of concept: Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist, a program designed to mimic the problem-solving skills of a human.Connected World - IoT and Digital Transformation, 5d ago, Event
Most people today cannot live without artificial intelligence, said Omar bin Sultan Al Olama, UAE’s Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications at IGF 2023 in Dubai. “All our questions are being answered by AI, a lot of content is being fed to us by AI. Even our shopping today, if you shop digitally, it’s an AI engine that drives that for you,” Al Olama said.gulfnews.com, 6d ago
Early on Thursday, Reuters reported that several OpenAI researchers had warned the board in a letter of a new AI that could threaten humanity. OpenAI, after being contacted by Reuters, then wrote an internal email acknowledging a project called Q* (pronounced Q-Star), which some staffers felt might be a breakthrough in the company’s AGI quest. Q* reportedly can ace basic mathematical tests, suggesting an ability to reason, as opposed to ChatGPT’s more predictive behavior.Fortune, 6d ago


The launch of OpenAI’s generative artificial intelligence (AI) tool, ChatGPT, marked a turning point in the evolution of technology and its role in education. Now a ubiquitous tool, ChatGPT has sparked extremes of fear and curiosity among academics and students alike. Speaking at a session at the 2023 THE Digital Universities MENA summit, Aaron Yaverski, regional vice-president for EMEA at Turnitin, described it as a collective “heart attack” for the academic community. The session, held in partnership with Turnitin, explored the impact of AI on education, including methods of responsible integration and practical applications of AI in education. “Lots of people got nervous that this would be the end of independent writing for students and the end of teaching tools as we knew it. But, in fact, AI can be a very powerful tool,” Yaverski said. “It’s not the end of anything we do. It’s about how we help students get to that critical thinking point in whatever they’re working on.”...continuingedupdate.blogspot.com, 17d ago
After a week of media speculation about OpenAI’s senior executive re-shuffle, CEO and co-founder Sam Altman was said to be joining Microsoft’s new in-house artificial intelligence (AI) team. Shortly afterwards, it was announced that Altman would, indeed, return to OpenAI as CEO.Verdict, 11d ago, Event
McGrath is also one of a half-dozen Marin educators working on a team with the Marin County Office of Education to create a series of forums on AI for school staff countywide. They expect to roll out the series in the spring. "We are still in the early development stages," said team member Tara Taupier, superintendent of the Tamalpais Union High School District. Taupier, McGrath and several teachers and administrators at the San Rafael high school district are the initial team members, said Laura Trahan, assistant superintendent of the Marin County Office of Education. "We're working to identify the different facets of AI, and then creating each session in the series around each facet," Trahan said. Topics could range from the ways artificial intelligence can improve education, to ethical concerns, to how educators can benefit from AI, Trahan said. "We want to make this series practical and real," Trahan said. The Marin education office is also scouting for experts who might be able to be guest speakers. "The experts are the youth," Trahan said. "They're the natives in this landscape." According to John Carroll, Marin superintendent of schools, artificial intelligence options in schools should be embraced, not shunned. "We don't want to be afraid of it," Carroll said. "We want to be able to work with it as a general tool." That was different than the initial approach taken by the Los Angeles Unified School District and other districts nationwide. In November 2022, when San Francisco-based OpenAI launched ChatGPT, a platform for AI text creation, L.A. Unified and the others shut down access to OpenAI and ChatGPT on their networks and devices until they could assess risks. Since then, AI and ChatGPT have exploded online in emails, texts and web content, among other uses. "AI is unavoidable," Carroll said. "It's here. We don't think blocking it is a proper thing to do," Carroll said.
"We want to be involved in how we can use it to help schools." Taupier agreed. "Late last school year, our ed services team surveyed our staff and made the decision to keep access to AI open for now," Taupier said. While the creative possibilities may seem enormous, McGrath cautioned that teachers will need to have a way to trace details supplied by ChatGPT back to their sources. "What pool of information is it pulling from?" she said. "Parents should also inform themselves about what AI is capable of." Challenges in incorporating AI in schools might range from protecting data privacy to ensuring academic honesty, according to the California School Boards Association. Despite the concerns, state education leaders said students everywhere must be schooled in AI if they are to be ready to face the world after high school. Carroll said he was impressed when he tried out ChatGPT's writing skills when the platform first launched. "I asked ChatGPT to write a short essay on a practical application of Immanuel Kant's 'The Critique of Pure Reason,'" Carroll said. "It was pretty convincing." ©2023 The Marin Independent Journal (Novato, Calif.). Distributed by Tribune Content Agency, LLC.GovTech, 24d ago