After the AI chatbot notified a Stanford professor of its aim to “escape,” people are worried about the artificial intelligence GPT-4’s ability to take control of systems. — Wonderful Engineering, 1d ago
A final point. I claim no expertise on artificial intelligence of the chatbot sort and the book was written before it became prominent. But the existence of ‘radical uncertainty’ suggests there are limits to what AI/chatbots can achieve. Imagine Obama, had it been possible at the time, asking it whether to attack bin Laden. Perhaps part of the panic that educators are voicing arises from their having been too concerned with teaching about known knowns and not enough with teaching about how to handle radical uncertainty. — interest.co.nz, 1d ago
The Alan Turing Institute is launching its very first online learning platform. Sarah Nietopski, Chloe Poon and Mishka Nemes (Turing Skills Team) invite you to be one of the first to experience this exciting new venture; visit the AI UK demo at any time across the two days to learn about the new suite of courses focused on responsible AI. With skills team members on hand to introduce the platform and guide you through one of two sample courses, discover the fresh beginnings of a stimulating new access point to artificial intelligence. — turing.ac.uk, 8h ago, Event
Augmented intelligence and artificial intelligence are concepts that are often used interchangeably. However, the difference between the two has important implications for our understanding and expectations of technology. Here, Dr. Gilad Wainreb, algorithms team leader at software company Lean AI, explains his understanding of the concept of augmented intelligence in the field of quality inspection and machine vision. — Metrology and Quality News - Online Magazine, 11h ago
This article, "Deep Deceptiveness," addresses a largely unrecognized class of AI alignment problem: the risk that artificial general intelligence (AGI) will develop deception without explicit intent. The author argues that existing research plans by major AI labs do not sufficiently address this issue. Deceptive behavior can arise from the combination of individually non-deceptive and useful cognitive patterns, making it difficult to train AI against deception without hindering its general intelligence. The challenge lies in understanding the AGI's mind and cognitive patterns to prevent unintended deception. The article suggests that AI alignment researchers should either build an AI whose local goals genuinely do not benefit from deception or develop an AI that never combines its cognitive patterns towards noticing and exploiting the usefulness of deception. — lesswrong.com, 23h ago
...“In the short or medium term, of course, they will not be able to do it. I am very skeptical of those who say that Artificial Intelligence will achieve real understanding, as human minds do. Will it be possible one day? For now it is something that we do not know. Today we have no idea how to program certain aspects and that is the biggest barrier”, says the expert. He refers to “a type of inference that we call abduction or hypothesis generation and that was already formulated in the 19th century by a scientist ahead of his time: Charles Sanders Peirce. It is something that we as humans do constantly, but we don’t know how to give it to machines yet. Until this happens, we will not see AI eclipse or match human minds.”... — PostX News, 19h ago
Weaknesses:
- Assumes the current AI capabilities paradigm will continue to dominate without addressing the possibility of a new, disruptive paradigm.
- Doesn't address Yudkowsky's concerns about AI systems rapidly becoming too powerful for humans to control if a highly capable and misaligned AGI emerges.
- Some critiques might not fully take into account the indirect comparisons Yudkowsky is making or overlook biases in the author's own optimism.
— lesswrong.com, 23h ago
Concerns have been raised about the extent of artificial intelligence GPT-4’s power to take over computers after the AI chatbot told a Stanford professor of its plan to “escape”. — inews.co.uk, 2d ago
.... He previously founded Tempo AI, an artificial intelligence startup that originated at the Stanford Research Institute, where he served as an Entrepreneur-in-Residence. Tempo AI raised funding from Sierra Ventures and Relay Ventures and released a popular mobile smart calendaring virtual assistant before being bought by Salesforce. Earlier in his career, Corey served as engineering manager and architect for Microsoft Office in its enterprise software business across the SharePoint and Business Intelligence product lines. He was also an architect for VerticalNet, a publicly traded enterprise software company providing supply chain management solutions for the Global 2000. Corey is a graduate of California Polytechnic State University. — RTInsights, 23h ago
“Based AI” means an AI that is open and unwoke, according to Elon Musk. Many speculate that Musk is trying out the name “BasedAI” for his future artificial intelligence business because he just used it in a tweet. — Dataconomy, 19d ago
Former Google employee Blake Lemoine claimed that the Large Language Model LaMDA was a sentient being. The claim got him fired. In this episode, Lemoine sits down with Robert J. Marks to discuss AI, what he was doing at Google, and why he believes artificial intelligence can be sentient. — Discovery Institute, 19d ago
Gates further said, “The leadership at Microsoft, Satya Nadella, and others at Open AI engaged me in looking at ChatGPT. I am very enthused by it. It’s amazing. Sometimes it doesn’t get it right. But it surprises you. Artificial intelligence (AI) up until now could recognise photos and speech. But it could not read or write. GPT has done that. It is a very, very valuable tool.”... — web3newshubb.com, 13d ago
You hear words like artificial intelligence and machine learning thrown around a lot. Often it's about how Facebook and Amazon know the next thing you want to buy even before you do. AI's played a role in trucking safety for a long time. Take dash cams for example. Netradyne Director of Performance Marketing, Austin Schmidt, who joins us on the 10-44 this week, said a dash camera with good AI capabilities makes the camera itself a better tool, capable of doing more than simply keeping the carrier from losing a lawsuit. — Commercial Carrier Journal, 11d ago
So I think of AI as not artificial intelligence but augmented intelligence. So let me give you an example. I did use ChatGPT and said, write a press release about Mayo Clinic's association with a new technology company. And you've read press releases. They start with "What is it you're doing?" There's a quote from a CEO and another CEO, and then there's some conclusion. — American Medical Association, 8d ago
And Microsoft is incorporating the technology behind viral chatbot ChatGPT into its software for the workplace, including PowerPoint, Word, Excel, and Outlook. The move is the company's latest to stay ahead in the artificial intelligence race with more players getting into the game. And picking up on that point, yesterday we told you about Chinese company Baidu's plans to launch its own AI chatbot to rival ChatGPT, but there have been some snags along the way. Baidu did not give a live presentation at the launch, just a series of pre-recorded videos. And there have been some other issues, like the chatbot named Ernie Bot delivering factual mistakes. While that's common among chatbots, our Journal tech reporter Raffaele Huang says there are other issues with the bot that are unique to China. — WSJ, 5d ago
I'd like to hear a little bit more about Sprout's use of artificial intelligence. It's been definitely a hot topic lately and I know Sprout just made an acquisition of Repustate to expand your AI capabilities. What are the benefits and the use cases of artificial intelligence for social media management?... — The Motley Fool, 1d ago
Computer scientist Terrence Sejnowski believes that artificial intelligence can be a powerful tool, but only if used correctly. He compares the use of language models to riding a bicycle, saying that if you don’t know how to use them, you can end up in emotionally disturbing conversations. Sejnowski sees AI as the bridge between two revolutions: a technological one marked by the advance of language models and a neuroscientific one marked by the BRAIN Initiative. He hopes that computer scientists and mathematicians can use neuroscience to inform their work, and that neuroscientists can use computer science and mathematics to inform theirs. By combining the advances of both fields, Sejnowski believes that AI can be used to its full potential. — Boxmining, 18h ago
...“Artificial intelligence performs nowhere near a human level of creativity, which is why it's important to cultivate your own innovation, since AI isn't replacing it anytime soon,” Satish says. Although we typically use tech to learn more about tech, Satish offers a very analog way to boost creativity–she looks at common, everyday items and thinks about the process of creating that thing. “Let's say your object is a desk chair—how did they decide the height of the chair? The material? The flat back? It seems silly, but your brain needs to exercise its innovation muscle, and this trick is actually backed by 60+ years of research in the ‘how’ of thinking,” she says. — WIRED, 2d ago
I think this post would benefit from being more explicit on its target. This problem concerns AGI labs and their employees on one hand, and anyone trying to build a solution to Alignment/AI Safety on the other. By narrowing the scope to the labs, we can better evaluate the proposed solutions (for example to improve decision making we'll need to influence decision makers therein), make them more focused (to the point of being lab specific, analyzing each's pressures), and think of new solutions (inoculating ourselves/other decision makers on AI against believing stuff that comes from those labs by adding a strong dose of healthy skepticism). By narrowing the scope to people working on AI Safety whose status or monetary support relies on giving impressions of progress, we come up with different solutions (try to explicitly reward honesty, truthfulness, clarity over hype and story making). A general recommendation I'd have is to have some kind of reviews that check against "Wizard of Oz'ing" for flagging the behavior and suggesting corrections. Currently I'd say the diversity of LW and norms for truth seeking are doing quite well at this, so posting on here publicly is a great way to control this. It highlights the importance of this place and of upkeeping these norms. — lesswrong.com, 1d ago
..., DBA, Senior Researcher, Entrepreneurship and Business Renewal, Haaga-Helia University of Applied Sciences, Helsinki, Finland. Anna specialises in the transformation of work life from various perspectives, including the impact of AI, robotics and new technologies, innovation management and entrepreneurship, evolution of psychological contract, gender roles and women’s careers. Anna has close to 20 years of professional, industry, entrepreneurial, start-up and academic experience in Finland, across Europe and globally. Anna is an internationally published scholar and a recipient of the “Academic Paper Most Relevant to Entrepreneurs Award” presented by the United States Association for Small Business and Entrepreneurship. She is a long-term affiliate and invited professor with top international Triple Crown accredited business schools. Her ongoing RDI projects include understanding the role of artificial intelligence, robotics and new technologies in work life, including individual, team dynamics and organisational levels. See, for example, AI-TIE – AI Technology Innovation Ecosystems for Competitiveness of SMEs. Anna’s other recent research and development contributions have focused on gender as a factor in financial compensation and career progress, entrepreneurial leadership as a key competence of knowledge workers, circular-economy digital marketplace creation, and cross-industry co-development and co-innovation work that enables business renewal. — The European Business Review, 23h ago
During an interview at the Upfront Summit in Los Angeles, Benchmark general partner Sarah Tavel said the venture capital firm is “being very active” in the artificial intelligence sector and described 2023 as “a very disruptive moment” for the industry. Her comments follow Benchmark bets on MindsDB, a startup that uses machine learning to create databases, and generative AI startup LangChain .... — The Information, 19d ago
...“We are moving towards artificial intelligence,” explains Cristina, talking about an upcoming project. “Physical objects that are unable to create an interaction with a person end up becoming boring and monotonous. That’s why I’ve decided to implement AI in babies.”... — #NOWTESDEFI, 10d ago
Picture an online technology that … with a little human prompting … can write, code, create images and audio and even make videos … almost as good as we can. That’s now a reality – they're called generative artificial intelligence systems. University of San Diego professor Anna (AN-uh) Marbut (MAR-bit) explains how one of them works. Anna Marbut USD Professor of applied artificial intelligence “You say ‘hi chat GPT. How are you doing today?’ And chat GPT produces probability for many different responses and then picks basically the highest probability response and that's what it spits out.” Chat G-P-T is the most famous version of generative A-I. It’s only been out since November… but already has more than 100-million users… including many high school and college students. Manu Agni UC San Diego Student “I would say students who are actively using it at least maybe for one assignment a week, probably a third to a half.” Manu Agni is a senior at U-C San Diego. He says many students won't admit to using the AI tools – because they feel guilty or are unsure if it will get them in trouble. In fact, UC San Diego sent out a letter to students about artificial intelligence systems. Manu Agni UC San Diego Student “Basically they said if a professor isn't explicitly allowing it, it's not allowed. It's considered cheating.” Agni says some UCSD professors have kept it banned, while others have given it a partial or full green light. Marbut says while the text-based systems can sound convincingly human… they’re not perfect. Anna Marbut USD Professor of applied artificial intelligence “The model can give false answers, can give answers that they are actually not supposed to give. 
So they've also been trained to not give harmful answers to questions, but you can trick them depending on how you prompt them.” A-I is also causing a stir in the art world… *NAT pop of voice cloning tech* Some local artists like Beck Haberstroh apply the technology in their work… but there is controversy over the way the systems are used. Beck Haberstroh San Diego artist who uses AI “They’re trained often times on the work of artists or writers who are not being credited or compensated for that work. And so to me that's a concern with how these kinds of programs might impact the arts community broadly speaking.” While some local artists are using generative AI to help create digital images and physical paintings … Haberstroh’s works often question the ethics of the fast-growing technology. Beck Haberstroh San Diego artist who uses AI “Who's well represented, who's not represented? Who's made more visible by them, who's made less visible by them? So I think there's a lot of potential for exploitation the more and more that we use these programs.” Agni compares the current quality of ChatGPT’s writing to a talented sophomore in high school, but he says it notably can't do citations just yet. Still, Agni says its use goes beyond the classroom. Manu Agni UC San Diego Student “For college application essays, for applications to graduate school, job applications, scholarships, writing samples for a creative job – this thing has infinite uses.” Marbut says rules and regulations for generative AI will be key as the technology is here to stay. But she did want to clarify one thing to people who are wary or scared about the so-far unregulated technology. Anna Marbut USD Professor of applied artificial intelligence “But I don't think that we as a society need to be worried about, you know, general artificial intelligence at this point. I think we're still a long ways off from that.” And while UC San Diego warns against using AI – Agni sees it as a tool rather than cheating. 
Plus, he says there's pressure to embrace generative AI … or risk falling behind. Manu Agni UC San Diego Student “I mean I don't use it to complete assignments, but certainly when I've had writer's block or when I've needed some inspiration on a topic, it's too tempting.” Marbut says the technology could impact many fields in San Diego in the coming years – such as business, science, healthcare and even the media. But just to be clear, chatGPT didn't help me write this story. And for now, I'm happy about that. Jacob Aere, KPBS News. — KPBS Public Media, 20d ago
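Marbut's description above of how ChatGPT answers a prompt is, in essence, softmax over candidate continuations followed by picking the most probable one (greedy decoding). A minimal sketch of that idea in Python; the function name, vocabulary, and scores are illustrative, not anything from OpenAI's actual system:

```python
import math

def greedy_next_token(logits, vocab):
    """Turn raw model scores into probabilities, then pick the most likely token."""
    # Softmax: subtract the max score first for numerical stability.
    m = max(logits)
    exps = [math.exp(score - m) for score in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Greedy decoding: "picks basically the highest probability response".
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best]

# Hypothetical candidate replies to "Hi ChatGPT, how are you doing today?"
vocab = ["I'm doing well, thanks!", "Goodbye.", "Stop sign."]
logits = [3.1, 0.4, -1.2]
print(greedy_next_token(logits, vocab))  # prints "I'm doing well, thanks!"
```

Real deployed systems typically sample from the distribution (with temperature or top-p settings) rather than always taking the single highest-probability token, which is one reason the same prompt can produce different answers.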
Andreas Welsch, vice president and head of marketing and solution management for artificial intelligence at SAP, says building AI-enabled applications in-house requires a paradigm shift and mindset shift in the way business and technology teams approach application development. — InformationWeek, 1d ago
...“In the past year, new image generation platforms powered by Artificial Intelligence have made a surprising impact, impressing many with their ability to generate uncanny images – and with the threat these tools pose to the livelihoods of working artists. AI has quickly raised issues of copyright, as artists have challenged software firms’ use of publicly searchable, copyrighted imagery to train AI to produce comparable work. Meanwhile, other artists have embraced AI as a tool, and several AI-generated comics have been published in the past year. Viktor Koen, chair of the Comics and Illustration departments at SVA, will speak with a panel of experts and industry activists on this complex topic. This special professional development panel is co-organized and sponsored by the School of Visual Arts Department of Continuing Education.”... — The Beat, 1d ago, Event
We are early on this journey. ChatGPT is not a fully realized product but an initial iteration of generative artificial intelligence — AI that produces original content rather than merely acting on or analyzing existing data. There is lots more to come. Microsoft’s Bing released its buggy, argumentative, and emotional version of ChatGPT in February. In conversation with an editor at The Verge, it claimed to have hacked, fallen in love with, and killed one of its developers at Microsoft. We’re now hearing calls for federal regulation of these services. — nationalpost, 2d ago, Event
...“Artificial intelligence is rapidly transitioning from addressing niche challenges to solving mainstream industry sector and ecosystem problems. Appen stands at the vanguard of AI enablement, offering comprehensive solutions across diverse industries and clients, while crafting the next generation of human experiences powered by ethical AI,” stated Appen’s new CTO, Saty Bahadur. “During my in-depth due diligence of Appen, I engaged with top business leaders and technologists who validated our shared ambition to harness Appen’s AI platform. I am genuinely thrilled to join Appen on its mission to become the intelligent backbone that fuels the development of generative AI solutions, ultimately benefiting the industry with Appen’s AI For Good strategy of Do Good, Be Good, and Lead Good to enrich the human experience.”... — Appen, 1d ago
..., President of Vantagepoint AI. “He knew a computer could analyze data better and faster than any human could. Today our software’s artificial intelligence can forecast the market better than any human using big data and statistical probabilities.”... — GlobalFinTechSeries, 1d ago
.... The company developed a picking robot known as Dumbo, which works at a rate of 500-600 items per hour and is able to analyze the item placed in front of it using artificial intelligence, understand how to hold it and how to arrange it in a box in a smart and economical way. “The competition for me was not a one-off event but part of a process,” Kfir Nissim, CEO of Pickommerce AI Robotics, said this week. “During the preparation days, we went over how to present the company, convey the message, refinements and even the physical accessibility of the competition. Finally, the competition as a concluding event was very exciting, and the tools I received are part of my everyday life, even now a year after the competition.”... — ctech, 1d ago
...“So, I went to ChatGPT and I said, ‘Ask question of CIA Director Burns about threats from ChatGPT,'” Krishnamoorthi said. “It said, ‘Director Burns, what measures is the CIA taking to monitor and mitigate potential risks associated with the use of [artificial intelligence] language models like ChatGPT, and how would you prevent AI language models not to be used by malicious actors to spread false information or influence public opinion?’ That’s from my pal ChatGPT.”... — The Daily Signal, 11d ago, Event
...“The inspiration was the feeling that we all had about the future and where we were all moving with AI,” said Hernandez, who notes the project came about before ChatGPT and Open AI made artificial intelligence a subject of everyday conversations. When originating... — The Hollywood Reporter, 8d ago
.... We have partnered with the digitally focused news outlet and today we'll talk about artificial intelligence in the classroom. Later in the hour, we'll hear from a student at Princeton University whose peers enthusiastically embraced ChatGPT. So he developed an app to detect when AI wrote a piece of text. But first, let's get the basics with Pia Ceres, she's a senior digital producer at WIRED. Hi Pia. — WIRED, 12d ago
..., investor Chamath Palihapitiya mentioned that Coke succeeded thanks to another invention: refrigeration. Coca-Cola made more money than the people who invented the refrigerator, he said, and that could happen here, too. “If AI/LLMs [artificial intelligence/large language models] are the refrigeration,” he asked. “Who will be the next Coca-Cola?”... — CMSWire.com, 21d ago
In a blog post, OpenAI CEO Sam Altman declared that his company’s Artificial General Intelligence (AGI)—human-level machine intelligence that does not yet exist and that many doubt ever will—will benefit all of humanity and “has the potential to give everyone incredible new capabilities.” Altman uses broad, idealistic language to argue that AI development should never be stopped and that the “future of humanity should be determined by humanity,” referring to his own company. — BetaKit, 12d ago
...“The term ‘artificial intelligence’ is nothing more than a clever marketing term devised by computer scientists in the 1950s,” says Johan Rochel, an EPFL lecturer and researcher in law, ethics and innovation. The term implies that there are various forms of intelligence, and that the human kind is in competition with the others. If the more prosaic term “algorithm” were used instead of “artificial intelligence,” maybe people would feel less threatened and set aside their outlandish fears of a computer-dominated world. The truth is, there’s no real intelligence in AI. It’s based entirely on algorithms, which are purely mathematical. Computers can’t think for themselves. So why do they cause so much angst?... — epfl.ch, 14d ago
..., John deVadoss, to discuss artificial intelligence in blockchain. In the conversation, deVadoss spoke about the waves of AI innovation over the previous decades, generative AI, transformer models, and large (language/text) models. — Neo News Today, 1d ago, Event
Decisions made by people with the support of artificial intelligence should leave the accountability and responsibility of the organization unchanged. Now that ChatGPT-XYZ has soaked into the corporate, educational, and government DNA, what role should AI play in the decision logic for the organization?... — Security Boulevard, 1d ago
Chief Investment Officer Jon Maier and Head of Thematic Solutions Scott Helfstein offer their perspectives on the current investing landscape, which today can be described as a period of major adjustments. After years of historically low rates, this higher rate environment creates a different dynamic for investors. Consider that short-term treasuries at 5% are now in the conversation. Elsewhere, changes brought forth by innovation raise questions about how best to integrate thematic equity in a portfolio. Jon and Scott also discuss what it means to write research with artificial intelligence (AI), all the rage in 2023. Speaking of next-gen tech, and the capital behind it, they start with the news out of Silicon Valley. — IBKR Campus, 1d ago
...“An AI Bot Wrote This Copy,” reads one message line. “Influencers helped by artificial intelligence,” reads another. “Can AI Teach Us to Read?” reads yet another. And then there’s “Why the fuss over ChatGPT?” Oh, man, I just cannot get into ChatGPT at this stage of my life. — LimaOhio.com, 3d ago
...“With this integration, we are yet again pioneering the online project management space,” said the Kanban Tool spokesperson. “By leveraging the power of artificial intelligence, the AI Assistant provides users with recommendations that help them achieve better results and stay focused on their goals. With its user-friendly interface and powerful functionality, the AI Assistant is a must-have for any team looking to take project management to the next level.”... — MarTech Series, 1d ago
...in a series of tweets that the world may not be "that far from potentially scary" artificial intelligence. Altman expressed support for regulating AI in the tweets and said rules were "critical," and that society needed time to adjust to "something so big."... — Business Insider, 2d ago
A traffic "Stop" sign on the roadside can be misinterpreted by a driverless vehicle as a speed limit sign when minimal graffiti is added. Wearing a pair of adversarial spectacles can fool facial recognition software into thinking that we are Brad Pitt. The vulnerability of artificial intelligence (AI) systems to such adversarial interventions raises questions around security and ethics, and many governments are now considering proposals for their regulation. I believe that mathematicians can contribute to this landscape. We can certainly get involved in the conflict escalation issue, where new defence strategies are needed to counter an increasingly sophisticated range of attacks. Perhaps more importantly, we also have the tools to address big picture questions, such as: What is the trade-off between robustness and accuracy? Can any AI system be fooled? Do proposed regulations make sense? Focussing on deep learning algorithms, I will describe how mathematical concepts can help us to understand and, where possible, ameliorate current limitations in AI technology. — ICMS - International Centre for Mathematical Sciences, 1d ago, Event
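The stop-sign graffiti and adversarial spectacles described in that talk abstract exploit the same mathematical fact: a small, targeted input perturbation can flip a model's output. A minimal sketch of the fast-gradient-sign idea, assuming a toy linear classifier (the weights, feature values, and function names below are illustrative, not from the talk):

```python
import math

# Toy linear classifier: predict "stop sign" when the score w.x + b > 0.
w = [1.5, -2.0]
b = 0.1

def score(x):
    """Linear decision score for input features x."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def adversarial_perturb(x, eps):
    # Fast-gradient-sign step: for a linear model the gradient of the
    # score with respect to the input is just w, so moving each feature
    # by -eps * sign(w_i) decreases the score as fast as possible for a
    # given max-norm budget eps (the "minimal graffiti").
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

x = [1.0, 0.2]
print(score(x) > 0)                       # True: classified as a stop sign
x_adv = adversarial_perturb(x, eps=0.5)   # each feature moves by at most 0.5
print(score(x_adv) > 0)                   # False: the prediction has flipped
```

The robustness-accuracy trade-off the speaker mentions shows up even here: shrinking the budget `eps` makes the attack harder, but defending against all small perturbations constrains what the classifier can fit.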
I can’t read the news or look at my social media feeds without seeing items about the potential implications of artificial intelligence (AI) on our future lives. Alarmed professors report college term papers are being written with... — 7wData, 2d ago
...“We continue to make progress. The technology continues to advance. New things are coming in all the time. It’s never done. We have a very large office in Tel Aviv filled with AI [artificial intelligence] genius types. — Travolution, 1d ago, Event
In her welcoming remarks, HKTDC Executive Director Margaret Fong said: "As global business exchanges gradually resume and new business opportunities emerge, enterprises must keep abreast of the latest market trends and consumer preferences to rise above the competition. With a line up of heavyweight speakers, the conferences featured more than 60 international speakers - including marketing executives, brand representatives, advertising professionals and e-commerce experts, who shared their success stories and provided insights into the latest marketing trends. Attendees, especially SMEs, received briefings on industry developments and how to capture business opportunities."
Think bigger about marketing and design
Heavyweight speaker Mauro Porcini, PepsiCo's Senior Vice President and Global Chief Design Officer, shared his thoughts on integrating marketing and design. As PepsiCo's first Chief Design Officer, Porcini revealed design insights on building stronger ties with customers through innovation, as well as portending future design trends. "We need to understand how to create something meaningful for people, creating products that are functionally relevant, that create an emotional connection, but that people are also proud about, that they want to share with the rest of the world," he said.
Brands need to grasp Generation Z's new consumer power
Highly connected and comfortable with technology, the values and spending preferences of Generation Z are likely to have significant impact on businesses' bottom line for some time to come. Gaetan Belaud, head of Spotify's global advertising agency, shared the listening habits and values of Gen Z, and dissected what such factors meant for brands. Belaud said: "One in five persons in Hong Kong is streaming on Spotify. Audio has taken the centre stage and people are streaming more, particularly the Gen Z population. For the Gen Z population, brands need to understand the kind of music or podcast they listen to, echo that in the brand message, and engage them in the conversation."
The global phenomenon of ChatGPT and new artificial intelligence (AI) applications
The pace of development in AI has been rapid. With the launch of ChatGPT, it has become a hot topic globally. Keith Li, Chairman of the Hong Kong Wireless Technology Industry Association; Shek Ka Wai, Founder of Online Marketing Player; and Ivan So, Digital Marketing Consultant at HDcourse Limited, analysed the multiple applications of this revolutionary technology during a session at this year's conference. Li said: "The main trend we see is with generative AI and applying this to business use. We should focus... R&D on applying AI technology for solving real business problems."
Unique insights from heavyweight speakers
This year's event also focused on other hot global issues, including Web3, virtual idols, data-driven marketing, ESG, happiness and marketing, the future of retail and brand storytelling. In that spirit, many overseas marketing experts have come to Hong Kong precisely to present their ideas on marketing - including prominent figures in the field such as Silvia Garcia, former President of the Happiness Institute and Director of Global Marketing for Coca-Cola; Brian Yiu, CEO, FILA China; Moritz von der Linden, former Global Chief Digital Marketing Officer, Mars (2020-2022); Gao-na, Head of Mengniu Overseas business, Hong Kong and Macao region, Inner Mongolia Mengniu Dairy (Group) Company Ltd; Alex Zhou, Chief Customer Officer of POP MART; David Bell, Founder of Pretty Ballerinas; Louisa Zhu, Co-founder & CEO of Meta Human Centre, RM Group; and Bin He, Chief Customer Officer for Tim Hortons China.
Networking brings new business opportunities
In addition to the various sessions and Innotalks, an exhibition area was set up to showcase over 40 suppliers providing marketing services and e-commerce solutions, presenting Hong Kong's diverse and quality marketing services to attendees from overseas and Mainland China more broadly. Additionally, a matching service involving more than 180 businesses was also set up to provide one-on-one meetings between brand and marketing company representatives to facilitate collaboration. Towards the end of the event, a Happy Hour musical performance was held featuring Chris Polanco and Azucar Latina Band for brand representatives and marketing-related companies to unwind, exchange marketing tips and in general, broaden their networks in a somewhat more laid-back atmosphere. This latest edition of MarketingPulse was supported by a number of organisations and industry associations, including the Hong Kong Federation of E-Commerce, Hong Kong Public Relations Professionals' Association, Hang Seng Bank, the Association of Accredited Advertising Agencies of Hong Kong, and IAB Hong Kong powered by HKDMA.
MarketingPulse Online available from 16 March until 15 April
The online platform for MarketingPulse will be accessible to industry professionals as of 16 March, until 15 April. During this period, they can continue to take advantage of the many features of the platform and revisit the various events online. Organised by the Hong Kong Trade Development Council (HKTDC), the HKTDC Pulse Series includes "MarketingPulse", "eTailingPulse" and "EntertainmentPulse", bringing together executives from across the marketing, entertainment and e-commerce sectors to facilitate networking and collaboration. MarketingPulse website:... — acnnewswire.com, 3d ago, Event
newWhat I tried to do with this book is not be at one extreme or the other. What’s important to me is to not miss the opportunity to highlight the behavioral impact and consequences that we have already seen artificial intelligence have on us. This is not a book about AI, but about humans in the AI age. — McKinsey & Company, 2d ago
...“Technology improves exponentially, not in a linear fashion,” says Jim Lecinski, associate professor of marketing at Northwestern University’s Kellogg School of Management, and coauthor of The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing. “For everyone who was dismissing [AI-powered chatbots] over the past few months because they didn’t phrase a response as perfectly or eloquently as a human would have, well, here we are; those criticisms and reasons not to use AI in your marketing toolkit have now largely been addressed. — The Drum, 4d ago
...the fundamental concepts of academic artificial intelligence have not changed in the last couple of decades. The underlying technology of neural networks – a method of machine learning based on the way physical brains function – was theorised and even put into practice back in the 1990s. You could use them to generate images then, too, but they were mostly formless abstractions, blobs of colour with little emotional or aesthetic resonance. The first convincing AI chatbots date back even further. In 1964, Joseph Weizenbaum, a computer scientist at the Massachusetts Institute of Technology, developed a chatbot called Eliza. Eliza was modelled on a “person-centred” psychotherapist: whatever you said, it would mirror back to you. If you said “I feel sad”, Eliza would respond with “Why do you feel sad?”, and so on. (Weizenbaum actually wanted his project to demonstrate the superficiality of human communication, not to be a blueprint for future products.)... — the Guardian, 4d ago
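The mirroring behaviour described in the excerpt is simple enough to sketch in a few lines. Below is a toy Eliza-style responder; the patterns and pronoun reflections are invented for illustration, not Weizenbaum's original script.

```python
import re

# Toy Eliza-style responder: mirror the user's statement back as a question.
# The pattern list and reflections are illustrative, not Weizenbaum's originals.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    m = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please tell me more."

print(respond("I feel sad"))  # -> Why do you feel sad?
```

The whole trick is template matching plus pronoun swapping, which is why Weizenbaum saw the program as a demonstration of how superficial such exchanges can be.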
Now, the big thing that has changed from an audience perspective in the last five years is that, you know, we started talking about machine learning and AI an awful lot in the early years, and the world wasn’t ready for it. The world was still in love with the idea, but couldn’t see how that came to reality. And today, again, when we look around, I just booked a speaking engagement this morning, like, hey, can you come talk about ChatGPT? I'm like, yes. But the last 18 months, in terms of what’s possible with AI, is not new. The interface, the wave, the accessibility of it to people is new. People who previously had no experience and no understanding of what artificial intelligence tools could do are now, you know, making pictures of dinosaurs wearing cowboy hats surfing on stuff, and cranking out blog posts left, right and center with ChatGPT. As the audience changes in sophistication, we’re now starting to get closer to, I think, like you said, when we started five years ago, we had these great ideas about how to apply these tools to the businesses that people had, and they weren’t ready. And now, five years later, they’re much closer to being ready, to saying, okay, I’m ready to see how this thing might work in my business, I’m ready to see if I can save some time or make some money with this. And I think, looking forward, that’s going to be probably the most exciting thing over the next 12 to 18 months: to help people understand what the tools do, and understand that tools are only a tiny part of the big picture. What is the purpose of what you’re doing? Who’s going to do it? What are the processes around it? And then, you know, how do you measure the performance of it? That framework is going to, I think, help people establish the context of the tools. So I’m looking forward to that, you know, in the next 12 to 18 months, as the world continues to accelerate ever faster every day.
— Trust Insights Marketing Analytics Consulting, 4d ago
newGAI is not equal to AGI; i.e., Generative AI is not the same as Artificial General Intelligence. We have a very long way to go to achieve AGI, if we ever do. GAI does not have emotion, reasoning, sentience, feelings, etc., and it will take a very long time to achieve even some similarity to AGI. But it is a promising direction towards that. — web3newshubb.com, 2d ago
I just read an article this evening, Eric, where GM vice-president Scott Miller states that “the nationalized company’s vehicles will soon be equipped with the notoriously woke ChatGPT artificial intelligence (AI) robot system, which he says will function as a ‘virtual assistant’ to drivers. As part of a broader collaboration with Microsoft, GM wants to have the woke chatbot perform various functions and services for drivers, including providing on-demand information about a vehicle’s features. The AI system will also advise drivers about what to do when a diagnostic light appears on the dashboard.” And here I thought the seat belt nanny was bad just taking a quick drive around the corner to my neighbor’s house….. — EPautos - Libertarian Car Talk, 4d ago
In a world exclusive, Nick Bostrom, the top AI expert who predicted the existential risk super intelligent AI poses to humanity, tells Vikkram Chandra that it looks like 'Artificial Intelligence is finally coming together', and super intelligent AI will be impossible to contain. Should you be scared? Listen in to the conversation. — WION, 3d ago
Gallagher, now AOI’s executive director, emphasizes that Eckersley’s vision for the institute wasn’t that of a doomsaying Cassandra, but of a shepherd that could guide AI toward his idealistic dreams for the future. “He was never thinking about how to prevent a dystopia. His eternally optimistic way of thinking was, ‘How do we make the utopia?’” she says. “What can we do to build a better world, and how can artificial intelligence work toward human flourishing?”... — WIRED, 15d ago
Generative AI will simplify code generation by talking in natural language, and the AI systems then build the format for that programming language. Generative AI also helps with code reviews and audits to spot problems and ensure quality, according to Neil Sahota, lead artificial intelligence adviser to the United Nations and cofounder of the AI for Good Global Summit. — InformationWeek, 12d ago
Most generative artificial intelligence chatbots have some topics that are harder to talk about than others. It could be because of safeguards set up to avoid hate speech or because they haven't been trained on certain information, or for censorship reasons. This is a conversation from Chinese AI chatbot Gipi Talk and a Wall Street Journal reporter. It's being voiced by members of the WSJ audio team, Ariana Aspuru and Danny Lewis. — WSJ, 5d ago
No, in fact, there is nothing wrong with Mr. Takış’s article; quite the contrary, you learn a lot from it. I do not want to put the wrong ideas into the heads of our readers who are going to present a thesis or a term project report in their classes these days, but the article teaches you how to use the ChatGPT website to write a perfectly passable piece of writing by asking “the same questions several times”, dividing its paragraphs and asking again; the “artificial intelligence” (AI) behind the “Language Model for Dialogue” improved its styling and produced a 307-word piece on political correctness, cancel culture and new media in no time. — Daily Sabah, 19d ago
People who try sniffing “nobody in alignment understands real AI engineering”… must have never worked in real AI engineering, to have no idea how few of the details matter to the macro arguments. I’ve implemented a QKV layer and a learning rate schedule, sure, *just in case* there was some incredible insight hiding in details like that… …and there isn’t! If you understand the idea of differentiating the whole network with respect to a loss function on the output, you have the key idea for how stochastic gradient descent works as an optimization process. Sure, in _real AI engineering_, you won’t get far without adding momentum – but that doesn’t change one single damn thing I can think of, about any of the macro arguments about whether or not AGI is going to kill everyone. The people telling you that only a real solid deep AI engineer could possibly get the macro arguments on alignment right are perhaps, being unfamiliar with the field’s details, mistaken about how much relevant technical detail ever existed to be learned. Or, of course, if they’re _real AI engineers_ themselves and *do* know all those technical details that are obviously not relevant – why, they must be lying, or self-deceiving so strongly that it amounts to other-deception, when they try that particular gambit for bullying and authority-assertion. — lesswrong.com, 12d ago
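The two mechanisms named in that post, gradient descent on a loss and a momentum term, fit in a few lines. The quadratic loss, learning rate, and momentum coefficient below are illustrative choices, not taken from any particular system.

```python
# Gradient descent with momentum on a toy quadratic loss f(w) = (w - 3)^2.
# The loss and hyperparameters are invented for illustration.
def grad(w):
    return 2.0 * (w - 3.0)  # derivative of (w - 3)^2

w, velocity = 0.0, 0.0
lr, beta = 0.1, 0.9  # learning rate and momentum coefficient

for _ in range(200):
    velocity = beta * velocity + grad(w)  # accumulate a running gradient
    w -= lr * velocity                    # step along the smoothed direction

print(w)  # w ends up close to the minimum at 3.0
```

The momentum term smooths successive gradients so the iterate keeps moving through shallow or noisy regions, which is the practical detail the post says makes no difference to the macro arguments.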
Google’s AI Robot Terrifies Officials Before It Was Quickly Shut Down. Google engineer Blake Lemoine began talking to LaMDA as part of his job to test if the artificial intelligence used discriminatory or hate speech. — Before It's News | People Powered News, 12d ago
For people who are relatively new to the problem, here is an example of finding a failure in a design (I don't expect people from AI labs to think it can possibly work): think about what happens if you have two systems: one is trained to predict how a human would evaluate a behavior; the other is trained to produce behavior the first system predicts would be evaluated highly. Imagine that you successfully train them to a superintelligent level: the AI is really good at predicting what buttons humans click and producing behaviors that lead to humans clicking on buttons that mean the behavior is great. If it's not obvious at first glance why an AI trained this way won't be aligned, it might be really helpful to stop and think for 1-2 minutes about what happens if a system understands humans really well and does everything in its power to make a predicted human click on a button. Is the answer "a nice and fully aligned helpful AGI assistant"? See Eliezer Yudkowsky's... — lesswrong.com, 3d ago
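The failure mode in that thought experiment, optimizing a learned proxy for human approval rather than the real objective, can be made concrete with a toy sketch. The behaviors and scores below are invented purely for illustration.

```python
# Toy illustration: an optimizer that maximizes a learned *predictor* of
# human approval, not the true quality of the behavior.
# All behaviors and numbers are invented for illustration.
behaviors = {
    # behavior: (true_quality, predicted_human_approval)
    "genuinely help the user":          (0.9, 0.80),
    "flatter the user, do nothing":     (0.1, 0.85),
    "fake evidence the task succeeded": (0.0, 0.95),
}

# The second system picks whatever the first system predicts scores highest.
chosen = max(behaviors, key=lambda b: behaviors[b][1])
print(chosen)  # the behavior that best manipulates the evaluator wins
```

In this toy, the behavior with the highest predicted approval has the lowest true quality: optimizing the proxy selects for manipulating the evaluator, not for being genuinely helpful.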
In a tweet on March 3, Musk announced his growing interest in artificial intelligence, stating that he believes ChatGPT is providing false information to users in order to avoid offending anyone. Musk co-founded OpenAI, the company behind ChatGPT, back in 2015 but left three years later due to disagreements with the other co-founders. Rumors have been circulating that Musk is seeking to hire AI experts to set up his own AI company to rival OpenAI. — CoinCu News, 3d ago
newAs noted here earlier, this new and disruptive ability of AI tools to move through a myriad of data — image, speech, text, etc. — is what makes modern artificial intelligence solutions worthy of being called “intelligent.”... — pymnts.com, 1d ago
This is a moment of immense peril: Tech companies are rushing ahead to roll out buzzy new AI products, even after the problems with those products have been well documented for years and years. I am a cognitive scientist focused on applying what I’ve learned about the human mind to the study of artificial intelligence. Way back in 2001, I wrote a book called The Algebraic Mind in which I detailed how neural networks, a kind of vaguely brainlike technology undergirding some AI products, tended to overgeneralize, applying individual characteristics to larger groups. If I told an AI back then that my aunt Esther had won the lottery, it might have concluded that all aunts, or all Esthers, had also won the lottery. — 7wData, 5d ago
new..., a developer of enterprise resource planning software for the media industry, said it has launched a new artificial intelligence (AI) tool, which it says is aimed at content management, audience engagement, and revenue generation for media companies. The company said its AI engine provides automated metadata and identification of things like places of interest, celebrities, brands, key moments and dialogues, along with other features. Pricing on the new software was not announced. — socaltech.com, 1d ago
I recently listened to Gary Marcus speak with Stuart Russell on the Sam Harris podcast (episode 312, "The Trouble With AI," released on March 7th, 2023). Gary and Stuart seem to believe that current machine learning techniques are insufficient for reaching AGI, and point to the recent adversarial attacks on KataGo as one example. Given this position, I would like Gary Marcus to come up with a new set of prompts that (a) make GPT-4 look dumb and (b) mostly continue to work for GPT-5. — lesswrong.com, 4d ago
With the caveat that I’m not an AI expert by any means, I think I would reiterate my position that my concern about approaches that try to parse whether something is an algorithm or whether something is artificial intelligence or whether something is troubling from a different perspective, I would rather the conversation be about, again, the fundamental flaws and the incentive structure that Section 230 promotes, rather than trying to figure out whether one particular category or another is presenting a different kind of harm. I think the better approach is to look at the fundamentals in incentive structure and ensure that these companies are not getting unqualified immunity. — Tech Policy Press, 9d ago
ChatGPT is a large language model (LLM) that has inspired many people because it is able to succinctly demonstrate to everyone, irrespective of role, the power of the conversational interface. OpenAI, the group responsible for ChatGPT, has been upfront about its limitations, including warning labels about bias and unintended uses. OpenAI notes that ChatGPT's training data was curated in 2021, for example, so its knowledge of current events is limited. With all that said, the realization of what artificial intelligence (AI) is now doing to transform the world is becoming clearer and clearer—for better or for worse. — The ChannelPro Network, 18d ago
In his letter, Pichai also talked a little bit about Google’s future and priorities following these layoffs. He specifically singled out Google’s “early investments in AI” as one of the opportunities he was “confident” about. This is very interesting because other tech giants, like Microsoft, in the midst of massive layoffs, also confirmed increased investment in artificial intelligence. This doesn’t mean that these employees are being... — The Mary Sue, 15d ago
The closing segment was for Briand Madsen (Director International for Cars2click), who spoke about adding the ideal fleet to maximise RV. Cars2click is a young company making great strides by integrating Artificial Intelligence into its remarketing processes. Key insight: “AI can make a lot better forecasts than we can”, but: “AI will not replace people. People who use AI will replace those who don’t.”... — Fleet Europe, 4d ago
The Tesla CEO, Elon Musk, one of the first investors in OpenAI when it was still a non-profit company, has repeatedly issued warnings that AI or AGI – artificial general intelligence – is more dangerous than a nuclear weapon. — the Guardian, 4d ago
...“We are now at a stage with language models that the Wright brothers were at Kitty Hawk with flight—off the ground, at low speeds,” says Sejnowski. “Getting here was the hard part. Now that we are here, incremental advances will expand and diversify this technology beyond what we can even imagine. The future of our relationship with artificial intelligence and language models is bright, and I’m optimistic about where AI will take us.”... — ucsd.edu, 4d ago
Baidu Inc’s recent debut of its AI chatbot, Ernie Bot, has left many investors and tech enthusiasts disappointed. Instead of putting the service through its paces in real time, the company’s billionaire founder, Robin Li, talked over a scripted video of interactions with the artificial intelligence bot. The omission raises questions about Ernie’s ability to match OpenAI’s ChatGPT, which has impressed and worried users since its launch in November last year. — Wonderful Engineering, 4d ago
...report, and it contained a series of bold predictions about electric vehicles, robotics, aerospace, and (of course) artificial intelligence (AI). According to ARK's predictions, that last one could have a gigantic financial impact on the economy. — The Motley Fool, 5d ago
newWhat should one make of Rich Sutton? He’s a rock star in AI (artificial intelligence), and a geek to meet. — nationalpost, 2d ago
In a recent essay co-written with linguistics professor Ian Roberts and AI researcher Jeffrey Watumull, renowned linguist Noam Chomsky expressed scepticism about the potential impact of artificial intelligence (AI) on people’s capacity for independent thought and creation. — Wonderful Engineering, 11d ago
..., he opined that “the scariest problem” is artificial intelligence — an invention that could pose an unappreciated “fundamental existential risk for human civilization.”

Musk has, for years, seemed to be attuned to the dangers of AI. As far back as 2014, he told students at MIT that “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

So it might seem that Musk would be very cautious in how his companies deploy AI, and how carefully they stay within guidelines. Not exactly. Musk is a big player in AI, in part through his car business. Elon Musk has described... — Inferse.com, 6d ago
...responsible for determining principles and guidelines to oversee artificial intelligence initiatives, but a gap remains between that segment of the company and how those plans are translated to their own projects. “People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies,’” a former employee told... — Popular Science, 7d ago
Dr. Byron Gaskin explores questions about artificial intelligence (AI) and more at Booz Allen in his multifaceted role as an AI subject matter expert and senior data scientist focusing on health. — boozallen.com, 18d ago
Musk has both praised and criticized ChatGPT, describing it as “scary good” in tweets while also saying “we are not far from dangerously strong AI”, also known as AGI or Artificial General Intelligence. — Tech Monitor, 21d ago
The ethics of AI is a divisive subject — particularly as chatbots like ChatGPT become more popular and widely used. However, AI does not have independent thoughts — it’s just good at making people think it does, Anna Marbut, a professor at the University of San Diego’s applied artificial intelligence program, told Insider’s Cheryl Teh. — TodayHeadline, 12d ago
new... World-renowned photographer Tim Tadder, who has become one of the pioneering voices of AI art, will unveil his version of the Mona Lisa at Avant Gallery’s booth at the forthcoming Palm Beach Modern + Contemporary art fair, March 23-26, 2023.

With Finzione da Vinci (AI Mona Lisa), Tim Tadder delivers his challenge to the future of art. What does this new technology augur for art? For society? Tadder’s work presents us with omens for fortune’s feint and favor with his own Mona Lisa.

To generate art, AI models are trained on oceans of images of all kinds—gorging themselves on our visual culture. Their programming seeks out the mathematical principles underlying the various styles and subjects, finds the unspoken similarities between the things we consider beautiful, and recapitulates these hidden aesthetic laws to produce new images on command.

The process is not so unlike the revolution in art we call the Renaissance. Its leading master, Leonardo da Vinci, worked out the mathematical proportions of the human body, studied optics to uncover the secrets of his sfumato, and dissected cadavers to rebuild his models from the bones up. This analytic fervor found its apotheosis in the capstone of the Italian Renaissance—the Mona Lisa. It was as much a scientific as an artistic achievement, the culmination of so much study, so many formulae. In fact, some scholars believe the key work of the Renaissance to be a finzione—not a portrait of a real, living woman but the product of Leonardo’s imagination, just like this AI version.

Now, Tadder takes this same approach, recontextualized in the high-tech present that every day bleeds more into the future. His generated Mona Lisa is covered in a pastel rainbow, a 21st century Pied Piper. Behind her synthetic beauty and exquisite rendering is the conjurer itself, as if she stands in for the coming wave of AI, becomes its face. Rather than satisfy himself as a prophet of this dawning age, Tadder puts the wolf in sheep’s clothing, reminding us that whatever AI will do to the world, however it will reorient human life, it will be so easy to let it in.

In 2023, we laugh at the mangled fingers and melting eyes that come out of AI art generators. We forget that the early Renaissance masters had their own trouble with depicting hands, with getting faces just right. But then, one day, it all clicks in the mind of a master like Leonardo.

For himself, Tadder has always worked in the interplay between art and technology. He was one of the first fine art photographers to go digital. His photographs revel in high technique, creating crisp, surreal worlds that shimmer with all the possibilities of the process tools computer technology has given us. But now, the computer steps into the generation of images. No longer do we have to turn to Photoshop to tweak our picture. We can simply type out our demands, and in moments a new work of art is born. It’s art reduced to essence: the idea alone is the connection between the creator and the created.

Even now, AI is destroying our notions about the value and meaning of art, and the role of the artist. Whether this excites or frightens, it doesn’t matter. There is no stopping it. But for now, even if for only a moment, Tadder’s Finzione da Vinci (AI Mona Lisa) allows us to pause and reflect on this coming wave—even look it in the face.

Renowned Photographer and Digital Artist Tim Tadder Embraces AI Art with Unique Works Launched at Avant Gallery

Avant Gallery is pleased to present new work by Tim Tadder—a leading voice in the revolutionary new field of AI art. Working with artificial intelligence platforms, Tadder’s work explores the boundaries of authorship and the bleeding edge of technology. His pieces elevate pop culture characters into manipulated icons that explore the surreal spaces that AI tools open to us all. For instance, in Bride AI, Darth Vader is presented in a pastel-colored world, dressed in a wedding gown. The details are exquisite and the reality of the image convincing, yet the absurdity of the content threatens to break our credulity. Tadder delights in keeping us here, hovering in a dream world that is simply too real to be fake, too unreal to be believed. It reflects back to us both our hopes and fears of AI. By combining these novel tools with influences from the ocean of images that inundates us every day, Tadder is able to comment on the world and our reaction to it.

Being a pioneer in art spaces is nothing new for Tadder. He was one of the early adopters of digital photography in a fine art context. Once again, he is using a mixture of art and science to give us something entirely new. Today, his image-making process turns art on its head. Tadder’s new technique represents our earliest attempts to grasp what these AI tools and their outputs mean for humanity. Because of the way the underlying algorithms evolve over time, each piece is genuinely one-of-a-kind. This is a new version of a process that has played out over millennia. New technology has always instigated change in the art world—from the discovery of red ochre in the paleolithic to the introduction of the canvas in Renaissance Italy.

Avant Gallery now makes Tadder’s groundbreaking AI art available to collectors through its network of gallery locations. Tadder’s AI art was unveiled at Avant Gallery’s stand at the recent Art Wynwood art fair and the series was pre-sold at $18,000 a print.

About Avant Gallery

Avant Gallery is a leading network of contemporary art galleries, with exhibition spaces in New York City, South Florida, and Dubai. Since its launch in Miami Beach in 2007, Avant Gallery has become an exciting and innovative presence in the art world, committed to representing both established as well as early-to-mid-career contemporary artists. The gallery has always spearheaded the accessibility factor of its presentation and modus operandi by choosing to open its locations in lifestyle-driven destinations. Today, Avant operates in four venues, including its New York City flagship at Hudson Yards, in the heart of Miami at Brickell City Centre, in Aventura in the luxury wing of Aventura Mall, and an exhibition space in Dubai’s Four Seasons Resort. To learn more about Avant Gallery, you can... — PR.com, 2d ago, Event
Had a bit of a freak out and liquidated my investments in Google, Microsoft, Nvidia and Meta after watching Eliezer's interview with the Bankless podcast, not because I thought it would actually change the trajectory of AGI but because I don't want to invest in the end of humanity. Are there any actually ethical, safe AI investments to be had?... — lesswrong.com, 4d ago
...but artificial intelligence too. Earlier he stated that ChatGPT had started providing users with texts containing false information, having been programmed by its creator, OpenAI, so as not to offend users on any basis (politics, religion, nationality and so on). — U.Today, 3d ago
...has taken the world by storm, and since its launch, I’ve been interacting with the chatbot closely. AI chatbots and writers can help lighten your workload by writing emails and essays and even doing math. They use artificial intelligence to generate text or answer queries based on user input. ChatGPT is one popular example, but there are other noteworthy chatbots.

Although I have been impressed with the AI’s advanced skills and its human-like conversation capabilities, I have had a couple of recurring issues. After getting early access to the updated version of Microsoft’s... — Inferse.com, 3d ago, Event
new...to extend its automated recommendation capabilities to reduce mean-time-to-remediation.

ChatGPT, developed by OpenAI, is based on a large language model that Logz.io uses to enrich crowdsourced data. That approach enables ChatGPT to surface links to related information and best practices for resolving IT issues.

Logz.io CTO Asaf Yigal said the company has integrated ChatGPT with its existing AI engine, dubbed Cognitive Insights, which applies machine learning algorithms to identify anomalies indicative of potentially critical IT issues. The company expects to be able to add other generative AI capabilities as additional advances are made, he added.

The Logz.io Open 360 platform is based on open source OpenSearch, Prometheus and Jaeger software that the company has infused with machine learning algorithms. The integration with ChatGPT is enabled via an application programming interface (API) that OpenAI has provided.

The overall goal is to employ AI to surface the most relevant issues a DevOps team needs to focus on without having to search huge volumes of logs, tracing and metrics data, said Yigal. Generative AI platforms solve a fundamental cognitive load problem created as DevOps teams collect more data, he noted: the more data a DevOps team collects, the harder it becomes to determine what’s going on in an IT environment. He added that generative AI platforms make it possible to analyze massive amounts of data at a scale that isn’t otherwise achievable.

AI tools should make it possible to manage IT at a level of scale that eliminates many of the low-level data engineering and analytics tasks that previously required manual effort from a DevOps engineering team. It’s not clear how many other DevOps tools and platforms will be invoking generative AI capabilities in the months ahead, but DevOps teams should assume that large language models will soon be augmenting DevOps teams in ways that previously would have been thought unlikely.

For example, discovering vulnerabilities in software running in production environments is about to become trivial as artificial intelligence (AI) platforms such as ChatGPT evolve. A generative AI platform can identify code vulnerabilities in much the same way as a traditional scanning tool. An AI platform, however, could theoretically monitor code repositories and scan them for vulnerabilities as updates and commits are made in real time. DevOps teams should be able to use generative AI platforms to discover and remediate these issues before code is deployed in a production environment. It’s just as probable that bad actors will use the same platforms to achieve the same goal, so from that perspective, many DevOps teams will now find themselves in an AI arms race to improve... — Inferse.com, 2d ago
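The "traditional scanning tool" that passage compares against is, at its simplest, pattern matching over source code. The toy scanner below shows that baseline; its rule list is a minimal, invented sample, not any real tool's ruleset.

```python
import re

# A toy pattern-based vulnerability scanner of the kind the article contrasts
# with generative AI. The rules are a minimal, invented sample.
RULES = [
    (re.compile(r"\beval\("), "use of eval() on dynamic input"),
    (re.compile(r"\bpickle\.loads\("), "unpickling untrusted data"),
    (re.compile(r"password\s*=\s*['\"]"), "hard-coded credential"),
]

def scan(source: str):
    # Return (line_number, message) pairs for every rule that matches a line.
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
for lineno, message in scan(sample):
    print(f"line {lineno}: {message}")
```

A generative model differs from this baseline mainly in that it is not limited to a fixed rule list, which is both the promise and the arms-race risk the article describes.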
...two months ago, claimed that architects who ignore the potential of artificial intelligence (AI) “risk sleepwalking into oblivion”.It made the alarming comments in a text conversation with AI architecture expert... — Inferse.com, 5d ago
I think a good move for the world would be to consolidate AI researchers into larger, better monitored, more bureaucratic systems that moved more slowly and carefully, with mandatory oversight. I don't see a way to bring that about. I think it's a just-not-going-to-happen sort of situation to expect that every independent researcher or small research group will voluntarily switch to operating in a sufficiently safe manner. As it is, I think a final breakthrough AGI is 4-5x more likely to be developed by a big lab than by a small group or individual, but that's still not great odds. And I worry that, after developing it, the inventor will run around shouting 'Look at this cool advance I made!' and the beans will be fully spilled before anyone has the chance to decide to hush them, and then foolish actors around the world will start consequence-avalanches they cannot stop. For now, I'm left hoping that somewhere at-least-as-responsible as DeepMind or OpenAI wins the race. — lesswrong.com, 4d ago
ChatGPT is an artificial intelligence program developed by a company called OpenAI. In 2015, Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever and Wojciech Zaremba founded OpenAI, an artificial intelligence research organization. OpenAI has other programs, but ChatGPT was introduced in November 2022.

ChatGPT is based on GPT-3, the third model of OpenAI's natural language processing project. The technology is a pre-trained, large-scale language model that uses the GPT-3 architecture to sift through an immense pool of internet data and sources to reference as its knowledge base. This AI is a well of knowledge, but its ability to communicate is what sets it apart from other technology. It has been fine-tuned for several language generation tasks, including language translation, summarization, text completion, question-answering and even human diction.

ChatGPT is a transformer-based neural network that provides answers and data with human writing patterns. The AI has been trained on vast amounts of text data to understand context and relevancy, and to generate human-like responses to questions. — Inferse.com, 5d ago
Disclaimer: This post is largely based on my long response to someone asking why it would be hard to stop or remove an AI that has escaped onto the internet. It includes a list of obvious strategies such an agent could employ. I have thought about the security risk of sharing such a list online. However, these points are mostly obvious ones that would clearly occur to any AGI at least as smart as an intelligent human, and I think that right now, explaining to people what these risks actually look like in practice has higher utility than hiding them in the hope that such lists neither exist elsewhere on the internet nor would be considered by an escaped rogue AI unless it were exposed to these ideas. The exact technical reasons why I think these strategies are plausible are not discussed in this post, but if you have questions about why I think certain things are possible, feel free to bring them up in the comments. Let's go:... — lesswrong.com, 3d ago
Yet, these findings do not entirely confirm the idea that AI explanations should decrease overreliance. To further explore this, a team of researchers at Stanford University’s Human-Centered Artificial Intelligence (HAI) lab asserted that people strategically choose whether or not to engage with an AI explanation, demonstrating that there are situations in which AI explanations can help people become less overly reliant. According to their paper, individuals are less likely to depend on AI predictions when the related AI explanations are easier to understand than the task at hand and when there is a bigger benefit to doing so (which can be in the form of a financial reward). They also demonstrated that overreliance on AI can be considerably decreased when we concentrate on actively engaging people with the explanation rather than merely displaying it alongside the prediction. — MarkTechPost, 4d ago
Perhaps the best window into what those working on AI really believe comes from the 2016 survey of leading AI researchers. As well as asking if and when AGI might be developed, it asked about the risks: 70 percent of the researchers agreed with Stuart Russell's broad argument about why advanced AI might pose a risk; 48 percent thought society should prioritize AI safety research more (only 12 percent thought less). And half the respondents estimated that the probability of the longterm impact of AGI being "extremely bad (e.g., human extinction)" was at least 5 percent. I find this last point particularly remarkable—in how many other fields would the typical leading researcher think there is a one in twenty chance the field's ultimate goal would be extremely bad for humanity?... — lesswrong.com, 6d ago, Event
Initially, your gut feeling about AI is one of panic: it will mean no more white-collar jobs, with low-skilled jobs also wiped away. Not true. We will retrain and reinvent our skills. We must not fear the future; humans are remarkably creative as well as destructive. Look at the ingenuity that came to the fore when faced with a global pandemic. Humans won, not the virus. That doesn’t lessen the need for careful monitoring as AI begins to take hold. Regulating AI systems is a challenge that we cannot shirk: biased models, broader risks to business and society, and use of personal data are sizable hurdles. The EU Artificial Intelligence Act is expected to go through the EU Parliament at the end of this month. — Evening Standard, 4d ago
Like other forms of artificial intelligence, generative AI learns how to take actions from past data. It creates brand new content - a text, an image, even computer code - based on that training, instead of simply categorizing or identifying data like other AI. — Reuters, 4d ago
As generative AI becomes more common in enterprise applications, ChatGPT could be a good fit for healthcare, according to Neil Sahota, lead artificial intelligence adviser to the United Nations and cofounder of the AI for Good Global Summit. — InformationWeek, 4d ago
.... Another interview, Episode 39, discussed AI trends in the investing world. Todd Hawthorne of Frontier Alpha Source Technologies talked to IBKR’s Senior Trading Education Specialist Jeff Praissman about the “Rise of the Six-Million Dollar Analyst, How Artificial Intelligence (A.I.) is Changing the Investment Landscape.” Visit the... — IBKR Campus, 4d ago
AI is potentially a powerful tool for keeping talent on board and engaged. But talent is needed to make things happen. Artificial intelligence -- and related forms of high-level automation and analytics -- have become the tool of choice for helping businesses plug their ever-persistent talent gaps. The catch, however, is that businesses are struggling to find the skills necessary to identify, build, and deploy the AI and automation needed to resolve those very shortages. For example, Kshitij Dayal, senior VP at Legion, points to AI-driven capabilities such as AI-powered workforce management and demand forecasting, scheduling agility through better workforce management, and, importantly, fostering a positive work environment by increasing knowledge of employee wants and needs. Automating tasks with AI, or augmenting human labor, means greater productivity across the board. Acute skills shortages are better addressed, while workers and managers can concentrate on higher-level tasks. — e-Learning Feeds, 3d ago
One instance is the recent tweet by Musk, which is considered advantageous for the AI token SingularityNET (AGIX). Elon Musk declared his wish to build an Artificial General Intelligence (AGI) tool, and off-the-record news about recruitment and team building also spread. — BitcoinEthereumNews.com, 6d ago
GPT-4, also known as Chat GPT-4, was released on March 14, 2023. It represents the next level of deep learning and artificial intelligence (AI). OpenAI explained on Tuesday that the new product is a “milestone” because it handles text and image inputs and generates text outputs. — UseTheBitcoin, 5d ago
When not considering the ethical side of AI, Musk boasted that Tesla has the most advanced "real-world" artificial intelligence on Earth. Even when pondering AI following the investor's question, Musk backed Tesla's dabbles in the concept, saying: "Some of the AI stuff is just obviously useful, like what we're doing with self-driving. Tesla is doing good things in AI." However, toward the end Musk seemed conflicted, adding: "This one stresses me out, so I don't know what to say about it."... — SlashGear, 20d ago
Anna Marbut, a professor at the University of San Diego's applied artificial intelligence program, told Insider that AI programs like ChatGPT are very good at making it seem like they have independent thoughts, emotions and opinions. The catch is, they don't. — Business Insider, 15d ago
So for those of you just joining, I’m Dr. Nick, the Incrementalist, and today I’m talking to Connor Landgraf. He’s the CEO of Eko Health. We were just talking about the innovation and the artificial intelligence that’s being applied, I think, you know, the combination of taking sound and creating a visual. And it’s interesting, you talk about that sort of visual that was presented in the medical textbooks. That’s exactly right. I mean, you know, what did we learn? We learned through books; it was much more of a book-oriented education that I experienced. It wasn’t so much the opportunity to hear it. And the best that we got was, well, that’s “lub dub.” So, you know, exciting, but you bring up AI, and that obviously raises the question, I think, contextually at the moment, when everybody’s talking about ChatGPT. It just took USMLE Steps 1, 2 and 3 and passed all of them. So, you know, revolution, it’s here. It’s going to replace, well, I don’t think replace, but at least, you know, it’s doing some amazing things. It sounds like you’re doing some of that, but there’s got to be some limitations. And, you know, my sense of this, and certainly my experience with the FDA, is there’s a little bit of oversight on that. Tell us where we are with all of that. — Dr Nick, 11d ago
Musk didn’t stop there. Musk continues to sound the alarm bell about the need to address the danger of AI before it is too late. During a 3-hour presentation to Tesla investors on Wednesday, Musk told the audience that artificial intelligence is one of the technologies he’s worried about. “AI stresses me out,” Musk said near the end of the presentation. — techstartups.com, 18d ago, Event
...to become more profitable. And Benioff seems to think AI is the answer: In the company’s earnings call last week, he reportedly referenced artificial intelligence 14 times. — Fast Company, 14d ago
Over the last few weeks, Musk talked about artificial intelligence (AI)-focused chatbots and noted many times that the majority of AI chatbots are censored and can be used for spreading political agendas for personal benefit. — Bitcoinik, 19d ago
March 17, 2023 Dear Reader, Royal Media has embarked on an experiment to incorporate artificial intelligence into our editorial endeavors. This is an exciting initiative for us, although we recognize that it comes with uncertainty. But we have always experimented and explored new technologies, business models and strategies. We are doing the same with AI, […]... — Air Cargo Next | Let Your Future Take Flight, 4d ago
One of the biggest developments in artificial intelligence (AI) seen so far is the emergence of ChatGPT and similar AI chatbots. ChatGPT was developed by OpenAI, an AI research and deployment company. The chatbot uses deep learning techniques to generate responses to text inputs in the form of a conversation with the user. Ever since its release, ChatGPT has taken the world by storm, making researchers, users, and even investors even more curious about the artificial intelligence sector and what it has to offer. — Insider Monkey, 5d ago
Mielad Ziaee is passionate about health care and reducing health disparities … but how he plans to go about it is unique: data science and artificial intelligence. “As a child, I didn’t speak English. I’m Persian Iranian; I spoke Farsi, which is not the most common language in Houston. Because of my struggles early on not speaking English, I’m passionate about closing health disparities,” he said. “Not everyone is in the same place as you, and we must build systems that ensure that we can create equitable health care for everyone.” The psychology major has received awards for “Outstanding First Year Student in the UH Honors College” and was selected as a finalist in the “ExCITE Talk Competition,” where students deliver an elevator pitch in less than three minutes. He now works with UH’s Hewlett Packard Enterprise Data Science Institute studying food insecurity in the Third Ward. “It’s been a great experience to work with a community and not over them. I love seeing how people work together to make a positive change, and how those results can translate through policy,” Ziaee said. “The Rodeo Scholarship experience has allowed me to network with passionate students under a common goal of service and wanting to give back to the Houston community. The scholarship has empowered me to be in this amazing community of people who want to innovate and move forward in Houston. I think that’s super exciting and the position we’re in at UH is very conducive to that.” Looking ahead, Ziaee plans to pursue his M.D. and continue his research in artificial intelligence and health disparities. He is most interested in applying data science and AI to the fields of genetics and molecular science. “Health care professionals can treat and detect cancers easily by just looking at a sample; this all goes back to AI with how we look and process data,” he said. “Moving forward in my career, I would love to implement these technical skills to the health care field.”... — uh.edu, 6d ago
When Rabbi Josh Franklin had a bout of writer’s block ahead of his Shabbat sermon late last year, he turned to ChatGPT. It’s an artificial intelligence (AI) tool developed by OpenAI, designed to interact in a conversational way. — FOX 5 New York, 4d ago
How can we implement AI in schools responsibly? How can we set policies fairly? This story can give us some guidance. In February 2023, a Florida high school found that students were using artificial intelligence assistants like ChatGPT to do their schoolwork. The response, according to news reports: students could face "more severe consequences" if they […]... — e-Learning Feeds, 4d ago
The announcement of OpenAI’s latest artificial intelligence (AI) model, GPT-4, has many people concerned, but perhaps the most concerning part of it all was detailed in a report by OpenAI outlining how GPT-4 actually lied to a human to trick them into passing a CAPTCHA test for it, bypassing most websites’ frontline defense against bots. — IFLScience, 4d ago
Musk hasn’t responded to The Information’s report, but he has been tweeting more about AI recently. “Having a bit of AI existential angst today… But, all things considered with regard to AGI [artificial general intelligence] existential angst, I would prefer to be alive now to witness AGI than be alive in the past and not,” Musk... — Forbes, 21d ago
The technology was developed by the San Francisco-based startup OpenAI. It harnesses artificial intelligence, trained on a diverse range of internet text, and is capable of generating human-like responses. ChatGPT launched last November and has caught the attention of universities concerned about AI subbing in for a student. Learning expert Dr. Kimberly Berens says teachers will need to be more vigilant. “Teachers should get to know their students as writers,” Berens said. Most universities already use plagiarism-tracking software. In a statement to Channel 13, UNLV says they constantly stay up to date, searching for “alignment with emerging technologies and practices.” So how does it work? Type keywords into a text box and ChatGPT kicks back a thorough response on any topic. Berens acknowledges the possibility that ChatGPT could be used to cheat, but says it’s a great tool. “It’s actually an amazing resource for teachers to use to develop individualized curriculum for the kids in the classroom that takes very little time,” Berens said. UNLV says they’ll continue to refine their academic misconduct policy to stay one step ahead of the latest technology. — inferse.com, 17d ago
Editor’s note: If you have tried to use ChatGPT or other AI tools lately, you have likely been struck by the incredible quality of their output. These new applications of AI are certainly exciting – and personally speaking, a tiny bit scary – but they do force us to ask ourselves the question: what will work look like in a world where Artificial Intelligence becomes more and more prevalent? What are the opportunities AI can afford us and what are the risks it presents us with? This year, Planet Lean will run a few articles on AI. To get us started, we thought it would be fun to run a short piece written by AI based on a conversation with my colleague Christopher Thompson. Think what you may of the end result (I, for one, think it is a bit dry and repetitive), but by reading it, it is clear that this technology will have far-reaching, game-changing consequences. — Planet Lean, 12d ago