Latest

The article critiques Eliezer Yudkowsky's pessimistic views on AI alignment and the scalability of current AI capabilities. The author argues that AI progress will be smoother and integrate well with current alignment techniques, rather than rendering them useless. They also believe that humans are more general learners than Yudkowsky suggests, and the space of possible mind designs is smaller and more compact. The author challenges Yudkowsky's use of the security mindset, arguing that AI alignment should not be approached as an adversarial problem.lesswrong.com, 23h ago
Whether either side is correct remains to be seen; technology of any description, no matter how potentially sentient, will in all likelihood be ethically determined by the uses or misuses it suffers at the hands of humans.AIBC, 1d ago
...“The AI itself is neither good nor evil; it is simply blind and will maximize any goal you give it.”...Wonderful Engineering, 1d ago
However, when we turn to the future of artificial intelligence (AI) and psychiatry, we have little history to fall back on. With the advance and emergence of ChatGPT, now version 4, certain intellectual capabilities are beginning to rival—if not at times exceed—those of humans. The upgrades now are coming fast and furious.Psychiatric Times, 23h ago
This seems clearly false in the case of deep learning, where progress on instilling any particular behavioral tendencies in models roughly follows the amount of available data that demonstrate said behavioral tendency. It's thus vastly easier to align models to goals where we have many examples of people executing said goals. As it so happens, we have roughly zero examples of people performing the "duplicate this strawberry" task, but many more examples of, e.g., humans acting in accordance with human values, ML / alignment research papers, chatbots acting as helpful, honest and harmless assistants, people providing oversight to AI models, etc. See also:...alignmentforum.org, 23h ago
All of these unfounded fears “have been part of the Silicon Valley culture for decades, often having futuristic sci-fi visions of technology. But they are not true. What real AI programmers really do is refine data sets and tweak features and algorithms to get the right results.”...PostX News, 19h ago

Latest

In this article, Francisco Mainez highlights the difference between convergent and divergent thinking and where AI and humans best fit in the picture of AML processes.lucinity.com, 4d ago
As noted here earlier, this new and disruptive ability of AI tools to move through a myriad of data — image, speech, text, etc. — is what makes modern artificial intelligence solutions worthy of being called “intelligent.”...pymnts.com, 1d ago
AI, AI, AI! The Tech Is Everywhere — But Why Is It Evolving So Fast? [Thoughts After Dark]...thomasnet.com, 23h ago

Top

In the long run, if AI gets good enough, and not just at writing papers, questions would be asked about the implications for certain jobs, the value of education, etc., Gates said on the podcast hosted by Rachman. “That’s a long way south in the future,” he added.Benzinga, 17d ago
BAIR Robotics Symposium presents: Exploration vs Exploitation: Different Ways of Pushing AI and Robotics Forward...University of California, Berkeley, 10d ago, Event
AI already features heavily in all of these fields, and the advent of chatbots is only pushing it further into the public imagination.techxplore.com, 18d ago
The computing side of things also parallels Tesla's plans. Advanced AI, which is capable of learning and solving puzzles, is a key part of any humanoid robot. Instead of building specific robots for specific tasks, companies can build one and the bots can work things out from there. As if there weren't enough parallels with the mind of Elon Musk, Adcock is also talking about his robots potentially helping humanity colonize space.SlashGear, 19d ago
...have placed AI in the spotlight once more, for both good and bad. The seemingly covert employment of text-generating AI to write SEO-friendly articles for...What’s New in Publishing | Digital Publishing News, 11d ago
ChatGPT is just one of many ventures pushing the boundaries of large language models and AI. The technology is fascinating, but ChatGPT isn’t the only example out there. And clearly the sexiness of the name isn’t what propelled ChatGPT into popular consciousness.reworked.co, 21d ago

Latest

Hi Jennifer, and team. Good morning. Thanks for giving some information about the other BioStrand deal that you guys are working on. I was wondering if you could also comment on kind of what the pipeline looks like beyond that for future AI drug discovery deals? And also, how you expect the economics of those deals to look over time?...Insider Monkey, 1d ago, Event
It will be interesting to monitor progress. The current collaboration is for research, not clinical, purposes; the latter requires various FDA certifications. One earlier measure of success will be the generation of scientific papers, agreed Zhou. The hope is that many of the learnings will eventually make their way into healthcare.HPCwire, 9h ago
AI might play these sorts of games well, but I challenge AI to beat me in a contest of splitting wood.Sott.net, 2d ago
Meanwhile, the next batch of AI integrations in the workplace, if not all that exciting, promises to at least be helpful for productivity. “The technology is pretty impressive already,” said Neubig.Quartz, 3d ago
I’m sure most of you know about ChatGPT by now. It’s that buzzword floating around conversations whenever the topic changes to AI, whether it’s Microsoft’s...TECHTELEGRAPH, 1d ago
The company’s CEO also said that his goal is to build a robot with conscious awareness, or “artificial general intelligence”: an AI that has feelings...Siam Blockchain, 4d ago

Top

Generative AI tools like ChatGPT and Midjourney have prompted debates about AI’s inclusion across business and technology areas, and AI has even reached the crypto world. As a result, many AI-based crypto projects are now gaining investors’ attention.Bitcoin Press UK, 10d ago
There are two big things for me at this point; one is the intellectual challenge of developing Thymia’s clinical solution. It’s a technically and scientifically complex task which involves multiple scientific disciplines, not just Neuroscience, Psychology and Linguistics (my specialities) but also Computer Vision, ethical Artificial Intelligence and multi-modal Machine Learning. I love bouncing ideas off of Stefano, my co-founder, and other members of the team and seeing these take shape within hours.march8.com, 21d ago
...work on AI capabilities, both seem maximally bad, with ‘which is worse’ being a question of scope.lesswrong.com, 20d ago

Latest

...in which the computer is said to have passed if the human asking the question cannot distinguish between the replies of a human and the replies of a computer. The newer AI models are, from the perspective of content and semantics, impressively human-like. Yet something seems to be missing.7wData, 2d ago
Fans of AI may well promise it can help us to better understand the future beyond our intellectual limitations. But for plagiarised artists and writers, it now seems the best hope is that it will teach humans yet again that we should doubt and check everything we see and read.the Guardian, 3d ago
Executives are “leaving the magic stuff” relating to AI and ML behind, focusing on the growth, simplification, acceleration, and innovation that can arise from these technologies. “Maybe that’s the bridge that takes us into the ML and AI future,” suggests Bob.Acceleration Economy, 4d ago
...“The AI itself is not good or evil, it’s just blind, it will just optimise whatever goal you give it.”...inews.co.uk, 2d ago
The impact in the marketplace has, naturally, extended to academia. Digital health and e-medicine may be buzzwords now, yet, according to Hong, academic institutions have varying definitions of the term in the curriculum programmes they offer: “Some gear it towards biomedical informatics, and on the other spectrum, it’s geared towards AI computing, big data and statistics. But soon, digital health won’t be a separate discipline; it will be simply healthcare.”...WAN-IFRA, 1d ago
...: with LLM-assisted bot building. Basically, the idea behind Cognigy’s release is to assess the drawbacks of both Conversational AI and ChatGPT, and combine the most advantageous facets of the two.web3newshubb.com, 4d ago

Top

Other hot technology topics that have been bucketized into a single term include metaverse, AI, edge computing, and cloud, just to name a few. The current shrieking and hand-waving over large language model chat AI paints all AI with the same brush, making assumptions that all AI suffers the same flaws and advantages. There are multiple ways to do AI. Even worse, the term AI itself has become all-encompassing to include machine learning (ML) as well. If a product uses ML, more than likely the marketing department has renamed that feature “AI” to get the bounce from AI’s current moment in the sun.Verdict, 21d ago
Yet, I was recently surprised when I asked dozens of teachers if they were aware of AI capabilities. Among the 70+ asked, only a handful acknowledged knowing about, let alone understanding, the good, the bad, and the ugly of ChatGPT and other AI tools rapidly making their way to the screens of students and tech geeks (like me).TechLearningMagazine, 18d ago, Event
...“Something I scrutinize closely about AI companies is Responsible & Explainable AI – which is why Dataiku is included in our AI/Hyperautomation Top 10 Shortlist. With the emergence of tools like ChatGPT, AI seems simple to use – and it can be given the right modeling and setup – but behind the scenes, AI is pretty complex. That’s why it’s vitally important to ensure the AI inputs and outputs are explainable in a way that they can be questioned and validated by anyone – data scientists and stakeholders alike.”...Acceleration Economy, 15d ago
Make no mistake, the unique research potential for learning about brain development, structure and function is hugely exciting and important. If you care about what some called the hard question of AI, and others cognitive science, then this is the field to watch. But it won't be commercially interesting for many decades, if at all, at least not in the model suggested of super low-energy, massively efficient data analytic add-ons for more conventional computers. Even keeping the blobs fed and alive at scale is a roadmap of mysteries. Talking about "organoid intelligence" is a fine and honorable way to fund further research, in these days where AI/ML is hyper-fashionable, but it's not the real meat.theregister.com, 13d ago
...“We do work a lot with AI/ML, but to be honest, very few of the IoT applications we have integrated so far use AI. For example, none of the production sites that we have helped use true AI/ML, apart from some prototypes. The potential for analyzing images, text, [and] sound using generative AI is great, but generative AI adoption so far: none.”...IoT Analytics, 7d ago

Latest

None of the abuse alleged by women in the community makes the idea of AI safety less important. We already know all the ways that today’s single-tasking AI can distort outcomes, from racist parole algorithms to sexist pay disparities. Superintelligent AI, too, is bound to reflect the biases of its creators, for better and worse. But the possibility of marginally safer AI doesn’t make women’s safety less important, either.Australian Financial Review, 5d ago, Event
new Microsoft:"We are going to put AI into your coffee and donut, and Outlook, and Edge, and Word, and...."Google:"Microsoft is putting AI into everything, we better do that too. Announce something."Amazon:"How can we dupe people into buying more stuff? I know! Make a catalog, but call it a "browser with AI !!"...slashdot.org, 1d ago
A number of these AI projects have seen a rise in token prices alongside the rise of ChatGPT. But user adoption is the true litmus test, and only then can we be certain that these platforms solve a real problem for the user. These are still early days for AI and decentralized data projects, but the green shoots have emerged and look promising.ILCA Crypto News, 3d ago

Latest

Artificial intelligence (AI) is seemingly taking over a lot of human jobs, including manufacturing, some aspects of medicine, and customer service. As AI becomes more and more sophisticated, many are naturally concerned that it will replace all human jobs. Businesses staffed entirely by AI machines aren’t a likely outcome, however. While pop culture may have […]...ReadWrite, 4d ago
I have a disturbing feeling that arguing to future AI to "preserve humanity for pascals-mugging-type-reasons" trades off X-risk for S-risk. I'm not sure that any of these aforementioned cases encourage AI to maintain lives worth living.lesswrong.com, 3d ago
This being said, while AI is not expected to fully mimic humans’ abilities, it is becoming good at performing repetitive, basic, or robotic tasks.#NOWTESDEFI, 2d ago
All in all, the paper strays from making any declarative statements about job impacts. It instead analyzes jobs that are more likely to have some “exposure” to AI generation, meaning it will take 50% less time to complete a job’s common task. Most high-paying white-collar workers will find AI pushing into their fields. Those in science or “critical thinking” fields, as the paper calls it, will have less exposure, pointing to modern AI’s utter limitations in creating novel content. Programmers and writers, on the other hand, are likely to see quite a lot of exposure.Gizmodo, 1d ago
...for profiles related to generative AI or LLMs specifically. These jobs are being created because of the emerging hype in the field. But what about the existing jobs that might get...Analytics India Magazine, 1d ago
He is uniquely suited for the task. He checks off all the academic boxes. He’s a known known in the world of AI. He’s also a libertarian and thinks, deeply, about why and how it’s OK for people to want different things and be unconstrained in their individual choices. And he applies this thinking to AI.nationalpost, 2d ago

Latest

Since the slide is a bit dense, let me unpack it by listing the “Future of Work with ML and AI” use cases across the four broad segments used by Workday in that slide: HCM, Experience, Finance, and Future examples.Acceleration Economy, 4d ago
Chairman and CEO Mitch Glazier noted, “Human artistry is irreplicable. Recent developments in AI are remarkable, but we have seen the costs before of rushing heedlessly forward without real thought or respect for law and rights. Our principles are designed to chart a healthy path for AI innovation that enhances and rewards human artistry, creativity, and performance.”...Variety, 4d ago, Event
I think the dominant feature of the AI age is that life in itself—if not the world in itself—...McKinsey & Company, 2d ago
..."In the specific case of the more mechanistic end of journalism—sports reports, financial results—I do think that AI tools are replacing, and likely increasingly to replace, human delivery," he said.techxplore.com, 1d ago
However, according to OpenAI, GPT-4 is still “not totally dependable” and is still capable of giving “hallucinatory” or erroneous, unexpected results. With the introduction of ChatGPT by OpenAI in November, the IT industry was rocked, raising existential concerns about the future of industries including education, journalism, and healthcare.TechMoran, 5d ago
As the world rushed to hype and hail GPT-4 this week – the latest, multimodal iteration of OpenAI’s large-language AI – one company appears to be giving GPT technology a deeper, more useful enterprise spin.diginomica, 4d ago

Latest

Among other criticisms, the paper argued that much of the text mined to build GPT-3 — which was initially...VentureBeat, 1d ago
The speakers noted that friction emerges in three areas: technical, organizational, and financial. Technical friction is often well understood. AI and machine learning are hard. Building models is hard. Deploying those models on the right infrastructures is hard.RTInsights, 1d ago
These days you can't move without hearing the words AI, ChatGPT, and Stable Diffusion. But rest assured, behind the hype are real-world use cases for Generative AI that, when used commercially, solve persistent industry pain points. And an industry embracing the technology is...Tech.eu, 4d ago

Top

One vantage-point on the new AI is the 1990s discussion over 4E cognitive science. The (programmatic) work of Rodney Brooks with creatures, mobots and subsumption architectures seems to suggest notions about intelligence and how to build (many) AI systems which are, on the face of it, deeply at odds with today's generative AI. ChatGPT, Dall-E and LaMDA seem to be more directly inheritors of connectionism than 4E cog sci and if anything examples of 0E cognition. Moreover, the analysis of such systems by their creators seem to be shot through with (versions of) representationalist assumptions many believed needed to be transcended. One might think that with the advent of deep learning and generative AI at least the radical anti-representationalist flavours of 4E cognitive science may now face a severe challenge.The University of Sussex, 13d ago
My speculations about possible avatar mediation questioned some assumptions and focused mostly on the empty part of the glass. We would like to think that AI could not reproduce human skills such as communication and empathy that are necessary for good mediation. Of course, AI can’t truly reproduce real human cognitions and emotions – but the simulations are likely to be increasingly good approximations. And avatar mediation could reproduce the worst aspects of human ADR processes.indisputably.org, 7d ago
At least one group of tech workers is excited about ChatGPT and AI-powered apps like it for what — and, perhaps who — it can bring to their profession.Business Insider, 17d ago
The debates on ethical AI creation and use will continue. But even in this early and somewhat flawed stage of development, Wong believes that ChatGPT shows enormous promise to evolve human-machine collaboration in our favour.Waterloo News, 7d ago
Conversely, most researchers will attest to AI being a black box, albeit with some dissenting views. However, with the advancements in LLMs such as GPT-3 and LLaMA, the veil is lifting, and AI is no longer a mystery. These models work by taking input and predicting the next word to generate text.Analytics India Magazine, 18d ago
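As a toy illustration of that loop (take the input, predict the next word, append it, and repeat), here is a minimal Python sketch that substitutes a bigram lookup table for the neural network; the corpus and names are invented for the example, not taken from any of the models above.

    import random
    from collections import defaultdict

    corpus = "the model takes input and predicts the next word to generate text".split()

    # Record which words follow which word in the toy corpus.
    bigrams = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev].append(nxt)

    def generate(prompt: str, max_new_words: int = 8) -> str:
        words = prompt.split()
        for _ in range(max_new_words):
            candidates = bigrams.get(words[-1])
            if not candidates:                       # no known continuation: stop
                break
            words.append(random.choice(candidates))  # predict/sample the next word
        return " ".join(words)

    print(generate("the model"))

A real LLM replaces the bigram lookup with a network conditioned on the entire context, but the generation loop is essentially the same.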
...saying that AI is risky. They talk about the implications of such arguments being bad, or the social reasons one might make such arguments.lesswrong.com, 12d ago

Latest

Because it is a serious critique of economics, the book has to go back into the foundations of the discipline. A useful starting point is the distinction between ‘risk’ and ‘uncertainty’ which faces anybody thinking about, or preparing for, the future. Briefly, risk is where there are probabilities attached to future events; uncertainty is where there are not. The latter events include the ‘unknown unknowns’, or as Kay and King call them, ‘radical uncertainties’.interest.co.nz, 1d ago
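To make the distinction concrete, here is a purely illustrative Python sketch with invented payoffs: under risk the probabilities are known, so an expected value can be computed; under radical uncertainty there is no distribution to compute with, and the best one can do is compare scenarios.

    # Risk: probabilities are attached to future events, so expectations are computable.
    outcomes = {"boom": 120, "steady": 100, "bust": 60}   # hypothetical payoffs
    probs = {"boom": 0.2, "steady": 0.7, "bust": 0.1}     # known distribution
    expected = sum(probs[s] * outcomes[s] for s in outcomes)
    print(f"Expected payoff under risk: {expected:.1f}")  # 100.0

    # Uncertainty: no probabilities are available ('unknown unknowns'),
    # so only scenario reasoning, e.g. the worst case, is possible.
    print(f"Worst case under uncertainty: {min(outcomes.values())}")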
Sure, math is limited, but the limitations of it are our limitations. There are no limitations inherent to math.lesswrong.com, 2d ago
The path of: [low-level OpenAI employees think better about x-risk -> improved general OpenAI reasoning around x-risk -> improved decisions] seems high EV to me.lesswrong.com, 1d ago
Users of LinkedIn will receive AI-driven writing recommendations to improve the quality of their headlines and section summaries. An AI will look over your work history and skills, pick out the most amazing details, and use those to write the headline and summary for your about page.Dataconomy, 5d ago
Generative AI already creates lesson plans, grades assignments, advises students, and answers learner questions. Much of the interesting work in the coming months will be to design interfaces to adapt the technology to actual work roles, in both supportive and possibly replacement modes. And, certainly, generative AI has an important role to play in research. Will it become a formal or informal co-investigator and co-author of research? What status will we give generative AI in higher ed?...e-Learning Feeds, 3d ago
Zoe Thomas: I wonder, companies have been using AI when it comes to looking through their stack of potential candidates. Is this just flipping the tables on that, now AI is on the side of the applicants instead of just the companies?...WSJ, 4d ago

Top

Ultimately, NLP lets you boil down large bodies of information, compile them, and summarize them, just like a teacher. That teacher might not be the best one, however. Any biases or errors can potentially influence a lot of people. Tackling bias must be the number 1 priority of anyone entering the world of natural language processing, no matter if you’re a startup or if you’re Google.Acceleration Economy, 11d ago
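As a minimal sketch of the kind of summarization being described, assuming a simple word-frequency heuristic rather than any production system: score each sentence by how frequent its words are in the document and keep the top-scoring ones.

    import re
    from collections import Counter

    def summarize(text: str, n_sentences: int = 2) -> str:
        # Split into sentences and count word frequencies across the document.
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"[a-z']+", text.lower()))
        # Score each sentence by the document frequency of the words it contains.
        score = lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
        top = sorted(sentences, key=score, reverse=True)[:n_sentences]
        # Return the selected sentences in their original order.
        return " ".join(s for s in sentences if s in top)

    doc = ("NLP systems can boil down long reports. They can compile the key points. "
           "They can also summarize them, much like a teacher would. "
           "But biases or errors in that summary can influence many readers.")
    print(summarize(doc))

Whatever biases sit in the scoring (here, raw word frequency) flow straight into the summary, which is exactly the concern raised above.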
..., is investigating “automatic summarization of narrative, a problem at the intersection of artificial intelligence and the humanities and social sciences. Much of the previous work in summarization has focused on single-document news. Our lab has shown, however, that while this domain is important, the task is limited due to the nature of how factual news articles are written — the critical information is almost always contained in the first couple sentences of the article. Focusing on narrative therefore poses a much more interesting challenge for summarization.”...Amazon Science, 20d ago
Kim, who got interested in AI safety while working on self-driving cars at Cruise, isn’t sure whether such a horrific chain of events could really happen. But with the explosion of computer-made art and literature this past year, thanks to...Mission Local, 14d ago

Latest

Incentives often push people toward acceleration and away from voicing concerns. When people at AI labs do voice concerns, I think it’s worth commending.lesswrong.com, 1d ago
...is becoming real. Artificial general intelligence is the next step.) It can explain various scientific concepts. And it can write basic academic essays. (Such systems are going to cause...Inferse.com, 1d ago
Artificial intelligence chatbots such as OpenAI’s ChatGPT and Microsoft’s BingGPT have generated lots of buzz lately, much of it centered around how helpful or harmful these tools might be for media companies. While the helpful camp welcomes the benefits of assisting reporters with initial research, aggregating data and...Digital Content Next, 1d ago
So our society tells all of these stories about how good, true, and beautiful it is to do whatever Pharma wants.substack.com, 2d ago
This has been a very popular topic recently, mostly due to the release of ChatGPT. There are a lot of questions about the future of AI and how it will affect humans, and I am following this with interest. A lot of people are worried that AI will replace humans and their jobs, and there is a lot of discussion on this topic. However, observing people who already use the help of AI, I see rather an expansion of possibilities and an acceleration of work. AI itself can write texts, and there are already tools for detecting this, but you can notice that the style of writing is similar, a bit as if all the paintings were painted by one painter, so human intervention is still needed. And in our context, we are looking at and analyzing the possibilities of using AI to, for example, administer Kubernetes, scale services, prepare basic configuration, and recalculate infrastructure costs. But also using Kubernetes to host AI. It's a powerful topic that is sure to bring further changes to the IT world.bitcourier.co.uk, 6d ago
...used ChatGPT, while an additional 41% are currently exploring use cases for it. And for good reason: Unlike brand experiences in the metaverse, generative AI has practical marketing implications today. As my colleague...Forrester, 5d ago

Top

Strong models of AI risk & good judgment about what ideas are worth spreading...lesswrong.com, 18d ago
Of late I have sat in on some conversations among people who are obviously much smarter than me and enjoy much higher pay grades discussing the transcendence and transformation of AI -- generative AI in particular.InformationWeek, 18d ago
The hype? Sure. But, the differences here are that the “AI” have way more useful applications as tools than NFTs, and so the fundamentals are here to stay, even if you’re not going to be reading constant headlines about ChatGPT and Bing a few months down the road.Techdirt, 19d ago
...all the future generations of humans, until the end of the universe, will be locked into our ideas of what human meta-values should be...lesswrong.com, 14d ago
Now, machine learning is how the computer system develops that intelligence. So while these two concepts are very related and connect to each other, they're different. And speaking of data literacy, one of the things that we find, and you've been in meetings, is that people use those terms interchangeably and they're not the same thing. And so think of machine learning as how that system develops intelligence, and AI is really the idea of that system behaving like a human, behaving with human intelligence.EisnerAmper, 12d ago, Event
There's a wave of people, of various degrees of knowledge and influence, currently waking up to the ideas of AI existential risk. They seem to be literally going through every box on the bingo card of bad alignment takes.lesswrong.com, 12d ago

Latest

...exploring and probing generative AI technologies, they are diving into it with a great deal of energy, enthusiasm, and ambition.Acceleration Economy, 6d ago
In that context, what could the Metaverse offer citizens and healthcare professionals that would not open a Pandora’s box of new or unnecessary problems?...diginomica, 7d ago
new .... i.e humans failed to give the AI the correct goal. Yudkowsky has since stated the originally intended lesson was of...lesswrong.com, 2d ago

Latest

When GPT-4 is fed an image, it can understand it and describe what is happening in it. It even accurately describes the intent and the humor. An internet meme humorously depicting the state of computer vision and AI was articulately understood by GPT-4, and it even explained the humor behind it.MarkTechPost, 3d ago
It’s not surprising that in conceiving of a new world awash with AI art, we haven’t accounted for a society-wide change in taste. We tend to assume that in the future we will want the same things we want now, and that only the ability to achieve them will evolve. One famous study dubbed this the “...WIRED, 5d ago
.... The paper concluded: “Literature provides a site of imaginative thinking through which AI researchers can consider the social and ethical consequences of their work.” One AI researcher admitted: “Where to push and which direction we should push, and all these things are probably, one way or the other, influenced by literature.”...National Geographic, 4d ago
How could this technology be used in Apple’s products? Well, as it happens, I can think of a few ways that it might be deployed, not all of which involve simply creating a chatbot.Macworld, 1d ago
The next step is to add phrases that your user is most likely to ask and how the bot responds to them. The bot builder offers suggestions, but you can create your own as well. The best part is that since the bots are NLP-powered, they are capable of recognizing intent for similar phrases as well. The more phrases you add, the more data your bot has to learn from and the higher the accuracy.web3newshubb.com, 2d ago
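As a rough, hypothetical sketch of that phrase-based setup (the intent names, example phrases, and threshold below are invented for illustration, not Cognigy's actual API): each intent is defined by example phrases, and user input is mapped to the closest match, so similar wordings land on the same intent.

    from difflib import SequenceMatcher

    # Each intent is defined by the phrases a user is most likely to ask.
    intents = {
        "opening_hours": ["when are you open", "what are your opening hours"],
        "pricing": ["how much does it cost", "what is the price"],
    }

    def detect_intent(user_text: str, threshold: float = 0.6) -> str:
        # Return the intent whose example phrase is most similar to the input.
        best_intent, best_score = "fallback", 0.0
        for intent, phrases in intents.items():
            for phrase in phrases:
                score = SequenceMatcher(None, user_text.lower(), phrase).ratio()
                if score > best_score:
                    best_intent, best_score = intent, score
        # Adding more example phrases per intent improves coverage and accuracy.
        return best_intent if best_score >= threshold else "fallback"

    print(detect_intent("When do you open?"))  # matches "opening_hours"

A production bot builder would swap the string-similarity measure for a trained NLP model, which is what lets it generalize to phrasings it has never seen.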
...“Modern robotics might then feel like a particular kind of cultural paradox, where the best kind of religion is the one that eventually involves no humans at all,” the anthropologist concludes. “But in this circularity of humans creating robots, robots becoming gods, and gods becoming human, we’ve only managed to, once again, re-imagine ourselves.”...Wonderful Engineering, 2d ago

Latest

Nevertheless, there are some challenges to overcome first. For example, developers must ensure AR windscreens and MR experiences aren’t too distracting for drivers behind the wheel. Furthermore, apps and tools for building in VR will need to be intuitive enough to ensure workforce adoption. If we can overcome these challenges, a future of XR automotive experiences may not be too far off.Crypto News Bay, 2d ago
I think all of these considerations in-aggregate make me worried that a lot of current work in AI Alignment field-building and EA-community building is net-negative for the world, and...lesswrong.com, 7d ago
On Equity this week, Alex and I spoke about the above, but more interestingly, the future of AI. We talk about the technology’s impact on smart people writing books, context and general tech exuberance. We need it, and I’m not just saying that because I live a stone’s throw away from Cerebral Valley.TechCrunch, 4d ago
When I looked around for inspiration of what the future might look like if we got it right, I saw few examples. Most were entertaining but dystopian stories of how we destroy ourselves with nuclear weapons, killer robots, AI, climate disasters, and disease. But very few were visionary positive versions of any kind of future we might actually want to live in. Star Trek is one of the most well-known futuristic science fiction series that envisions a post-scarcity, spacefaring, scientifically and technologically advanced society. From this vision many innovators created the technologies they saw, for example handheld communicators (cell phones), non-lethal phasers (electric tasers), tablet computers, touch-interface screens, replicators (3D printers), impulse space engines, and more.StartupNation, 4d ago
Yet, these findings do not entirely confirm the idea that AI explanations should decrease overreliance. To further explore this, a team of researchers at Stanford University’s Human-Centered Artificial Intelligence (HAI) lab asserted that people strategically choose whether or not to engage with an AI explanation, demonstrating that there are situations in which AI explanations can help people become less overly reliant. According to their paper, individuals are less likely to depend on AI predictions when the related AI explanations are easier to understand than the activity at hand and when there is a bigger benefit to doing so (which can be in the form of a financial reward). They also demonstrated that overreliance on AI could be considerably decreased when we concentrate on engaging people with the explanation rather than just having the target supply it.MarkTechPost, 4d ago
The hysteria and hype around GPT-4, too, alarms Michael Irwin Jordan a bit. He believes humanity should not be working to make AI smarter and surpass humans...Analytics India Magazine, 8d ago

Latest

.... Should we be intentionally creating these AI that are much-worse-aligned than random? Is the sliver of possibilities where the AI succeeds in survive-and-spread enough to demonstrate danger, but not enough to actually be dangerous, enough of an upside to justify the risk of it succeeding well enough to kill everyone?...lesswrong.com, 6d ago
...(or uses it in some way) while also including some type of moral judgment. AI is good. AI is bad. AI is going to mean the end of humanity. AI is going to ruin education or SEO or some other previously revered pseudo-constant.Inman, 6d ago
Unfortunate images of the future ruled by AI, like HAL 9000, probably come to mind. We should make every effort to allay customer concerns about unleashing Pandora’s box if advertisers want consumers to accept AI. It can be beneficial to humanize AI, and doing so has never been simpler than it is now.MarTech Series, 5d ago

Top

..., Stanford posits that “instead of thinking of automation as the...Energy Central, 14d ago
The concept works, and the potential it has for technology is immense. It isn't just mushrooms either: the same scientists have created mold-powered robots in the past, too. In the future, organic computing could be an advancement similar to AI, VR, brain implants, or like many of the various futuristic and innovative discoveries made in recent years.SlashGear, 13d ago
Bull vs. Bear is a weekly feature where the VettaFi writers’ room takes opposite sides for a debate on controversial stocks, strategies, or market ideas — with plenty of discussion of ETF ideas to play either angle. For this edition of Bull vs. Bear, Evan Harp and James Comtois debate the investment case for the continued development of robotics technology.ETF Trends, 20d ago
...“The novel idea of putting infants and AI head-to-head on the same tasks is allowing researchers to better describe infants’ intuitive knowledge about other people and suggest ways of integrating that knowledge into AI,” she adds.Nextgov.com, 21d ago
The investor believes that the current wave of AI fever has not made AI smarter, but has only expanded the data models, giving rise to the AIGC (AI-generated content) track. However, the real downstream payment demand is debatable, as AI automatic choreography, AI painting, and AI scripts are all highly competitive and free. Additionally, AI painting may reduce the value of traditional artistic creation. Furthermore, the hardware for the metaverse, AR/VR, etc. is still difficult to make light enough to wear comfortably, which, coupled with the purchase threshold, makes it difficult for start-up companies to get out of the profit dilemma. Therefore, the investor believes that there is currently nothing worth investing in within this wave of AI being hyped by ChatGPT.Boxmining, 15d ago
I’m particularly interested in the last two, convergence of language and vision, and language processing in humans and LLMs. You pick the third. For my two, read the Dragons piece I’ve linked to and an (old) article by David Hays,...lesswrong.com, 9d ago

Latest

...are some pretty big labor economy questions: namely, what does widespread access to an AI capable of (at least) sophisticated language generation mean for the future of work?...TodayHeadline, 4d ago
Despite Apple's best efforts, Siri has been idling in learning mode. Perhaps this is because Siri is still primarily built on Machine Learning and not Generative AI. It's the difference between learning and creating.TechRadar, 2d ago, Event
In one of the panel discussions, leaders from NTT Data joined us. “If you assume technology as an individual, the persona of technology is very attractive. It is like a superwoman. Nothing stops it,” said Srividya Ram, VP Projects and Application services at NTT Data talking about the exponential nature of technology.Analytics India Magazine, 3d ago, Event
Draw nearer, citizens, for the dawn of tomorrow is upon us and it is powered by artificial intelligence. In the wake of the hype surrounding ChatGPT, Microsoft has upped the ante with the release of Microsoft 365 Copilot, which intends to make Office 365 and related applications smarter, easier to use, and…well, intelligent. On the face of it, what a great idea. Natural language interfaces using the applications most of us are familiar with as everyday tools. The examples provided by Microsoft in its promotional materials make this a must-have: who among us enjoys making a PowerPoint presentation? Arguably, a fate worse than the infamous ‘death by’. With 365 Copilot, just ask your computer and it does the lot for you.IT Brief Australia, 4d ago
I will try to answer this question under the additional premise of "... with the exception of an AGI whose intellectual problem solving ability is at least equal to (proficient) human level across the majority of tasks, but not necessarily genius or extremely superhuman in most of them."lesswrong.com, 3d ago
...“An AI Bot Wrote This Copy,” reads one message line. “Influencers helped by artificial intelligence,” reads another. “Can AI Teach Us to Read?” reads yet another. And then there’s “Why the fuss over ChatGPT?” Oh, man, I just cannot get into ChatGPT at this stage of my life.LimaOhio.com, 3d ago

Top

The company said the creative option will give “original and imaginative” responses, while the precise version will focus more on accuracy. The balanced mode, of course, will be something in between. SiliconANGLE played with the new personalities, asking: “Can you tell me about any dystopian elements regarding AI that might come true in the future?”...SiliconANGLE, 18d ago
It’s possible, maybe probable, that the realm of most concern, after all, is not creativity but purpose. To what purpose might “AI” frameworks of automation be used?...Publishing Perspectives, 19d ago, Event
So like, for example, OpenAI. Does OpenAI have, like, an alignment department? With all the AI innovation going on, what does the commercial side of the AI alignment problem look like? Like, are people trying to think about these things? And to what degree are they being responsible?...lesswrong.com, 8d ago, Event