Latest

For all the bravado around AI, before we become too dazzled or dismayed (depending on where you stand on this issue) by AI’s potential impact on projects, it is worth sitting back and taking a skeptical bird’s-eye view. After all, as we humans evolved through the information age of the mid-to-late twentieth century, AI can be considered just the latest development in a long series of technological advances driven by the engine of human information input. Even if every one of the improvements stated above is tangible, how well do they translate to the ultimate outcome? Organizations around the globe have been trying to digitalize and computerize, some as early as the 1960s and 1970s. Digitization and computerization promised a revolution in productivity through automation, streamlining, and making humans redundant. But the road to a more significant productivity boost has been arduous at best. One of the primary reasons is captured in the Theory of Constraints (TOC) developed by Eliyahu M. Goldratt, an Israeli physicist turned business management guru. In his landmark book The Goal, which sold over 6 million copies, he explains the importance of focusing on the right constraints to improve the overall system. For projects, the constraints are generally not within the technology and tools, but within humans. Healthcare Business Today, 1d ago
Incorporating bioethics principles into medical AI algorithms is undoubtedly a crucial aspect of creating trustworthy technology that serves to advance society. While the paper highlights multiple critical topics that should be considered and offers robust solutions, gaps and questions remain. More research needs to be done on when these AI tools can be considered conscious of their own actions and, by extension, when liability falls on humans or on the machines. Additionally, while the authors’ proposed solution of instituting liability fees and other systems between public and private parties is an interesting point to consider, it may be difficult to establish in countries such as the United States, where the healthcare system is incredibly disjointed. Moreover, in places with many competing private healthcare companies, it should be considered that these parties do not necessarily have patients’ best interests at heart. Instead, they tend to prioritize profits over patient well-being—thus adding another obstacle to ensuring ethical medical AI is instituted. Montreal AI Ethics Institute, 1d ago
Our paper shows many examples of AI systems that have learned to deceive humans. Reinforcement Learning (RL) agents trained to play strategic games have learned to bluff and feint, while large language models (LLMs) will output falsehoods in creative ways that help achieve their goals. One particularly concerning example of AI deception is provided by Meta’s AI, CICERO, which was trained to play the alliance-building world-conquest game Diplomacy. Meta put a lot of effort into training CICERO to be “largely honest and helpful,” claiming that CICERO would “never intentionally backstab” its allies. But when we investigated Meta’s rosy claims by studying games that CICERO had played, we found that Meta had unwittingly trained CICERO to be quite effective in its deception. Montreal AI Ethics Institute, 1d ago
Diabetic retinopathy (DR) is one area where AI is being used as a screening tool. According to Dr Gopal Pillai, professor and head of the department of ophthalmology at Amrita Institute of Medical Sciences in Kochi, approximately one third of people with diabetes in India will have DR, and about 10-12 per cent of those will have vision-threatening DR, meaning the disease has seriously affected the patient’s vision and failure to treat it in a timely manner will result in irreversible vision loss. One thing hampering early detection of DR is that it is asymptomatic until, that is, the person completely loses his vision. “One might be driving, reading, watching TV and doing their activities without even knowing that a time bomb is ticking inside the eye. And because the patient would often come in late, there was no way of early diagnosis,” says Pillai, who is leading a government-sponsored clinical trial network. theweek.in, 1d ago
The validation results reveal a remarkable alignment between LLMCarbon’s projections and real-world data for diverse LLMs, surpassing the performance of existing tools like mlco2. LLMCarbon’s adaptability to various data center specifications and its capability to pinpoint optimal parallelism settings enhance overall operational efficiency. This adaptability, combined with its ability to accurately gauge the environmental impact of LLMs, positions LLMCarbon as a pragmatic tool in assessing and mitigating the carbon footprint associated with LLMs, offering an indispensable resource for the future of sustainable AI development. Montreal AI Ethics Institute, 1d ago
Predictive work systems and automated testing equipment are already saving health systems time and money, and the benefits are sure to grow as the technologies advance. However, human oversight is still needed. A technician should still use their expertise to determine if a computer-generated solution is indeed appropriate, rather than trusting the proposed solution without critical thinking. It’s also important for health systems to engage with technicians when deploying AI tools—to gather feedback and ensure new procedures and technologies are enhancing their work experience rather than adding to their list of responsibilities. Healthcare Business Today, 2d ago

In brains, online learning (editing weights, not just context window) is part of problem-solving. If I ask a smart human a hard science question, their brain may chug along from time t=0 to t=10 minutes, as they stare into space, and then out comes an answer. After that 10 minutes, their brain is permanently different from what it was before (i.e., different weights)—they’ve figured things out about science that they didn’t previously know. Not only that, but the online learning (weight editing) that they did during time 0<t<5 minutes is absolutely critical for the further processing that happens during time 5<t<10 minutes. This is not how today’s LLMs work—LLMs don’t edit weights in the course of “thinking”. I think this is safety-relevant for a number of reasons, including whether we can expect future AI to get rapidly smarter in an open-ended way without new human-provided training data (related discussion). alignmentforum.org, 2d ago
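The distinction the post draws can be made concrete with a toy sketch (illustrative numbers only, not a model of any real LLM): a "frozen" model answers with whatever weights it was shipped with, while an online learner permanently edits its weight during the "thinking" interval, and each update builds on the weights the previous updates produced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "task" revealed at thinking time: fit y = 2x from a few examples.
xs = rng.normal(size=20)
ys = 2.0 * xs

w_frozen = 0.5  # pre-trained weight, never updated (LLM-style inference)
w_online = 0.5  # same starting point, but edited while "thinking"

def loss(w):
    return float(np.mean((w * xs - ys) ** 2))

# Online learning: each gradient step permanently changes the weight,
# and later steps depend on the weights produced by earlier steps.
for _ in range(100):
    grad = np.mean(2 * (w_online * xs - ys) * xs)
    w_online -= 0.1 * grad

print(loss(w_frozen), loss(w_online))
```

After the loop the online learner's error has collapsed while the frozen model's has not changed at all, which is the asymmetry the paragraph points at.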
Now these are really hard conversations to have. I'm using very human words to describe an AI, which don't fit, but the shorthand is unavoidable. Now, doesn't AI have values? No. Does it pretend? Yes. Can it mimic values? Yes. Can it do it? Well, not yet. All these things are going to change. A lot of this is in flux, but I think this is really interesting as these AIs become capable of doing things that used to be the exclusive purview of humans. So now I want to give the big problem here. All of the AIs that we have are built, are designed, are trained, and are controlled by for-profit corporations. harvard.edu, 2d ago
California’s report also usefully lays out the potential risks associated with using these new tools, making clear that while there are some new potential harms, in many cases the risks are common to the use of any technology. Governments need to be conscious of the fact that tools that enable the easy generation of high-quality content could be misused to dupe consumers and residents. Perhaps because 35 of the 50 leading AI businesses are in California, as the state's report points out at the outset, it is silent on the risks to governments and those they serve of relying excessively on technologies developed and governed by unaccountable companies, especially when those technologies are procured by public servants without a deep knowledge of the tech. GovTech, 2d ago

Top

We are facing a dilemma. Our AI systems would be much safer and more useful if they possessed a modicum of adult-level common sense. But one cannot create adult-level common sense without first creating the common sense of a child on which such adult-level abilities are based. Three-year-old common sense is an important first step, as even such young children have the fundamental understanding that their own actions have consequences and actually matter. But on their own, the abilities of a three-year-old aren’t commercially viable. Further, AI’s focus on super-human narrow capabilities with an expectation that these will broaden and merge into common sense hasn’t borne fruit and is unlikely to any time soon. RTInsights, 4d ago
What does this mean in practice? It means that cyber security and disinformation, which are already prominent and incredibly challenging features of modern war, will become even more of a problem in conditions of intensive automation. Adversaries have incentives to manipulate or poison the data that feeds AI systems.[78] AI will thus expand the range of counterintelligence risks to worry about. It also means that adversaries have incentives to move conflict in unexpected directions, i.e., where AI systems have not been trained and will likely perform in undesired or suboptimal ways. This creates not only data problems but judgment problems as well. Combatants will have to reconsider what they want in challenging new situations. As intelligent adversaries escalate conflict into new regions, attack new classes of targets, or begin harming civilians in new ways, how should AI targeting guidance change, and when should AI systems be withheld altogether? We should expect adversaries facing AI-enabled forces to shift political conflicts into ever more controversial and ethically fraught dimensions. Texas National Security Review, 26d ago
Unpredictability – The unpredictability of sentient AI is a significant concern. Humans, driven by their emotions, engage in conflicts and harmful actions, showcasing the range of both positive and negative feelings. If an AI entity, especially one entrusted with control over automated systems, attains sentience, there is a risk that it could become as unpredictable as a human. This unpredictability carries potentially serious implications. E-Crypto News - Exclusive Cryptocurrency and Blockchain News Portal, 13d ago
Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them. lesswrong.com, 26d ago
However, with great promise comes great responsibility. As we celebrate the potential benefits of AI in agriculture, it is crucial to acknowledge and address the associated risks. The deployment of advanced technologies in farming raises concerns about data privacy, ethical considerations and the potential for exacerbating existing inequalities. How can we ensure that the benefits of AI are equitably distributed, reaching smallholder farmers who are the backbone of many agricultural economies? Agrilinks, 3d ago
Innovative problem-solving. While it has its flaws, the non-sentient AI we already use has been known to come up with creative solutions to human problems. It is now common for humans to ask ChatGPT for advice on everything from career development to relationship problems. Now, keep in mind that a sentient AI would be the most self-aware tech ever to exist. Not only would it have access to virtually all the information that has ever been recorded and analyze it at the drop of a hat, but it would also understand first-hand how human feelings work. It’s been suggested that sentient AI could tackle issues like world hunger and poverty because it would have both the technical understanding of such problems and the emotional intelligence needed to navigate human nuances. Coinspeaker, 14d ago

ROBERT CHAPMAN-SMITH: In the lead-up to, you know, the release of some of OpenAI's models, there's been sort of a speaking tour of folks going to Washington talking to legislators about AI, and the worry at that time was about regulatory capture. Like, are folks going to essentially gate the technology in such a way that smaller players are not gonna be able to play ball? And we've seen regulatory capture happen a lot within the political realm within Washington. But there's also this question of effectiveness in terms of regulation. Just because a regulation has passed doesn't mean it's actually a good regulation, or that this body of Congress is actually able to regulate this fast-moving technology well. Like, they can't even pass a budget, so how are they gonna keep up with the pace of AI change? So I'm curious about that as a tool for dealing with AI safety, because in some sense it feels like, one, the legislative bodies or processes are capable of being captured by interested parties and, two, even when they do regulate, sometimes they just do a poor job; they just miss the thing that is the key regulatory factor. So I'm curious about your conception there, and how to deal with some of the messiness that comes with those types of approaches to dealing with technological safety. Big Think, 2d ago
Portfolio optimization is another area where Quantum AI can make a significant impact. Building an optimal investment portfolio involves considering multiple factors, such as asset allocation, risk tolerance, and expected returns. With the computational power of Quantum AI, financial professionals can analyze a wide range of investment options and their potential outcomes, taking into account various constraints and objectives. This can lead to more efficient and profitable portfolios.Techiexpert.com, 3d ago
Excessive dependence on AI has the capacity to erode human empathy, logical thinking, innovation and interpersonal abilities as well. As AI becomes more integrated into diverse facets of our existence, there is concern that it might endanger fundamental human characteristics and the bonds within our communities.Techiexpert.com, 3d ago
...stranger: Pat is thoroughly acquainted with the status hierarchy of the established community of Harry Potter fanfiction authors, which has its own rituals, prizes, politics, and so on. But Pat, for the sake of literary hypothesis, lacks an instinctive sense that it’s audacious to try to contribute work to AI alignment. If we interrogated Pat, we’d probably find that Pat believes that alignment is cool but not astronomically important, or that there are many other existential risks of equal stature. If Pat believed that long-term civilizational outcomes depended mostly on solving the alignment problem, as you do, then he would probably assign the problem more instinctive prestige—holding constant everything Pat knows about the object-level problem and how many people are working on it, but raising the problem’s felt status. lesswrong.com, 3d ago
This research agenda focuses on self-improving systems, meaning systems that take actions to steer their future cognition in desired directions. These directions may include reducing biases, but also enhancing capabilities or preserving their current goals. Many alignment failure stories feature such behaviour. Some researchers postulate that the capacity for self-improvement is a critical and dangerous threshold; others believe that self-improvement will largely resemble the human process of conducting ML research, and it won't accelerate capabilities research more than it would accelerate research in other fields. lesswrong.com, 3d ago
Goal: To better understand how internal search and goal representations are processed within transformer models (and whether they exist at all!). In particular, we take inspiration from existing mechanistic interpretability agendas and work with toy transformer models trained to solve mazes. Robustly solving mazes is a task that may require some kind of internal search process, and it gives a lot of flexibility when it comes to exploring how distributional shifts affect performance — both understanding search and learning to control mesa-optimizers are important for the safety of AI systems. alignmentforum.org, 3d ago
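For reference, the kind of explicit search a maze-solving model might (or might not) implement internally can be written down directly as breadth-first search; the maze layout and coordinates below are purely illustrative.

```python
from collections import deque

# A tiny grid maze: 0 = open cell, 1 = wall.
MAZE = [
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

def solve(maze, start, goal):
    """Return the shortest open path from start to goal, or None."""
    rows, cols = len(maze), len(maze[0])
    frontier = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent pointers back to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

path = solve(MAZE, (0, 0), (3, 3))
print(path)
```

Whether a transformer trained on such mazes represents anything like this frontier-and-backtrack structure is exactly the open question the agenda targets.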

...(Which, for instance, seems true about humans, at least in some cases: If humans had the computational capacity, they would lie a lot more and calculate personal advantage a lot more. But since those are both computationally expensive, and therefore can be caught out by other humans, the heuristic / value of "actually care about your friends" is competitive with "always be calculating your personal advantage." I expect this sort of thing to be less common with AI systems that can have much bigger "cranial capacity". But then again, I guess that at whatever level of brain size, there will be some problems for which it's too inefficient to do them the "proper" way, and for which comparatively simple heuristics / values work better. But maybe at high enough cognitive capability, you just have a flexible, fully-general process for evaluating the exact right level of approximation for solving any given problem, and the binary distinction between doing things the "proper" way and using comparatively simpler heuristics goes away. You just use whatever level of cognition makes sense in any given micro-situation.)... lesswrong.com, 16d ago
I’ve convened hearings that explore AI safety, risk, procurement of these tools, and how to prepare our federal workforce to properly utilize them. But as policymakers, we also have to explore the broader context surrounding this technology. We have to examine the historical, the ethical and philosophical questions that it raises. Today’s hearing and our panel of witnesses give us the opportunity to do just that. This is not the first time that humans have developed staggering new innovations. Such moments in history have not just made our technologies more advanced, they’ve affected our politics, influenced our culture, and changed the fabric of our society. The industrial revolution is one useful example of that phenomenon. During that era, humans invented new tools that drastically changed our capacity to make things. The means of mass production spread around the world and allowed us to usher in a modern manufacturing economy. But that era brought with it new challenges. Tech Policy Press, 25d ago
Regardless of its exact nature, ‘Q*’ potentially represents a significant stride in AI development, so it rings true that it sits at the core of an existential debate within OpenAI. It could bring us closer to AI systems that are more intuitive, efficient, and capable of handling tasks that currently require high levels of human expertise. However, with such advancements come questions and concerns about AI ethics, safety, and the implications of increasingly powerful AI systems in our daily lives and society at large. CoinGenius, 9d ago

The future of LLMs and their integration into our daily lives and critical decision-making processes hinges on our ability to make these models not only more advanced but also more understandable and accountable. The pursuit of explainability and interpretability is not just a technical endeavor but a fundamental aspect of building trust in AI systems. As LLMs become more integrated into society, the demand for transparency will grow, not just from AI practitioners but from every user who interacts with these systems. unite.ai, 3d ago
Udo Sglavo, SAS Vice President of Advanced Analytics, said: “In 2023, there was a lot of worry about the jobs that AI might eliminate. The conversation in 2024 will focus instead on the jobs AI will create. An obvious example is prompt engineering, which links a model’s potential with its real-world application. AI helps workers at all skill levels and roles to be more effective and efficient. And while new AI technologies in 2024 and beyond may cause some short-term disruptions in the job market, they will spark many new jobs and new roles that will help drive economic growth.” Datanami, 3d ago
But there are tradeoffs. While AI can improve simulation speed, it comes at the cost of accuracy. In addition, while large language models can understand the question that’s being asked of them, the actual answer returned needs to be put in context. “It still needs the EDA vendor or the software vendor to feed and curate the context,” Slater said. “Generative AI can give you amazing things if you’re drawing on a really large database of information, but when it comes to design, that information may not be readily available.” Semiconductor Engineering, 4d ago
Another instance illustrating effective use-case strategy is found in the context of the Health Insurance Portability and Accountability Act (HIPAA). Although HIPAA was established before the widespread use of algorithms in healthcare, it has consistently played a pivotal role in shaping how AI tools handle patient data and maintain health privacy. Serving as a formidable guardrail, HIPAA compels healthcare technologies to prioritize the protection of patient data, ensuring that the integration of AI aligns with stringent standards of privacy and security, without restricting innovation in the sector. Tech Policy Press, 4d ago
Generative AI will enable firms to integrate and analyze both private and public financial information at an entirely new scale. AI tools will synthesize data from earnings calls and comprehensive reports, providing a deeper understanding of company and market health. In scientific research, next year will bring a leap in generative AI assistance. Researchers will have sophisticated AI tools at their disposal that can compare their work against others around the world, predicting future research trajectories and identifying unexplored areas that are ripe for discovery. Fast Company, 4d ago

Joseph Thacker, researcher at AppOmni, notes that a “doom for humanity” scenario is still a matter of great debate among AI experts: “Experts are split on the actual concerns around AI destroying humanity, but it’s clear that AI is an effective tool that can (and will) be used by forces for both good and bad … The declaration doesn’t cover adversarial attacks on current models or adversarial attacks on systems which let AI have access to tools or plugins, which may introduce significant risk collectively even when the model itself isn’t capable of anything critically dangerous. The declaration’s goals are possible to achieve, and the companies working on frontier AI are familiar with this problem set. They spend a lot of time thinking about it, and being concerned about it. The biggest challenge is that the open source ecosystem is really close to enterprises when it comes to making frontier AI. And the open source ecosystem isn’t going to adhere to these guidelines – developers in their basement aren’t going to be fully transparent with their respective governments.” CPO Magazine, 26d ago
One common concern when implementing AI in sales is the fear of job displacement. However, it’s important to emphasize that AI enhances human capabilities rather than replacing them. By automating repetitive tasks and providing data-driven insights, AI enables sales professionals to focus on building relationships with customers and making strategic decisions. This collaboration between humans and AI can lead to increased productivity and better outcomes for both the business and its employees. ValiantCEO, 17d ago
Yeah, I’d say that’s pretty fair. I think the one thing I’d add… I’d say that with software development, for example… so offense-defense balance is something that’s often discussed in terms of open-sourcing and scientific publication, especially any time you have dual-use technology or scientific insights that could be used to cause harm, you kind of have to address this offense-defense balance. Is the information that’s going to be released going to help the bad actors do the bad things more or less than it’s going to help the good actors do the good things/prevent the bad actors from doing the bad things? And I think with software development, it’s often in favor of defense, in finding holes and fixing bugs and rolling out the fixes and making the technology better, safer, more robust. And these are genuine arguments in favor of why open-sourcing AI systems is valuable as well. alignmentforum.org, 8d ago
The second challenge for the ongoing legislative efforts is the fragmentation. AI systems, much like living organisms, transcend political borders. Attempting to regulate AI through national or regional efforts entails a strong potential for failure, given the likely proliferation capabilities of AI. Major corporations and emerging AI startups outside the EU’s control will persist in creating new technologies, making it nearly impossible to prevent European residents from accessing these advancements. In this light, several stakeholders[4] suggest that any policy and regulatory framework for AI must be established on a global scale. Additionally, Europe’s pursuit of continent-wide regulation poses challenges to remaining competitive in the global AI arena, if the sector enjoys a more relaxed regulatory framework in other parts of the world. Furthermore, Article 6 of the proposed EU Artificial Intelligence Act introduces provisions for ‘high-risk’ AI systems, requiring developers and deployers themselves to ensure safety and transparency. However, the provision’s self-assessment nature raises concerns about its effectiveness. Modern Diplomacy, 20d ago
The EU AI Act, in its current form, risks creating a regulatory environment that is not only burdensome and inappropriate for open-source AI developers but also counterproductive to the broader goals of fostering innovation, transparency, and competition in the AI sector. As the EU’s ongoing negotiations over the AI Act continue, particularly around the regulation of foundation models, policymakers need to adequately address these issues. If they do not amend the Act to better accommodate the unique nature and contributions of open-source AI, it could hamper the progress and openness in the AI sector. It is crucial for policymakers to recognize and preserve the distinct advantages that open-source AI brings to the technological landscape, ensuring that regulations are both effective and conducive to the continued growth and dynamism of the AI field. itif.org, 14d ago
The use of AI in cybersecurity raises ethical and privacy concerns, as highlighted by several experts. AI models may inadvertently perpetuate biases in data, leading to unfair or discriminatory outcomes. Ensuring the responsible use of AI and addressing biases in algorithms is paramount. It is essential to develop and adhere to ethical guidelines to balance security and individual privacy, providing a responsible framework for AI-driven cybersecurity. DATAQUEST, 7d ago

However, the increasing pervasiveness and power of AI demand that ethical issues be a primary consideration. As AI systems become more integrated into our daily lives, the risk of unintended consequences and ethical dilemmas grows exponentially. From biased algorithms in surveillance systems to the misuse of AI in deepfake technology, we have all witnessed the threats that AI going rogue can pose. Therefore, it is imperative to approach AI development with ethical principles firmly in mind. Zephyrnet, 4d ago
Regulators need to implement guidelines that help standardize the language around AI, which will help with understanding the model being used, and ultimately regulate risk parameters for these models. Otherwise, a model has the potential to take the exact same data set and draw wildly different conclusions based upon biases—conscious or unconscious—that are ingrained from the outset. More importantly, without a clear understanding of the model, a business cannot determine if outputs from the platform fit within its own risk and ethics criteria. TechRadar, 4d ago
But at a high level, the EO puts pressure on organizations that produce AI-enabled, AI-generated, or AI-dependent products to adopt new application security (AppSec) practices for assessing these systems for safety, security, and privacy. They will need to account for risks, such as those from cyberattacks, adversarial manipulation of AI models, and potential theft or replication of proprietary algorithms and other sensitive data. Required security measures include penetration testing and red-team procedures to identify potential vulnerabilities and other security defects in finished products. Security Boulevard, 4d ago

It's also worth looking at how AI will be offered. If the technology is integrated into a vendor's tech stack from the beginning, its inner workings will be more effectively obscured behind extra layers of security, reducing customer risk. Sometimes this technology is entirely distinct to a vendor, while other times, like Zoho's partnership with OpenAI, the vendor is more focused on honing existing technology for its particular ecosystem. Regardless, advances in the tech can be pushed across the system instantaneously, ensuring that whatever generative AI produces is the most tailored result possible at any given moment, eliminating the risk of wasted time implementing something outdated. Past customer success stories and use cases are an effective way of scoping out a potential tech vendor's customer-centric approach to AI. diginomica, 4d ago
Companies that use artificial intelligence systems should utilize AI auditing services to ensure their systems comply with all laws and rules applicable to AI usage. AI audits allow businesses to spot any possible problems within their AI systems quickly, while providing strategies to deal with these potential issues. Tech Resider, 4d ago
The Academy of Management Review invites scholars to contribute to a Special Topic Forum (STF) on AI’s multifaceted role in management and organization theory. Theoretical exploration is paramount to advance our understanding of how AI is reshaping the organizational landscape and redefining management as we know it. The STF seeks to tackle essential research questions at multiple levels of theorizing. At the micro level, researchers could delve into how predictive and generative AI applications alter key managerial tasks, such as communication, decision making, employee engagement, problem solving, and innovation. At the meso level, scholars might explore how organizations leverage predictive and generative AI for new strategies, structures, and capabilities. At the macro level, scholars could scrutinize the ethical, regulatory, and societal ramifications of integrating AI into organizations’ day-to-day management. AOM_CMS, 4d ago
Perhaps most importantly, leaders and educators need to resist the temptation to become overly focused on—or even panicked about—how AI might change teaching and learning. The dawn of ubiquitous AI should serve as a reminder that children still need to develop a deep foundation of knowledge to use these tools well, and that the best use of AI in traditional schools is to free up the time of educators to do more work directly with students. Outside of schools, AI can help cultivate the “weirder” ecosystem of educational options needed for a system of education that empowers families to access the educational opportunities their children need to thrive. When used thoughtfully, AI tools have the potential to move us closer to an education system that provides a more diverse range of experiences to meet the unique needs of every student.The Thomas B. Fordham Institute, 4d ago
...• Bring: Boards on board. Unless board members understand GenAI and its implications, they will be unable to judge the likely impact of a company’s AI strategy and related decisions regarding investments, risk, talent, and technology on their stakeholders. “Our conversations with board members reveal that many of them admit they lack this understanding,” McKinsey says.DATAQUEST, 4d ago
Artificial intelligence (AI) and cleantech adoption are largely considered to be one-and-the-same these days. AI is compressing and analyzing the massive amounts of data that the cleantech industry produces. It can help optimize solar and wind farms, simulate climate and weather, enhance power grid reliability and resilience, and advance carbon capture and power fusion breakthroughs. Its original promise to improve efficiency, reduce costs, and speed up R&D has been proven. However, ethical AI concerns do haunt the cleantech industry.CleanTechnica, 4d ago
AI is optimizing and automating the CI/CD pipeline, which is crucial for the efficient delivery of software. AI tools like Jenkins X utilize artificial intelligence to predict build failures, analyze code changes, and automate deployment processes. Predictive analytics in CI/CD helps in identifying patterns that may lead to deployment failures, ensuring more reliable software delivery. By automating these processes, development teams can achieve faster and more efficient software delivery, responding swiftly to changing requirements and market demands.The European Business Review, 4d ago
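The snippet above describes predictive build-failure gating only in general terms. A minimal sketch of the idea might look like the following; the feature names, weights, and thresholds here are entirely hypothetical and are not drawn from Jenkins X or any real product:

```python
import math

# Hypothetical feature weights -- illustrative only, not from any real tool.
WEIGHTS = {
    "lines_changed": 0.004,            # larger diffs tend to fail more often
    "files_touched": 0.05,
    "touches_ci_config": 0.8,          # CI-config edits are disproportionately risky
    "author_recent_failure_rate": 2.0, # fraction of the author's recent builds that failed
}
BIAS = -2.5

def build_failure_probability(change: dict) -> float:
    """Logistic score in [0, 1]: estimated chance this change breaks the build."""
    z = BIAS + sum(WEIGHTS[k] * change.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def should_gate(change: dict, threshold: float = 0.5) -> bool:
    """Flag risky changes for extra review before automated deployment."""
    return build_failure_probability(change) >= threshold

small = {"lines_changed": 10, "files_touched": 1,
         "touches_ci_config": 0, "author_recent_failure_rate": 0.1}
risky = {"lines_changed": 900, "files_touched": 25,
         "touches_ci_config": 1, "author_recent_failure_rate": 0.5}
```

A real system would learn the weights from historical build logs rather than hard-coding them; the point is only that a change's risk score can gate deployment before a failure actually happens.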
In response to the introduction of this new type of technology in healthcare, the CAR has set up a Radiology AI Validation Network (RAIVN). This assembly consists of AI specialists in the field of radiology tasked with assisting with post-market assessment of AI applications. As a resource, RAIVN would serve as the national body responsible for evaluating the performance of these technologies and pre-identifying any potential issues that may affect patient care. While this program is still in its infancy, we are hopeful its integration will be smoothly executed in the months ahead. We also believe that the RAIVN framework can and should be applied more generally to all AI based solutions in healthcare.Hospital News, 4d ago
Ayesha Iqbal, IEEE senior member and engineering trainer at the Advanced Manufacturing Training Centre added: "AI has significantly evolved in recent years, with applications in almost every business sector. However, there are some barriers preventing organisations and individuals from adopting AI, such as a lack of skilled individuals, complexity of AI systems, lack of governance, and fear of job replacement. With AI growing more rapidly than ever before, and already being tested and employed in education, healthcare, transportation, finance, data security, and more, it’s high time that the Government, tech leaders, and academia work together to establish standards and regulation for safe and responsible development of AI-based systems. This way, AI can be used to its full potential for the collective benefit of humanity."...electronicspecifier.com, 4d ago
The reported tension between Toner and Altman may smack of personal politics, but it is also a microcosm of a broader tension in the world of AI research as to the field’s goals and the best—or least dangerous—ways to get there. As I wrote recently in this newsletter, there are, broadly, two schools of thought when it comes to the potential dangers of AI research. One focuses on the risk that people will unwittingly give birth to an all-powerful artificial intelligence, with potentially catastrophic results for humanity. (Many believers in effective altruism fall into this camp.) Geoffrey Hinton, seen by many in the field as the godfather of modern AI research, said recently that he left Google specifically so that he could raise the alarm about the dangers of super-intelligent AI. Last month, President Biden issued an executive order in an attempt to set boundaries for the development of AI; this week, sixteen countries including the US agreed to abide by common research standards.Columbia Journalism Review, 4d ago
Are these conversations to truly discuss the risks and safety regarding AI or is there a larger agenda at play? It is no secret that both the U.S. and China are leveraging AI but to what extent may remain a mystery. Whether or not both nations come to a consensus on the application of AI, both must consider the consequences (good or bad) surrounding the use of AI. AI has proved to ease the burden of repetitive data processing tasks. However, the illuminated potential of AI extends beyond conventional applications, gaining notable attention in fields such as space, intelligence, and cyber. It is imperative to approach these technologies cautiously, as the unchecked deployment of AI may evolve into an influential and clandestine instrument, capable of exerting prompt and destructive effects on a global scale. The repercussions of such employment may remain undetectable until their manifestation reaches an irreversible magnitude, highlighting the critical necessity for strategic caution and preemptive measures in the utilization of AI across diverse sectors.Modern Diplomacy, 4d ago
There’s also the way we find love and romance. Already, AI dating tools are aping online dating, except the person on the other end isn’t a person at all. There’s already one company that has AI doing the early-stage we-met-on-Tinder thing of sending flirty texts and even sexy selfies, and (for a fee) sexting and segueing into being an online girlfriend / boyfriend. Will most people prefer the warm glow of a phone screen to an actual human? I don’t think so. But enough will to, I suspect, cause a lot of problems. Because while on the one hand it seems fine if under-socialized humans can find some affection with a robot, I question whether directing an under-socialized person’s already-limited social skills to a robot designed to always be amenable and positive and accommodating and compliant really serves that person, or anyone who has to be around that person. Interacting with other human beings can be challenging. That’s part of the point: It’s how we learn to regulate our own emotions and consider those of others; it’s how we start to discern which people are our people; it’s how we learn to compromise and negotiate and build layered and meaningful relationships. You can’t get that from AI. But what you can get is a facsimile of a human being who more or less does what will make you happy in the moment—which, again, is not at all a recipe to be happy in the aggregate.Ms. Magazine, 5d ago
...“However, while generative AI-powered LLMs are making life easier in numerous ways, we need to be acutely aware of their limitations. For a start, they’re not accurate: GPT-4 Turbo has the most up-to-date data since its inception, but still only contains world knowledge up to April 2023. These systems also hallucinate and have a clear tendency to deliver biased responses. The real concern with ChatGPT is the way these LLMs are presented. They give a ‘human-like’ interaction which inclines us to trust them more than we should. To stay safe navigating these models, we need to be much more skeptical with the data we are given. Employees need in-depth training to keep them up to date with the security risks posed by generative AI and also what its limitations are.”...TechRound, 5d ago
Another key to your data foundation is integrating data across your data sources for a more complete view of your business. Typically, connecting data across different data sources requires complex extract, transform, and load (ETL) pipelines, which can take hours—if not days—to build. These pipelines also have to be continuously maintained and can be brittle. AWS is investing in a zero-ETL future so you can quickly and easily connect and act on all your data, no matter where it lives. We’re delivering on this vision in a number of ways, including zero-ETL integrations between our most popular data stores. Earlier this year, we brought you our fully managed zero-ETL integration between Amazon Aurora MySQL-Compatible Edition and Amazon Redshift. Within seconds of data being written into Aurora, you can use Amazon Redshift to do near-real-time analytics and ML on petabytes of data. Woolworths, a pioneer in retail who helped build the retail model of today, was able to reduce development time for analysis of promotions and other events from 2 months to 1 day using the Aurora zero-ETL integration with Amazon Redshift.Zephyrnet, 4d ago
The regulation of general-purpose AI and foundation models continues to play a central role in the current trilogue negotiations between the Council, Parliament and the Commission and is the subject of controversial debate. Following the last political trilogue on October 24, 2023, an agreement on a tiered approach to the regulation of foundation models initially appeared to be on the cards. According to this, stricter obligations would apply in particular to the most powerful AI models with a greater impact on society. As a result, these would primarily affect leading – mostly non-European – AI providers. The Parliament thus abandoned its original plan to introduce horizontal rules for all foundation models without exception.Tech Policy Press, 5d ago
Similar to some previous sections, for security teams, this directive presents a unique challenge. It's not enough to ensure that the AI systems the company uses are secure; they must also be free from discriminatory bias. This is a complex task that goes beyond traditional security measures. It requires a solid understanding of how AI models work, how they can inadvertently produce discriminatory outcomes, and how to test for that bias. Many teams may need to hire — or work closely with — external data scientists to ensure that the AI models being used are being tested for bias.wiz.io, 12d ago
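One concrete way to "test for it" is to compare a model's selection rates across demographic groups. The sketch below computes a demographic-parity gap over hypothetical approval decisions; the group labels and numbers are invented for illustration:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs taken from a model's output."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Max difference in approval rate between any two groups (0 = parity)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: group A approved 80% of the time, group B 50%.
outcomes = [("A", True)] * 80 + [("A", False)] * 20 \
         + [("B", True)] * 50 + [("B", False)] * 50
```

Demographic parity is only one of several competing fairness criteria; an audit would typically examine others (such as equalized odds) as well, which is part of why data-science expertise is needed.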
This research investigates problem-solving, the foundations of computational insights, and the role of prior knowledge. It advocates for incorporating insights from cognitive science into concepts, representations, and self-explanation to create flexible AI mathematicians. The research also calls for improved collaboration tools and more opportunities for convening. By emphasizing a multi-disciplinary approach, it anticipates that AI systems will contribute to a better understanding of human mathematical cognition, highlighting the pivotal role of joint efforts across diverse fields.MarkTechPost, 5d ago
In doing so, the CAIS model introduces a set of alignment-related affordances that are not immediately apparent in traditional models that view general intelligence as a monolithic, black-box agent. In fact, not only does the AI-services model contribute to the developmental and operational robustness of complex AI systems, but it also facilitates their alignment with human values and ethical norms through models of human approval. Specifically, the CAIS model allows for the introduction of several safety mechanisms, including the use of optimisation pressure to regulate off-task capabilities as well as independent auditing and adversarial evaluations to validate each service's functionality. Furthermore, functional transparency and the monitoring of the communication channels can mitigate the inherent complexity and opaqueness of AI algorithms and components, while enabling resource access control policies that can further constrain undesirable behaviours.lesswrong.com, 17d ago
...“SMEs looking to ethically incorporate AI should seek to understand the problems they are attempting to solve to be sure that AI solutions are fit for purpose and can solve their challenges rather than add to them. In hiring for example, AI solutions should be proactive in identifying and addressing potential biases, acknowledging their substantial impact on individuals and communities. Clarity on AI algorithm development and functionality is crucial with an emphasis on the need to train algorithms on diverse, representative data to reduce bias.Dynamic Business, 12d ago
This is where the uniquely human skill to think critically becomes indispensable. Logical reasoning enables us to dissect AI outputs, identifying potential flaws or inconsistencies. Reflective thinking encourages employees to consider the broader implications and contexts of the information presented to them. Rational thought allows us to weigh the evidence, discerning between the relevant and the extraneous. Unbiased evaluation ensures that we remain vigilant to potential biases, both from the AI and from our own preconceptions. Employees cannot afford to be passive recipients of generative AI output. They must become active evaluators, synthesizers, and decision-makers. An employee’s ability to critically assess, challenge, and refine AI outputs will determine the success of the human-AI collaboration.Fast Company, 17d ago
Limited expert knowledge: AI algorithms may have limitations in accessing and processing expert knowledge in specific subject areas, leading to potential gaps or inaccuracies in the training content. Without a comprehensive understanding of the expertise and insights provided by human trainers, AI tools may struggle to offer in-depth and accurate training content. It’s crucial to acknowledge the limitations of AI in terms of accessing expert knowledge and to involve human experts in the curation and validation of training content. By combining the expertise of human trainers with the capabilities of AI, organizations can ensure that the training materials are up-to-date and well-informed.Training Industry, 28d ago
Artificial intelligence has revolutionized tourist decision-making by shifting the focus from price considerations to personalized alternatives. Tourists can now choose destinations, places, and activities that best suit their preferences, thanks to AI’s implementation of personalization techniques and recommender systems. These systems leverage the vast quantity of information available on the internet, including User-Generated Content (UGC), to provide more tailored and informed experiences. Travel assistants that leverage advancements in artificial intelligence, mobile devices, natural language processing, and speech recognition have become increasingly popular. These applications are designed to cater to user preferences, interests, and availability, offering on-demand or autonomous suggestions that proactively anticipate users’ needs and enhance the travel experience through personalized and intuitive assistance. ServiceNow leverages generative AI to provide relevant, direct and conversational responses, seamlessly connecting interactions to digital workflows across the Now Platform. For example, when users inquire through Now Assist for Virtual Agent, generative AI quickly provides concise answers, supplying information such as internal codes for product and engineering teams, product media, document links, or relevant knowledge base article summaries. This ensures accurate conversations across departments and systems, improving productivity, boosting self-solve rates, and expediting issue resolution within ServiceNow. In today’s technology-driven era, the increasing AI footprint in the hospitality industry is a positive development.DATAQUEST, 5d ago
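As a toy illustration of the content-based recommendation idea mentioned above, a recommender can rank destinations by how well their tags overlap with a traveller's stated interests. The destination names and tags below are invented for the example:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between a traveller's interests and a destination's tags."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(interests: set, destinations: dict, top_n: int = 2) -> list:
    """Rank destinations by tag overlap with the stated interests."""
    ranked = sorted(destinations.items(),
                    key=lambda kv: jaccard(interests, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Hypothetical catalogue; real systems mine such tags from UGC at scale.
destinations = {
    "Kyoto":      {"culture", "food", "temples"},
    "Queenstown": {"adventure", "hiking", "scenery"},
    "Lisbon":     {"food", "beaches", "culture"},
}
picks = recommend({"food", "culture"}, destinations)
```

Production recommenders combine many more signals (collaborative filtering, context, availability), but the core mechanic of matching a preference profile against item attributes is the same.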
BUOLAMWINI: I'm concerned with the way in which AI systems can kill us slowly already. I'm also concerned with things like lethal, autonomous weapons, as well. So for me, you don't need to have superintelligent AI systems or advanced robotics to have a real harm. A self-driving car that doesn't see you on the road can be fatal and harmful. I think of this notion of structural violence where we think of acute violence - there's the gun, the bullet, the bomb, we see that type of violence. But what's the violence of not having access to adequate health care? What's the violence of not having housing in an environment free of pollution? And so when I think about the ways in which AI systems are used to determine who has access to health care and insurance, who gets a particular organ, you know, in my mind, there are already - and in, also, the world, we see there are already many ways in which the integration of AI systems lead to real and immediate harms. We don't have to have superintelligent beings for that.NPR, 5d ago
Consent, privacy, and responsible AI use are only a few of the concerns that should be included in ethical considerations. Developers and researchers ought to follow guidelines that put the rights and welfare of the people whose likenesses are being replicated first. Collaborating across the industry can help exchange best practises and forge a shared commitment to the responsible development and application of deepfake technology.MarTech Series, 5d ago
In conclusion, Gaia’s benchmark for evaluating General AI Assistants on real-world questions has shown that humans outperform GPT-4 with plugins. It highlights the need for AI systems to exhibit robustness similar to humans on conceptually simple yet complex questions. The benchmark methodology’s simplicity, non-gameability, and interpretability make it an efficient tool for achieving Artificial General Intelligence. Furthermore, the release of annotated questions and a leaderboard aims to address open-ended generation evaluation challenges in NLP and beyond.MarkTechPost, 5d ago
The emergence of generative AI has introduced further opportunities to apply AI to security priorities. Security operations (SecOps) is a particularly fertile ground for innovation. Since attackers seek to evade detection, security analysts must correlate evidence of suspicious activity across a staggering volume of inputs. They must quickly prioritize identifiable threats in this data for response, making the constantly shifting playing field between attacker and defender a race against not only innovation but time, given that attacks can have an impact within minutes. Security analytics and SecOps tools are purpose-built to enable security teams to detect and respond to threats with greater agility, but the ability of generative AI to comb through such volumes of data, extract valuable insight, and present it in easily consumable human terms should help alleviate this load. Early applications of generative AI in this context show promise for enabling analysts — often limited in number relative to the challenges they face — to spend less time on data collection, correlation and triage, and to focus instead where they can be most effective. Generative AI can also be useful in finding and presenting relevant insights to less experienced analysts, helping them build expertise as they grow in the field (thus augmenting their productivity, rather than replacing them) — an option that could prove useful in helping organizations counter the enduring challenges of sourcing and retaining cybersecurity skills.spglobal.com, 5d ago
Unlike conventional systems, in which data is centralised and exploited in a limited number of powerful servers, edge computing offers data storage and processing that is as close as possible to where the intelligence was produced, thereby reducing the circulation of large masses of information. While these connected objects are much less powerful than data centres, they are most importantly less costly and less energy-intensive. They of course cannot be used to train complex AI models, but they can run algorithms that are already operational.CNRS News, 5d ago
Where does that leave the EU’s ambition to set the global rule book for AI? In this column, based on our recent paper (Kretschmer et al. 2023), we explain the complex “risk hierarchy” that pervades the proposed AI Act (European Commission 2023), currently in the final stages of trilogue negotiation (European Parliament 2023). This contrasts with the US focus on “national security risks”, which appears to be the area where there are existing federal executive powers that can compel AI companies (Federal Register 2023). We point out shortcomings of the EU approach that requires comprehensive risk assessment (ex ante), at the level of technology development. Using economic analysis, we distinguish exogenous and endogenous sources of potential AI harm arising from input data. We propose that from the perspective of encouraging ongoing innovation, (ex post) liability rules can provide the right incentives to improve data quality and AI safety.CEPR, 12d ago
In February 2023, South Korea’s National Assembly proposed the Act Fostering the AI Industry and Establishing a Foundation for Trustworthy AI (text in Korean). This Act has two main goals: (1) fostering the growth of AI technology; and (2) protecting data and privacy right to build trust in AI. In an effort to help grow the AI industry, the Act allows for the development of AI technologies without government pre-approval. Yet, for certain “high-risk AI,” those that directly affect privacy and safety, the Act establishes ethical principles that developers must follow.natlawreview.com, 24d ago
In this TNGlobal Q&A with Gaurav Keerthi, Head of Advisory and Emerging Business at Ensign InfoSecurity, we delve into the rapidly evolving landscape of Generative AI (GenAI) in the context of cybersecurity challenges and opportunities. With a unique perspective shaped by his experiences in both public and private sectors, Keerthi provides a nuanced understanding of GenAI’s dual role in cybersecurity. He highlights its transformative potential for enhancing defensive capabilities while simultaneously acknowledging the escalating threats posed by its misuse by attackers. Keerthi underscores the importance of understanding the technology at a fundamental level, including its inherent risks, to effectively harness its benefits and mitigate its dangers.TNGlobal, 10d ago
The surge in AI capabilities has resulted in an inundation of information and vendor propositions. This influx, while reflective of the burgeoning potential, shows just how critical it is to take a measured approach. Making informed decisions becomes all-important in understanding opportunities and charting a course that aligns with your long-term vision. Agencies must carry out rigorous assessments to determine the most beneficial and ethical applications of AI, weighing up short-term gains against the sustainable impact on society.The Mandarin, 5d ago
VIAVI’s AI-driven systems are a critical part of fortifying networks against the threat of natural disaster. Using our AI models, VIAVI can predict how natural disasters might disrupt communications. Using this data, network operators can both design more resilient networks and automatically adjust the network during disasters to maintain critical communications. This proactive approach ensures that emergency services remain operational, and families stay connected during crises, exemplifying how AI can help ensure we all have a lifeline in our most challenging times. Moreover, even after the disaster has passed, network providers often look for intelligent and automated prioritization triage to quickly get their networks back online. The AI-powered network monitoring and root cause analysis functions we offer can be instrumental in speeding repairs and reducing the need for extensive manual labor when restoring critical services. We can provide specific guidance about which particular parts of the networks should be fixed – and in what order – to restore connectivity as quickly as possible. This capability can be essential in preserving human lives and mitigating the repercussions of disasters on businesses and the environment.VIAVI Perspectives, 5d ago
If generative AI enables corporate grassroots innovation, it gives fraudsters a powerful tool to carry out more sophisticated attacks more rapidly. This reality makes the need to protect corporate and consumer data an absolute requirement. Companies should strategically and seamlessly assemble a variety of detection capabilities that include authenticated identity data, device-risking, email and mobile risking, behavioral biometrics, document verification and other risk signals to spot early indicators of potential fraud instantly.SiliconANGLE, 5d ago
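A crude sketch of how such detection capabilities might be assembled into a single triage decision follows; the signal names, weights, and thresholds are all hypothetical and stand in for the device-risking, biometric, and identity signals named above:

```python
# Hypothetical signal weights -- illustrative, not a production fraud model.
SIGNAL_WEIGHTS = {
    "new_device": 0.25,                    # device-risking
    "email_age_days_under_30": 0.20,       # email risking
    "behavioral_biometrics_anomaly": 0.30, # behavioral biometrics
    "document_verification_failed": 0.35,  # document verification
    "identity_data_mismatch": 0.30,        # authenticated identity data
}

def fraud_risk_score(signals: dict) -> float:
    """Sum the weights of triggered signals, capped at 1.0."""
    return min(1.0, sum(w for k, w in SIGNAL_WEIGHTS.items() if signals.get(k)))

def triage(signals: dict) -> str:
    """Map a combined risk score to an action tier."""
    score = fraud_risk_score(signals)
    if score >= 0.6:
        return "block"
    if score >= 0.3:
        return "step-up-authentication"
    return "allow"
```

The value of layering signals is visible even in this toy version: no single indicator blocks a transaction, but several weak indicators together escalate it, which is how early indicators of fraud can be acted on "instantly" without rejecting legitimate customers.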
One of the central answers to that latter question is practical training. Notably, the media organisations that have had the most successful implementations of AI so far have robust codes of practice in place detailing its usage and limitations. Whether it is by writing more effective prompts that shorten the iteration cycle or understanding the technology’s limitations and how and where it can most appropriately assist in the newsroom, it is essential to have a detailed AI strategy rather than a succession of ad hoc responses.TVBEurope, 6d ago
The guidelines provide essential recommendations for AI system development and emphasize the importance of adhering to Secure-by-Design principles that CISA has long championed. It is imperative for stakeholders, including data scientists, developers, managers, decision-makers, and risk owners, to thoroughly review these guidelines. Doing so will empower them to make well-informed decisions regarding the design, deployment, and operation of their machine learning AI systems.Industrial Cyber, 6d ago
Overall, assessing the long-term impact of AI on educational systems, teaching methodologies, and cognitive and social development of learners is imperative. Continuous research and evaluation are necessary to ensure that the implementation of AI in education is beneficial, sustainable, and aligned with the holistic development of learners. Addressing these challenges requires a collaborative effort among educators, policymakers, technologists, and communities. As we navigate through these complexities, the potential to create a more personalized, engaging, and effective educational experience through Generative AI remains an inspiring objective. By fostering open discussions and collaborative solutions, we can work towards harnessing the full potential of Generative AI in education, making learning a more enriching and accessible venture for all.CXOToday.com, 6d ago
Social and emotional intelligence are also essential components of high-quality care, enabling staff to empathise, communicate effectively and meet patient needs. Analysis by the Office for National Statistics (ONS) in 2019 – which found that medical practitioners were one of the three occupations at lowest risk of automation – noted that health-related words such as ‘patient’ and ‘treatment’ frequently appeared in the task descriptions of jobs at low risk of automation. The ONS suggested this reflects the dimension of ‘working with people’ and ‘the value added by humans in these roles, which is difficult to computerise’. Again, emerging research indicates AI could support empathetic communication – for instance, by generating draft responses to patient questions – but this is different to being empathetic, which requires the ability to read and understand the feelings of other people, and to express and reason with emotion.The Health Foundation, 20d ago
Legal expert systems, like ROSS Intelligence, and AI in medicine showcase AI’s ability to mimic human judgment in specific domains. They represent narrow AI’s evolution towards AGI. As these systems can analyse large amounts of data and provide precise responses, they serve as early indicators of AGI’s potential for decision-making in complex, real-world scenarios.TechRound, 20d ago
It’s not enough to simply put AI tools out into the world and watch them work. It can be particularly important to understand the decision-making process with certain AI applications. In some cases, it can be difficult to understand why certain AI tools came to conclusions. This can have sizeable implications, especially in industries such as healthcare or law enforcement where influencing factors must be considered, and real human lives are at stake.The ChannelPro Network, 17d ago
Finding patterns of any sort, in everything from crime to waste, fraud to abuse, occurs infrequently and often involves legions of inspectors. Regulators take months to painstakingly look through compliance forms, unable to process a request based on its own distinctive characteristics. Field workers equipped with AI could quickly access the information they need to make a judgment about the cause of a problem or offer a solution to help residents seeking assistance. These new technologies allow workers to quickly review massive amounts of data that are already in city government and find patterns, make predictions, and identify norms in response to well framed inquiries.Fast Company, 17d ago
Finally, in the year since Megathreats appeared, AI has become an even bigger topic, owing to the public release of generative AI platforms like ChatGPT. I had originally predicted that deep-learning architectures (“transformer networks”) would revolutionise AI, and that does seem to be what has happened. The potential benefits – and pitfalls – of generative AI are profound, and they are becoming increasingly clear. On the positive side, productivity growth could be sharply increased, vastly enlarging the economic pie; but, as was true of the first digital revolution and the creation of the internet and its applications, it will take time for such gains to emerge and achieve scale.interest.co.nz, 9d ago
...⦁ DATA: Organisations cannot neglect the importance of having data ‘AI-ready’. While data serves as the backbone needed for AI operations, it is also the area where readiness is the weakest, with the greatest number of Laggards (9%) compared to other pillars. 73% of all respondents claim some degree of siloed or fragmented data in their organisation. This poses a critical challenge, as the complexity of integrating data that resides in various sources and making it available for AI applications can limit the ability to leverage the full potential of those applications.CRN - India, 18d ago
AI has exciting potential to revolutionise the world and bring vast benefits to sectors such as healthcare, including more efficient diagnosis of diseases and new (de novo) drug development. However, there are also challenges: data privacy, since AI models rely upon large sets of data to train and develop models for commercial usage, and dataset imbalance, which can introduce bias that in turn leads to unfair and unreasonable outcomes.bbntimes.com, 6d ago
AI systems reproduce the biases that are fed into them. There are well-documented episodes of AI systems — in hiring, for example — reproducing existing biases based on the datasets and models they deploy.The Financial Brand, 6d ago
At its core, the essence of transparency policies places the responsibility for alleviating (ethical) harm on the individual.[15] Going back to the examples above, it is the individual who is expected to recalibrate their buying habits, change consumption choices, or even stop their habits in response to energy labels, sugar and calorie content information, and alerts about smoking risks. A similar assumption holds for AI transparency policies. Here, it is the user who engages with AI systems, such as LLMs, who is responsible for adapting their behavior once they know they interact with AI. Take popular AI language models like ChatGPT or Google’s Bard. Such models come with a warning about the AI producing potentially “inaccurate or offensive information,” assuming that users will factor it into their decision-making. Yet, our findings show that such disclaimers will likely not help when someone is motivated to leverage AI output to further their self-interests.psychologytoday.com, 6d ago

The guidelines are a curious collection of common sense suggestions, reiterating long-held general security precepts — such as managing the accumulated technical debt during a system’s lifecycle — and fleshing out current best practices for developing AI-based systems. “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy,” said Secretary of Homeland Security Alejandro Mayorkas, who also called it a “historic agreement.” I wouldn’t go that far, but it is still a useful read.SiliconANGLE, 6d ago
As new products — and new promises — hit the market, CISOs should always mind their gaps. In terms of resources, capabilities, and vulnerabilities, what’s currently missing from the SOC? Can AI credibly fill these voids, especially for repetitive tasks? Vendors and their products must be vetted conscientiously to guarantee that they’re adding real value. Be wary of AI tools that make sweeping, general proclamations, and focus on those that solve genuine, specific problems; and be wary of companies that lead with AI, rather than focusing on cybersecurity and the practical application of AI to potential use cases.gbiimpact.com, 6d ago
I believe that a conceptually similar scale should be developed for AI threats. Although we have no means of quantifying the probability with which any of the hypothesized threats could end up manifesting themselves, nor the damage they would cause to humanity (or to smaller scale environments and systems), it is still a useful exercise to paint a qualitative map where the perceived or assessed likelihood of outcomes is on the horizontal axis, and on the vertical axis is the severity of the outcome. This may help us start a discussion on the hierarchy of those threats, which would guide us toward paying more attention and studies to the ones which maximize the product of likelihood and severity.Science 2.0, 7d ago
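The likelihood-times-severity prioritisation suggested above can be sketched in a few lines of Python. The threat names and 1–5 scores below are invented placeholders for illustration, not actual assessments:

```python
# Toy sketch of a qualitative AI-threat map: each threat gets a
# likelihood and a severity score on a 1-5 scale, and threats are
# prioritised by the product of the two, as proposed in the text.
# All names and scores here are hypothetical examples.

threats = {
    "large-scale disinformation": (4, 3),      # (likelihood, severity)
    "critical-infrastructure misuse": (2, 5),
    "biased automated decisions": (5, 2),
}

# Rank by likelihood * severity, highest first.
ranked = sorted(threats.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (likelihood, severity) in ranked:
    print(f"{name}: risk score {likelihood * severity}")
```

Even with made-up numbers, such a ranking makes the hierarchy of threats explicit and gives a starting point for discussing where attention and study should go first.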
The Q* model could indeed mark a pivotal moment in the pursuit of AGI. However, it’s clear that as AI systems grow more sophisticated, the governance of such technology must evolve in tandem. The balance between innovation and safety, the ethical use of AI, and the societal implications of AGI will continue to be pressing issues as we step into this uncharted territory. The Q* model is not yet available to the public, and OpenAI has not yet made any official announcements regarding it.IO, 7d ago
Despite possessing some inherent security risks, AI systems are built to be as accurate as possible when making predictions or categorisations. However, prioritising accuracy means that fairness and inclusion often get overlooked in development. For instance, algorithms trained to scan job applications and CVs in bulk may rank male applicants as more qualified, a prediction that is ‘accurate’ with respect to biased historical hiring data but unfair in practice.Women in Technology, 7d ago
This is all well and good for digital drawing or painting disciplines, but it can fall apart beyond that. Let’s say you’re a photographer. How do you prove that an image you submit is your own? Footage of you holding a camera and pressing the shutter doesn’t really go a long way in that regard. For matters like these, more advanced techniques may be required. Tools could theoretically be developed to look for telltale signatures at the pixel level that reveal a particular AI image generator was used, but by that point, you’re getting way off into the weeds. Suddenly a black box is in charge of determining whose images are legitimate, and whose aren’t, and there’s always the potential for false positives or false negatives to ruin somebody’s day.Hackaday, 7d ago

The press release explains that organised criminals move ‘scammed’ funds through a series of ‘mule’ accounts to disguise them. For the past five years, Mastercard has been helping banks counter this by helping them follow the flow of funds through these accounts, and then close them down. Now, by overlaying insights from this tracing activity with specific analysis factors such as account names, payment values, payer and payee history, and the payee’s links to accounts associated with scams, the new AI-based Consumer Fraud Risk tool provides banks with the intelligence necessary to intervene in real time and stop a payment before funds are lost. UK banks, such as TSB, which were early adopters of Mastercard’s tool, claim that the benefits are already visible. Mastercard will soon be offering the service in other geographies, including India.Electronics For You, 7d ago
AI will undoubtedly continue to make headlines, not just as a new concept, but through real-life examples and applications of how it's being used, showcasing both its positive and negative impacts. Even in 2023, we're seeing AI being used by the bad guys to generate more effective and elusive phishing emails, as well as generating zero-day attacks. I believe 2024 will see the proliferation of second and third-generation AI-based security tools that can defend and counter-attack AI-based attacks in real time. We could start to see the AI version of Battle Bots, with organisations as the arenas in the combat zone.IT Brief Australia, 7d ago
AI’s predictive maintenance, supply chain optimization and waste sorting minimize waste creation while maximizing resource utilization. These AI algorithms use data extracted from sensors and historical performance to predict potential equipment failures. Thanks to this approach, the equipment can be used for a longer duration, thus leading to fewer premature replacements. This approach is perfectly in line with the durability and longevity principles of the circular economy.CXOToday.com, 7d ago
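The predictive-maintenance idea above can be loosely illustrated with a minimal baseline check: flag equipment for inspection when a sensor reading drifts far from its historical range. This is a sketch only; the readings and z-score threshold are assumptions, and real systems use trained models rather than a fixed rule:

```python
# Minimal sketch of failure-prediction from sensor history:
# flag a reading that sits more than z_threshold standard deviations
# away from the historical mean. Illustrative data and threshold only.
from statistics import mean, stdev

def needs_inspection(history, latest, z_threshold=3.0):
    """Return True if `latest` deviates more than z_threshold
    standard deviations from the historical readings."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical vibration-sensor readings for one machine.
vibration_history = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50]
print(needs_inspection(vibration_history, 0.95))  # far outside the normal range
print(needs_inspection(vibration_history, 0.51))  # within the normal range
```

Catching the drift early is what allows inspection and repair before an outright failure, extending equipment life in the spirit of the circular-economy argument above.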
This is why we were delighted to be able to run the workshop ‘Co-creating Better Images of AI’ during London Data Week. It was a chance to bring together over 50 members of the public, including creative artists, technologists, and local government representatives to each make our own images of AI. Most images of AI that appear online and in the newspapers are copied directly from existing stock image libraries. This workshop set out to see what would happen when we created new images from scratch. We experimented with creative drawing techniques and collaborative dialogues to create images. Participants’ amazing imaginations and expertise went into a melting-pot which produced an array of outputs. This blogpost reports on a selection of the visual and conceptual takeaways! I offer this account as a personal recollection of the workshop—I can only hope to capture some of the main themes and moments, and I apologise for all that I have left out.aihub.org, 7d ago
It's been interesting that the focus, how it's been stated as sort of broad AI models as agents, as tools, but it's perhaps unsurprising that even though large language models weren't called up specifically in the remit, actually that's where the attention has been. I think that the areas which have been particularly challenging are around control, if that's the right word, of data, in terms of knowing that these models, unlike more narrow applications of AI in medicine, have been generated on enormous data sets that aren't on a traditional consent, opt-in type model. There's been really interesting conversations, thoughtful analysis on what this means for patient autonomy. What does the right to be forgotten, if we look at this in a more sort of GDPR context, look like in the context of a large language model? I think also what we'll be trying to hold on to is the opportunity to improve patient care. And actually a really strong theme has been around patients as the leaders. This is not something that's being done to patients or on behalf of patients, this is all of us together as that wider public, as humanity, with different roles, some as patients, some as carers, some as health practitioners, some as engineers, trying to find a way together.The Health Foundation, 7d ago
Recent efforts to mimic Biosafety Levels in AI with a typology define the highest risks of AI as “speculative”. The fact that “speculative” doesn’t outright say “maximally dangerous” or “existentially dangerous” also points to “vibes-based” models. The whole point of Biosafety Levels is to define containment procedures for dangerous research. The most dangerous level should be the most serious and concrete one: the risks so obvious that we should work hard to prevent them from coming into existence. As it currently stands, “speculative” means that we are not actively optimizing to reduce these risks, but are instead waltzing towards them based on the off-chance that things might go fine by themselves.alignmentforum.org, 7d ago

Many CIOs/CTOs have had their own reservations about embarking on modernization initiatives due to a multitude of challenges called out at the start: the amount of SME time needed, impact to the business due to change, operating model change across security, change management, and many other areas. While Generative AI is not a silver bullet to solve all the problems, it helps the program through acceleration, reduction in the cost of modernization and, more significantly, de-risking by ensuring no existing functionality is missed. However, one needs to understand that it takes time and effort to bring LLM models and libraries to enterprise environments, with significant security and compliance evaluations and scanning. It also requires some focused effort to improve the quality of the data needed for tuning the models. While cohesive Generative AI-driven modernization accelerators are not yet available, with time we will start to see the emergence of integrated toolkits that help accelerate certain modernization patterns, if not many.BlaQue Crypto News, 8d ago
The governor’s office calls for a balanced approach to AI, recognizing the transformative impact of this technology and the need to address safety concerns. This stance reflects a broader debate in the tech community, where opinions on AI range from warnings about over-reliance on automation to optimistic views on its potential to address global challenges like climate change and disease.BitcoinEthereumNews.com, 8d ago
I’ve been personally a bit concerned about the way that policy has been heading with the AI Executive Order. You see in the document some really profound ideas that I really stand behind. AI needs to work, it needs to not discriminate, we need to be testing and auditing these technologies before deploying them in sensitive contexts. And then, on top of that, you see stapled on these really intense existential risk driven concerns and policy that is now written in the force of law. And I think that this artifact illustrates how much policymakers have been heavily leaning on some of the companies that are developing these technologies, and OpenAI in particular, to advise them on how to regulate these things, and that is not great. Of course, policy makers should be talking to these companies, but they should not be relying on them as much as I think they have, based on the policy documents that we’ve been seeing coming out of these consultation processes.Tech Policy Press, 8d ago

Top

What is potentially most challenging in recruiting “AI talent” is identifying the actual skills, capacities, and expertise needed to implement the EO’s many angles. While there is a need, of course, for technological talent, much of what the EO calls for, particularly in the area of protecting rights and ensuring safety, requires interdisciplinary expertise. What the EO requires is the creation of new knowledge about how to govern—indeed, what the role of government is in an increasingly data-centric and AI-mediated environment. These are questions for teams with a sociotechnical lens, requiring expertise in a range of disciplines, including legal scholarship, the social and behavioral sciences, computer and data science, and often, specific field knowledge—health and human services, the criminal legal system, financial markets and consumer financial protection, and so on. Such skills will especially be key for the second pillar of the administration’s talent surge—the growth in regulatory and enforcement capacity needed to keep watch over the powerful AI companies. It’s also critical to ensure that these teams are built with attention to equity at the center. Given the broad empirical base that demonstrates the disproportionate harms of AI systems to historically marginalized groups, and the President’s declared commitment to advancing racial equity across the federal government, equity in both hiring and as a focus of implementation must be a top priority of all aspects of EO implementation.Brookings, 18d ago
Of course, nobody has yet laid out specifically what financial information is actually deemed to be decision-useful in order to establish which data must be converted to a new format; that’s an unresolved first-order problem that leaves open the risk of unwarranted burdens on local governments during the initial implementation phase. All the oversight boards need to tread carefully. What proponents of the FDTA had in mind when they lobbied Congress, standardization on the extensible financial reporting language platform that has become commonplace in the private sector, was only a first-stage rocket in this new space race. The federal legislation did not give XBRL a monopoly per se, specifying only the use of “structured” data formats. Clearly, what most parties in the legislative process could never have anticipated last year was the possibility that existing financial reports using generally accepted accounting terminology may themselves already be computer-readable because of the new large language machine learning models that can read plain English typeset produced by word-processing software as well as alphanumeric images contained in the commonly used PDF documents that typically encapsulate governments’ audited annual financial reports. All of a sudden, “structured data” may ultimately prove to be little more than what we already have in place with conventional text documents that can be ingested by new AI systems with superior analytics already integrated with database utilities, without costly data entry hurdles.Governing, 20d ago
In an exclusive conversation with the Business Post, Calleary delves into the intricacies of this transformative journey, shedding light on the challenges and opportunities presented by AI. His insights underscore the critical role of the upcoming EU laws, providing a regulatory framework that not only addresses the risks associated with AI but also aims to harness its potential for the betterment of society and the economy.BitcoinEthereumNews.com, 29d ago
Indeed, consultant and research reports show that employers are overwhelmingly adopting generative AI as fast as they can, and pushing its development along. Does anyone even remember the letter signed by 1,100 tech leaders asking for a six-month moratorium on the training of AI more powerful than ChatGPT-4? That was six months ago and, meanwhile, ChatGPT-4+ and other AI products have amped up their game with visual and audio capabilities of great interest to industries from security, advertising and medicine to counselling, music and art.Times Higher Education (THE), 11d ago
2. Provide context to simplify risk analysis and compliance reporting — Complying with a complex array of regulations can be a significant challenge for compliance and audit teams, who often lack clear guidance on how to address risks. Frequently, the processes identified for validating controls are also inconsistent, further complicating the process. But while enormous quantities of data are time-consuming (and boring) for humans to analyze and process, properly trained AI systems can automatically analyze vast quantities of risk data to provide context and identify patterns and trends. An AI solution makes it simpler for compliance and audit teams to evaluate risks and controls and generate guidance and remediation recommendations.securitymagazine.com, 17d ago

The field of eXplainable AI (XAI) examines how machine learning models can be made more understandable to people. For example, XAI projects have focused on creating human-understandable explanations of why an AI system made a particular medical diagnosis, how the AI models in an autonomous vehicle work, and what data an AI system uses to generate insights about consumer behavior. However, current XAI research is predominantly focused on functional and task-oriented domains, such as financial modeling, so it is difficult to apply XAI techniques to artistic uses of AI directly. Moreover, in the Arts, there is typically no “right answer” or correct set of outputs that we are trying to train the AI to arrive at. In the Arts, we are often interested in surprising, delightful, or confusing outcomes, as these can spark creative engagement.Montreal AI Ethics Institute, 8d ago
Note that this is only tangentially a test of the relevant ability; very little of the content of what-is-worth-optimizing-for occurs in Yudkowsky/Beckstead/Christiano-style indirection. Rather, coming up with those sorts of ideas is a response to glimpsing the difficulty of naming that-which-is-worth-optimizing-for directly and realizing that indirection is needed. An AI being able to generate that argument without following in the footsteps of others who have already generated it would be at least some evidence of the AI being able to think relatively deep and novel thoughts on the topic.alignmentforum.org, 9d ago
The use of predictive analytics powered by AI allows financial institutions to anticipate customer needs. AI can offer personalized recommendations for financial products and investment opportunities by analyzing historical data and predicting future trends, enhancing the overall customer experience. In addition, AI-driven algorithms are increasingly being used to create customized investments. AI can tailor investments to meet the specific needs of individual investors, consistent with their financial goals, by considering factors such as risk tolerance, investment objectives, and market conditions. Further, in the insurance sector, AI contributes to personalized offerings by assessing individual risk profiles. AI can customize insurance policies to provide optimal coverage while adjusting premiums based on individual risk assessments by analyzing factors such as lifestyle, health data, and historical insurance claims. Thus, these factors drive the growth of the artificial intelligence in BFSI market.Allied Market Research, 9d ago
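The risk-based premium adjustment described above can be sketched as a base premium scaled by a weighted risk score. The factors, weights, and numbers here are invented for illustration; real insurers use far richer actuarial and machine-learning models:

```python
# Toy sketch of risk-based premium personalization: a base premium is
# scaled up by a weighted sum of normalised risk factors in [0, 1].
# All factor names, weights, and values below are hypothetical.

def adjusted_premium(base, risk_factors, weights):
    # Weighted risk score; each factor contributes weights[k] * value.
    score = sum(weights[k] * risk_factors[k] for k in weights)
    return round(base * (1 + score), 2)

# Example profile: heavy weighting on smoking, lighter on age and claims.
profile = {"smoker": 1.0, "age_band": 0.4, "claims_history": 0.2}
weights = {"smoker": 0.30, "age_band": 0.15, "claims_history": 0.25}
print(adjusted_premium(500.0, profile, weights))
```

Even this crude version shows the mechanism the article describes: the same coverage priced differently per individual, driven by data about their risk profile.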
Adding to the conversation, Cesar Gon, the founder and CEO of CI&T, added that the speed of change and the broad nature of AI’s impact is what really sets it apart. Unlike some previous technological disruptions, AI has the potential to disrupt every industry and aspect of human life.Grit Daily News, 9d ago
AI systems should reliably operate in accordance with their intended purpose. A good way for companies to prevent harm is to conduct pilot studies with intended users in safe spaces before technology is unleashed on the public. This helps to avoid situations like the infamous chatbot Tay. Tay ended up generating racist and sexist hate speech due to an unforeseen and therefore untested vulnerability in the system.techxplore.com, 9d ago
Of further concern is that the use of certain technologies risks undermining the core principles of dispute resolution, such as due process and fairness. For instance, our findings show that it is felt in practice that the risk of manipulation increases when electronic means are involved in relation to evidence – especially with regards to AI. Another issue revolves around equal access; are virtual hearings fair if one party is not digitally literate or does not have access to a stable Internet connection? Or if an international dispute hearing is conducted in an unfavourable timezone for one party? These problems require further discussion.The Singapore Law Gazette, 9d ago

AI’s just going to be everywhere. And I think we’ll stop realising that we’re using it. One prospect that now becomes real (and we saw this with protein structure prediction already, which will translate to other fields) is to conduct in silico research: you can ask AI models questions rather than doing the actual experiment. This will help to speed up research and will obviously completely change the scientific process. The turnaround time will be much faster because you can get answers to complex questions quickly, without conducting an experiment. Innovation in science will then depend on our ability to formulate well-defined questions that are suitable for AI systems and to correctly judge the answers from these models – what they know and where there are gaps in their predictions. This is a big challenge right now, but I think it will be solvable. But even if AI speeds up research, I very much believe that experiments will remain at the heart of research, because we need to confirm the predictions that AI gives us, and use the knowledge gained from AI to push scientific discovery forward. Ultimately, we will have more informed hypotheses to start from.EMBL, 12d ago
This collaboration enables developing countries to create a strong and inclusive AI innovation ecosystem, where technology is not only seen as a tool but also as a means to solve local problems and improve the quality of life. Synergy between these sectors is crucial to ensure that AI development is not only technologically advanced but also brings equitable benefits to all layers of society.Modern Diplomacy, 15d ago
Nathalie: As we know, AI crosses boundaries. You can’t have a boundary around AI, so in order to be successful it’s important to be collaborative. I think that the issues that we face as companies, as a society and even globally can be helped through a joined up approach to AI deployment. This will help us address business challenges as well as larger issues such as climate change. No single body can provide all of the answers or all of the strategy, so we have to work together. The FCA collaborates with organisations both inside and outside of financial services, as well as chairing the Global Financial Innovation Network, a collaboration between 80 regulators which is currently discussing AI, cryptocurrency, ESG and other key areas of interest.bcs.org, 12d ago