Latest

new The idea that we live in a permanent AI revolution means that companies’ transformation efforts are most likely to succeed when designed with a dual intent: successful adoption of mature technologies and readiness for accelerated experimentation with inchoate ones. Since companies continue to learn at a slower rate than technology advances, success will largely hinge on a business’s relative rate of learning—which, in turn, depends on its ability to become an early adopter of the foreseeable technologies on the horizon. Today, for companies working on the adoption of stand-alone LLMs, this challenge takes the form of shaping their LLM-based transformation plans with an eye towards the arrival of what’s coming next—autonomous agents.Fortune, 8h ago
new To come to grips with those questions, start with Anthropic. AWS bought into the startup, taking a minority stake with both sides able to raise the investment to as much as $4 billion, for two reasons. The obvious reason is that Anthropic will make AWS its primary cloud provider—bringing with it Claude, its foundation model (a vast digital network like the one that supports ChatGPT), and the AI assistant by the same name that comes with it. (Anthropic introduced Claude in March; Claude 2 debuted in July, and a faster, cheaper version called Claude Instant 1.2 was released in August.) Future versions of Claude will be available to AWS customers through a service called Bedrock, which offers access to many foundation models. AWS has built or acquired access to several such models, and it argues that its cybersecurity and range of services for those models exceed what OpenAI currently offers. Still, none of AWS’s models has shown the range or efficacy of OpenAI’s GPT-4—hence the “falling behind” narrative.Fortune, 7h ago
new An organisation is the sum of years of investments and decisions. Each has evolved differently, with its own combination and configuration of technology systems that drive processes and enable it to operate. Through early experiments, many organisations have reached the same conclusion: that it’s not trivial to assemble the components of a Conversational AI that can interface with some or all of these different customised systems. It is hard to find an out-of-the-box solution that can deal with the variation.CFOtech Australia, 8h ago
new Breaking down silos is only possible when leaders focus on cross-functional collaboration, particularly in response to organisational change. Building a strong, collective approach to people development is a key way to make this a reality – ensuring employees are adequately prepared for any changes that come their way by working together with new colleagues and developing an understanding of each person’s unique skill set. Even amid organisational shifts, such as the adoption of generative AI, setting a firm foundation for effective collaboration will help a business move through any challenging periods with agility.The European Business Review, 15h ago
new One of the most important things that security leaders and the broader executive suite must keep in mind is that AI risk management isn't going to be a compliance exercise. None of these policies are a magic wand, and at the end of the day enterprise culture is going to have to become more collaborative and decisive in how AI risk decisions are made.darkreading.com, 17h ago
new The real issue is that the dismal conditions of working as a corporate physician are driving many away from patient care or out of medicine. This exodus is tempered only by the need to repay massive school loans. It’s worth reconsidering the necessity of an eight-year education plus internships and residencies for primary care physicians (PCPs) that push their career entry into their late 20s or early 30s. Alternatives could include expanding the roles of nurse practitioners, physician assistants, and pharmacists. Additionally, integrating AI into medical training could accelerate training, especially given the rapid obsolescence of classroom work, and could be used to provide support as needed for more unusual cases. Meanwhile, the profit-obsessed US healthcare system is increasingly strained as major physician employers compete for the essential yet limited clinical workforce.histalk2.com, 17h ago

Latest

new Recently, automated stores that use only kiosks without staff have been increasing without checking the accessibility level of the information-alienated class, and it is expected that the difficulties of the information-alienated class will increase further. In addition, as the number of consumers in their 20s and 30s who avoid face-to-face interactions and prefer kiosks increases, non-face-to-face marketing in offline stores is becoming a trend spreading to various industries. To meet this need, CN.AI explained that it developed the AI Human with the idea that introducing a friendly kiosk that can be conveniently utilized even by older people would improve customer satisfaction across all ages.Journal of Cyber Policy, 19h ago
new For all of the bravado of AI, before we become too dazzled or dismayed (depending on which side of this issue you reside) at AI’s potential impact on projects, it is worthwhile to sit back with a skeptical bird’s eye view. After all, as we humans evolved throughout the information age of the mid to late twentieth century, AI can just be considered the latest development in a long series of technological advances driven by the engine of human information input. Even if every one of the improvements stated above is tangible, how well does it translate to the ultimate outcome? Organizations around the globe have been trying to digitalize and computerize, some as early as the 1960s and 1970s. Digitization and computerization promised a revolution in productivity through automation, streamlining, and making humans redundant. But the road to a more significant productivity boost has been arduous at best. One of the primary reasons is captured in the Theory of Constraints (TOC) developed by Eliyahu M. Goldratt, an Israeli physicist turned business management guru. In his landmark book The Goal, which sold over 6 million copies, he explains the importance of focusing on the right constraints to improve the overall system. For projects, the constraints are generally not within the technology and tools, but within humans.Healthcare Business Today, 1d ago
new AI systems are often presumed to be completely impartial and neutral. In reality, artificial intelligence acquires the biases that exist in the data it is trained on. Developers must actively strive to reduce biases; however, achieving total impartiality remains a difficult task.techgig.com, 1d ago

Top

We are facing a dilemma. Our AI systems would be much safer and more useful if they possessed a modicum of adult-level common sense. But one cannot create adult-level common sense without first creating the common sense of a child on which such adult-level abilities are based. Three-year-old common sense is an important first step, as even such young children have the fundamental understanding that their own actions have consequences and actually matter. But on their own, the abilities of a three-year-old aren’t commercially viable. Further, AI’s focus on super-human narrow capabilities with an expectation that these will broaden and merge into common sense hasn’t borne fruit and is unlikely to any time soon.RTInsights, 4d ago
What does this mean in practice? It means that cyber security and disinformation, which are already prominent and incredibly challenging features of modern war, will become even more of a problem in conditions of intensive automation. Adversaries have incentives to manipulate or poison the data that feeds AI systems.78 AI will thus expand the range of counterintelligence risks to worry about. It also means that adversaries have incentives to move conflict in unexpected directions, i.e., where AI systems have not been trained and will likely perform in undesired or suboptimal ways. This creates not only data problems but judgment problems as well. Combatants will have to reconsider what they want in challenging new situations. As intelligent adversaries escalate conflict into new regions, attack new classes of targets, or begin harming civilians in new ways, how should AI targeting guidance change, and when should AI systems be withheld altogether? We should expect adversaries facing AI-enabled forces to shift political conflicts into ever more controversial and ethically fraught dimensions.Texas National Security Review, 26d ago
Unpredictability – The unpredictability of sentient AI is a significant concern. Humans, driven by their emotions, engage in conflicts and harmful actions, showcasing the range of both positive and negative feelings. If an AI entity, especially one entrusted with control over automated systems, attains sentience, there is a risk that it could become as unpredictable as a human. This unpredictability carries potentially serious implications.E-Crypto News - Exclusive Cryptocurrency and Blockchain News Portal, 13d ago
Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them.lesswrong.com, 26d ago
AI Snake Oil Blog: “Foundation models such as GPT-4 and Stable Diffusion 2 are the engines of generative AI. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies like social media. How are these models trained and deployed? Once released, how do users actually use them? Who are the workers that build the datasets that these systems rely on, and how much are they paid? Transparency about these questions is important to keep companies accountable and understand the societal impact of foundation models. Today, we’re introducing the Foundation Model Transparency Index to aggregate transparency information from foundation model developers, identify areas for improvement, push for change, and track progress over time. This effort is a collaboration between researchers from Stanford, MIT, and Princeton. The inaugural 2023 version of the index consists of 100 indicators that assess the transparency of the developers’ practices around developing and deploying foundation models. Foundation models impact societal outcomes at various levels, and we take a broad view of what constitutes transparency…Execution. For the 2023 Index, we score 10 leading developers against our 100 indicators. This provides a snapshot of transparency across the AI ecosystem. All developers have significant room for improvement that we will aim to track in the future versions of the Index…Key Findings...bespacific.com, 19d ago
Innovative problem-solving. While it has its flaws, the non-sentient AI we already use has been known to come up with creative solutions to human problems. It is now common for humans to ask ChatGPT for advice for everything from career development to relationship problems. Now, keep in mind that a sentient AI would be the most self-aware tech to ever exist. Not only would it have access to virtually all the information that has ever been present and analyze it at the drop of a hat, but it would also understand first-hand how human feelings work. It’s been suggested that sentient AI could tackle issues like world hunger and poverty because it would have both the technical understanding of such problems and the emotional intelligence needed to navigate human nuances.Coinspeaker, 14d ago

Latest

new Continually remind employees to apply their judgment over the long term. Vahe Andonians, chief technology officer and chief product officer of Cognaize, said that we need humans to apply their judgment to AI outputs even when we no longer need to be checking its facts. “I'm going to steal from Nietzsche,” he said, “It should be humans because we are the only ones that can suffer. AI is not going to suffer…so the judgment layer should be us.” George Lee, co-head of applied innovation at Goldman Sachs, added that one thing his team has been focused on is how to encourage employees to keep a sharp eye, even when the AI system is performing well: “After the 10th experience of it looking just great, are you going to pay attention?” (A recent study on BCG consultants, covered by Charter, illustrated the danger of employees ‘switching off their brains’ when working with impressive AI systems.)...Charter, 1d ago
new Incorporating bioethics principles into medical AI algorithms is undoubtedly a crucial aspect of creating trustworthy technology that serves to advance society. While the paper highlights multiple critical topics that should be considered and offers robust solutions, gaps and questions remain. More research needs to be done regarding when these AI tools are considered to be conscious of their own actions and, by extension, when liability is placed on humans or the machines. Additionally, while the authors’ proposed solution of instituting liability fees and other systems between public and private parties is an interesting point to consider, it may be difficult to establish in countries such as the United States, where the healthcare system is incredibly disjointed. Moreover, in places with many competing private healthcare companies, it should be considered that these parties do not necessarily have patients’ best interests at heart. Instead, they tend to prioritize profits over patient well-being—thus adding another obstacle to ensuring ethical medical AI is instituted.Montreal AI Ethics Institute, 1d ago
new From the insights provided in the article, it’s important to acknowledge that AI is advancing to a level of sophistication capable of mimicking human creativity. However, it’s crucial to acknowledge the potential substantial differences between these two forms of creativity, originating from the disparate operational mechanisms of the human brain and generative AI. While divergent thinking is often explored as an important factor of creativity, it is not the complete picture of the multifaceted nature of creative behavior.Montreal AI Ethics Institute, 1d ago
new Policymakers are increasingly considering risk-based assessments of AI systems, such as the EU AI Act. We believe that in this context, AI systems with the potential for deception should be classified at least as “high-risk.” This classification would naturally lead to a set of regulatory requirements, including risk assessment and mitigation, comprehensive documentation, and record-keeping of harmful incidents. Second, we suggest passing ‘bot-or-not’ laws similar to the one in California. These laws require AI-generated content to be accompanied by a clear notice informing users that the content was generated by an AI. This would give people context about the content they are viewing and mitigate the risk of AI deception.Montreal AI Ethics Institute, 1d ago
new When talking of brittle systems, many people remember the early symbolic AI programs that were rule-based and, hence, could not process anything outside of the scope of pre-defined knowledge. Did deep learning systems overcome that? Yes, unfamiliar inputs do not completely break them. But even the latest systems still make errors a human wouldn’t make [15-17]. We know that fine-tuned models may learn shortcuts [18-21]: undesirable spurious correlations picked up from the training data. We also know that slight variations in the phrasing of the prompt can lead to very different LLM output [22-24]: this phenomenon affected all 30 LLMs in a recent large-scale evaluation [25]. François Chollet [26] questions if deep learning systems can ever overcome this kind of brittleness: according to him, they are “unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators” (p.3).Montreal AI Ethics Institute, 1d ago
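A minimal way to see this prompt brittleness in practice is to probe a model with several paraphrases of the same question and check whether the answer stays stable. The sketch below is illustrative only: `query_llm` is a hypothetical stand-in for whatever model API is under evaluation, and this is not the protocol used in the studies cited above.

```python
from collections import Counter

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to the model under evaluation."""
    raise NotImplementedError("plug in a real model API here")

# Semantically equivalent phrasings of the same question.
paraphrases = [
    "Is 17077 a prime number? Answer yes or no.",
    "Answer yes or no: is the number 17077 prime?",
    "17077: prime or not? Reply with yes or no.",
]

def consistency(prompts):
    """Fraction of prompts that yield the modal answer; 1.0 means the
    answer is stable under rephrasing, lower values indicate brittleness."""
    answers = [query_llm(p).strip().lower() for p in prompts]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / len(answers)
```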
new As Raimondo advocates for increased funding to fortify export controls on AI chips, the question that lingers is whether this financial infusion will be sufficient to counter China’s relentless pursuit of cutting-edge semiconductor technology. In an era where national security is intricately linked with technological supremacy, the decisions made today will shape the future balance of power. Can increased funding truly safeguard the nation’s technological edge, or does it merely represent a temporary barrier in an ever-evolving global landscape? As nations grapple for dominance in the AI arena, only time will reveal the efficacy of these strategic moves and their impact on the delicate balance of international relations.BitcoinEthereumNews.com, 1d ago

Top

The framework has taken into account deployment concerns, specifically risk and autonomy, in addition to technical considerations. Underscoring the complex relationship between deployment factors and AGI levels, the team has emphasized how critical it is to choose human-AI interaction paradigms carefully. The ethical aspect of implementing highly capable AI systems has also been highlighted by this emphasis on responsible and safe deployment, which calls for a methodical and cautious approach.MarkTechPost, 21d ago
...(Which, for instance, seems true about humans, at least in some cases: If humans had the computational capacity, they would lie a lot more and calculate personal advantage a lot more. But since those are both computationally expensive, and therefore can be caught-out by other humans, the heuristic / value of "actually care about your friends", is competitive with "always be calculating your personal advantage."I expect this sort of thing to be less common with AI systems that can have much bigger "cranial capacity". But then again, I guess that at whatever level of brain size, there will be some problems for which it's too inefficient to do them the "proper" way, and for which comparatively simple heuristics / values work better. But maybe at high enough cognitive capability, you just have a flexible, fully-general process for evaluating the exact right level of approximation for solving any given problem, and the binary distinction between doing things the "proper" way and using comparatively simpler heuristics goes away. You just use whatever level of cognition makes sense in any given micro-situation.)...lesswrong.com, 16d ago
I’ve convened hearings that explore AI safety, risk, procurement of these tools and how to prepare our federal workforce to properly utilize them. But as policymakers, we also have to explore the broader context surrounding this technology. We have to examine the historical, the ethical and philosophical questions that it raises. Today’s hearing and our panel of witnesses give us the opportunity to do just that. This is not the first time that humans have developed staggering new innovations. Such moments in history have not just made our technologies more advanced, they’ve affected our politics, influenced our culture, and changed the fabric of our society. The industrial revolution is one useful example of that phenomenon. During that era, humans invented new tools that drastically changed our capacity to make things. The means of mass production spread around the world and allowed us to usher in a modern manufacturing economy. But that era brought with it new challenges.Tech Policy Press, 25d ago

Latest

new Diabetic retinopathy (DR) is one area where AI is being used as a screening tool. According to Dr Gopal Pillai, professor and head of department of ophthalmology at Amrita Institute of Medical Sciences in Kochi, approximately one third of people with diabetes in India will have DR, and about 10-12 per cent of those will have vision-threatening DR, indicating that the development of DR has seriously affected the patient’s vision and that failure to treat it in a timely manner will result in irreversible vision loss. One thing hampering early detection of DR is that it is asymptomatic until the person completely loses his vision. “One might be driving, reading, watching TV and doing their activities without even knowing that a time bomb is ticking inside the eye. And because the patient would often come in late, there was no way of early diagnosis,” says Pillai, who is leading a government-sponsored clinical trial network.theweek.in, 1d ago
new However, the ease with which AI can generate complex artwork has shined a light on the authenticity of such creations. Critics like me argue that AI-generated art lacks the intrinsic value of traditional art, which is deeply intertwined with the artist's personal journey, skills, and emotional investment. This perceived dilution of artistic integrity and the relegation of the artist to a mere "prompter" has led to concerns about the "lost art of art." Perhaps we've even moved from "click bait" to "click creation" where vapid, contrived images are generated with little sense of design or artistry—only the frigid recommendations of today's digital curators.Psychology Today, 1d ago
new Through rigorous validation, we confirm LLMCarbon’s accuracy in estimating both the operational and embodied carbon footprints of LLMs. In operational phases, our tool demonstrated disparities of 8.2% or less when compared to actual data, surpassing existing tools in precision. Moreover, LLMCarbon’s estimations of embodied carbon footprints closely align with publicly available data, showcasing an error margin of less than 3.6%. These findings highlight LLMCarbon’s invaluable role in guiding the development and usage of LLMs towards a more sustainable and environmentally conscious AI future.Montreal AI Ethics Institute, 1d ago
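For intuition about what such an estimate involves, the sketch below separates the two quantities mentioned above: an operational footprint (energy drawn by the accelerators, scaled by data-center overhead and grid carbon intensity) and an embodied footprint (a share of the hardware's manufacturing carbon amortized over its service life). The structure and all numbers are illustrative assumptions, not LLMCarbon's actual model or its published figures.

```python
def operational_co2_kg(device_count, avg_power_w, hours, pue, grid_kgco2_per_kwh):
    """Operational footprint: accelerator energy, scaled by data-center
    overhead (PUE) and the carbon intensity of the local grid."""
    energy_kwh = device_count * avg_power_w * hours / 1000.0
    return energy_kwh * pue * grid_kgco2_per_kwh

def embodied_co2_kg(device_count, per_device_embodied_kg, hours, lifetime_hours):
    """Embodied footprint: the share of each device's manufacturing carbon
    attributable to this job, amortized over the device's lifetime."""
    return device_count * per_device_embodied_kg * (hours / lifetime_hours)

# Made-up numbers for a hypothetical training run: 512 GPUs for 30 days.
print(operational_co2_kg(512, avg_power_w=300, hours=720,
                         pue=1.2, grid_kgco2_per_kwh=0.4))
print(embodied_co2_kg(512, per_device_embodied_kg=150,
                      hours=720, lifetime_hours=5 * 365 * 24))
```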
new The findings from this study are crucial for academia and go well beyond that, touching the critical realm of AI Ethics and safety. The study sheds light on the Confidence-Competence Gap, highlighting the risks involved in relying solely on the self-assessed confidence of LLMs, especially in critical applications such as healthcare, the legal system, and emergency response. Trusting these AI systems without scrutiny can lead to severe consequences, as we learned from the study that LLMs make mistakes and still stay confident, which presents us with significant challenges in critical applications. Although the study offers a broader perspective, it suggests that we dive deeper into how AI performs in specific domains with critical applications. By doing so, we can enhance the reliability and fairness of AI when it comes to aiding us in critical decision-making. This study underscores the need for more focused research in these specific domains. This is crucial for advancing AI safety and reducing biases in AI-driven decision-making processes, fostering a more responsible and ethically grounded integration of AI in real-world scenarios.Montreal AI Ethics Institute, 1d ago

Top

Dan Murray provided valuable insights into the intricate relationship between safety and economics in the trucking industry. Applied AI Systems, as highlighted in our conversation, not only contribute to enhancing safety measures but also play a crucial role in the economic viability of trucking companies. This presents an exciting opportunity for fleet safety operators to explore technologies that align with both safety goals and financial objectives.Smart Eye, 25d ago
Joseph Thacker, researcher at AppOmni, notes that a “doom for humanity” scenario is still a matter of great debate among AI experts: “Experts are split on the actual concerns around AI destroying humanity, but it’s clear that AI is an effective tool that can (and will) be used by forces for both good and bad … The declaration doesn’t cover adversarial attacks on current models or adversarial attacks on systems which let AI have access to tools or plugins, which may introduce significant risk collectively even when the model itself isn’t capable of anything critically dangerous. The declaration’s goals are possible to achieve, and the companies working on frontier AI are familiar with this problem set. They spend a lot of time thinking about it, and being concerned about it. The biggest challenge is that the open source ecosystem is really close to enterprises when it comes to making frontier AI. And the open source ecosystem isn’t going to adhere to these guidelines – developers in their basement aren’t going to be fully transparent with their respective governments.”...CPO Magazine, 26d ago
An automated help desk helps with a significant challenge traditional help desk operations face. That is the manual classification and prioritization of incoming tickets. With AI, this process becomes automated and more accurate. As tickets arrive, AI systems can quickly categorize them based on the nature of the issue, user profiles, and other relevant parameters. Moreover, they can prioritize tickets, ensuring that critical issues are addressed as fast as possible.Financesonline.com, 27d ago
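As a rough illustration of that triage step, the snippet below categorizes and ranks incoming tickets with a placeholder `classify` function; in practice that call would be a fine-tuned classifier or an LLM prompt, and nothing here reflects any specific vendor's pipeline.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    ticket_id: int
    text: str
    user_tier: str  # e.g. "standard" or "vip"

def classify(text: str) -> tuple[str, float]:
    """Placeholder model call returning (category, urgency in [0, 1]).
    A real system would use a trained classifier or an LLM here."""
    keywords = {"outage": ("incident", 0.9), "refund": ("billing", 0.6)}
    for word, label in keywords.items():
        if word in text.lower():
            return label
    return ("general", 0.2)

def triage(tickets: list[Ticket]) -> list[tuple[Ticket, str, float]]:
    """Categorize each ticket and sort so the most urgent are handled first;
    tickets from VIP users get a small priority boost."""
    scored = []
    for ticket in tickets:
        category, urgency = classify(ticket.text)
        if ticket.user_tier == "vip":
            urgency = min(1.0, urgency + 0.1)
        scored.append((ticket, category, urgency))
    return sorted(scored, key=lambda item: item[2], reverse=True)
```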
More broadly, most people in the AI alignment space that I've seen approaching the problem of either describing human values to an AI, or having it learn them, have appeared to view ethics from a utilitarian/consequentialist rather than a deontological perspective, and tend to regard this prospect as very challenging and complex — far more so than if you just had to teach the machine a list of deontological ethical rules. So my impression is that most people in AI safety and alignment are not using a deontological viewpoint — I'd love to hear whether that has been your experience too. Indeed, my suspicion is that many of them would view that as either oversimplified, or unlikely to continue to work well as rapid technological change enabled by AGI caused a large number of new ethical conundrums to appear that we don't yet have a social consensus on deontological rules for.alignmentforum.org, 26d ago
Operationalizing complex social, environmental, and ethical issues into AI systems can be difficult, and there’s a risk that trying to address every ethical concern could hinder innovation and technological progress. The European Union, the political entity most concerned with drafting AI regulation, has repeatedly kicked the can further down the line. Companies can’t really be trusted to self-regulate, and there’s no clear framework for how any regulation of this sort can work.ZME Science, 11d ago
Yeah, I’d say that’s pretty fair. I think the one thing I’d add… I’d say that with software development, for example… so offense-defense balance is something that’s often discussed in terms of open-sourcing and scientific publication, especially any time you have dual-use technology or scientific insights that could be used to cause harm, you kind of have to address this offense-defense balance. Is the information that’s going to be released going to help the bad actors do the bad things more or less than it’s going to help the good actors do the good things/prevent the bad actors from doing the bad things? And I think with software development, it’s often in favor of defense, in finding holes and fixing bugs and rolling out the fixes and making the technology better, safer, more robust. And these are genuine arguments in favor of why open-sourcing AI systems is valuable as well.alignmentforum.org, 8d ago

Latest

new In brains, online learning (editing weights, not just context window) is part of problem-solving. If I ask a smart human a hard science question, their brain may chug along from time t=0 to t=10 minutes, as they stare into space, and then out comes an answer. After that 10 minutes, their brain is permanently different than it was before (i.e., different weights)—they’ve figured things out about science that they didn’t previously know. Not only that, but the online-learning (weight editing) that they did during time 0<t<5 minutes is absolutely critical for the further processing that happens during time 5<t<10 minutes. This is not how today’s LLMs work—LLMs don’t edit weights in the course of “thinking”. I think this is safety-relevant for a number of reasons, including whether we can expect future AI to get rapidly smarter in an open-ended way without new human-provided training data (related discussion).alignmentforum.org, 2d ago
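The distinction can be made concrete in a few lines of PyTorch: today's deployed LLMs answer with frozen weights, whereas the online learning described above would mean that the act of answering itself applies a gradient step and leaves the weights permanently changed. This is a toy sketch of that contrast using a stand-in linear model, not a proposal for how such a system would actually be built.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(8, 1)                  # stand-in for a much larger network
x, y = torch.randn(4, 8), torch.randn(4, 1)

# How today's LLMs "think": inference only, weights untouched afterwards.
with torch.no_grad():
    answer = model(x)

# Online learning: solving the problem also edits the weights, so the model
# that finishes the task is permanently different from the one that started.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss = F.mse_loss(model(x), y)
loss.backward()
optimizer.step()
```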
new Now these are really hard conversations to have. I'm using very human words to describe an AI which don't fit, but the shorthand is unavoidable. Now, doesn't AI have a value? No. Does it pretend? Yes. Can it mimic values? Yes. Can it do it? Well, not yet. All these things are going to change. A lot of this is in flux, but I think this is really interesting as these AIs become capable of doing things that used to be the exclusive purview of humans. So now I want to give the big problem here. All of the AIs that we have are built, are designed, are trained, and are controlled by for-profit corporations.harvard.edu, 2d ago
new AI has evolved so rapidly over the last 12 months, it’s difficult to pinpoint exactly where it will go in 2024. However, our panelists view it as a tool that won’t replace the human workforce, but rather unlock their potential and boost productivity if applied correctly. The PR and comms industry, like many others, has a lot of catching up to do when it comes to implementing new technology, but the right partner can help leverage these opportunities and position teams for success.Cision, 2d ago

Latest

new ...“The ambient IoT represents an evolution of the traditional Internet of Things that brings connectivity and product intelligence from large, expensive things to almost everything,” Statler said. “By embedding item-level intelligence into trillions of things, and connecting these products through generative AI platforms, businesses are equipped with the real-time data they need to solve many of their most pressing challenges. Wiliot’s benefits are broad and deep: we create more efficient, responsive supply chains, reduce their carbon emissions, enhance food and medicine safety, and so much more.”...supermarketperimeter.com, 2d ago
new The success of ChatGPT speaks foremost to the power of a good interface. AI has already been part of countless everyday products for well over a decade, from Spotify and Netflix to Facebook and Google Maps. The first version of GPT, the AI model that powers ChatGPT, dates back to 2018. And even OpenAI’s other products, such as DALL-E, did not make the waves that ChatGPT did immediately upon its release. It was the chat-based interface that set off AI’s breakout year. There is something uniquely beguiling about chat. Humans are endowed with language, and conversation is a primary way people interact with each other and infer intelligence. A chat-based interface is a natural mode for interaction and a way for people to experience the “intelligence” of an AI system. The phenomenal success of ChatGPT shows again that user interfaces drive widespread adoption of technology, from the Macintosh to web browsers and the iPhone. Design makes the difference. At the same time, one of the technology’s principal strengths – generating convincing language – makes it well suited for producing false or misleading information. ChatGPT and other generative AI systems make it easier for criminals and propagandists to prey on human vulnerabilities. The potential of the technology to boost fraud and misinformation is one of the key rationales for regulating AI. Amid the real promises and perils of generative AI, the technology has also provided another case study in the power of hype. This year has brought no shortage of articles on how AI is going to transform every aspect of society and how the proliferation of the technology is inevitable. ChatGPT is not the first technology to be hyped as “the next big thing,” but it is perhaps unique in simultaneously being hyped as an existential risk. Numerous tech titans and even some AI researchers have warned about the risk of superintelligent AI systems emerging and wiping out humanity, though I believe that these fears are far-fetched. The media environment favors hype, and the current venture funding climate further fuels AI hype in particular. Playing to people’s hopes and fears is a recipe for anxiety with none of the ingredients for wise decision making.GovTech, 2d ago
new In his commentary, "Simulating the whole brain as an alternative way to achieve AGI," Feng argued that the ability of ChatGPT to outperform humans in certain tasks is not surprising—after all, a simple calculator can multiply large numbers quicker than a human. However, it is not an example of artificial general intelligence (AGI), a theoretical step beyond AI that represents human abilities so well it can find a solution for any unfamiliar task.phys.org, 2d ago
new California’s report also usefully lays out the potential risks associated with using these new tools, making clear that while there are some new potential harms, in many cases many of the risks are common to the use of any technology. Governments need to be conscious of the fact that tools that enable the easy generation of high-quality content could be misused to dupe consumers and residents. Perhaps because 35 of the 50 leading AI businesses are in California, as the state's report points out at the outset, it is silent on the risks to governments and those they serve of relying excessively on technologies developed and governed by unaccountable companies, especially when those technologies are procured by public servants without a deep knowledge of the tech.GovTech, 2d ago
new ROBERT CHAPMAN-SMITH: In the lead up to, you know, the release of some of OpenAI's models, there's been sort of like a speaking tour of folks going to Washington talking to legislators about AI, and the worry at that time was about regulatory capture. Like are they, are folks going to essentially gate the technology in such a way that smaller players are not gonna be able to play ball? And we've seen regulatory capture happen a lot within the political realm within Washington. But there's also this question of like effectiveness in terms of regulation. Like just because the regulation has passed doesn't mean it's actually a good regulation or if this body of Congress is actually able to regulate this fast-moving technology well, like they can't even pass a budget, like how are they gonna keep up with the pace of AI change? So I'm curious about that as a tool for dealing with AI safety because in some sense it feels like one, the legislative body or processes are capable of being captured by interested parties and two, even when they do regulate sometimes they just do a poor job, they just miss the thing that is the key regulatory factor. So I'm curious about your conception there, and how to deal with some of the messiness that comes with those types of approaches to dealing with technological safety.Big Think, 2d ago

Latest

new The rise of ChatGPT has marked a new era for the insurance industry, and when it comes to creating a successful generative AI framework, there is a great deal for insurers to consider. On the one hand, insurers will be able to unlock the potential of generative AI to handle previously untapped forms of data at speed. The challenge, however, will be mitigating the risk of inaccuracy from potential hallucinations and half-truths, and navigating shifting regulation to ensure compliance.Insurance Innovators, 2d ago
AI hallucinations occur when AI systems produce incorrect or misleading information. These errors can have dire consequences in health care. For instance, imagine an AI system for diagnosing skin cancer misclassifying a benign mole as malignant melanoma. Such misdiagnoses could lead to unnecessary and invasive treatments, causing significant distress and harm to the patient. These instances underscore the critical need for accuracy in AI systems in health care and highlight the importance of ongoing monitoring and improvement of these technologies to ensure patient safety.KevinMD.com, 3d ago
We are trying to mitigate the issue by focusing on a particular task, summarization. There's a lot more work to be done, and this should be done in conjunction with AI alignment and AI safety research. Models that provide false information can have safety implications in some fields, and the manner in which we are testing the models is also looking at alignment by checking how well the model follows instructions.diginomica, 3d ago
Quality assurance is a critical aspect of game development, and AI can play a pivotal role in automating testing processes. Develop an AI-driven testing framework that identifies bugs, anomalies, and performance issues in games. Showcase your ability to streamline the testing phase, reducing human effort while ensuring a high-quality gaming experience. This project not only reflects technical proficiency but also underscores your understanding of the practical challenges in game development.Analytics Insight, 3d ago
This certainly raises significant implications for the future of scientific research and the need for publishers to enhance their detection systems to keep pace with the evolving capabilities of LLMs and attempts to bypass AI detection. The outcome of this modern challenge will have profound implications for the integrity of academic work.Science Times, 3d ago
We Can’t Let AI Generation Tools Take Away Our Own Training: This thinkpiece from NISO Executive Director Todd Carpenter argues that people still need to learn how to do the basics to teach LLMs how to conceptualize and improve on those topics; otherwise, the more difficult tasks will be harder to conceptualize and improve upon: "Learning from our mistakes and improving on them is core to our humanity. It’s important to understand that language models lack this capacity to know the direction of 'better.' When AI tools are trained on a corpus of data that’s machine generated, the results become increasingly unreliable." (NISO, November 1, 2023)...Silverchair, 3d ago

Latest

It’s inadvisable to jump the gun and sign the rising star that promises to transform your operations without any game experience, just as it’s unwise to shell out cash to the household name with an illustrious legacy that’s becoming weak in the knees and slow to keep up. When selecting a provider of intelligent automation solutions, you need to prioritize both innovation and experience and most importantly, an understanding of and commitment to your organization’s needs. Haphazardly filling your tech stack with every new tool that promises to yield the best value from AI will create a cacophony of platforms, inhibiting efficiency – take care in selecting your solutions and choose those that have proven their worth in the context of the modern enterprise.ReadWrite, 3d ago
Training individuals to use AI ethically is essential in order to ensure responsible and unbiased deployment of this powerful technology. Ethical AI training equips individuals and organizations with the knowledge and skills to navigate the challenges and identify risks that arise when working with AI systems. It ultimately boils down to mitigating risk – just like anti-bribery and corruption policies, as well as the importance of data privacy and security. By providing individuals with the necessary training, we can foster a culture of ethical AI use, where technology is harnessed for the benefit of all while mitigating potential harm and ensuring equitable outcomes.RTInsights, 3d ago
...stranger: Pat is thoroughly acquainted with the status hierarchy of the established community of Harry Potter fanfiction authors, which has its own rituals, prizes, politics, and so on. But Pat, for the sake of literary hypothesis, lacks an instinctive sense that it’s audacious to try to contribute work to AI alignment. If we interrogated Pat, we’d probably find that Pat believes that alignment is cool but not astronomically important, or that there are many other existential risks of equal stature. If Pat believed that long-term civilizational outcomes depended mostly on solving the alignment problem, as you do, then he would probably assign the problem more instinctive prestige—holding constant everything Pat knows about the object-level problem and how many people are working on it, but raising the problem’s felt status.lesswrong.com, 3d ago

Top

The second challenge for the ongoing legislative efforts is the fragmentation. AI systems, much like living organisms, transcend political borders. Attempting to regulate AI through national or regional efforts entails a strong potential for failure, given the likely proliferation capabilities of AI. Major corporations and emerging AI startups outside the EU’s control will persist in creating new technologies, making it nearly impossible to prevent European residents from accessing these advancements. In this light, several stakeholders[4] suggest that any policy and regulatory framework for AI must be established on a global scale. Additionally, Europe’s pursuit of continent-wide regulation poses challenges to remaining competitive in the global AI arena, if the sector enjoys a more relaxed regulatory framework in other parts of the world. Furthermore, Article 6 of the proposed EU Artificial Intelligence Act introduces provisions for ‘high-risk’ AI systems, requiring developers and deployers themselves to ensure safety and transparency. However, the provision’s self-assessment nature raises concerns about its effectiveness.Modern Diplomacy, 20d ago
The EU AI Act, in its current form, risks creating a regulatory environment that is not only burdensome and inappropriate for open-source AI developers but also counterproductive to the broader goals of fostering innovation, transparency, and competition in the AI sector. As the EU’s ongoing negotiations over the AI Act continue, particularly around the regulation of foundation models, policymakers need to adequately address these issues. If they do not amend the Act to better accommodate the unique nature and contributions of open-source AI, it could hamper the progress and openness in the AI sector. It is crucial for policymakers to recognize and preserve the distinct advantages that open-source AI brings to the technological landscape, ensuring that regulations are both effective and conducive to the continued growth and dynamism of the AI field.itif.org, 14d ago
Regardless of its exact nature, ‘Q*’ potentially represents a significant stride in AI development, so the fact that it’s at the core of an existential debate of OpenAI rings true. It could bring us closer to AI systems that are more intuitive, efficient, and capable of handling tasks that currently require high levels of human expertise. However, with such advancements come questions and concerns about AI ethics, safety, and the implications of increasingly powerful AI systems in our daily lives and society at large.CoinGenius, 9d ago
An additional challenge is the lack of managed security service provider (MSSP) expertise in the current state of small to medium-sized manufacturers (SMM). Often, SMMs outsource their security, believing that their cyber needs are adequately addressed, only to discover this isn’t the case when facing a ransomware note. Many MSSPs overstate their abilities and visibility into critical functions of an SMM while also understaffing their operations due to a lack of workforce and profit margins. CyManII recommends that SMMs attempt to self-heal through mutual aid solutions and proper tooling rather than simply rely upon a third party. CyManII is developing generative AI for this and the correct implementation of security controls associated with early proper response to an attempted intrusion. Combining this approach with attack annexes against entire categories of Common Weakness Enumerations (CWEs) is groundbreaking. We may be at the crossroads of taking cybersecurity from unwieldy adolescence into maturity.natlawreview.com, 17d ago
These results underline the important role played by artificial intelligence in the study and diagnosis of various pathologies, thanks to the mass processing of medical imaging findings. The fact that this work was based on AI using brain segmentation understandable by humans is important, because it means that the conclusions can be explained. Indeed, numerous AI systems achieve apparently correct results but there is no access to their pathways, which limits the chances of detecting a problem if they are mistaken. It is also clearly inadvisable to make a diagnosis based on algorithms that cannot clarify or justify their choices.CNRS News, 5d ago
Fourth, all critical functions within an armed drone system should be under meaningful human control. This principle of Human Agency addresses the concern that the use of an armed drone might generate an unjust outcome for which nobody could fairly be held responsible. This is a possibility if the operation of a drone system’s critical functions (selecting and engaging targets) is performed by an AI technology. Arguably, AI is inherently incapable of making moral decisions and bearing moral responsibility. It cannot replicate a human’s abilities to exercise judgment based on lived experience and moral values. Therefore, the degree of human control over the operation of an armed drone needs to be always sufficient to preserve the faculty of responsible use. Adherence to the Human Agency principle implies that a human: (a) can exercise a context-appropriate degree of control over a drone system’s critical functions; (b) is indispensable as a part of system design to the technical operation of those functions; (c) can interact with and intervene upon the system’s AI in a timely fashion; (d) does not place excessive trust in AI; and (e) can fairly be held accountable for any wrongdoing.E-International Relations, 16d ago

Latest

We imagine a future where AIs self-augment by continuously seeking out more and better training data, and either creating successor AIs or training themselves on that data. Often, these data will come from the AIs running experiments in the real world (doing science), deliberately seeking data that would cover a specific gap in its current capabilities, analogous to how human scientists seek data from domains where our current understanding is limited. With AI, this could involve AgentGPT-like systems that spin up many instances of themselves to run experiments in parallel, potentially leading to quick improvements if we are in an agency overhang.lesswrong.com, 3d ago
There are various ways in which he thinks we could get this wrong. In AI, there’s the possibility that you end up with self-generating solutions that turn out not to be beneficial for wider humanity, a race to the bottom between AI-fuelled machines or the risk of weaponisation – it could be literal weaponisation – of these tools to go after somebody else or another state. Part of his warning is that accidents happen when humans are involved in doing this stuff. We do not necessarily get things right all the time, which brings us back to our books on failure.Five Books, 3d ago
Goal: To better understand how internal search and goal representations are processed within transformer models (and whether they exist at all!). In particular, we take inspiration from existing mechanistic interpretability agendas and work with toy transformer models trained to solve mazes. Robustly solving mazes is a task that may require some kind of internal search process, and it gives a lot of flexibility when it comes to exploring how distributional shifts affect performance — both understanding search and learning to control mesa-optimizers are important for the safety of AI systems.alignmentforum.org, 3d ago
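To make the setup above concrete, here is a minimal, hypothetical sketch of the kind of training data such a toy experiment might use: small grid mazes are generated, solved with breadth-first search, and flattened into (maze, solution-path) token sequences that a small transformer could be trained to continue. The encoding scheme, function names, and maze sizes are illustrative assumptions, not the agenda's actual pipeline.

```python
# Generate toy maze-solving sequences for a small transformer (illustrative only).
import random
from collections import deque

def make_maze(n=5, wall_prob=0.25, seed=0):
    """Random n x n grid; True = open cell, False = wall. Start and goal kept open."""
    rng = random.Random(seed)
    grid = [[rng.random() > wall_prob for _ in range(n)] for _ in range(n)]
    grid[0][0] = grid[n - 1][n - 1] = True
    return grid

def bfs_path(grid):
    """Shortest path from (0,0) to (n-1,n-1), or None if the maze is unsolvable."""
    n = len(grid)
    start, goal = (0, 0), (n - 1, n - 1)
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < n and 0 <= nxt[1] < n and grid[nxt[0]][nxt[1]] and nxt not in parents:
                parents[nxt] = cell
                queue.append(nxt)
    return None

def to_tokens(grid, path):
    """Flatten maze plus solution into one token sequence for next-token training."""
    maze_tokens = ["OPEN" if cell else "WALL" for row in grid for cell in row]
    path_tokens = [f"({r},{c})" for r, c in path]
    return maze_tokens + ["SOLVE"] + path_tokens + ["END"]

# Build a tiny dataset of solvable mazes.
dataset = []
for seed in range(100):
    grid = make_maze(seed=seed)
    path = bfs_path(grid)
    if path:
        dataset.append(to_tokens(grid, path))
print(len(dataset), "solvable mazes; example sequence:", dataset[0][:10], "...")
```

Varying the maze distribution at evaluation time (size, wall density) is one simple way to probe the distributional-shift question raised above.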
Generative AI can offer useful tools across the recruiting process, as long as organizations are careful to make sure bias hasn’t been baked into the technology they’re using. For instance, there are models that screen candidates for certain qualifications at the beginning of the hiring process. As well-intentioned as these models might be, they can discriminate against candidates from minoritized groups if the underlying data the models have been trained on isn’t representative enough. As concern about bias in AI gains wider attention, new platforms are being designed specifically to be more inclusive. Chandra Montgomery, my Lindauer colleague and a leader in advancing equity in talent management, advises clients on tools and resources that can help mitigate bias in technology. One example is Latimer, a large language model trained on data reflective of the experiences of people of color. It’s important to note that, in May, the Equal Employment Opportunity Commission declared that employers can be held liable if their use of AI results in the violation of non-discrimination laws – such as Title VII of the 1964 Civil Rights Act. When considering AI vendors for parts of their recruiting or hiring process, organizations must look carefully at every aspect of the design of the technology. For example, ask for information about where the vendor sourced the data to build and train the program and who beta tested the tool’s performance. Then, try to audit for unintended consequences or side effects to determine whether the tool may be screening out some individuals you want to be sure are screened in.Hunt Scanlon Media, 3d ago
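As one concrete illustration of the kind of audit suggested above, the sketch below applies the EEOC's "four-fifths" rule of thumb to screening outcomes broken out by group; a ratio of selection rates below 0.8 is commonly treated as a signal of possible adverse impact. The group labels and counts are invented, and a real audit would pair this check with proper statistical testing and legal review.

```python
# Minimal adverse-impact check on screening outcomes (illustrative numbers only).
def selection_rates(outcomes):
    """outcomes: {group: (num_screened_in, num_applicants)} -> {group: rate}"""
    return {g: screened / applied for g, (screened, applied) in outcomes.items()}

def four_fifths_check(outcomes):
    rates = selection_rates(outcomes)
    top_group, top_rate = max(rates.items(), key=lambda kv: kv[1])
    flagged = {}
    for group, rate in rates.items():
        impact_ratio = rate / top_rate if top_rate else 0.0
        # Ratios below 0.8 are commonly treated as evidence of possible adverse impact.
        flagged[group] = (round(impact_ratio, 2), impact_ratio < 0.8)
    return top_group, flagged

example = {
    "group_a": (120, 300),   # 40% screened in
    "group_b": (45, 180),    # 25% screened in
}
top, results = four_fifths_check(example)
print(f"Highest selection rate: {top}")
for group, (ratio, adverse) in results.items():
    note = ": review for possible adverse impact" if adverse else ""
    print(f"{group}: impact ratio {ratio}{note}")
```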
At the edge where AI is being built into much smaller and less sophisticated systems, the potential pitfalls can be very different. “When AI is on the edge, it is dealing with sensors, and that data is being generated in real time and needs to be processed,” said Sharad Chole, chief scientist at Expedera. “How sensor data comes in, and how quickly an AI NPU can process it changes a lot of things in terms of how much data needs to be buffered and how much bandwidth needs to be used. How does the overall latency look? Our objective is to target the lowest possible latency, so from the input from the sensor to the output that maybe goes into an application processor, or maybe further processing, we’d like to keep that latency as low as possible and make sure that we can process that data in a deterministic fashion.”...Semiconductor Engineering, 4d ago
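A back-of-envelope sketch, not drawn from the article, of the buffering and latency trade-off Chole describes: if the NPU can process a frame faster than the sensor produces one, a small double buffer keeps end-to-end latency bounded and deterministic. All of the numbers below (frame rate, frame size, NPU processing time) are illustrative assumptions.

```python
# Edge-AI buffering and latency arithmetic (all figures are assumptions).
SENSOR_FPS = 30                  # frames per second from the sensor
FRAME_BYTES = 1920 * 1080 * 2    # e.g. 16 bits per pixel
NPU_FRAME_TIME_MS = 12.0         # assumed time for the NPU to process one frame

frame_interval_ms = 1000.0 / SENSOR_FPS            # ~33.3 ms between frames
sustainable = NPU_FRAME_TIME_MS <= frame_interval_ms

# With processing faster than the arrival rate, a double buffer (capture one
# frame while processing the previous one) keeps latency deterministic.
if sustainable:
    buffered_frames = 2
    buffer_bytes = buffered_frames * FRAME_BYTES
    worst_case_latency_ms = frame_interval_ms + NPU_FRAME_TIME_MS

print(f"Frame interval: {frame_interval_ms:.1f} ms, NPU time: {NPU_FRAME_TIME_MS:.1f} ms")
print(f"Real-time sustainable: {sustainable}")
if sustainable:
    print(f"Buffer: {buffered_frames} frames (~{buffer_bytes / 1e6:.1f} MB), "
          f"worst-case latency ~{worst_case_latency_ms:.1f} ms")
```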
However, the increasing pervasiveness and power of AI demand that ethical issues be a primary consideration. As AI systems become more integrated into our daily lives, the risk of unintended consequences and ethical dilemmas grows exponentially. From biased algorithms in surveillance systems to the misuse of AI in deepfake technology, we have all witnessed the threats that AI going rogue can pose. Therefore, it is imperative to approach AI development with ethical principles firmly in mind.Zephyrnet, 4d ago

Similar to some previous sections, for security teams, this directive presents a unique challenge. It's not enough to ensure that the AI systems the company uses are secure; they must also be free from discriminatory bias. This is a complex task that goes beyond traditional security measures. It requires a solid understanding of how AI models work, how they can inadvertently lead to discriminatory outcomes, and how to test for that bias. Many teams may need to hire — or work closely with — external data scientists to ensure that the AI models being used are tested for bias.wiz.io, 12d ago
Imagine a world where AI can prevent fraud attempts before they even unfold. Picture intelligent algorithms smoothly navigating the complexities of risk management and fine-tuning investment strategies with data-driven precision, a concept that was once unimaginable. Think about a customer service experience that is not only fast but also tailored and empathetic, all thanks to AI’s ability to understand individual needs. Now, let’s expand our perspective to a financial landscape where inclusivity is not just an ideal but a reality. AI emerges as the bridge, closing gaps and ensuring that finance is accessible to everyone, regardless of their background. It’s a journey into a future where AI doesn’t just adapt to the financial landscape; it shapes it for the better.nasscom | The Official Community of Indian IT Industry, 10d ago
Digital transformation has evolved from the simplistic notion of creating an electronic presence to rethinking an organization’s business processes and accomplishing this reinvention at scale. In 2024, digital transformation means rethinking processes from the perspective of our customers. Central to this shift is a deep, empathetic understanding of customer needs, interwoven with digital insights from their interactions with our brand. This approach integrates digital transformation seamlessly into the customer experience journey. Viewed this way, digital transformation is a natural extension of customer experience. In 2024, the challenge is linking human-centric approaches to the power of data. Ignoring this element will yield unsatisfactory results that do not reflect customer needs or foster consumer loyalty. Many companies may fall into the trap of investing heavily in AI tools, mistakenly equating them with modernization. While AI can enhance efficiency, its true potential is unlocked only when paired with relevant data and a profound customer understanding. Successful digital transformation in 2024 rests on a blend of data-driven strategies and a genuine, human-centric approach to customer engagement. Companies aiming for transformational success must refine their data analysis capabilities while deeply empathizing with their customers. This dual focus will be the cornerstone of thriving in the digital landscape of 2024.Thinkers360 | World’s First Open Platform For Thought Leaders, 16d ago

Explainable AI is the area in which humans can continue to supervise and comprehend how an AI platform has arrived at a specific output; without it, we risk handing experiences and choices to a system that we do not fully understand. Without question, artificial intelligence (AI) in events has enormous potential to transform the sector, even with its limits, and it could revolutionize areas such as networking and personalization.Event-Technology Portal, 4d ago
It's also worth looking at how AI will be offered. If the technology is integrated into a vendor's tech stack from the beginning, its inner workings will be more effectively obscured behind extra layers of security, reducing customer risk. Sometimes this technology is entirely distinct to a vendor, while other times, like Zoho's partnership with OpenAI, the vendor is more focused on honing existing technology for its particular ecosystem. Regardless, advances in the tech can be pushed across the system instantaneously, ensuring that whatever generative AI produces is the most tailored result possible at any given moment, eliminating the risk of wasted time implementing something outdated. Past customer success stories and use cases are an effective way of scoping out a potential tech vendor's customer-centric approach to AI.diginomica, 4d ago
...• Create: Corporate culture shifts. A company’s culture plays a crucial role in determining its success with GenAI. Companies that struggle with innovation and change will find it tough to keep pace. Does your company have a learning culture? That could be the key to success. Does your company foster a shared sense of responsibility and accountability? Without this shared sense, it is more likely to run afoul of the ethical risks associated with AI. Both questions involve cultural issues that boards should consider prompting their management teams to examine.DATAQUEST, 4d ago
In today’s episode, Sira asks if art made by AI can truly be considered art. I tackle this complicated question by examining art as an expression of imagination, noting that perception of art is highly subjective. I discuss arguments around human versus machine creation, exploring the creative process behind AI art prompts. I also cover complex legal issues of copyright and training data usage that remain unsettled globally. Ultimately art is in the eye of the beholder, but there are many ethical debates around AI’s role that merit further discussion. Tune in to hear perspectives on what constitutes art, creative intent, and considerations for responsible AI development.Christopher S. Penn - Marketing Data Science Keynote Speaker, 4d ago
Perhaps most importantly, leaders and educators need to resist the temptation to become overly focused on—or even panicked about—how AI might change teaching and learning. The dawn of ubiquitous AI should serve as a reminder that children still need to develop a deep foundation of knowledge to use these tools well, and that the best use of AI in traditional schools is to free up the time of educators to do more work directly with students. Outside of schools, AI can help cultivate the “weirder” ecosystem of educational options needed for a system of education that empowers families to access the educational opportunities their children need to thrive. When used thoughtfully, AI tools have the potential to move us closer to an education system that provides a more diverse range of experiences to meet the unique needs of every student.The Thomas B. Fordham Institute, 4d ago
...“Answering these questions helps us ensure that the technology aligns with the specific business challenges that we’re addressing. From our research, we were able to create a stable extractive AI model and meaningfully incorporate it in our solutions. We are currently tinkering with Gen AI models to improve stability and product fit before they are integrated into our solutions,” he added.Analytics India Magazine, 4d ago

An "unlearning" technique consists in deleting the data used to train a model, such as images, in order to preserve their confidentiality. This technique can be used, for example, to protect the sovereignty of an algorithm in the event of its export, theft or loss. Take the example of a drone equipped with AI: it must be able to recognize any enemy aircraft as a potential threat; on the other hand, the model of the aircraft from its own army would have to be learned to be identified as friendly, and then would have to be erased by a technique known as unlearning. In this way, even if the drone were to be stolen or lost, the sensitive aircraft data contained in the AI model could not be extracted for malicious purposes. However, the Friendly Hackers team from Thales managed to re-identify the data that was supposed to have been erased from the model, thereby overriding the unlearning process. Exercises like this help to assess the vulnerability of training data and trained models, which are valuable tools and can deliver outstanding performance but also represent new attack vectors for the armed forces. An attack on training data or trained models could have catastrophic consequences in a military context, where this type of information could give an adversary the upper hand. Risks include model theft, theft of the data used to recognise military hardware or other features in a theatre of operations, and injection of malware and backdoors to impair the operation of the system using the AI. While AI in general, and generative AI in particular, offers significant operational benefits and provides military personnel with intensively trained decision support tools to reduce their cognitive burden, the national defence community needs to address new threats to this technology as a matter of priority.tmcnet.com, 11d ago
Ruben Cruz, designer and founder of The Clueless, told Euronews that they got the idea of creating AI models after their company suffered repeated losses due to the unreliability of human models. He explained: “We started analysing how we were working and realised that many projects were being put on hold or cancelled due to problems beyond our control. Often it was the fault of the influencer or model and not due to design issues”.Khaleej Times, 10d ago
John: Setting aside for the moment the more dire AI predictions, including human extinction, which would certainly have an impact on PKP, I’d point to our long-term AI efforts to create a publishing tool of great potential value to the users of open infrastructure. Moving from old-school rule-bound AI to current generative LLMs, we have been pursuing an open source markup tool that will largely automate the JATS-XML rendering of authors’ submissions as part of the editorial workflow. It could greatly improve typesetting for HTML and PDF outputs, indexing, and other publishing features. Our intermittent efforts on this initiative offer a good instance of the infrastructure developer’s dilemma. Against the pressing demands of platform and workflow refinements and upgrades, there’s the temptation to pursue the Holy Grail of, in this case, equitable access to key publishing tools. Fortunately, we’re not alone in this quest. We’re grateful for open AI developments by GROBID, Métopes, eLife, Coko, and others, including our own Vitaliy Bezsheiko. In fact, we’re at the point of seeking funding to assess, build out, and integrate the pieces needed to create a production-ready JATS-XML plugin and editor. We see it as a great example of how open infrastructure has the potential to benefit the whole industry.The Scholarly Kitchen, 27d ago
The further into the future we look, the harder it is to rule out the possibility that Musk’s prediction could come true. But a world in which AI eventually replaces all human workers would look a lot different from ours. While one of today’s fundamental economic problems is how to make the best use of scarce resources, Musk’s future is one of abundance, in which technology meets all our needs and inequality as we currently understand it no longer exists. Why accumulate wealth in a world of abundance? On the other hand, such a world could also exacerbate inequality, particularly if a relatively small number of people own the machines that are generating all the income.interest.co.nz, 20d ago
In Beltran de Heredia's opinion, the field in which we are most likely to see the first attempts to influence human behaviour through AI is that of work, more specifically occupational health. He argues that a number of intrusive technologies are currently in use. These include devices that monitor bus drivers to detect microsleep or electroencephalography (EEG) sensors used by employers to monitor employees' brainwaves for stress and attention levels while at work. "It's hard to predict the future but, if we don't restrict such intrusive technologies while they're still at the earliest stages of development, the most likely scenario is that they'll keep improving and spreading their tendrils in the name of productivity."...newswise.com, 5d ago
Even if we take out the existential risks of Artificial General Intelligence (AGI) coming to life to kill us, there are still plenty of real risks to worry about. The big ones that grab our attention won't be about practical and everyday threats like AI bias, inequity, civil rights violations, and pollution. They will be political red meat baiting topics like the use of these tools for child sexual abuse, terrorist attacks, or to empower the ‘Chinese threat,’ which we need to regulate to protect our values. This trajectory is already changing the landscape of encryption standards, finance, and reporting practices that could adversely impact privacy, autonomy, and civil rights.diginomica, 24d ago

But can AI be intelligent according to this definition? Understanding implies an understanding of the reasons for statements or decisions, something that AI so far cannot provide, and AI also does not have opinions because it is not a personality. According to Evgeny Morosov (2023), a severe critic of the cultural implications of AI, the concept of intelligence underlying AI concentrates on the mostly individual solving of problems, that is, on perception and prediction—the two tasks that deep learning knows how to carry out with the help of huge amounts of data.Open Access Government, 4d ago
Trust is deeply relational (Scheman 2020, Knudsen et al. 2021, Baier 1986), and has been understood in terms of the vulnerabilities inherent in relationships (Mayer et al. 1995). Yet discussions about trust in AI systems often reveal a lack of understanding of the communities whose lives they touch — their particular vulnerabilities, and the power imbalances that further entrench them. Some populations are expected to simply put their trust in large AI systems. Yet those systems only need to prove themselves useful to the institutions deploying them, not trustworthy to the people enmeshed in their decisions (Angwin et al. 2016, O’Neill 2018; Ostherr et al. 2017). At the same time, researchers often stop at asking whether we can trust algorithms, instead of extending the question of trust to the institutions feeding data into or deploying these algorithms.Data & Society, 4d ago
When AI starts by building extremely general models and then attempting to apply them to specific educational situations, risks abound. Thus, a second aspect of my proposal suggests that our efforts towards powerful, safe AI should begin with well-bounded problems. One that seems well suited to today’s AI is determining how to provide optimal supports for learners with disabilities to progress in mathematics problem solving. Although I believe parents are not willing to share their students’ data in general, I can imagine a collective of parents becoming highly motivated to share data if it might help their specific neurodiverse student thrive in mathematics. Further, only limited personal data might be needed to make progress on such a problem. Thus a second element of my proposal is (2) energize nonprofits that work with parents on specific issues to determine how to achieve buy-in to bounded, purpose-specific data sharing. This could involve a planning grant stage, which if successful, would result in money needed to establish a local privacy-protected method of sharing data.The Thomas B. Fordham Institute, 4d ago

In a rapidly evolving business technology landscape, artificial intelligence (AI) has emerged as a transformative force in management. The predictive capabilities of AI have equipped managers with data-driven foresight, enabling them to monitor and anticipate market trends, business risks, customer preferences, and employee behaviors, thereby facilitating more evidence-based decisions. However, as we explore the future of management, we recognize that the potential of AI extends beyond prediction. The emerging generative capabilities of AI represent a leap forward, fostering creativity and enabling innovative ideas, designs, and solutions. With its user-friendly interface, generative AI makes it easier for a broader swath of the population to get involved in AI-enabled problem solving. The synergies between the predictive and generative capabilities of AI are undeniable. Predictive insights fuel generative processes, while generative outputs enhance predictive accuracy. This powerful extension of AI, from prediction machines to generative problem-solvers, presents the potential for AI to transform a host of conventional management practices, heralding an era where artificial agents complement and potentially replace managers and knowledge workers in a variety of organizational settings. These developments have the potential to fundamentally alter the nature of the firm, the future of work, and management theories.AOM_CMS, 4d ago
But while we regulate its harms, we must not take away the benefits of this technology. I wondered what ChatGPT would say about its legacy. So, I asked it to write me a paragraph about it. This is what it came up with: “My legacy lies in the realms of facilitating communication, providing information, and aiding in problem-solving. Since my inception, I have been utilized by individuals and organizations worldwide, contributing to a wide array of discussions and endeavours. While I don’t possess personal experiences or ambitions, my impact is evident in the countless interactions where I’ve assisted users in generating content, learning new concepts, and fostering creative thinking. My legacy is intertwined with the evolving landscape of AI and its applications in enhancing human capabilities and understanding. The true measure of my legacy is reflected in the diverse ways people have harnessed my capabilities to connect, learn, and explore new frontiers in the digital era.”...Sify, 4d ago
...“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more sceptical than enthused. Across two of the four forums on the dark web we examined, we only found 100 posts on AI. Compare that to cryptocurrency where we found 1,000 posts for the same period. We did see some cybercriminals attempting to create malware or attack tools using LLMs, but the results were rudimentary and often met with scepticism from other users. In one case, a threat actor, eager to showcase the potential of ChatGPT inadvertently revealed significant information about his real identity. We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us”, said Christopher Budd, director, X-Ops research, Sophos.TahawulTech.com, 4d ago
In response to the introduction of this new type of technology in healthcare, the CAR has set up a Radiology AI Validation Network (RAIVN). This assembly consists of AI specialists in the field of radiology tasked with assisting with post-market assessment of AI applications. As a resource, RAIVN would serve as the national body responsible for evaluating the performance of these technologies and pre-identifying any potential issues that may affect patient care. While this program is still in its infancy, we are hopeful its integration will be smoothly executed in the months ahead. We also believe that the RAIVN framework can and should be applied more generally to all AI-based solutions in healthcare.Hospital News, 4d ago
Researchers from MIT and Meta AI have developed an object reorientation controller that can utilize a single depth camera to reorient diverse shapes of objects in real-time. The challenge addressed by this development is the need for a versatile and efficient object manipulation system that can generalize to new conditions without requiring a consistent pose of key points across objects. The platform can also extend beyond object reorientation to other dexterous manipulation tasks, with opportunities for further improvement highlighted for future research.MarkTechPost, 4d ago

Understanding the real needs of our students is crucial for universities to provide tailored and effective services. An AI student engagement platform goes beyond connecting with students; it ensures we have our finger on the pulse and truly understand what’s bugging our students! Through on-demand surveys, sentiment analysis and real-time analysis of trending topics and student queries, Cara has highlighted unexpected issues that we would not have identified previously. For example, we removed fees from our featured questions on Cara after the payment deadline. However, when Cara continued to get inquiries about fees it was clear there was a cohort of students in financial difficulties looking for support, so we reinstated fee-related features. Similarly, we found that during the quiet summer period students engaged with Cara more than we expected. We now feature questions related to the main concerns that came up in those questions: support available over the summer and how to get a summer job.THE Campus Learn, Share, Connect, 4d ago
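A hypothetical sketch of the kind of trend detection described above: compare how often query topics appear in the current period against a baseline and flag topics that spike unexpectedly, as the fee questions did after the payment deadline. The topic labels, counts, and thresholds are invented for illustration and are not Cara's actual implementation.

```python
# Flag query topics that spike against a baseline period (illustrative data).
from collections import Counter

baseline = Counter({"fees": 12, "timetables": 40, "summer jobs": 8, "exams": 55})
this_week = Counter({"fees": 48, "timetables": 38, "summer jobs": 30, "exams": 20})

def spiking_topics(baseline, current, min_count=10, ratio=2.0):
    """Topics whose volume at least doubled and clears a minimum count."""
    flagged = []
    for topic, count in current.items():
        base = baseline.get(topic, 1)
        if count >= min_count and count / base >= ratio:
            flagged.append((topic, base, count))
    return sorted(flagged, key=lambda t: t[2] / t[1], reverse=True)

for topic, base, count in spiking_topics(baseline, this_week):
    print(f"'{topic}' rose from {base} to {count} queries: worth a closer look")
```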
A recent McKinsey report, led by authors Joe Caserta and Kayvaun Rowshankish, points out there is unrelenting pressure to "do something with generative AI". The report authors suggest IT and data managers "will need to develop a clear view of the data implications of generative AI." Perhaps most challenging "is generative AI's ability to work with unstructured data, such as chats, videos, and code," according to Caserta and his team. "Data organizations have traditionally had capabilities to work with only structured data, such as data in tables." This shift in data concerns means organizations need to rethink the overall data architecture supporting generative AI initiatives. "While this might sound like old news, the cracks in the system a business could get away with before will become big problems with generative AI."continuingedupdate.blogspot.com, 4d ago
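To illustrate one small piece of the architectural shift Caserta's team describes, the sketch below turns a raw chat transcript into structured, provenance-tagged chunks that downstream generative-AI components (search indexes, vector stores) can work with. The field names and chunking rule are assumptions for the example, not a recommendation from the report.

```python
# Turn unstructured chat transcripts into structured, metadata-tagged chunks.
from dataclasses import dataclass
from typing import List

@dataclass
class Chunk:
    source_id: str
    position: int
    text: str

def chunk_transcript(source_id: str, transcript: str, max_chars: int = 200) -> List[Chunk]:
    """Split a raw transcript into bounded-size chunks, keeping provenance metadata."""
    chunks, buf, pos = [], "", 0
    for line in transcript.splitlines():
        if buf and len(buf) + len(line) > max_chars:
            chunks.append(Chunk(source_id, pos, buf.strip()))
            pos, buf = pos + 1, ""
        buf += line + "\n"
    if buf.strip():
        chunks.append(Chunk(source_id, pos, buf.strip()))
    return chunks

transcript = ("agent: Hi, how can I help?\n"
              "customer: My invoice looks wrong.\n"
              "agent: Let me check that for you.")
for chunk in chunk_transcript("chat-0001", transcript, max_chars=60):
    print(chunk.position, "|", chunk.text.replace("\n", " / "))
```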
All of the panelists acknowledged that there were both advantages and challenges to working with generative AI models. None claimed that these models were a panacea. But even in the early stages of generative AI, each company has been able to generate high degrees of productivity, increase customer satisfaction, and have extraordinarily fast time to market with their products, services and ideas.thestarphoenix, 4d ago
The reported tension between Toner and Altman may smack of personal politics, but it is also a microcosm of a broader tension in the world of AI research as to the field’s goals and the best—or least dangerous—ways to get there. As I wrote recently in this newsletter, there are, broadly, two schools of thought when it comes to the potential dangers of AI research. One focuses on the risk that people will unwittingly give birth to an all-powerful artificial intelligence, with potentially catastrophic results for humanity. (Many believers in effective altruism fall into this camp.) Geoffrey Hinton, seen by many in the field as the godfather of modern AI research, said recently that he left Google specifically so that he could raise the alarm about the dangers of super-intelligent AI. Last month, President Biden issued an executive order in an attempt to set boundaries for the development of AI; this week, sixteen countries including the US agreed to abide by common research standards.Columbia Journalism Review, 4d ago
Lead co-author professor Carl Frey, Dieter Schwarz associate professor of AI & Work at the Oxford Internet Institute and Director of the Oxford Martin Programme on the Future of Work, said, “The computer revolution and the rise of the Internet has connected talent from all around the world yet, rather than accelerating as many predicted, studies have shown that breakthrough innovation is in decline. Our paper provides an explanation for why this happens: while remote collaboration via the internet can bring together diverse pools of talent, it also makes it harder to fuse their ideas. Today, there is much talk about Artificial Intelligence supercharging innovation. Yet many predicted the same with the advent of the PC and the Internet. This should serve as a reminder there is unlikely to be a pure technological solution to our innovation problems.”...Lab Manager, 4d ago
...“As ethical considerations surrounding AI become more prominent, it is important to take stock of where the recent developments have taken us, and to meaningfully choose where we want to go from here. The responsible future of AI requires vision, foresight, and courageous leadership that upholds ethical integrity in the face of more expedient options. Explainable AI, which focuses on making machine learning models interpretable to non-experts, is certain to become increasingly important as these technologies impact more sectors of society, and both regulators and the public demand the ability to contest algorithmic decision-making. Both of these subfields not only offer exciting avenues for technical innovation but also address growing societal and ethical concerns surrounding machine learning.”...electronicspecifier.com, 4d ago

The Executive Order on the development and use of artificial intelligence (AI) issued by President Biden on October 30 is a directive that contains no fewer than 13 sections. But two words in the opening line strike at the challenge presented by AI: “promise” and “peril.” As the document’s statement of purpose puts it, AI can help to make the world “more prosperous, productive, innovative, and secure” at the same time that it increases the risk of “fraud, discrimination, bias, and disinformation,” and other threats.
Among the challenges cited in the Executive Order is the need to ensure that the benefits of AI, such as spurring biomedical research and clinical innovations, are dispersed equitably to traditionally underserved communities. For that reason, a section on “Promoting Innovation” calls for accelerating grants and highlighting existing programs of the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program from the National Institutes of Health (NIH). And the Colorado School of Public Health is deeply involved in the initiative.
ColoradoSPH helps ensure that artificial intelligence serves and empowers all people
AIM-AHEAD is a national consortium of industry, academic and community organizations with a “core mission” to ensure that the power of AI is harnessed in the service of minorities and other groups historically neglected or poorly served by the healthcare system. A key focus – though not the only one – is using AI to probe electronic health records (EHRs), which can be rich sources of clinical and other data.
“The goal of [AIM-AHEAD] is to use this technology to try to eliminate or better understand and address health disparities,” said Evelinn Borrayo, PhD, associate director of research at the Latino Research and Policy Center (LRPC) of ColoradoSPH and Director for Community Outreach and Engagement at the CU Cancer Center. “This consortium is about the inclusion of communities that historically tend to be left behind.” Borrayo and Spero Manson, PhD, director of the Centers for American Indian and Alaska Native Health (CAIANH) at ColoradoSPH, co-direct the North and Midwest Hub of the AIM-AHEAD initiative, a sprawling 15-state area. Both are also members of the AIM-AHEAD Leadership Core.
The hub, which is housed within CAIANH and ColoradoSPH, serves a variety of “stakeholders” who can help to develop AI, including Hispanic/Latino community health organizations, tribal epidemiology centers, urban Indian health centers, and more.
Addressing the shortfalls of AI and machine learning development
Manson acknowledged that the last decade has brought “an explosion of interest as well as investment” in exploring the promise of AI and machine learning (ML) – which uses algorithms to train computers to perform tasks otherwise assigned to humans – and applying that knowledge to improving healthcare.
“There have been substantial areas of achievement in that regard,” Manson said. But he said the work has also revealed “substantial bias” in the algorithms and predictive models as they are applied to “underrepresented and marginalized populations.” He noted, for example, that the data in EHRs may be incomplete because of barriers to care that people face, including socioeconomic status, race and ethnicity, and geography.
In that situation, AI and ML don’t correct for these factors because the technology uses the EHR itself to analyze the data and make predictions, Manson said. That’s why deepening the reservoir of data in EHRs and other repositories is imperative for the development of AI and ML, he said. “The idea is to improve healthcare for all citizens, not just those that have benefited narrowly in the past,” he noted.
Improving the diversity of the AI workforce
In addition, the workforce of scientists working on AI and ML lacks diversity, while the benefits of research in the field have not yet adequately spread to underserved communities, Manson said. The North and Midwest Hub has undertaken several “outreach and engagement” projects to meet the goals of AIM-AHEAD, with ColoradoSPH playing a significant role.
For example, two pilot projects aim to build capacity for applying AI and ML to aid communities. In one, Clinic Chat, LLC, a company led by Sheana Bull, PhD, MPH, director of the mHealth Impact Lab at ColoradoSPH, is collaborating with Tepeyac Community Health Center, which provides affordable integrated clinical services in northeast Denver. The initiative, now underway, uses chatbots to assist American Indian/Alaska Native and Hispanic/Latino people in diagnosing and managing diabetes and cancer. A second project is working toward incorporating AI and ML coursework into the curriculum for students earning ColoradoSPH’s Certificate in Latino Health.
“It’s an opportunity to introduce students to how using AI and ML can help us understand and benefit the [Latino] population,” Borrayo said. The idea is to build a workforce with the skills to understand the unique healthcare needs of Latinos and apply AI and ML skills to meet them, she added. “One of the approaches we are also taking is reaching students in the data sciences,” Borrayo said. “We can give those students the background and knowledge about Latino health disparities so they can use those [AI and ML] skills as well.”
Building a generation that uses AI to improve healthcare
Manson also noted that the North and Midwest Hub supports Leadership and Research fellowship programs, which are another component of what he calls “an incremental capacity-building approach” to addressing the goals of AIM-AHEAD. “We’re seeking to build successive generations, from the undergraduate through the doctoral/graduate to the early investigator pipeline, so these individuals move forward to assume positions of leadership in the promotion of AI and ML,” Manson said.
Borrayo said that she is most interested in continuing to work toward applying solutions for these and other issues in communities around the region. She pointed to the Clinic Chat project as an example of how AI and ML technology can be used to address practical clinical problems. “I think understanding the data, algorithms and programming is really good for our underrepresented investigators to learn,” she said. “But for our communities, I think the importance lies in the application. How can we benefit communities that are typically left behind or don’t have access to healthcare in the ways most of us do?”
For Manson, a key question is how members of American Indian/Alaska Native, Latino, and other communities can “shift” from being “simply consumers and recipients” of work in AI and ML and “become true partners” with clinicians and data specialists in finding ideas that improve healthcare. “The field will be limited in terms of achieving the promise [of AI and ML] until we have that kind of engagement with one another,” Manson said.cuanschutz.edu, 4d ago
...2. Creative Use of Virtual and Augmented Reality: In 2024, disregarding AR/VR is not a practical choice for those who may have successfully steered clear of it until now. The future promises immersive experiences through Virtual Reality (VR) and Augmented Reality (AR). It is imperative to broaden our reach by transforming events into interactive adventures, surpassing the traditional confines of in-person gatherings. VR and AR present opportunities for creative event elements, ranging from virtual tours, engaging presentations, venue explorations, hybrid participation, and live demonstrations to an extensive list of further possibilities. Of course, being at the forefront of these innovations is crucial, and falling behind is not an option.
3. Elevating AI Proficiency: In 2024, the mastery of Artificial Intelligence (AI) will be a pivotal skill, transcending the trends of the day and becoming essential for unlocking unparalleled potential in event management. As an event manager, delving into AI-powered tools becomes imperative for transforming how you understand and cater to attendee preferences and behaviors. The once aspirational task of gleaning deep insights will be at your fingertips. Integration of AI into your event management repertoire will enable you to craft personalized event experiences, elevating attendee satisfaction, fostering heightened engagement, and positioning you as an event maestro, finely attuned to the pulse of your audience.eventcombo.com, 4d ago
In the legal sphere, particularly in cases of complex digital advertising litigation involving AI, the interplay between AI and human expertise becomes pivotal. Expert witnesses play a crucial role in bridging the technical complexities of AI with the legal world, elucidating how AI functions, its limitations, and potential biases. They also emphasize the human element in AI, emphasizing the roles humans play in designing, training, and deploying AI algorithms. As AI's significance continues to grow in digital advertising, the synergy between AI and human intelligence will shape the future of the industry and its legal landscape.24-7 Press Release Newswire, 4d ago

I think a bunch of people have the intuition that at least part of what makes a value change (il)legitimate has to be evaluated with reference to the object level values adopted. I remain so far skeptical of that (although I recognize it's a somewhat radical position). The main reason for this is that I think there is just no "neutral ground" at the limits on which to evaluate. So while pragmatically we might be forced to adopt notions of legitimacy that also refer to object level beliefs (and this might be better than the practically available alternatives), I simultaneously think this is conceptually very dissatisfying and am skeptical that such an approach will be principled enough to solve the ambitious version of the alignment problem (i.e. generalize well to sufficiently powerful AI systems).lesswrong.com, 18d ago
...“It’s really important that we all stay aware of what the technology can do, because if we know what it can do, we’re more likely to be able to see past some of the stuff that’s being put out there,” she said. “If you’re not aware of deepfakes and you don’t know anything about it, then you’re not going to be able to say ‘hey, I wonder if that’s a deepfake.’ If you’re not aware of what image generators can do, then you’re not even going to question different media that you [find].”
Thompson was also optimistic at the passage of A.B. 783 in October, which will establish media literacy curricula for K-12 in California, although its requirements will not be realized for some time.
On the computer science side of GenAI, VCOE’s Director Technology Infrastructure Stephen Meier said part of demystifying the technology is understanding it as a tool, not an agent. “We have this idea that because we’ve abstracted the person out of the machine, that the machine, the AI, is now infallible. But we have to remember the machine was created by fallible creatures, and we can’t take the fallibility out of the machine,” he said. “The other part that goes along with that is, you have OpenAI, who is arguably driving the AI conversation … is now controlled by six people, or potentially now four people. That is something that really concerns me, as AI becomes pedagogical, wrapped up in what we’re teaching our students.”
Meier said another risk with GenAI lies in the data it was trained on. He made an analogy with the MOVEit hack earlier this year, in which a foreign actor essentially poisoned a software company’s product and infected that company’s clients around the world by extension. “They’re already doing that today with data set poisoning,” he said. “If you find one of these AI companies that have these large training models, if you get a bad actor in there who poisons that data, you’re now getting bad results.”
Looming questions aside, the panelists were broadly optimistic that the challenges of AI will be solved. Reina Bejerano, chief technology officer at Oxnard Union High School District, said parents seem to be receiving the evolution of AI fairly well. She likened it to social media — they don’t really know what Snapchat or TikTok are, but they know their kids use them, and they’re generally curious to learn more. She said her district had some success hosting parent nights with dinner and conversations about these emerging apps and tools.
Bejerano cited Khanmigo, a custom tool that can adjust its answers to prompts if a student doesn’t understand them, as an example of one that already seems to be having a positive impact. “It really is giving students autonomy, it’s giving them that freedom to learn in their own way, and it’s allowing them to be vulnerable,” she said. “In my opinion, I’m seeing more engagement, and higher engagement, than I’ve seen before because students have this autonomy and they’re able to feel vulnerable, and then they end up learning more.”...GovTech, 5d ago
...“SMEs looking to ethically incorporate AI should seek to understand the problems they are attempting to solve to be sure that AI solutions are fit for purpose and can solve their challenges rather than add to them. In hiring for example, AI solutions should be proactive in identifying and addressing potential biases, acknowledging their substantial impact on individuals and communities. Clarity on AI algorithm development and functionality is crucial with an emphasis on the need to train algorithms on diverse, representative data to reduce bias.Dynamic Business, 12d ago
The prospect that AI might not only match but potentially surpass human capabilities in both creativity and curation is a provocative and transformative idea. When we consider the realms of creativity and curation, we typically attribute superior prowess to the human mind with its intrinsic understanding of emotional depth, cultural nuances, and historical context. However, AI's rapid advancement paints a different picture. With its ability to process and analyze vast datasets far beyond human capacity, AI can uncover patterns, inspirations, and connections that might escape even the most learned human curator or the most creative artist. In terms of creativity, AI can synthesize and recombine elements from a vast array of styles, periods, and mediums, potentially leading to groundbreaking new forms of art that challenge our conventional understanding of creativity. In curation, AI's objectivity and expansive analysis could offer a more democratized and inclusive approach, unshackling art from the subjective preferences and potential biases of human gatekeepers.Psychology Today, 22d ago
Now a 17-year-old junior at Eden Christian Academy, Luca said a tool like this would have helped him appreciate written stories in a way he never could. "I think my reading level would probably be at 11th grade, right where I should be right now," he said.
As Luca goes off to study business in college, an animated version of his face will continue to guide readers as a digital mascot for Luca.ai. Mr. Sosso said the Dora the Explorer-like icon will become increasingly expressive with generative AI.
The Luca.ai site requires learners to gain parental approval, under the Children's Online Privacy Protection Act. It was also built with safeguards to protect user information while training its AI models.
The platform is still technically in beta mode, but it is now open to paid subscribers for a $15 per user, per month fee. Families with more than one child can get 50 percent off, or prepay for $150 a year. Mr. Sosso recommends 15-minute daily segments to consistently build the reading muscle. He said the privacy of learning online helps build confidence.
The team hopes to release an app version of Luca.ai early next year, followed by classroom integration. The company is already working with Provident Charter Schools, which focuses on dyslexia and other language-based learning differences. The school has locations in Troy Hill and Beaver County. Twenty students from Provident last year helped to refine the user interface.
Students from Carnegie Mellon University also helped develop the app, seizing on the opportunity to integrate ChatGPT. "That's when I realized that the platform had an opportunity to take off, because now we were really able to deliver a custom learning experience for every student," Mr. Sosso said, giving special credit to the CMU students: "They're really on the cutting edge of things." The phonetic-recognition software, powerful enough to decipher distinct pronunciations, was also the result of collegiate research at the University of Michigan.
Luca.ai currently has about 50 users, but Mr. Sosso said that's with virtually no promotion. He said he's already had multiple inquiries from local public schools. "The interest is there," he said.
©2023 the Pittsburgh Post-Gazette. Distributed by Tribune Content Agency, LLC.GovTech, 17d ago

Malyon outlined the goal to lower the typical 800 reports from a grand prix to a more manageable 50, easing the strain on FIA staff. In addition to Computer Vision, the FIA trialled Catapult in Abu Dhabi, a system designed to improve car location accuracy. Chris Bentley, Single-Seater Head of Information Systems Strategy, mentioned how similar technology is used in NFL for player identification and how it could enhance the FIA's live feeds. He said: "There are examples in NFL where they can identify every player on the pitch, even if they're in a big huddle. We can also use that technology on our live feeds. That will be the same as the new tool, and then we will be able to draw the 'lines of interest'. And then the AI would learn as it goes along."...electronicspecifier.com, 4d ago
Its findings include feedback on how respondents are measuring their model’s effectiveness, how confident they feel that their models will survive production, and whether they believe generative AI is worth the hype. Tuning in, you’ll hear our panelists’ thoughts on key questions in the report and its findings, along with their suggested solutions for some of the biggest challenges faced by professionals in the AI space today. We also get into a bunch of fascinating topics like the opportunities presented by synthetic data, the latent space in language processing approaches, the iterative nature of model development, and much more. Be sure to tune in for all the latest insights on the ML Pulse Report!...SAMA, 4d ago
These current conversations may serve as a conduit for cultivating ‘topic trust’ between nations in conflict. Perhaps the U.S. and China can agree that AI is a powerful tool that, if not utilized properly, could have serious consequences. AI stands as a potent instrument capable of faster data processing, augmenting educational experiences, and disseminating information. However, as with many technological innovations, this power is accompanied by the potential to instigate fear, obscurity, and disinformation. Consequently, a pertinent question arises: Can the United States and China effectively harness such technological prowess without precipitating mutual destruction?...Modern Diplomacy, 4d ago
Due to the current situation, the Commission presented a possible compromise text on November 19, 2023. Although it maintained Parliament’s tiered approach, it also significantly softened the regulation. Firstly, the term “foundation model” no longer appears in the text. Instead, the Commission distinguishes between “general-purpose AI models” and “general-purpose AI systems” – according to the Commission’s definition, however, these terms continue to correspond to the terms “foundation model” and “general-purpose AI” introduced by the Parliament. According to the proposal, providers of general-purpose AI models should, among other things, be obliged to document the functionality of their AI models by means of so-called “model cards.” If the AI model poses a systemic risk – which should initially be measured in terms of computing power – they are subject to additional monitoring obligations. The text also contains an article according to which the Commission is to draw up non-binding codes of practice. This refers to practical guidelines, for example, on the implementation of model cards, on the basis of which players can ensure their compliance with the AI Act. However, possible sanctions are not mentioned.Tech Policy Press, 5d ago
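For providers wondering what the documentation duty might look like in practice, the sketch below is purely illustrative: the compromise text prescribes no schema and the codes of practice have not yet been drafted, so every field name here is an assumption about what a "model card" for a general-purpose AI model might record (including the training-compute figure, which matters because systemic risk is initially measured in terms of computing power).

```python
# Purely illustrative model card for a general-purpose AI model; all field
# names are assumptions, since the AI Act compromise text prescribes no schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    provider: str
    intended_uses: list[str]
    known_limitations: list[str]
    training_data_summary: str       # high-level description, not raw data
    training_compute_flops: float    # relevant because systemic risk is initially tied to compute
    evaluation_results: dict[str, float] = field(default_factory=dict)

card = ModelCard(
    model_name="example-gp-model-7b",   # hypothetical model
    provider="Example AI GmbH",         # hypothetical provider
    intended_uses=["text summarisation", "drafting assistance"],
    known_limitations=["may state false facts", "not evaluated for medical advice"],
    training_data_summary="Web text and licensed corpora, filtered for personal data.",
    training_compute_flops=3.2e24,
    evaluation_results={"benchmark_accuracy": 0.61, "toxicity_rate": 0.012},
)
```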
There’s also the way we find love and romance. Already, AI dating tools are aping online dating, except the person on the other end isn’t a person at all. There’s already one company that has AI doing the early-stage we-met-on-Tinder thing of sending flirty texts and even sexy selfies, and (for a fee) sexting and segueing into being an online girlfriend / boyfriend. Will most people prefer the warm glow of a phone screen to an actual human? I don’t think so. But enough will to, I suspect, cause a lot of problems. Because while on the one hand it seems fine if under-socialized humans can find some affection with a robot, I question whether directing an under-socialized person’s already-limited social skills to a robot designed to always be amenable and positive and accommodating and compliant really serves that person, or anyone who has to be around that person. Interacting with other human beings can be challenging. That’s part of the point: It’s how we learn to regulate our own emotions and consider those of others; it’s how we start to discern which people are our people; it’s how we learn to compromise and negotiate and build layered and meaningful relationships. You can’t get that from AI. But what you can get is a facsimile of a human being who more or less does what will make you happy in the moment—which, again, is not at all a recipe to be happy in the aggregate.Ms. Magazine, 5d ago

Top

Generative AI is not without its doubters. Some have voiced concerns that algorithms trained on current practices may amplify and extend existing biases with regard to gender, race and ethnicity. Regulation is another area of concern, with many governments having expressed a willingness to aggressively circumscribe the adoption of generative AI. There are also potential workforce issues. Many banks have been careful to emphasize that their interest in generative AI is to augment their existing employees. But some believe that the adoption of efficient technologies will lead to job losses. Finally, generative AI comes at a significant environmental cost. According to some estimates, training a single generative AI model consumes more energy than 100 American homes use in a year.spglobal.com, 26d ago
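That household comparison is easy to sanity-check with rough public figures. The two constants below are assumptions for illustration rather than numbers from the article: roughly 10,500 kWh of electricity per US household per year (an EIA-style ballpark) and roughly 1,300 MWh for one GPT-3-scale training run (a widely cited published estimate).

```python
# Back-of-envelope check of the "100 American homes" comparison; both constants
# are rough public estimates used only for illustration.
KWH_PER_US_HOME_PER_YEAR = 10_500   # approximate annual household electricity use
TRAINING_RUN_KWH = 1_300_000        # ~1,300 MWh for one large training run

homes_equivalent = TRAINING_RUN_KWH / KWH_PER_US_HOME_PER_YEAR
print(f"One training run is roughly the annual electricity of {homes_equivalent:.0f} US homes")
# -> about 124 homes, consistent with the "more than 100 homes" estimate above.
```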
In Beltran de Heredia's opinion, the field in which we are most likely to see the first attempts to influence human behavior through AI is that of work, more specifically occupational health. He argues that a number of intrusive technologies are currently in use. These include devices that monitor bus drivers to detect microsleep or electroencephalography (EEG) sensors used by employers to monitor employees' brainwaves for stress and attention levels while at work. "It's hard to predict the future but, if we don't restrict such intrusive technologies while they're still at the earliest stages of development, the most likely scenario is that they'll keep improving and spreading their tendrils in the name of productivity."...techxplore.com, 6d ago
But are technical standards enough? In 2022, NIST warned against an over-reliance on technical solutions to address the complex challenges of AI governance, including those relating to social, political, economic, and ethical concerns. Thus, as future AI systems become self-governing, self-improving, and self-adapting, technical controls alone may be insufficient to harness their potential while mitigating risk. NIST therefore called for development of a socio-technical approach that bridges the gap between technical and social standards and expectations.bloomberglaw.com, 28d ago