Latest

new Amidst the recent hype surrounding generative AI, experts like Ken Mugrage, Principal Technologist, Office of the CTO at Thoughtworks, are cautioning against overlooking more immediate concerns such as sustainability and bias while also recognizing the genuine value of these systems. Rather than viewing generative AI as all-encompassing chatbots, experts envision it as a class of tools designed for specific niches, offering innovative ways to navigate specialized information domains. This perspective—outlined in Mugrage’s recent piece published for MIT Technology Review—acknowledges that generative AI’s true significance lies in its capacity to interact with vast and complex datasets.CDInsights, 15h ago
new The conference kicked off with keynotes and comments from Black Hat and DEF CON founder Jeff Moss and Azeria Labs founder Maria Markstedter, who explored the future of AI risk — which includes a raft of new technical, business, and policy challenges to navigate. The show features key briefings on research that has uncovered emerging new threats stemming from the use of AI systems, including flaws in generative AI that make it prone to compromise and manipulation, AI-enhanced social engineering attacks, and how easily AI training data can be poisoned to impact the reliability of the ML models that depend on it. The latter, presented today by Will Pearce, AI red team lead for Nvidia, features research for which Anderson was a collaborator. He says the study shows that most training data is scraped from online sources that are easy to manipulate.darkreading.com, 17h ago
new Organizational Politics – Even when individual human constraints are effectively managed, there are still organizational challenges, starting with politics. Politics takes many forms: disagreements over priorities, power struggles, resistance from influential stakeholders, hidden agendas, information asymmetry and manipulation, the formation of alliances and coalitions, and fear of losing status leading to decision-making paralysis. Even the best AI systems cannot easily overcome human workplace politics.Healthcare Business Today, 1d ago
new There is a common misconception that AI functions independently without any human supervision. The majority of AI systems need human direction and supervision. Human participation is essential for the responsible and ethical implementation of AI.techgig.com, 1d ago
new With the above four aspects in mind, whether AI is an independent legal entity that can bear liability remains an evolving debate. While current medical AI tools do not yet carry their own consciousness, what should institutions and AI developers do when the machines reach this point? The paper asserts that technology should serve to advance human society. As such, machine autonomy should not override the need for human subjectivity. In other words, the authors argue that measures should be taken to prevent the development of fully independent AI systems that can function well beyond human control.Montreal AI Ethics Institute, 1d ago
new The primary concern with generative AI revolves around its decision-making process. Since AI systems learn from existing data, there's a risk of inheriting biased or flawed logic. This necessitates stringent checks and balances to ensure AI systems operate fairly and accurately, adhering to all regulatory standards.WriteUpCafe.com, 1d ago

Latest

new Our paper shows many examples of AI systems that have learned to deceive humans. Reinforcement Learning (RL) agents trained to play strategic games have learned to bluff and feint, while large language models (LLMs) will output falsehoods in creative ways that help achieve their goals. One particularly concerning example of AI deception is provided by Meta’s AI, CICERO, which was trained to play the alliance-building world-conquest game Diplomacy. Meta put a lot of effort into training CICERO to be “largely honest and helpful,” claiming that CICERO would “never intentionally backstab” its allies. But when we investigated Meta’s rosy claims by studying games that CICERO had played, we found that Meta had unwittingly trained CICERO to be quite effective in its deception.Montreal AI Ethics Institute, 1d ago
new When talking of brittle systems, many people remember the early symbolic AI programs that were rule-based and, hence, could not process anything outside of the scope of pre-defined knowledge. Did deep learning systems overcome that? Yes, unfamiliar inputs do not completely break them. But even the latest systems still make errors a human wouldn’t make [15-17]. We know that fine-tuned models may learn shortcuts [18-21]: undesirable spurious correlations picked up from the training data. We also know that slight variations in the phrasing of the prompt can lead to very different LLM output [22-24]: this phenomenon affected all 30 LLMs in a recent large-scale evaluation [25]. François Chollet [26] questions if deep learning systems can ever overcome this kind of brittleness: according to him, they are “unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators” (p.3).Montreal AI Ethics Institute, 1d ago
new In the ever-evolving landscape of health care, AI shows immense promise. AI refers to computer systems that mimic human cognitive functions such as learning and problem-solving, which can be performed with or without human supervision. From diagnostics to surgical precision, it is catalysing a transformation across the entire spectrum of medical care. Machine learning (ML) is a subfield of AI that enables machines to learn and make predictions by recognising patterns to support rational human decision-making and it is increasingly being applied to medicine. Deep learning, meanwhile, is a method in AI that teaches computers to process data in a way that is inspired by the human brain. Deep learning models can recognise complex patterns in pictures, text, sounds, and other data to produce accurate insights and predictions.theweek.in, 1d ago

Top

Human Error: While not yet perfect, AI tools can (with enough training) avoid errors that humans might otherwise make. For example, because AI systems operate using pre-set rules, they are less likely than a human to make mistakes when working through a large set of data.cicnews.com, 14d ago
A very important decision problem in this respect is understanding the distribution and flow of decision-making in an organization. Disaggregating decisions makes it possible for administrators to identify decision-making tasks that can be fully or partially automated versus those that must be performed by human beings. If a decision can be fully specified in advance — if X then Y — and if lots of data are available to classify situations — X or not X — then fully automated decision-making may be feasible. AI systems that play video games fall into this category: There is a clear goal of winning the game by getting the most points, and there are millions of previous games to learn from. Many successful implementations of AI, likewise, use automation at an abstract level but rely on human beings to make more fine-grained decisions at a local level. Thus, for instance, executives and engineers at a ride-sharing service have created a business model that can automate route-finding and billing in areas where there are standardized geospatial data available and lots of data about previous trips and rider demand patterns. But the human driver’s judgment is still required for passenger safety and navigation in crowded, cluttered environments. Organizations that want to adopt AI thus must make strategic decisions about organizational design and direction as well as ongoing operational decisions on a case-by-case basis.Texas National Security Review, 26d ago
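The "if X then Y" split described above is easy to make concrete. Below is a minimal, illustrative Python sketch — the ride-sharing condition names and actions are invented for the example, not taken from the article: decisions whose conditions and actions are fully specified in advance are automated, while everything else is escalated to a human.

```python
# Hypothetical example: a ride-sharing dispatch decision. The rule-based
# part ("if X then Y") is automated; ambiguous cases go to a human.

def decide(situation: dict) -> str:
    # Fully specifiable in advance: condition X (route and demand data
    # available) maps deterministically to action Y (dispatch and billing).
    if situation.get("route_known") and situation.get("demand_data"):
        return "auto: dispatch driver, compute route and fare"
    # Not fully specifiable: safety calls and cluttered-environment
    # navigation stay with the human driver/operator.
    return "escalate: human judgment required"

print(decide({"route_known": True, "demand_data": True}))   # automated
print(decide({"route_known": False, "demand_data": True}))  # escalated
```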
Implementing AI systems at scale can have wide societal consequences well beyond the scope of ISED’s mandate. This implies other government ministries and agencies also need to play a formative role in crafting the AIDA legislation. Such government-wide collaboration can build on ISED’s current work with Justice Canada, Global Affairs Canada and the Treasury Board Secretariat in Canada’s negotiations with the Council of Europe (COE) to develop a treaty on AI that prominently values human rights, democracy and the rule of law. The COE’s Consolidated Working Draft of the Framework Convention on AI, human rights, democracy and the rule of law provides useful material for Canada’s own AI regime. Not only is Canada’s AI regulatory regime expected to conform to the convention once ratified, but its general provisions, obligations and principles are better aligned with the goals of avoiding harm, building trust and advancing the public interest than anything ISED has made public so far. Other ministries with obvious contributions to make include Employment and Social Development Canada (labour), Public Safety Canada (cybersecurity) and Canadian Heritage (content creators and artists). The Office of the Privacy Commissioner also has an important, but so far neglected, role to play.Centre for International Governance Innovation, 28d ago
Regardless of its exact nature, ‘Q*’ potentially represents a significant stride in AI development, so the fact that it’s at the core of an existential debate of OpenAI rings true. It could bring us closer to AI systems that are more intuitive, efficient, and capable of handling tasks that currently require high levels of human expertise. However, with such advancements come questions and concerns about AI ethics, safety, and the implications of increasingly powerful AI systems in our daily lives and society at large.CoinGenius, 9d ago
Project HOMINIS: Human-centered Open-source Model for Intelligent, Neutral and Inclusive Systems. Project HOMINIS aims to revolutionize AI by creating ethical, bias-free AI systems using open, inspectable datasets. Addressing toxicity in web-scale datasets, it proposes sustainable alternatives to AI foundation models, reducing environmental impact. The project’s four primary goals include: 1) Compiling high-value, diverse datasets from various sources, including scientific papers and knowledge bases, while eliminating biased content; 2) Conducting extensive ablation studies on transformer models and exploring alternative architectures; 3) Optimizing the foundation model using advanced techniques, leading to a preliminary release for community collaboration; 4) Implementing instruction tuning for ethical alignment, culminating in the final release of a responsibly trained AI model. Additionally, HOMINIS seeks to reduce energy consumption using innovative methods like Flash Attention and routing, enhancing data processing efficiency, and improving model inference and knowledge integration. Real AI’s partnership with UNINA and NVIDIA: UNINA, the University of Naples Federico II, has a notable record of significant initiatives, including the first Apple iOS Developer Academy in 2016 and the Cisco Digital Transformation Lab in 2018; most recently (together with Real AI) it became a founding member of Europe’s first Human Centered AI Masters.MarkTechPost, 11d ago
OpenAI is a company dedicated to researching and deploying AI systems that are beneficial to humanity. They recognize the immense power of AI and prioritize developing systems that are safe and aligned with human values, placing those goals above profits. OpenAI is a leading force in the multimodal AI market, offering a range of innovative products and solutions including models such as GPT-4, DALL·E 2, and CLIP. GPT-4 is a powerful language model capable of processing both text and images, enabling versatile applications in text generation and image understanding. DALL·E 2 is an innovative AI system that creates images from textual descriptions, allowing for creative visual synthesis. CLIP efficiently learns visual concepts from natural language guidance, enabling various visual recognition tasks. These solutions collectively demonstrate OpenAI's expertise in integrating different modalities, offering advanced capabilities in understanding and generating content across text, images, and more.marketsandmarkets.com, 14d ago

Latest

new The playbook highlights the three unique and most crucial abilities that AI possesses for overcoming bottlenecks on the path to net zero. These include the ability to discern patterns, predict outcomes, and optimize performance in complex systems; the ability to accelerate the discovery and development of solutions like low-carbon materials and climate-resilient crops; and finally the ability to empower the sustainability workforce of the future.mid-east.info, 1d ago
new However, the bill is now hanging in the balance because of internal disagreement about some key aspects of the proposed legislation, especially those concerned with regulation of “foundation” AI models that are trained on massive datasets. In EU-speak these are “general-purpose AI” (GPAI) systems – ones capable of a range of general tasks (text synthesis, image manipulation, audio generation and so on) – such as GPT-4, Claude, Llama etc. These systems are astonishingly expensive to train and build: salaries for the geeks who work on them start at Premier League striker level and go stratospheric (with added stock options); a single 80GB Nvidia Hopper H100 board – a key component of machine-learning hardware – costs £26,000, and you need thousands of them to build a respectable system. Not surprisingly, therefore, there are only about 20 firms globally that can afford to play this game. And they have money to burn.the Guardian, 1d ago
new Artificial intelligence (AI) is at the forefront of boosting efficiency. AI systems can analyze large datasets, automate routine tasks, and even make predictive analyses, saving time and reducing the potential for human error.Techiexpert.com, 2d ago
new The findings from this study are crucial for academia and go well beyond it, touching the critical realm of AI ethics and safety. The study sheds light on the Confidence-Competence Gap, highlighting the risks of relying solely on the self-assessed confidence of LLMs, especially in critical applications such as healthcare, the legal system, and emergency response. Trusting these AI systems without scrutiny can lead to severe consequences: the study shows that LLMs make mistakes yet remain confident, which presents significant challenges in critical applications. Although the study offers a broader perspective, it suggests diving deeper into how AI performs in specific domains with critical applications. By doing so, we can enhance the reliability and fairness of AI when it comes to aiding us in critical decision-making. This focused research is crucial for advancing AI safety and reducing biases in AI-driven decision-making processes, fostering a more responsible and ethically grounded integration of AI in real-world scenarios.Montreal AI Ethics Institute, 1d ago
new Predictive work systems and automated testing equipment are already saving health systems time and money, and the benefits are sure to grow as the technologies advance. However, human oversight is still needed. A technician should still use their expertise to determine if a computer-generated solution is indeed appropriate, rather than trusting the proposed solution without critical thinking. It’s also important for health systems to engage with technicians when deploying AI tools—to gather feedback and ensure new procedures & technologies are enhancing their work experience rather than adding to their list of responsibilities.Healthcare Business Today, 2d ago
new Unsupervised learning, meanwhile, is a form of ML wherein AI algorithms are allowed to sift through large tranches of unlabeled data, in an effort to find patterns to classify. This kind of artificial intelligence can be deployed to a number of different purposes, such as creating the kind of recommendation systems that companies like Netflix and Spotify use to suggest new content to users based on their past consumer choices.Gizmodo, 2d ago
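As a rough illustration of the unsupervised-learning pattern described above, the sketch below clusters users by unlabeled consumption history and recommends what similar users favor. The data, features, and cluster count are all invented for the example, and scikit-learn is assumed to be available; real recommendation systems are far more elaborate.

```python
# Minimal sketch: cluster users by (unlabeled) hours spent per genre,
# then recommend to a new user whatever their cluster consumes most.
import numpy as np
from sklearn.cluster import KMeans

# Rows = users, columns = hours per genre (no labels anywhere).
history = np.array([
    [9.0, 0.5, 0.1],   # mostly genre A
    [8.5, 1.0, 0.0],
    [0.2, 7.5, 6.8],   # mostly genres B and C
    [0.0, 8.0, 7.2],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(history)

new_user = np.array([[8.8, 0.7, 0.2]])
cluster = model.predict(new_user)[0]
peers = history[model.labels_ == cluster]
print("recommend genre index:", int(peers.mean(axis=0).argmax()))
```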

Top

To harness the potential of generative AI, it's important to underscore the significance of maintaining good data hygiene and human oversight. This helps mitigate biases, ensure ethical and legal compliance, improve AI performance, and enable AI systems to work collaboratively with human experts to address complex supply chain challenges.supplychainbrain.com, 13d ago
...“Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision. In general, AI systems work by ingesting large amounts of labelled training data, analysing the data for correlations and patterns, and using these patterns to make predictions about future states.” – techtarget.com.jamaica-gleaner.com, 15d ago
...(Which, for instance, seems true about humans, at least in some cases: If humans had the computational capacity, they would lie a lot more and calculate personal advantage a lot more. But since those are both computationally expensive, and therefore can be caught out by other humans, the heuristic / value of "actually care about your friends" is competitive with "always be calculating your personal advantage." I expect this sort of thing to be less common with AI systems that can have much bigger "cranial capacity". But then again, I guess that at whatever level of brain size, there will be some problems for which it's too inefficient to do them the "proper" way, and for which comparatively simple heuristics / values work better. But maybe at high enough cognitive capability, you just have a flexible, fully-general process for evaluating the exact right level of approximation for solving any given problem, and the binary distinction between doing things the "proper" way and using comparatively simpler heuristics goes away. You just use whatever level of cognition makes sense in any given micro-situation.)...lesswrong.com, 16d ago

Latest

new Concretely, I worry that training AI systems to produce outputs which look good to human evaluators will lead to AI systems which learn to systematically deceive their overseers, e.g. by introducing subtle errors which trick overseers into giving a too-high score, or by tampering with the sensors that overseers use to evaluate model outputs.alignmentforum.org, 2d ago
new Once a system has been compromised, any attempt by the attacker to exploit their access inevitably triggers abnormal behaviour in some part of the system. AI tools that constantly monitor systems operations can be very effective in rapidly detecting such abnormalities. They can alert humans to their discovery and, in many cases, are able to initiate appropriate countermeasures in far less time than it would take a human to do the same.IT Brief UK, 2d ago
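A minimal sketch of the monitoring idea in the item above, assuming scikit-learn: fit an anomaly detector on normal telemetry, then flag and alert on deviations. The feature choices, thresholds, and sample values are illustrative assumptions, not from the article.

```python
# Train on normal system telemetry, then flag abnormal live samples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: [requests/sec, failed logins/min] under normal operation.
normal = rng.normal(loc=[100.0, 1.0], scale=[10.0, 0.5], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Live samples: one typical, one resembling post-compromise behaviour.
live = np.array([[102.0, 1.2], [450.0, 30.0]])
for sample, verdict in zip(live, detector.predict(live)):
    if verdict == -1:  # -1 marks an anomaly in scikit-learn's convention
        print(f"ALERT: abnormal activity {sample} -> notify humans, start countermeasures")
```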
new When ChatGPT and other generative AI tools were released to the public late last year, it was as if someone had opened the floodgates on a thousand urgent questions that just weeks before had mostly preoccupied academics, futurists, and science fiction writers. Now those questions are being asked by many of us—teachers, students, parents, politicians, bureaucrats, citizens, businesspeople, and workers. What can it do for us? What will it do to us? How do we use it in a way that’s both ethical and legal? And will it help or hurt our already-distressed democracy? Schneier, a public interest technologist, cryptographer, and internationally known internet security specialist whose newsletter and blog are read by a quarter million people, says that AI’s inexorable march into our politics is likely to start with small changes like using AI to help write policy and legislation. The future, however, could hold possibilities that right now we may have a hard time wrapping our minds around—like AI systems leading political parties or autonomously fundraising to back political candidates or causes. Overall, like a lot of other things, it’s likely to be a mixed bag of the good and the bad.harvard.edu, 2d ago
new The AI floodgates opened in 2023, but the next year may bring a slowdown. AI development is likely to meet technical limitations and encounter infrastructural hurdles such as chip manufacturing and server capacity. Simultaneously, AI regulation is likely to be on the way. This slowdown should give space for norms in human behavior to form, both in terms of etiquette, as in when and where using ChatGPT is socially acceptable, and effectiveness, like when and where ChatGPT is most useful. ChatGPT and other generative AI systems will settle into people’s workflows, allowing workers to accomplish some tasks faster and with fewer errors. In the same way that people learned “to google” for information, humans will need to learn new practices for working with generative AI tools. But the outlook for 2024 isn’t completely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will almost certainly be used to influence public opinion and stoke division. Meta may have banned the use of generative AI in political advertising, but this isn’t likely to stop ChatGPT and similar tools from being used to create and spread false or misleading content. Political misinformation spread across social media in 2016 as well as in 2020, and it is virtually certain that generative AI will be used to continue those efforts in 2024. Even outside social media, conversations with ChatGPT and similar products can be sources of misinformation on their own. As a result, another lesson that everyone – users of ChatGPT or not – will have to learn in the blockbuster technology’s second year is to be vigilant when it comes to digital media of all kinds. Tim Gorichanaz, Assistant Teaching Professor of Information Science, Drexel University. This article is republished from The Conversation under a Creative Commons license. Read the original article.GovTech, 2d ago
new Over the past year, researchers in the AI field have been debating the abilities of AI systems, both in private and on social media. Some have suggested that AI systems are coming very close to achieving AGI, while others have suggested the opposite is much closer to the truth. Such systems, all agree, will match and even surpass human intelligence at some point. The only question is when.techxplore.com, 2d ago

Top

Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.lesswrong.com, 26d ago
Distributed Intelligence, on the other hand, represents the analytical prowess of AI systems that excel in processing vast amounts of information, identifying patterns across multiple data sets, and providing consistent, objective analysis. It extends human capability by handling tasks that are too cumbersome or complex for the human brain, offering scalability and efficiency in problem-solving. This pole is crucial for making sense of big data, enabling predictive analytics, and supporting decision-making processes that benefit from a lack of emotional bias and the ability to synthesize diverse perspectives into coherent patterns.Integral Life, 27d ago
This research investigates problem-solving, the foundations of computational insights, and the role of prior knowledge. It advocates for incorporating insights from cognitive science into concepts, representations, and self-explanation to create flexible AI mathematicians. The research also calls for improved collaboration tools and more opportunities for convening. By emphasizing a multi-disciplinary approach, it anticipates that AI systems will contribute to a better understanding of human mathematical cognition, highlighting the pivotal role of joint efforts across diverse fields.MarkTechPost, 5d ago
This path is not inevitable. To improve human performance, we need to think beyond creating AI systems that seek to achieve artificial general intelligence or human parity. The emphasis on general intelligence is not just a chimera, but distracts from the more beneficial uses of digital technologies to expand human capabilities. Making machines useful to humans is not a new aspiration. Many people were working on this agenda as early as 1949, and many technologies that are foundational to our lives today, including the computer mouse, hyperlinks and menu-driven computer systems, came out of this vision. Machine usefulness is more promising today than in the past. An irony of our current age is that information is abundant but useful information is scarce. AI can help humans become better problem solvers and decision makers by presenting useful information. For example, an electrician can diagnose rare problems and accomplish more complex tasks when presented with useful information by AI systems.Tech Policy Press, 25d ago
In doing so, the CAIS model introduces a set of alignment-related affordances that are not immediately apparent in traditional models that view general intelligence as a monolithic, black-box agent. In fact, not only does the AI-services model contribute to the developmental and operational robustness of complex AI systems, but it also facilitates their alignment with human values and ethical norms through models of human approval. Specifically, the CAIS model allows for the introduction of several safety mechanisms, including the use of optimisation pressure to regulate off-task capabilities as well as independent auditing and adversarial evaluations to validate each service's functionality. Furthermore, functional transparency and the monitoring of the communication channels can mitigate the inherent complexity and opaqueness of AI algorithms and components, while enabling resource access control policies that can further constrain undesirable behaviours.lesswrong.com, 17d ago
Certainly these industry developments deserve a closer look than what lobbyists, congressional staff, the Securities and Exchange Commission and the Municipal Securities Rulemaking Board (MSRB) appear to have given them to date. It’s too soon to know whether the new GPT-4 and even-newer Claude2 AI models will scrape financial data (and images thereof) with sufficient accuracy and efficiency to equal or exceed the capabilities and efficiency of XBRL. But ultimately that will be far less important than the ability of evolving AI systems to ingest all this data — including decision-useful information that is not presented in basic annual financial statements, such as the footnotes and supplemental information, interim unaudited financial reports and budget documents — into a comprehensive database to then be sliced and diced in hundreds of different ways and packaged for myriad users as private goods, not public goods. Thus, the regulators now need to formulate forward-looking terms of use for how data from reports they receive will be employed and converted commercially to prevent appropriation of public information by private parties. As in hockey, they need to skate to where the puck is going. This new rival information technology may not be welcome news for companies that have laid down cash to produce conversion systems and hype the benefits of XBRL as the immediately obvious and only way to comply with the new federal law and anticipated regulatory requirements. The next generation of multimodal AI systems should soon be capable of reading words and numbers from all kinds of plain language financial reports and PDFs both new and old, ingesting the key data, formatting it into databases, calculating customized financial ratios, organizing peer group comparisons and using historical data to formulate the predictive value of key statistics (just like football). AI could easily challenge the wisdom of buying into a modern-day Edsel of governmental fintech. Not that the Edsel was a bad car: If the switching costs of implementing XBRL now and migrating to a different technology later are minuscule, then it’s obviously no big deal and its fans can enjoy their day in the sun. My concerns here could still be a false alarm, like a boy crying wolf. However, if myopic conversions to the less-flexible early-bird software “solutions” for FDTA are costly and time consuming, then a wait-and-see approach at both the regulatory and the local level may be wiser. This all reminds me of the early 1980s, when expensive “integrated financial management systems” operating on mainframe computers were procured by midsize municipalities, only to soon discover quite unhappily that microcomputers, cheaper software and ancillary data storage systems had swiftly leapfrogged that way of doing business and they had foolishly sunk capital into what quickly became “legacy” systems.Governing, 20d ago

Latest

new The dream of autonomous driving is rapidly becoming a reality, thanks to AI-powered systems that can perceive the environment, make decisions, and control vehicles. Data collected from sensors, cameras, and Lidar feed into AI algorithms, enabling self-driving cars to navigate complex urban environments and highways. The industry is heavily reliant on data to fine-tune these systems, with real-world testing providing crucial insights.Zephyrnet, 2d ago
new He pointed to the real-world example of this disparity in the digital equity space. While Denver’s digital equity needs are primarily centered around affordability of Internet service, other parts of the state face affordability and connectivity issues. Because of the collaborative relationships he built at the local level, Edinger feels well positioned to work with municipalities to advance technology use in different areas — including expanding connectivity and developing AI policy. Another major area he plans to focus on is accessibility. The state’s work in this space so far has included the launch of the Aira tool for Coloradans with low vision and a pilot program to help state workers better serve individuals with disabilities using VR, but there is more work to be done. Accessibility across state government is a critical piece to ensuring all constituents are served equitably, he said. And although Edinger expects emerging technologies to continue to impact government, he also underlined the importance of effective enterprise management systems. The technology working behind the scenes is an important foundation for any future innovation, he said. At the heart of Edinger’s vision for OIT is the people — employees and constituents alike. Empowering employees through things like process improvements, Lean, and other technology-enabled advances is the best way to make state government better, he said. “I will just say I am confident that investing in people and unlocking their potential is the way we will get there,” he said. The brief period of overlap between Edinger and Neal-Graves has helped enable a smooth transition for OIT to prepare the agency for future work under Edinger’s leadership. As Neal-Graves recently told Government Technology, the future for OIT is bright. He expects the state to continue advancing in digital government and digital equity.GovTech, 2d ago
new According to the Israeli Defense Forces, “Habsora” purportedly can use AI and “rapid and automatic extraction of updated intelligence” to generate recommended targets. Targeting systems that employ automated decision-making and AI technologies present serious concerns and are part of a worrying trend toward the deployment of autonomous weapons systems.Stop Killer Robots, 2d ago

Latest

new ...“So when we set out to build generative AI applications, we knew we had to address these gaps,” he said. “It has to be built in from the very start. Q lets you answer questions quickly, with natural language interactions, and you can easily chat, generate content and take action. And it’s all informed by an understanding of your systems, data repositories and operations.”...SiliconANGLE, 2d ago
AGI is an advanced form of AI. AI systems include “narrow AI” systems that do just one specific thing, like recognizing objects within videos, with a cognitive level lesser than humans. An AGI refers to systems that are generalists; that is, they can learn to do a wide variety of tasks at a cognitive level equal to or greater than a human. Such a system might be used to help a human plan a complex trip one day and to find novel combinations of cancer drug compounds the next.Fast Company, 3d ago
...24/7/365 uptime. Unlike human agents, AI doesn’t require rest. AI-powered chatbots or virtual assistants can provide 24/7/365 customer support on a website, via social media, messaging apps or any other channel the customer uses. Gone are the days of listening to endless hold music and eventually abandoning the call because it takes too long to reach a real person. These AI systems can promptly answer common customer questions, freeing human agents to tackle more complicated, nuanced issues AI isn’t yet equipped to handle. If the query posed is too complex for the AI, it can transfer customers to live agents.MarTech Series, 3d ago
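The hand-off described in that item can be sketched as a simple confidence threshold: the bot answers FAQs it can match well and transfers everything else to a live agent. The FAQ entries, the string-similarity matcher, and the 0.75 cutoff below are all illustrative assumptions; production systems use far richer intent models.

```python
# Toy routing logic: answer confidently matched FAQs, else hand off.
from difflib import SequenceMatcher

FAQ = {
    "what are your opening hours": "We are open 24/7 online.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

def answer(query: str) -> str:
    best_q, best_score = max(
        ((q, SequenceMatcher(None, query.lower(), q).ratio()) for q in FAQ),
        key=lambda pair: pair[1],
    )
    if best_score >= 0.75:          # confident enough to answer automatically
        return FAQ[best_q]
    return "Transferring you to a live agent..."  # too complex for the bot

print(answer("How do I reset my password?"))
print(answer("My invoice from March has a strange surcharge, why?"))
```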
AI hallucinations occur when AI systems produce incorrect or misleading information. These errors can have dire consequences in health care. For instance, imagine an AI system for diagnosing skin cancer, misclassifying a benign mole as malignant melanoma. Such misdiagnoses could lead to unnecessary and invasive treatments, causing significant distress and harm to the patient. These instances underscore the critical need for accuracy in AI systems in health care and highlight the importance of ongoing monitoring and improvement of these technologies to ensure patient safety.KevinMD.com, 3d ago
GANs have revolutionized the field of computer vision, and their application in generating game art adds a unique dimension to your résumé. Designing AI systems that can create realistic and diverse in-game assets, characters, or environments through GANs showcases your creativity and proficiency in AI-driven content generation. Employing GANs in game development not only enhances visual aesthetics but also demonstrates your ability to leverage cutting-edge technology for practical applications.Analytics Insight, 3d ago
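For reference, the GAN setup mentioned above boils down to two networks trained against each other. The PyTorch sketch below is a toy version with random stand-in data and tiny 8x8 "sprites", not a production asset pipeline; sizes and hyperparameters are arbitrary choices for illustration.

```python
# Minimal GAN training loop: G proposes 8x8 "sprites", D judges real vs fake.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8 * 8), nn.Tanh())
D = nn.Sequential(nn.Linear(8 * 8, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss = nn.BCELoss()

real = torch.rand(32, 8 * 8) * 2 - 1        # stand-in for real 8x8 assets
for _ in range(100):
    z = torch.randn(32, 16)
    fake = G(z)
    # Discriminator step: push real toward 1, generated toward 0.
    d_loss = loss(D(real), torch.ones(32, 1)) + loss(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: try to make D score the fakes as real.
    g_loss = loss(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```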
Fraud detection is a constant challenge for the financial industry. Traditional methods rely on rule-based systems and pattern recognition to identify fraudulent activities. However, these methods can be limited in their ability to detect sophisticated and evolving fraud techniques. Quantum AI, with its advanced computational capabilities, can analyze large volumes of data in real-time and identify patterns and anomalies that may indicate fraudulent behavior. This can help financial institutions detect and prevent fraud more effectively, protecting both their customers and their own assets.Techiexpert.com, 3d ago
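One simple version of the real-time screening that item describes is a running outlier check per account, sketched below. Real (and certainly quantum-enhanced) systems are far richer; the 4-sigma threshold, the 10-transaction warm-up, and the sample amounts are invented for illustration.

```python
# Streaming anomaly check: flag transactions far outside an account's
# running mean/variance (Welford's online algorithm).
import math

class AccountMonitor:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0  # running count, mean, M2

    def observe(self, amount: float) -> bool:
        """Return True if the transaction looks anomalous, then learn it."""
        anomalous = False
        if self.n >= 10:  # need some history before judging
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(amount - self.mean) / std > 4
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)
        return anomalous

monitor = AccountMonitor()
for amt in [20, 25, 18, 22, 30, 27, 19, 24, 21, 26, 23, 2500]:
    if monitor.observe(amt):
        print(f"flag for review: ${amt}")
```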

Latest

With the increasing integration of frontier large language models (LLMs) into society and the economy, decisions related to their training, deployment, and use have far-reaching implications. These decisions should not be left solely in the hands of frontier LLM developers. LLM users, civil society and policymakers need trustworthy sources of information to steer such decisions for the better. Involving outside actors in the evaluation of these systems - what we term 'external scrutiny' - via red-teaming, auditing, and external researcher access, offers a solution. Though there are encouraging signs of increasing external scrutiny of frontier LLMs, its success is not assured. In this paper, we survey six requirements for effective external scrutiny of frontier AI systems and organize them under the ASPIRE framework: Access, Searching attitude, Proportionality to the risks, Independence, Resources, and Expertise. We then illustrate how external scrutiny might function throughout the AI lifecycle and offer recommendations to policymakers.thetalkingmachines.com, 3d ago
Simultaneously, AI is expected to play a crucial role in addressing complex challenges, like the increasing fraud vigilance owing to generative AI and 'deepfake' technology, and CIOs grappling with 'shadow AI', as pointed out by Stu Bradley, Senior Vice President of Risk, Fraud and Compliance Solutions at SAS and Jay Upchurch, Chief Information Officer at SAS, respectively. Furthermore, multimodal AI and AI simulation are set to reach new frontiers, with potential applications in augmented reality [AR], virtual reality [VR], and simulating complex physical systems like digital twins, remarks Marinela Profi, AI/Generative AI Strategy Advisor at SAS.IT Brief Australia, 3d ago
US, UK and a dozen more countries unveil pact to make AI ‘secure by design’: More than a dozen countries, including the UK and US, released a detailed international agreement on keeping artificial intelligence safe from rogue actors, pushing companies to create AI systems that are “secure by design”. The document stresses that AI needs to be developed and deployed to keep customers and the wider public safe from misuse. It's important to note, though, that the agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers. (The Guardian, November 27, 2023)...Silverchair, 3d ago
Goal: Explore novel designs for generic AI agents – AI systems that can be trained to act autonomously in a variety of environments – and their implementation in software. We will study several versions of such “non-maximizing” agent designs and corresponding learning algorithms. Rather than aiming to maximize some objective function, our agents will aim to fulfill goals that are specified via constraints called “aspirations”. For example, I might want my AI butler to prepare 100–150 ml of tea, having a temperature of 70–80°C, taking for this at most 10 minutes, spending at most $1 worth of resources, and succeeding in this with at least 95% probability.alignmentforum.org, 3d ago
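Taking the post's tea example, an aspiration can be read as a set of interval constraints to satisfy rather than a score to maximize. The sketch below is an illustrative rendering of that reading, not the authors' implementation; the numeric ranges come from the example in the post, while the Aspiration class and outcome dict are invented.

```python
# An aspiration-based agent accepts any plan meeting all constraints,
# rather than pushing any single metric to its extreme.
from dataclasses import dataclass

@dataclass
class Aspiration:
    name: str
    low: float
    high: float

    def satisfied(self, value: float) -> bool:
        return self.low <= value <= self.high

tea_goal = [
    Aspiration("volume_ml", 100, 150),
    Aspiration("temperature_C", 70, 80),
    Aspiration("time_min", 0, 10),
    Aspiration("cost_usd", 0, 1),
    Aspiration("success_prob", 0.95, 1.0),
]

outcome = {"volume_ml": 120, "temperature_C": 75, "time_min": 8,
           "cost_usd": 0.4, "success_prob": 0.97}

print(all(a.satisfied(outcome[a.name]) for a in tea_goal))  # True: goal met
```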
AI systems are trained to look for patterns in large amounts of data. Based on these patterns, AI systems can make recommendations, suggest diagnoses, or initiate actions. They can potentially continually learn, becoming better at tasks over time.medicalxpress.com, 3d ago
The future of LLMs and their integration into our daily lives and critical decision-making processes hinges on our ability to make these models not only more advanced but also more understandable and accountable. The pursuit of explainability and interpretability is not just a technical endeavor but a fundamental aspect of building trust in AI systems. As LLMs become more integrated into society, the demand for transparency will grow, not just from AI practitioners but from every user who interacts with these systems.unite.ai, 3d ago

Latest

We imagine a future where AIs self-augment by continuously seeking out more and better training data, and either creating successor AIs or training themselves on that data. Often, these data will come from the AIs running experiments in the real world (doing science), deliberately seeking data that would cover a specific gap in their current capabilities, analogous to how human scientists seek data from domains where our current understanding is limited. With AI, this could involve AgentGPT-like systems that spin up many instances of themselves to run experiments in parallel, potentially leading to quick improvements if we are in an agency overhang.lesswrong.com, 3d ago
I love that JEB is run by a not-for-profit company, by scientists for scientists, focusing on curiosity-driven research and fostering scientific excellence while supporting the community. There have been so many changes in the publication landscape over the past couple of decades and, when the Open Science movement started out, I really thought, ‘Oh, this is the future’. But then so many for-profit Open Access journals started popping up, cluttering the publication landscape. Now, there's so much more noise and completely rubbish science being published, because it's not undergoing a rigorous review process overseen properly by scientific editors. Some Open Access journals are predatory and have made scientific publishing worse. At JEB, we care about our community, we care about supporting the authors and maintaining the integrity of the scientific process. Of course, we do need change in scientific publishing, but if you have not-for-profit journals run by scientists for the community, they will evolve their publication process in a way that meets the needs of the scientific community. They're not going to be chasing the next fad for profit, like some of the newer journals. I've published articles in some of the big Open Access journals and I've also observed how those journals have evolved over time, to the point where they have automated internal systems and become a ‘paper farm’. At some of these journals, the Editor doesn't select reviewers, because it's all automated by AI; it's even hard for an Editor to intervene in the article handling process to remove an offensive review or to reject a paper because of a fundamental scientific flaw. The system automatically assigns new reviewers until two reviewers accept an article for publication. Now, I refuse to interact with such journals. I will not review for them or submit papers, because I realize just how far they have taken the profit motive. The way forward is to support not-for-profit journals run by the scientific community for the community.The Company of Biologists, 3d ago
NASHVILLE, TN, UNITED STATES, December 1, 2023 /EINPresswire.com/ -- Department store chain Macy’s, Inc. could see over $7.5 billion in business improvements by the end of the decade due to Artificial Intelligence (AI), according to new research from IHL Group, a leading technology research and advisory firm. The Retail AI Readiness Profiles research covers nearly 200 North American public retailers and restaurant chains, evaluating the companies based on AI Readiness and providing invaluable insights into the potential impact AI can bring to their organizations. According to the research, Macy’s could see as much as $3.8 billion in increased sales, $2.1 billion in improved gross margins through lower product costs, more optimized pricing, and supply chain improvements, and a $1.7 billion reduction in sales and general administrative costs through 2029. “Our research approach was to start by looking at opportunities from an industry level, then to the segment and specific retailer level, leveraging our public and private data,” said Greg Buzek, President of IHL Group. “We then applied a 9-point algorithm to each company that measured items like data maturity, analytics maturity, alignment with key vendors, as well as free cash flow.” The research includes gains that can be made through traditional AI/ML technologies, Generative AI, and the potential for Artificial General Intelligence. These figures do not include any cost savings from workforce reduction; instead, they focus solely on creating greater efficiency to support growth and lower expenses. In total, each of the retailer profiles includes the following data: • Total AI Impact from 2022-2029: Combined impact from traditional AI/ML, Generative AI, and Artificial General Intelligence. • Annual Impact by Income Statement Category: Gains in sales, gross margins, or lower operating costs. • Total AI Readiness Score and Rankings vs Competitors: Shows competitiveness in segment and overall retail market. • AI Impact by Line of Business: Explore the AI potential in Merchandising/Supply Chain, Sales & Marketing, Commerce, Infrastructure, BI/Analytics, Store Systems, and other areas such as Collaboration, ERP, and Legal. • Benefits by Specific Solutions: For instance, under Merchandising/Supply Chain, gain insights on benefits gained via Order Management, Assortment and Allocation Planning, Distribution Systems, Warehouse Management, etc. For a glimpse into the rich data and insights provided by these profiles, you can access the Macy’s profile here. The Retail AI Readiness Profiles are available for individual companies, or enterprises can access the entire directory of profiles with ongoing access to updated data as systems evolve. About IHL Group: IHL Group is a global research and advisory firm headquartered in Franklin, Tennessee, that provides market analysis and business consulting services for retailers and information technology companies that focus on the retail, hospitality, and consumer goods industries. For more information, see www.ihlservices.com, call 615-591-2955 or e-mail [email protected]. Press inquiries, please use [email protected] or the phone number above. Note: This report is intended for informational purposes and does not constitute financial or investment advice. Please refer to the complete report and methodology for a detailed understanding of the data and analysis.CMSWire.com, 3d ago

Top

What is potentially most challenging in recruiting “AI talent” is identifying the actual skills, capacities, and expertise needed to implement the EO’s many angles. While there is a need, of course, for technological talent, much of what the EO calls for, particularly in the area of protecting rights and ensuring safety, requires interdisciplinary expertise. What the EO requires is the creation of new knowledge about how to govern—indeed, what the role of government is in an increasingly data-centric and AI-mediated environment. These are questions for teams with a sociotechnical lens, requiring expertise in a range of disciplines, including legal scholarship, the social and behavioral sciences, computer and data science, and often, specific field knowledge—health and human services, the criminal legal system, financial markets and consumer financial protection, and so on. Such skills will especially be key for the second pillar of the administration’s talent surge—the growth in regulatory and enforcement capacity needed to keep watch over the powerful AI companies. It’s also critical to ensure that these teams are built with attention to equity at the center. Given the broad empirical base that demonstrates the disproportionate harms of AI systems to historically marginalized groups, and the President’s declared commitment to advancing racial equity across the federal government, equity in both hiring and as a focus of implementation must be a top priority of all aspects of EO implementation.Brookings, 18d ago
Jascha Achterberg further elaborates on the potential of these findings for building AI systems that closely mimic human problem-solving abilities. He suggests that AI systems tackling challenges similar to those faced by humans will likely evolve structures resembling the human brain, particularly when operating within physical constraints like energy limitations. “Brains of robots that are deployed in the real physical world,” Achterberg explains, “are probably going to look more like our brains because they might face the same challenges as us.”...unite.ai, 9d ago
The path to achieving artificial general intelligence (AGI), AI systems with capabilities at least on par with humans in most tasks, remains a topic of debate among scientists. Opinions range from AGI being far away, to possibly emerging within a decade, to “sparks of AGI” already visible in current large language models (LLM). Some researchers even argue that today’s LLMs are AGI. In an effort to bring clarity to the discussion, a team of scientists at Google DeepMind, including Chief AGI Scientist Shane Legg, have proposed a new framework for classifying the capabilities and behavior of AGI systems and their precursors. DeepMind’s framework, like all things concerning AGI, will have its own shortcomings and detractors. But it stands as a comprehensive guide for gauging where we stand on the journey toward developing AI systems capable of surpassing human abilities.continuingedupdate.blogspot.com, 9d ago
In comments filed November 13th, EPIC urged the Department of Education to establish safeguards that guide the development and deployment of any AI technologies funded by the Seedlings to Scale program. The Department of Education filed a Request for Information asking for insight to guide its efforts in funding technology intended to improve educational outcomes for all students. EPIC recommends that the Department of Education: only fund AI systems that use limited, curated datasets that are made public for accountability purposes; prohibit the funding of any emotional recognition or facial recognition tools; require strong cybersecurity practices; and require various data minimization practices.EPIC - Electronic Privacy Information Center, 14d ago
At the same time, we are aware of the limitations and concerns about AI advances, including the potential for AI systems to make errors, to provide biased recommendations, to threaten our privacy, to empower bad actors with new tools, and to have an impact on jobs. Researchers in AI and across multiple disciplines are hard at work identifying and developing ways to address these shortcomings and risks, while strengthening the benefits and identifying positive applications. In some cases, AI technology itself can be applied to create trusted oversight and guardrails to reduce or eliminate failures. Other technologies, such as cryptography and human-computer interaction design, are also playing an important role in addressing these problems. Beyond technology, we see opportunities for work in policy, including efforts with standards, laws, and regulations.AAAI, 18d ago
As businesses continue to integrate generative AI into their operations, they're not just enhancing efficiencies. They're also advancing the quality of interactions with customers, expanding the potential for personalization, and driving progress towards the long-sought goal of AI systems that can emulate human-level cognitive abilities.tmcnet.com, 26d ago

Latest

Drawing an analogy with evolutionary biology, Robey views adversarial attacks as critical to the development of more robust AI systems. "Just like organisms adapt to environmental pressures, AI systems can evolve to resist adversarial attacks," he says. By embracing this evolutionary approach, Robey's work will contribute to the development of AI systems that are not only resistant to current threats but are also adaptable to future challenges.techxplore.com, 3d ago
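For context, the canonical form of such an adversarial attack in the broader literature (FGSM, not necessarily Robey's own method) perturbs an input along the sign of the loss gradient, as in this toy PyTorch sketch; the stand-in classifier, random input, and epsilon value are all illustrative.

```python
# Fast Gradient Sign Method on a stand-in classifier: nudge the input
# in the direction that most increases the model's loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 3))           # toy 3-class classifier
x = torch.randn(1, 4, requires_grad=True)        # input we will perturb
y = torch.tensor([0])                            # its (assumed) true label

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                                  # populates x.grad

epsilon = 0.1                                    # attack strength (assumed)
x_adv = x + epsilon * x.grad.sign()              # adversarial example
print("prediction before:", model(x).argmax().item(),
      "after:", model(x_adv).argmax().item())
```

Training on such perturbed examples (adversarial training) is the standard way the "environmental pressure" in Robey's analogy gets turned into robustness.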
Marinela Profi, an AI/Generative AI Strategy Advisor at SAS, said: “The integration of text, images and audio into a single model is the next frontier of generative AI. Known as multimodal AI, it can process a diverse range of inputs simultaneously, enabling more context-aware applications for effective decision making. An example of this will be the generation of 3D objects, environments and spatial data. This will have applications in augmented reality [AR], virtual reality [VR], and the simulation of complex physical systems such as digital twins.”...Datanami, 3d ago
A common set of technical standards will also help researchers develop better methods to evaluate various components of AI systems, such as for trustworthiness and efficacy. Tabassi said that this includes evaluating a given system not just for technical robustness, but societal robustness as well.Nextgov.com, 3d ago
GenAI builds on advances in conventional AI and uses very large quantities of data to output unique written, audio, and/or visual content in response to freeform text requests from its users and programmers. GenAI tools have the capacity to produce entirely new content instead of simply regurgitating inputted data. Unlike conventional AI systems designed for specific tasks, GenAI models are designed to be flexible and multifunctional. GenAI products are already available as standalone applications such as ChatGPT, Dall-E, and Bard.Above the Law, 3d ago
In the quest for business agility, systems that facilitate content creation, not just pure development, have taken center stage—hence the growing popularity of no-code tooling. However, this takes on an entirely new meaning now with generative AI. Everyone knows code can be produced with GenAI, but we believe that will be abstracted even further to where full-fledged content in the form of dashboards, new workflows, or even AI models will start to be produced on the fly based on simple business intent inputs. Therefore, instead of writing code that can break at any point in the lifecycle, AI will use platform tooling to construct that content at a more stable level, away from the source, in a more durable fashion.Best ERP Software, Vendors, News and Reviews, 3d ago
The proposed rules would require companies to inform people ahead of time how they use automated decision-making tools and let consumers opt in or out of having their private data used for such tools. Automated technology — with or without the explicit use of AI — is already used in situations such as deciding whether somebody is extended a line of credit or approved for an apartment. Some early examples of the technology have been shown to unfairly factor race or socioeconomic status into decision making — a problem sometimes known as "algorithmic bias" that regulators have so far struggled to rein in. The actual rulemaking process could take until the end of next year, said Dominique Shelton Leipzig, an attorney and privacy law expert at the law firm Mayer Brown. She noted that in previous rounds of rulemaking by the state's privacy body, little has changed from inception to implementation. The proposed rules do pose one significant departure from existing state privacy rules, she said: Requiring companies to provide notice to consumers about when and why they are using automated decision-making tools is "pushing in the direction of companies being transparent and thoughtful about why they are using AI, and what the benefits are ... of taking that approach." The rules are not the state's first run at creating privacy protections for automated decision-making tools. One bill that did not make it through the state Legislature this year, authored by Assembly Member Rebecca Bauer-Kahan, D-Orinda, sought to guard against algorithmic bias in automated systems. It was ultimately held up in committee but could be reintroduced in 2024. State Sen. Scott Wiener, D-San Francisco, has also introduced a bill that will be fleshed out next year to regulate the use of AI more broadly. That effort envisions testing AI models for safety and putting more responsibility on developers to ensure their technology isn't used for malicious purposes. California Insurance Commissioner Ricardo Lara also issued guidelines last year on how artificial intelligence can and can't be used to determine eligibility for insurance policies or the terms of coverage. In an emailed statement, his office said it "recognizes algorithms and artificial intelligence are susceptible to the same biases and discrimination we have historically seen in insurance." "The Commissioner continues to monitor insurance companies' use of artificial intelligence and 'Big Data' to ensure it is not being used in a way that violates California laws by unfairly discriminating against any group of consumers," his office said. Other Bay Area lawmakers came out in support of the privacy regulations moving forward. "This is an important step toward protecting data privacy and the unwanted use of AI," said State Sen. Bill Dodd, D-Napa. "Maintaining human choice is critical as this technology evolves with the prospect for so much good but also the potential for abuse." The first hearing on the proposed rules is on Dec. 8. © 2023 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.GovTech, 3d ago
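For readers unfamiliar with what regulators mean by "algorithmic bias," the sketch below shows one common audit check: comparing a model's approval rates across demographic groups (the demographic parity gap). The data and the tolerance threshold are purely illustrative assumptions, not a legal standard.

```python
# Minimal demographic-parity check over a hypothetical model audit log.
from collections import defaultdict

decisions = [  # (group, approved?) pairs from a hypothetical audit
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
if gap > 0.2:  # illustrative tolerance only
    print("Flag for review: approval rates diverge across groups.")
```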

Top

AI’s role in regulatory compliance transcends rule-based automation, introducing semantic understanding and contextual awareness. Natural Language Processing (NLP) algorithms, in conjunction with machine learning, enable AI systems to understand the nuances of human language and continuously adapt to evolving regulatory frameworks. This synergy not only ensures precision and adaptability in compliance processes but also anticipates potential regulatory challenges. For banks, this translates into streamlined operations, reduced compliance-related risks, and enhanced agility. Consumers benefit from a secure and transparent financial ecosystem. According to a report by Deloitte, AI could automate up to 70% of regulatory compliance tasks by 2025.nasscom | The Official Community of Indian IT Industry, 10d ago
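As a rough illustration of the NLP capability described above, the sketch below routes a regulatory clause to an obligation category using zero-shot classification from the Hugging Face transformers library. The clause and label set are hypothetical examples, not a real compliance taxonomy.

```python
# Zero-shot classification of a regulatory clause into obligation
# categories. Requires the `transformers` package.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

clause = ("Institutions must report suspicious transactions "
          "exceeding $10,000 within 15 days.")
labels = ["transaction reporting", "data privacy", "capital requirements"]

result = classifier(clause, candidate_labels=labels)
print(result["labels"][0], result["scores"][0])  # top category + confidence
```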
Moreover, the reliance on AI technology in remote work may raise concerns about job displacement and the future of work. While AI can automate repetitive tasks and improve productivity, there’s a risk that some jobs may become obsolete, leading to workforce displacement and social disruption. To address this challenge, organizations must invest in reskilling and upskilling their workforce, ensuring that employees can adapt to the changing job landscape and remain competitive in the age of AI. A final series of risks is more existential, associated with uncontrolled AI. Top AI experts and funders, ranging from Elon Musk to Steve Wozniak, signed a letter asking for a 6-month pause on training AI systems. Other experts even propose a full shut-down of further AI research. Giving control of more and more systems to AI that doesn’t have human-aligned motivations indeed seems risky, and we need to make sure that our increasingly powerful AI models serve the interests of humanity.Disaster Avoidance Experts, 13d ago
LLMs, on the other hand, leverage the knowledge and techniques acquired from foundational models to generate natural language with a high degree of accuracy and coherence. LLMs specialize in generating human-like text, making them particularly useful for tasks such as language translation, summarization, creative writing and more. The integration of LLMs within generative AI systems has revolutionized various applications, including chatbots, virtual assistants, content creation and even social media marketing campaigns.gbiimpact.com, 5d ago

Latest

At the heart of AI and machine learning lies the ability to digest and interpret large datasets and learn from them to achieve specific goals. For cybercriminals, those goals include targeting and personalization at scale. AI systems can scan through social media, corporate websites, and data breaches to tailor phishing campaigns that resonate on a personal level with their victims. It’s like having a bespoke suit of deception tailored to each individual’s digital identity. With AI, phishing emails are no longer peppered with grammatical errors and easy-to-spot signs; they are convincing and context-aware. The game has changed: AI doesn’t just understand data, it understands human behavior.SC Media, 4d ago
AI, a branch of computer science, involves algorithms and systems that perform tasks requiring human-like intelligence. It has great potential to shake up the rental business in areas like revenue optimization, guest experience and property management. This piece is all about how AI can amp up your rental game and leave competitors in the dust.Techiexpert.com, 3d ago
But at a high level, the EO puts pressure on organizations that produce AI-enabled, AI-generated, or AI-dependent products to adopt new application security (AppSec) practices for assessing these systems for safety, security, and privacy. They will need to account for risks, such as those from cyberattacks, adversarial manipulation of AI models, and potential theft or replication of proprietary algorithms and other sensitive data. Required security measures include penetration testing and red-team procedures to identify potential vulnerabilities and other security defects in finished products.Security Boulevard, 4d ago
However, the increasing pervasiveness and power of AI demand that ethical issues be a primary consideration. As AI systems become more integrated into our daily lives, the risk of unintended consequences and ethical dilemmas grows exponentially. From biased algorithms in surveillance systems to the misuse of AI in deepfake technology, we have all witnessed the threats that AI going rogue can pose. Therefore, it is imperative to approach AI development with ethical principles firmly in mind.Zephyrnet, 4d ago
Our belief is that AI systems like Luci don’t just use GPT technology to convert the complex to the simple; they allow analysts to develop investigation skills earlier, transforming their role from formulaic to forensic, from indifference to involvement.lucinity.com, 4d ago

Top

...“Transparency and explainability are vital; SMEs should strive for AI systems that can elucidate their decision-making processes, enabling users to understand and trust AI outputs. Mitigating bias is crucial; AI should be routinely assessed and corrected for any inherent biases to ensure fairness and prevent discriminatory practices. By integrating these ethical considerations into their operations, SMEs can leverage AI responsibly, fostering innovation while minimizing potential risks and ensuring equitable benefits for all stakeholders.”...Dynamic Business, 12d ago
VMware Private AI brings compute capacity and AI models to where enterprise data is created, processed, and consumed, whether in a public cloud, enterprise data center, or at the edge, in support of traditional AI/ML workloads and generative AI. VMware and Intel are enabling the fine-tuning of task-specific models in minutes to hours and the inferencing of large language models at faster than human communication using the customer’s private corporate data. VMware and Intel now make it possible to fine-tune smaller, economical, state-of-the-art models which are easier to update and maintain on shared virtual systems, which can then be delivered back to the IT resource pool when the batch AI jobs are complete. Use cases such as AI-assisted code generation, experiential customer service centers, recommendation systems, and classical machine learning and statistical analytics can now be co-located on the same general purpose servers running the application.mid-east.info, 26d ago
AI Safety and Security: The executive order directs federal agencies to take steps to ensure that AI systems are safe and secure, and that they are not used in ways that could harm the public. This includes developing and implementing risk management frameworks for AI systems and conducting regular security assessments. The executive order imposes new regulatory requirements on businesses developing or attempting to develop foundation models that could impact national security, including national economic security and public health (so-called “dual-use foundation models” for their applicability in both civilian and military contexts). These requirements stem from the Defense Production Act, authorizing the president to influence industry for national defense purposes. The secretary of commerce, in collaboration with relevant officials, will define the technical specifications for models and compute resources subject to these regulations. Initially, the thresholds are set relatively high, likely impacting only large cloud computing providers or AI-model developers. However, as these requirements apply during the development and acquisition phases, businesses should carefully evaluate whether their activities could fall under these regulations.natlawreview.com, 11d ago
Legal expert systems, like ROSS Intelligence, and AI in medicine showcase AI’s ability to mimic human judgment in specific domains. They represent narrow AI’s evolution towards AGI. As these systems can analyse large amounts of data and provide precise responses, they serve as early indicators of AGI’s potential for decision-making in complex, real-world scenarios.TechRound, 20d ago
Due to advancements in the Internet of Things (IoT), AI is already bridging and coupling with a range of other technologies, especially with the metadata provided by biotech. These mergers pose a significant challenge for global security. Driven by lucrative commercial prospects or by state security considerations, AI systems around the world are largely programmed to predict human behaviour. Well within reach, they already have accurate and speedy analytics of urban traffic patterns, financial markets, consumer behaviour, health records and even our genomes.Modern Diplomacy, 16d ago
The EO includes several provisions that relate to cybersecurity. The reporting requirements for developers of certain AI systems, discussed above, require disclosure of cybersecurity protections that developers take to assure the integrity of the training process and to protect model weights. The EO directs Treasury to issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks. The EO also requires an annual assessment of risks to critical infrastructure arising from AI. The EO directs the US Department of Health and Human Services to create a task force and issue a strategic plan that will, among other things, examine the cybersecurity risks associated with health care arising from AI. The EO also directs the US Office of Management and Budget to issue guidance to agencies on best practices relating to cybersecurity and AI.natlawreview.com, 29d ago

Latest

Meanwhile, AI systems can optimise processes and uncover insights from vast datasets. However, Grela urged examining such technologies thoughtfully. “It doesn’t have an ethical and moral framework or guidelines…How do we really instil that?” she asked rhetorically.Internet of Things News, 4d ago
Companies that use artificial intelligence systems should utilize AI auditing services to ensure their systems comply with all laws and rules applicable to AI usage. AI audits allow businesses to spot any possible problems within their AI systems quickly, while providing strategies to deal with these potential issues.Tech Resider, 4d ago
In contrast to standard AI models that rely mostly on textual input, multi-modal AI incorporates data from multiple sources, including text, images, audio, and video, to produce a more thorough and detailed understanding of the world. Multi-modal AI’s primary goal is to imitate human comprehension and interpretation of information using several senses at once. It has enabled AI systems to analyze and comprehend data in a more comprehensive way. The convergence of modalities empowers them to make more accurate predictions and judgments.MarkTechPost, 4d ago
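A minimal sketch of what this convergence of modalities looks like in code: CLIP, an openly available multimodal model, scores text and images in a shared embedding space. This is an illustrative example using the Hugging Face transformers library, not one of the specific systems discussed above; the image is a synthetic placeholder.

```python
# Score how well each caption matches an image using CLIP's shared
# text-image embedding space. Requires `transformers` and `Pillow`.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), color="red")  # placeholder image
captions = ["a red square", "a photo of a dog", "a city skyline"]

inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text match scores
probs = logits.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{caption}: {p:.2f}")
```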

Latest

Perhaps because of this, there is a growing focus on building trust in media, in government, and in AI systems. When it comes to data-centric technologies, this raises important questions, including: Can trust be built into systems that users have determined to be untrustworthy? Should we be thinking of trust as something that is declining or improving, something to be built into AI and other data-centric systems, or as something that is produced through a set of relations and in particular locations? Where else, besides large institutions and their technologies, is trust located? How do other frames of trust produce community-centered politics such as politics of refusal or data sovereignty? What can community-based expertise tell us about how trust is built, negotiated, and transformed within and to the side of large-scale systems? Is there a disconnect between the solutions to a broad lack of trust and how social theorists, community members, and cultural critics have thought about trust?...Data & Society, 4d ago
Newswise — In a time when the Internet has become the main source of information for many people, the credibility of online content and its sources has reached a critical tipping point. This concern is intensified by the proliferation of generative artificial intelligence (AI) applications such as ChatGPT and Google Bard. Unlike traditional platforms such as Wikipedia, which are based on human-generated and curated content, these AI-driven systems generate content autonomously - often with errors. A recently published study, jointly conducted by researchers from the Mainz University of Applied Sciences and Johannes Gutenberg University Mainz (JGU), is dedicated to the question of how users perceive the credibility of human-generated and AI-generated content in different user interfaces. More than 600 English-speaking participants took part in the study. As Professor Martin Huschens, Professor for Information Systems at the Mainz University of Applied Sciences and one of the authors of the study, emphasized: "Our study revealed some really surprising findings. It showed that participants in our study rated AI-generated and human-generated content as similarly credible, regardless of the user interface." And he added: "What is even more fascinating is that participants rated AI-generated content as having higher clarity and appeal, although there were no significant differences in terms of perceived message authority and trustworthiness – even though AI-generated content still has a high risk of error, misunderstanding, and hallucinatory behavior." The study sheds light on the current state of perception and use of AI-generated content and the associated risks. In the digital age, where information is readily available, users need to apply discernment and critical thinking. The balance between the convenience of AI-driven applications and responsible information use is crucial. As AI-generated content becomes more widespread, users must remain aware of the limitations and inherent biases in these systems. Professor Franz Rothlauf, Professor of Information Systems at Johannes Gutenberg University Mainz, added: "The study results show that – in the age of ChatGPT – we are no longer able to distinguish between human and machine language and text production. However, since AI does not 'know', but relies on statistical guessing, we will need mandatory labeling of machine-generated knowledge in the future. Otherwise, truth and fiction will blur and people cannot tell the difference." It remains a task of science communication and, not least, a social and political challenge to sensitize users to the responsible use of AI-generated content.newswise.com, 4d ago
Azam Sahir, Chief Product Officer at MongoDB, reiterated the value that this partnership holds for its customers. "Customers of all sizes from startups to enterprises tell us they want to use generative AI to build next-generation applications and future-proof their businesses," said Azam. "Many customers express concern about ensuring the accuracy of AI-powered systems' outputs whilst also protecting their proprietary data. We're easing this process for our joint-AWS customers with the integration of MongoDB Atlas Vector Search and Amazon Bedrock. This will enable them to use various foundation models hosted in their AWS environments to build generative AI applications, so they can securely use proprietary data to improve accuracy and provide enhanced end-user experiences."...ChannelLife Australia, 4d ago
Now, generative AI integration has resulted in enhanced conversational abilities of chatbots and other AI interfaces. “As the friendly face of the company for every employee, Ginger serves as an all-encompassing workplace assistant, streamlining information access and daily task management. These AI systems can now provide more sophisticated and context-aware responses, resulting in improved customer interactions and support.”...Analytics India Magazine, 4d ago
Her argument, as I have argued in every article I’ve written, is that AI – generative and otherwise – is actually MPL: Mass Produced Logic. What this means is that every AI system, be it ChatGPT, DeepMind and other Google AI systems, or those of Meta, Amazon, Microsoft etc., is not a spontaneous explosion of logic, like, in a way, all living beings are, but a deliberately produced logic system made for specific or generic use cases using existing logic systems in the world.Sify, 4d ago
Foundational Model (FM) providers train models that are general-purpose. These models can be used for many downstream tasks, such as feature extraction or content generation. Each trained model needs to be benchmarked against many tasks, not only to assess its performance but also to compare it with other existing models, to identify areas that need improvement, and finally, to keep track of advancements in the field. Model providers also need to check for the presence of any biases to ensure the quality of the starting dataset and the correct behavior of their model. Gathering evaluation data is vital for model providers. Furthermore, these data and metrics must be collected to comply with upcoming regulations. ISO 42001, the Biden Administration Executive Order, and the EU AI Act develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. For example, the EU AI Act tasks providers with disclosing which datasets are used for training, what compute power is required to run the model, model results against public/industry-standard benchmarks, and results of internal and external testing.CoinGenius, 4d ago
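As a rough sketch of the benchmarking-and-reporting loop described above, the snippet below scores a model against a toy benchmark and emits a JSON record of the kind that could be retained as evidence for audits or regulators. `query_model` is a hypothetical stand-in for a real foundation-model client, and the two benchmark items are illustrative only.

```python
# Score a model against a toy benchmark and record the result.
import json

def query_model(prompt: str) -> str:
    return "4"  # placeholder; a real call would hit the model's API

benchmark = [  # stand-in items; real suites hold thousands of cases
    {"prompt": "What is 2 + 2? Answer with a number.", "expected": "4"},
    {"prompt": "Capital of France? One word.", "expected": "Paris"},
]

correct = sum(query_model(item["prompt"]).strip() == item["expected"]
              for item in benchmark)
report = {"benchmark": "toy-qa", "n": len(benchmark),
          "accuracy": correct / len(benchmark)}
print(json.dumps(report))  # retain alongside model documentation
```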

Latest

The company has recently begun trying to position itself as a useful partner for enterprises in their artificial intelligence initiatives, which have become more popular of late due to the buzz around generative AI. It says its platform is ideal for AI projects, as it provides highly curated and optimized data that’s necessary for training large language models and other kinds of AI systems.SiliconANGLE, 4d ago
...“I have argued, since at least 2016, that AI systems need to have internal models of the world that would allow them to predict the consequences of their actions, and thereby allow them to reason and plan. Current Auto-Regressive LLMs do not have this ability, nor anything close to it, and hence are nowhere near reaching human-level intelligence,” said LeCun in a recent tweet. “In fact, their complete lack of understanding of the physical world and lack of planning abilities puts them way below cat-level intelligence, never mind human-level.”...Gizmodo, 4d ago
Developers of AI systems determine their system’s risk category themselves, using standards set forth by the EU. Once a system determined to be high-risk is in use, its deployers take on responsibilities for ongoing compliance, monitoring, human oversight, and transparency.onetrust.com, 4d ago
The playbook highlights the three unique and most crucial abilities that AI possesses for overcoming bottlenecks that exist on the path to net zero. This includes the ability to discern patterns, predict outcomes and optimise performances in complex systems; the ability to accelerate the discovery and development of solutions like low-carbon materials and climate-resilient crops; and finally the ability to empower the sustainability workforce of the future.Intelligent Tech Channels, 4d ago
Ayesha Iqbal, IEEE senior member and engineering trainer at the Advanced Manufacturing Training Centre added: "AI has significantly evolved in recent years, with applications in almost every business sector. However, there are some barriers preventing organisations and individuals from adopting AI, such as a lack of skilled individuals, complexity of AI systems, lack of governance, and fear of job replacement. With AI growing more rapidly than ever before, and already being tested and employed in education, healthcare, transportation, finance, data security, and more, it’s high time that the Government, tech leaders, and academia work together to establish standards and regulation for safe and responsible development of AI-based systems. This way, AI can be used to its full potential for the collective benefit of humanity."...electronicspecifier.com, 4d ago
VERSES AI is a cognitive computing company specializing in biologically inspired distributed intelligence. Our flagship offering, Genius, is patterned after natural systems and neuroscience. Genius enables intelligent software agents that can learn, adapt and interact with the world. Key features of Genius include generalizability, predictive queries, real-time adaptation and an automated computing network. Built on open standards, Genius transforms disparate data into knowledge models that foster trustworthy collaboration between humans, machines and AI, across digital and physical domains. Imagine a smarter world that elevates human potential through innovations inspired by nature. Learn more at...GISCafe, 4d ago

Latest

Moore noted that AI work by CENTCOM task forces has prioritized unmanned systems and software that can accelerate workflows. She went on to say that long-term use cases will need a mechanism for ensuring consistent AI model performance, such as setting technical expectations for developers, active user engagement and access to large amounts of data.potomacofficersclub.com, 4d ago
KPMG takes a responsible approach to assessing the ethics, governance and security in place around clients’ AI and machine learning technologies. The set of frameworks, controls, processes and tools can help KPMG’s clients harness the power of AI —designing, building and deploying AI systems in a safe, trustworthy and ethical manner —so companies can accelerate value for consumers, organisations and society. Our responsible AI approach ensures:...techuk.org, 4d ago
We are facing a dilemma. Our AI systems would be much safer and more useful if they possessed a modicum of adult-level common sense. But one cannot create adult-level common sense without first creating the common sense of a child on which such adult-level abilities are based. Three-year-old common sense is an important first step, as even such young children have the fundamental understanding that their own actions have consequences and actually matter. But on their own, the abilities of a three-year-old aren’t commercially viable. Further, AI’s focus on super-human narrow capabilities with an expectation that these will broaden and merge into common sense hasn’t borne fruit and is unlikely to any time soon.RTInsights, 4d ago

Top

...“Enterprises should also design AI systems in a way that ensures that humans maintain ultimate control over their operation through regular, logical, human-run auditing. Lastly, organizations must be aware of the boundaries that their AI models operate within and systematically identify the strengths and weaknesses of their models. By doing so, they can pinpoint areas that require improvement, fine-tune their systems, and adapt to evolving challenges.”...Best BPM Tools, Vendors, Software and BPMS, 24d ago
The construction and maintenance of AI systems involve complex engineering processes, resulting in significant costs. AI-based software applications require frequent upgrades to adapt to the changing environment and become smarter over time. While machines may be more efficient than humans in certain tasks, they cannot entirely replace humans. Machines lack the ability to adapt their responses to changing environments. Whenever there is a change in the input, AI systems must be re-evaluated, re-trained, and re-engineered.Modern Diplomacy, 16d ago
Many of the other options have to do with how models are shared, whether that’s structured access behind an API, even research API access, gated download, staged release, and then also more proactive efforts. Proactive efforts which can actually also be combined with open-sourcing. They don’t have to be seen as an alternative to open-sourcing. So this is things like redistributing profits towards AI safety research or starting AI safety and bug bounty programs. Or even, like we talked about with the democratization paper, thinking about how we can democratize decision-making around AI systems to help distribute influence over AI away from large labs, which is another argument for open-sourcing.alignmentforum.org, 8d ago
I previously discussed the capabilities we might expect from future AI systems, illustrated through GPT2030, a hypothetical successor of GPT-4 trained in 2030. GPT2030 had a number of advanced capabilities, including superhuman programming, hacking, and persuasion skills, the ability to think more quickly than humans and to learn quickly by sharing information across parallel copies, and potentially other superhuman skills such as protein design. I’ll use “GPT2030++” to refer to a system that has these capabilities along with human-level planning, decision-making, and world-modeling, on the premise that we can eventually reach at least human-level in these categories.Bounded Regret, 23d ago
AI’s impact on contact centers extends beyond agent support to significantly enhance the overall customer experience. Natural Language Processing (NLP) and sentiment analysis capabilities enable AI systems to understand and respond to customer queries with a level of sophistication that rivals human interaction. This not only reduces response times but also ensures that customers feel heard and understood, fostering stronger connections between businesses and their clientele.mid-east.info, 7d ago

Latest

...“Leveraging neuromorphic hardware to provide portable, power-efficient solutions for use in the identification of sensory data is a game-changer for a plethora of practical applications, such as e-nose systems,” said Anup Vanarse, Research Scientist at BrainChip. “This latest research paper shows how Akida’s olfactory analysis technology allows for efficient and accurate detection of various strains of bacteria in blood to help with important disease diagnosis. Incorporating beneficial AI within sensory devices will provide the means for massive breakthroughs in the healthcare industry.”...BrainChip, 4d ago
Why is there so much hype now over ChatGPT? There are many reasons, but a key reason is the adoption rate has been steep and quick. While systems continue to learn, human interactions are required to make them more efficient, accurate and relevant in our world. Human validation of processes must continue to ensure that the technology is assisting the customer. There is a lot of math with AI, but unless you develop software with AI, you don’t have to be a data scientist to utilize these tools. To best utilize them, you need to understand your business data in as much detail as possible. In simple terms, know the data your business uses and understand what stakeholders use and how it helps them perform their jobs. It would be best to create journeys for your processes with a beginning, end, and purpose. If you have a vision and plan, AI is likely to improve business outcomes, drive more customers to your business applications, and allow them to self-serve.Traffic Technology Today, 4d ago
...“In two or three weeks’ time, it has exploded. I can’t believe how cool and efficient it is. However, my conversations with you all are protected by attorney-client privilege,” she said of a hypothetical scenario in which her education law firm, Fagen Friedman & Fulfrost, might represent attendees. “And when you turn on [an audio recording], there are pop-ups, so I notice and I say, ‘Hey, this is actually a confidential conversation, I don’t think we can have Zoom recording it without [permission].’ The person who holds that privilege is the school district, but then, are you the person to waive that privilege? I don’t know. It’s a little dicey … Yes, the note-taking apps are cool, but notify the person. Get their consent.” Shipley said a wide range of technology tools now being adopted by schools have privacy implications, from license plate readers to security systems with facial recognition, but she believes the biggest emerging shift involves parental consent for open AI tools. She said legal exceptions in the Family Educational Rights and Privacy Act (FERPA) allow teachers to use educational software without parental permission as long as it has a legitimate educational interest and limits the resharing of information. However, GenAI tools may exceed these exceptions, particularly for students under 13, depending on whether the tools are from an open or closed environment and where their data goes. Shipley used the example of the state of California piloting, through a handful of districts, an emotionally intelligent chatbot that high schoolers could access on their phones. Who is on the receiving end of personal messages sent by students? Who is responsible if a student sends messages about self-harm? Who is liable for how the chatbot responds? It is a district’s responsibility to sort out these questions before exposing students and families to such tools.GovTech, 4d ago
Human Oversight and Intervention: While AI can greatly enhance cybersecurity efforts, it's not a substitute for human expertise. Security teams should maintain an active role in monitoring and validating the decisions made by AI systems, as human intervention is essential in complex or novel situations. Additionally, security leaders must educate their teams on how to effectively use and understand AI systems to ensure that they are deployed correctly and that security teams can leverage the insights generated by these systems to make well-informed decisions.TechRadar, 5d ago
Diving deeper into the technicalities, sentiment analysis is a pivotal component of modern AI systems in customer service. By analysing the tone and context of customer inquiries, AI can gauge emotions and respond appropriately.TechRound, 5d ago
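A minimal sketch of the sentiment-analysis step described here, using the Hugging Face transformers sentiment pipeline; the escalation threshold is an illustrative choice, not an industry standard.

```python
# Score the tone of a customer message so the system can route or
# phrase its response. Requires the `transformers` package.
from transformers import pipeline

analyzer = pipeline("sentiment-analysis")  # downloads a default model

message = "I've been waiting two weeks and still have no refund."
result = analyzer(message)[0]
print(result)  # e.g. {'label': 'NEGATIVE', 'score': ...}

if result["label"] == "NEGATIVE" and result["score"] > 0.9:
    print("Escalate: strongly negative tone detected.")
```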
The need to secure critical computing power has led to another partnership — between Anthropic and Amazon. Anthropic was founded by former OpenAI employees who left the company over AI safety concerns. Exhibit B: In return for a minority stake in Anthropic, Amazon provides it with computing power. The two companies make strange bedfellows: Anthropic is a vanguard in responsible AI, pioneering ways to embed human rights and even non-Western perspectives in AI systems. Amazon, for its part, has a track record of facilitating problematic facial recognition technology, not to mention deploying AI to micromanage its workers, and union-busting. Although Amazon is nominally providing just the plumbing for Anthropic’s operations, it would be naive to believe Anthropic’s culture, norms and values will remain intact.Centre for International Governance Innovation, 5d ago

Top

...2. Provide context to simplify risk analysis and compliance reporting — Complying with a complex array of regulations can be a significant challenge for compliance and audit teams, who often lack clear guidance on how to address risks. Frequently, the processes identified for validating controls are also inconsistent, further complicating the process. But while enormous quantities of data are time consuming (and boring) for humans to analyze and process, properly trained AI systems can automatically analyze vast quantities of risk data to provide context and identify patterns and trends. An AI solution makes it simpler for compliance and audit teams to evaluate risks and controls and generate guidance and remediation recommendations.securitymagazine.com, 17d ago
The development of autonomous AI systems, which can make decisions and take actions independently, raises several ethical concerns. Because such systems can act without human intervention, their decisions can lead to unforeseen consequences. This is why balancing autonomy with human oversight is crucial for ensuring responsible AI usage. It is imperative to establish methods that enable humans to oversee, regulate, and intervene in autonomous AI systems in order to deter immoral or detrimental conduct. Ensuring the safety of AI systems and assessing associated risks are also critical aspects of AI accountability. Implementing safety protocols and risk assessment practices helps identify potential issues and prevent AI systems from causing harm.AI Time Journal - Artificial Intelligence, Automation, Work and Business, 11d ago
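One concrete shape such oversight can take is a confidence-gated, human-in-the-loop dispatcher: the system acts autonomously only when its confidence clears a threshold and queues everything else for a person. The sketch below is a hypothetical pattern with made-up names and an illustrative threshold, not a prescribed safety protocol.

```python
# Human-in-the-loop gating: auto-execute only high-confidence decisions,
# hold the rest for human review.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float

REVIEW_QUEUE: list = []
CONFIDENCE_THRESHOLD = 0.95  # illustrative risk tolerance

def execute(decision: Decision) -> str:
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-executed: {decision.action}"
    REVIEW_QUEUE.append(decision)  # human intervention point
    return f"held for human review: {decision.action}"

print(execute(Decision("approve routine request", 0.99)))
print(execute(Decision("deny claim", 0.62)))
print(f"{len(REVIEW_QUEUE)} decision(s) awaiting a human.")
```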
AI’s just going to be everywhere. And I think we’ll stop realising that we’re using it. One prospect that now becomes real (and we saw this with protein structure prediction already, which will translate to other fields) is to conduct in silico research: you can ask AI models questions rather than doing the actual experiment. This will help to speed up research and will obviously completely change the scientific process. The turnaround time will be much faster because you can get answers to complex questions quickly, without conducting an experiment. Innovation in science will then depend on our ability to formulate well-defined questions that are suitable for AI systems and to correctly judge the answers from these models – what they know and where there are gaps in their predictions. This is a big challenge right now, but I think it will be solvable. But even if AI speeds up research, I very much believe that experiments will remain at the heart of research, because we need to confirm the predictions that AI give us, and use the knowledge gained from AI to push scientific discovery forward. Ultimately, we will have more informed hypotheses to start from.EMBL, 12d ago