Latest

new Of course, digital protection measures such as antivirus software, secure gateways, firewalls, and virtual private networks (VPNs) remain crucial. And, incorporating advanced digital strategies such as machine learning to monitor for behavioural anomalies, provides an added layer of security. Leadership teams should also assess whether similar approaches have been applied to address any physical vulnerabilities. For example, a combination of manned entry points, locked facilities, cameras, and security alarms offers robust protection. It’s unlikely that a physical intrusion will occur simply to steal a laptop. Instead, these malicious actors commonly look for a way to access data or install malware inside the organisation’s physical perimeter where some protections may be lacking.IT Brief New Zealand, 8h ago
new While commendable strides have been taken across various fronts in the realm of climate action, propelled by innovative solutions at our disposal, we have a clear understanding of the strategies needed to uphold our net zero ambitions. The time for action is long overdue, and COP28 certainly serves as the biggest platform and opportunity for all stakeholders to showcase what has been achieved so far and identify actions that lie ahead. It’s not just a forum for reflection but a call to action, urging all players to present evidence of their contributions, share best practices, and join global efforts to address persistent challenges. It is a collective commitment to meaningful change and progress in our shared journey toward a sustainable world.Khaleej Times, 18h ago
new STEP ONE: The first step for organizations to solve this problem is to focus on the effective extraction of knowledge from all available sources (i.e., Harvard Business Review, European Business Review, Employee Think-Tanks, Scholars, etc.). In doing this, organizations learn important methods of observation, extraction, and application. Observation is one of the important methods of acquiring knowledge. Recent research shows that observation alone, as a means of acquiring knowledge, can only lead to the illusion of learning among learners. Thus, without extraction and application, organizations can falsely insinuate that things be done in the same way. This causes inertia. False self-confidence, limits current employees who play an important role in gathering, storing, and disseminating future knowledge. This step breaks down the silos and opens up communication to build a knowledge management database.The European Business Review, 1d ago
new The digital landscape is filled with an abundance of content, but only a select few manage to capture the attention and admiration of users worldwide. These pieces of content, often referred to as “virals,” have the remarkable ability to spread rapidly across social media platforms, generating an overwhelming number of likes, shares, and comments Click Here. But what exactly makes users click that coveted ‘like' button? Delving into the fascinating realm of viral content, this article explores the factors behind its success, focusing on the psychological triggers, emotional appeal, storytelling techniques, visuals, controversy, and the impact of timing and trends. By understanding the mechanisms that drive users to engage with and promote viral content, we can unlock valuable insights that can be applied to our own strategies and create a greater impact in the digital world.WriteUpCafe.com, 1d ago
new In today’s rapidly evolving technological landscape, the need to address a comprehensive approach to risk management is crucial at all levels of an organization. This imperative becomes even more pronounced in operational technology (OT) environments where different perspectives on risk prevail. While traditional risk management practices have primarily focused on physical and financial risks, the emergence of cybersecurity threats has necessitated a paradigm shift. Collaboration with engineering teams is not just desirable but necessary to incorporate cybersecurity risks into the risk register for OT environments.Industrial Cyber, 1d ago
new Regardless of what we call this behavior, it is clearly concerning. Deepfakes and misinformation could disrupt democratic political systems. False advertising and deceptive business practices may be used to prey on consumers. As more data is gathered on individuals, companies might use that information to manipulate people’s behaviors in violation of their privacy. Therefore, we must rise to the challenge of analyzing these risks and finding solutions to these real world problems.Montreal AI Ethics Institute, 1d ago

Latest

new The research team conducted extensive experiments to assess the effectiveness of the ALIA data augmentation method across specialized tasks: domain generalization, fine-grained classification, and contextual bias in bird classification. By fine-tuning a ResNet50 model and employing Stable Diffusion for image editing, ALIA consistently outperformed traditional augmentation techniques and even real data addition in domain generalization tasks, showcasing a 17% improvement over the original data. In fine-grained classification, ALIA demonstrated competitive performance, maintaining accuracy even without domain shifts. ALIA excelled in in- and out-of-domain accuracy for functions involving contextual bias, although it faced challenges in image editing quality and text-only modifications. These experiments highlight ALIA’s potential in enhancing dataset diversity and model performance, albeit with some dependency on model quality and the choice of image editing methods.MarkTechPost, 1d ago
new Employees are often the weakest link in security, not because of malicious intent, but due to a lack of awareness about potential threats and preventative actions. When it comes to bolstering your business’ cybersecurity, conducting regular employee training and education is vital. By implementing comprehensive training programs that cover key topics such as phishing, password management, and safe internet use, you can equip your team with the knowledge they need to recognize and avoid potential cyber threats. Regular updates and refreshers are essential as the cyber threat landscape continues to evolve. The more informed and vigilant your employees are, the safer your business’s online presence will be.Better Tech Tips, 1d ago
new ...“One thing I’d like to make clear is that climate change is real. It’s serious, and it deserves urgent attention to both mitigation and adaptation,” Pielke said. “But I’ve come to see, across my career, that the importance of climate change is held up by many people as a reason for why we can abandon scientific integrity. This talk is about climate and scientific integrity, how we maintain it, and how we use it in decision-making. Reasonable people can disagree about policies and different directions that we want to go, but none of us are going to benefit if we can’t take expertise and bring it to decision-making to ground policymaking in the best available knowledge. Overall, climate science and policy have a narrative problem.”...Science Matters, 2d ago

Top

The use of generative AI tools is ultimately still in its infancy and there are still many questions that need to be addressed to help ensure data privacy is respected and organizations can remain compliant. We all have a role to play in better understanding the potential risks and ensuring that the right guardrails and policies are put in place to protect privacy and keep data secure.securitymagazine.com, 13d ago
new The use of AI-based systems to “produce targets at a fast pace” represents a grave example of digital dehumanisation raising serious concerns around the violation of human dignity and compliance with international humanitarian law. Additionally, the potential reduction of people to data points based on specific characteristics like ethnicity, gender, weight, gait, etc. raises serious questions about how target profiles are created and in turn how targets are selected.Stop Killer Robots, 2d ago
Generative AI can offer useful tools across the recruiting process, as long as organizations are careful to make sure bias hasn’t been baked into the technology they’re using. For instance, there are models that screen candidates for certain qualifications at the beginning of the hiring process. As well-intentioned as these models might be, they can discriminate against candidates from minoritized groups if the underlying data the models have been trained on isn’t representative enough. As concern about bias in AI gains wider attention, new platforms are being designed specifically to be more inclusive. Chandra Montgomery, my Lindauer colleague and a leader in advancing equity in talent management, advises clients on tools and resources that can help mitigate bias in technology. One example is Latimer, a large language model trained on data reflective of the experiences of people of color. It’s important to note that, in May, the Equal Employment Opportunity Commission declared that employers can be held liable if their use of AI results in the violation of non-discrimination laws – such as Title VII of the 1964 Civil Rights Act. When considering AI vendors for parts of their recruiting or hiring process, organizations must look carefully at every aspect of the design of the technology. For example, ask for information about where the vendor sourced the data to build and train the program and who beta tested the tool’s performance. Then, try to audit for unintended consequences or side effects to determine whether the tool may be screening out some individuals you want to be sure are screened in.Hunt Scanlon Media, 3d ago
Another disadvantage is the potential for privacy and data security concerns. Leadership coaching sessions often involve sensitive and confidential discussions about personal and professional development. Storing these recordings or transcripts in an AI system’s database raises concerns about data breaches or unauthorized access. Maintaining strict data security measures becomes paramount to protect the confidentiality of the coaching sessions, and organizations must carefully consider the ethical implications of using AI in this context. The trust between the leader and the coaching process may be compromised if individuals are concerned about the security of their coaching data, potentially discouraging them from participating in such sessions.Education Week, 19d ago
Training of AI models requires massive amounts of data, some of which includes PII. There is currently little insight into how the data is being collected, processed and stored which raises concerns about who can access your data and how they can use it. There are other privacy concerns surrounding the use of AI in surveillance. Law enforcement agencies use AI to monitor and track the movements of suspects. While highly valuable, many are worried about the misuse of those capabilities in public spaces, infringing upon individual rights to privacy.The ChannelPro Network, 17d ago
As we head into 2024, the continued evolution of and use of various AI tools in personal branding will go mainstream. Many people use AI to assist in content development whether analyzing data to include or writing initial drafts of content to support their personal brand. This trend will both continue and accelerate with people using AI to add more personalization to their brand, allowing them to create sub-themes to their main brand in support of various audiences. With this, concerns about authenticity in personal branding will grow so people will need to have a clear, accurate way to describe when and how they do or don’t use AI tools. People with very prominent personal brands will need to add even more alerts and be on the lookout for misrepresentation or fake content created in their brand voice. It will be an exciting time with lots of upside, but we still need to manage the risks to reputation and brand.Thinkers360 | World’s First Open Platform For Thought Leaders, 9d ago

Latest

new The sensor fusion market for automotive in Europe is anticipated to experience heightened demand, primarily driven by stringent safety regulations. The EU has set a targeted goal to reduce fatalities and injuries by 50% by the year 2030. As part of this strategic initiative, the EU has mandated the inclusion of major safety features, including lane departure warning, automatic emergency braking, and drowsiness and attention detection, in new vehicles. These regulations came into effect in July 2022, reflecting a comprehensive approach to enhance automotive safety. Several European countries, including Germany, France, and the UK, have already granted approval for the utilization of automated vehicles on specific roadways. Notably, Germany took a pioneering step in July 2021 by enacting the Autonomous Driving Act, positioning itself as the first country globally to permit sensor fusion-based level 4 automated vehicles to operate on public roads without the requirement for a human driver as a backup control. In 2015, the French government declared its intention to mandate the installation of automatic emergency braking (AEB) and lane departure warning (LDW) systems in all new heavy trucks, contributing to an increased demand for safety systems within the country. Furthermore, the support from OEMs and the burgeoning presence of numerous startups dedicated to the development of sensor fusion technology are expected to be pivotal factors propelling market growth in the European region.marketsandmarkets.com, 2d ago
new Yet the initial excitement surrounding AI has given way to genuine and growing concerns—including about the spread of misinformation that disrupts democracy and destabilises economies, threats to jobs across the skills spectrum, a widening of the gulf separating the haves and have-nots, and the proliferation of biases, both human and computational.interest.co.nz, 2d ago
new Background The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration. Methods Four working groups were formed from a pool…...MashupMd, 2d ago
new In 2022, over £1.2 billion was stolen through fraud in the UK. Yet, organisations increasingly recognise that fraud is a security issue rather than a cost of doing business. Many UK players, especially in the financial services industry, will be looking to adopt end-to-end online fraud prevention solutions, counting on multi-layered tools and technologies. One key development to watch within these ‘layers of defence’ is the addition of so-called risk signals. Leading digital identity experts have already started to incorporate more behavioural biometrics, such as typing patterns or mouse movements. These patterns from historic interactions with users will start to impact fraud prevention positively, with device signals or template signals of fraudulent ID documents making it much harder for fraudsters to get away with their crimes.Financial IT, 2d ago
new Building on the research findings, the study identified five key areas of focus for MDBs that would help scale up their investment for urban climate finance. Recommendations put forward include increasing volumes and share of urban climate finance in the total climate finance mix of MDBs while maintaining their steadfast commitment to prioritizing adaptation, with a specific focus on cities that are most vulnerable to climate change; leveraging existing MDB concessional funding and partnering with international climate funds to develop operating modes and instruments tailored to subnational investment needs; provision of technical assistance aimed at supporting national policy reform and closing cities’ planning-to-investment gaps; and promoting risk-mitigation instruments to increase private sector engagement, among others.CPI, 2d ago
new In 2024, cybersecurity evolves amidst increased AI integration and a focus on data protection. AI-driven automation prompts the need for AI-centric security solutions to combat emerging threats and comply with strict privacy regulations. With rising concerns about AI-generated attacks and the democratization of AI, secure data sharing becomes pivotal. Anticipate a surge in cyber attacks through deepfake technology, necessitating a delicate balance between AI adoption, strong cybersecurity measures, and employee awareness for robust defense strategies. Moreover, expect increased Board expertise in cyber to reinforce these initiatives.Thinkers360 | World’s First Open Platform For Thought Leaders, 2d ago

Top

For example, driver assistance systems reduce the burden on the driver, improving safety. Also, collecting data in real-time using IoT technology could lead to more efficient delivery routes that enable faster delivery of more packages. And utilizing AGVs and AMRs to automate tasks such as handling and picking packages makes it possible to carry out logistics operations quickly and accurately without having to worry about labor shortages. Cutting-edge sensing technology comprises the core of smart logistics systems. While the volume of logistics continues to increase with the spread and expansion of e-commerce, serious labor shortages in the logistics industry have become the norm, resulting in longer working hours. The introduction of smart logistics systems is expected to help solve problems in the logistics industry by automating operations that save labor and increase efficiency, leading to reduced workloads, shorter working hours, and lower costs, but there are still several issues that need to be addressed before full-scale adoption can begin.ELE Times, 3d ago
Additionally, the President calls on Congress to pass federal data privacy protections, and then through the EO’s Section 9 directs agencies to do what they can to protect people’s data privacy without Congressional action. The section opener calls out not only “AI’s facilitation of the collection or use of information about individuals,” but also specifically “the making of inferences about individuals.” This could open up a broader approach to assessing privacy violations, along the lines of “networked privacy” and associated harms, which considers not only individual personal identifiable information but the inferences that can be drawn by looking at connected data about an individual, or relationships between individuals.Brookings, 18d ago
In conclusion, the pervasive surveillance in our digitized world presents a multifaceted challenge to our privacy and personal freedoms. From covert location tracking to the use of advanced cameras with facial recognition capabilities, the rapid evolution of surveillance technologies forces us to grapple with profound ethical questions. Furthermore, the integration of AI into education, while offering potential benefits, raises concerns about the well-being of our future generations. As we move forward, it is crucial to strike a delicate balance between the benefits of technology and the preservation of our individual rights and dignity in this era of constant scrutiny.interestingengineering.com, 11d ago

Latest

new The forum concluded with Dr. Tan Wee Liang, Vice Chairman of IAAS, sharing his thoughts on the impact of ESG on agriculture and engaging the audience in a discussion about its potential benefits. In conclusion, the influence of ESG on agriculture is highly contingent on factors such as the responder’s position in the supply chain, the scale of the operation, and the state of ESG implementation. Recognizing that embracing ESG will be a journey, not a quick fix, is crucial, and the process may disrupt financial projections. Ultimately, progress will depend on collaborative efforts from businesses, government, and society, requiring responsible behaviors to navigate the complexities of sustainable agricultural practices.Eco-Business, 2d ago
new Among the lowest-hanging fruit for AI in government involves press releases and other forms of communication from agencies to residents. Zencity, for example, debuted a ChatGPT tool that writes what amounts to a first draft of a press release — including potential quotes from public officials. That could save significant time for city managers, among other advantages, according to the company.Municipal budgeting, too, could serve as fertile ground for generative AI.A new AI tool from ClearGov takes in past budgeting data and future estimates to produce what officials sometimes call a budget narrative. Such narratives, which put spending figures into context, often help those officials sell the budget to peers and voters. AI could bring more efficiency to the process, usually the most difficult and contentious work undertaken by local and state governments.Generative AI also gained more of a presence in higher education in 2023. The technology can help with essays, math problems and lesson plans, with work completed within seconds. But fears of plagiarism and other abuses have led to a more cautious welcome for AI in universities and colleges than in city halls, with large school districts such as the one in New York City initially placing restrictions on ChatGPT.College deans and local school boards continue to grapple with the full implications of AI. So are other governmental bodies as the technology hogged more of the spotlight as 2023 progressed. Maine imposed a six-month ban on the use of AI for state employees using state devices or conducting state business. Officials said they needed time to study the privacy, bias and misinformation concerns sparked by ongoing deployment of AI-based products. Executive orders started to emerge at a regular clip as fall rolled around, with Pennsylvania, Virginia, Oklahoma and New Jersey governors all issuing guidance within a few weeks of each other. Their missives were followed within weeks by an AI Executive Order from the White House in late October. Each official action recognized both the potential and the risk, with many calling for outside help to develop appropriate policies for safe use in service of their residents.It’s almost impossible, however, to imagine a gov tech future without much more artificial intelligence. Evidence for that comes from every corner of the industry.For instance, industry giant Tyler Technologies touted its growing ability to use AI for quicker and more accurate court filings, whose complex coding and redaction requirements often force judicial employees to perform manual data entry. Klir’s new AI-backed offering is designed to improve water management and compliance, with what the company calls “holistic” views of utility systems delivered via a chatbot fueled by artificial intelligence.Startups, of course, have also embraced AI, as shown by the most recent cohort from CivStart’s gov tech accelerator program, which provides at least some foreshadowing of the tools public agencies might be using a few years from now. One of the program participants is using artificial intelligence to help local officials — many of them new to the grunt work of government — write and manage legislation.GovTech, 2d ago
new ...“In coming years, the citizen will use the Internet to build a relationship with government that is personal, custom-built for each user with features that are accessible. Digital government will be easy to use, consistent in its appearance and functionality, offer a complete selection of services that are unified across agencies, and available around the clock. Citizens will be aware of their rights to privacy and able to control governmental use of their personal information.”Current NASCIO President and Tennessee CIO Stephanie Dedmon set out to assess progress against the goals outlined in that paper, publishing a companion report this year. It points to a catalog of citizen-facing digital accomplishments recognized by NASCIO in the intervening years, including Tennessee’s MyTN App, which offers users more than 60 services from many different departments accessible with single sign-on. The updated report also includes evidence of how the personalized, user-focused, cyber- and privacy-aware aspirations have matured in the real world, creating new challenges along the way. The larger goals of effective technology that is focused on the user, however, remain largely the same.A recent report from KPMG, Tomorrow’s Government Today, surveyed residents and government leaders at the state and federal levels, acknowledging some of the challenges ahead for CIOs. Government executives view deploying new technology as a challenge — 72 percent say it will be “moderately difficult” while almost 25 percent say it will be “very difficult.”...GovTech, 2d ago
Gillibrand said the legislation is an important step forward in the effort to deter illegal robocalls."Don't dial if you don't want to go to trial," the Democrat said. "But, there's still more we need to do to address the rise of generative AI. I'm sending a letter to the chair of the Federal Trade Commission requesting information about its work to track the increasing use of artificial intelligence to perpetrate frauds and scams against older Americans. While public reporting indicates that more families are being targeted by voice clones in family-emergency scams, the number of Americans targeted by scammers using generative AI remains unknown."Earlier this month, the Federal Communications Commission announced it will pursue an inquiry to study the impact of artificial intelligence on robocalls and robotexts and is evaluating how it can also use AI technology to combat the problem.Gillibrand said she hopes to get both Republican and Democratic co-sponsors to push the bill forward, as people on both sides of the aisle are alarmed by the incidents. Gillibrand advised New Yorkers, especially older residents, to be cautious and aware of the problem. She said she's also weighing other legislation that would create a responsibility for banks and tellers to ask a set of standardized questions if an elderly person goes to a bank and wants to take out, say, $10,000 when that is not a usual practice."If [they have] never done that before, to have a series of questions that the teller can ask to say, 'Are you taking this out for a reason? Is there an emergency? Have you verified the emergency with a loved one? Would you like me to help you verify the emergency?'" Gillibrand explained. "I want to come up with some legislation to focus our tellers on good questions they can ask that don't violate their privacy or make them feel unsure of themselves or insecure, but just protective questions."© 2023 The Daily Gazette, Schenectady, N.Y. Distributed by Tribune Content Agency, LLC.GovTech, 3d ago
new It’s imperative to think about the working conditions of infectious disease (“ID”) specialists if we’re serious about sustaining a pipeline. ID doctors are renowned in the medical education world for their meticulous history-taking and comprehensive review of medical records. Given their pivotal role in antibiotic stewardship, infection control, and diverse diagnostic challenges, it is not uncommon to witness ID pvractitioners tirelessly working long hours, even on weekends and holidays. Although this isn’t universally applicable to all ID physicians, the exposure of the field to trainees, including me, significantly shapes our perception of ID specialists — often depicted as overworked, burdened with bureaucratic tasks, and inadequately compensated.STAT, 2d ago
In recent years, Geospatial Data Science – the use of geographic knowledge and AI approaches to extract meaningful insights from large-scale geographic data – has achieved remarkable success in spatial knowledge discovery and reasoning, and geographic phenomena modeling. However, two challenges remain in geospatial data science: (1) geographic phenomena are always treated as functions of a set of physical settings, but human experience has received insufficient attention; (2) there are limited strategies to focus on and address geoethical issues. In this talk, Dr. Kang will present a series of works that utilized geospatial data science to understand human experience and sense of place. In particular, using large-scale street view images, social media data, human mobility data, and advanced GeoAI approaches, he measured and analyzed human subjective safety perceptions (e.g., whether a neighborhood is perceived as a safe place), and emotions (e.g., happiness) at places, as well as human-environment relationships. Also, his work paid attention on geoethical issues such as monitoring perception bias and model bias and protecting geoprivacy.nyu.edu, 3d ago

Top

Of course, nobody has yet laid out specifically what financial information is actually deemed to be decision-useful in order to establish which data must be converted to a new format; that’s an unresolved first-order problem that leaves open the risk of unwarranted burdens on local governments during the initial implementation phase. All the oversight boards need to tread carefully.What proponents of the FDTA had in mind when they lobbied Congress, standardization on the extensible financial reporting language platform that has become commonplace in the private sector, was only a first-stage rocket in this new space race. The federal legislation did not give XBRL a monopoly per se, specifying only the use of “structured” data formats.Clearly, what most parties in the legislative process could never have anticipated last year was the possibility that existing financial reports using generally accepted accounting terminology may themselves already be computer-readable because of the new large language machine learning models that can read plain English typeset produced by word-processing software as well as alphanumeric images contained in the commonly used PDF documents that typically encapsulate governments’ audited annual financial reports. All of a sudden, “structured data” may ultimately prove to be little more than what we already have in place with conventional text documents that can be ingested by new AI systems with superior analytics already integrated with database utilities, without costly data entry hurdles.Governing, 20d ago
However, with great promise comes great responsibility. As we celebrate the potential benefits of AI in agriculture, it is crucial to acknowledge and address the associated risks. The deployment of advanced technologies in farming raises concerns about data privacy, ethical considerations and the potential for exacerbating existing inequalities. How can we ensure that the benefits of AI are equitably distributed, reaching smallholder farmers who are the backbone of many agricultural economies?...Agrilinks, 3d ago
Third, Congress should continue to invest in research for privacy- and security-enhancing technologies, as these will have important uses for AI. Additional research on topics such as secure multiparty computation, homomorphic encryption, differential privacy, federated learning, zero-trust architecture, and synthetic data can minimize or eliminate the need for AI-enabled services to process personal data while still maintaining the benefits of those services.[18] Many developers are already exploring solutions to address privacy concerns associated with large language models (LLMs). For example, some developers are exploring the use of “data privacy vaults” to isolate and protect sensitive data.[19] In this scenario, any PII would be replaced with deidentified data so that the LLM would not have access to any sensitive data, preventing data leaks during training and inference and ensuring only authorized users could access the PII.itif.org, 26d ago
Protect intellectual property (IP): While the EO does not completely ignore the issue of protecting IP, it does not offer much peace of mind to those who know how this typically gets done. The government states the obvious: They will deal with IP theft in a manner consistent and applicable to current methods. The problem? We can’t even do that effectively now. The FBI estimates that China steals between $225 billion to $600 billion of American IP annually, and it only takes a simple internet search to uncover the number of arrests and cases the FBI has made concerning espionage by Chinese nationals in the United States under a visa program. This has been a persistent issue for American businesses, and the United States has made it clear that they are concerned with the implementation of IP law in China. We can monitor their progress in addressing IP issues (which has slowed recently), all we want. Meanwhile, American companies are losing billions to a competitive market. Foul play will cause America to fall behind in the AI arms race. To protect and better manage IP, we need to raise the barriers to entry from countries identified on the United States Trade Representative Priority Watch List, and combine that with enhanced investigations and penalties for espionage committed by nationals from countries on the list.SC Media, 25d ago
Ethical and Security Concerns: With AI taking a more prominent role in coding, ethical and security concerns arise, especially around data privacy and the potential for biases in AI-generated code. Ensuring ethical use and addressing these biases is crucial for the responsible development of AI-driven programming tools.unite.ai, 20d ago
The study also revealed that 87% of HR and business leaders are receptive to the use of Generative AI tools in their professional roles. Nonetheless, certain potential issues were acknowledged – 44% felt apprehensive regarding the data security and accuracy of AI-generated outputs. Similarly, over half of the participants (52%) displayed concerns about AI's potential to replace their jobs.CFOtech Australia, 13d ago

Latest

These projects are distinct from one another in experimental design. They share the potential, however, to revolutionize what is possible in the care of extremely preterm babies. Existing forms of neonatal care are emergency interventions. The baby is given treatments to stave off the effects of being born with significantly underdeveloped organs. The artificial womb, in contrast, extends the period of gestation to prevent these complications from arising to begin with. If it works, it will enable the infant to keep growing as though it had not yet been born. And with scientists anticipating human trials within the next few years, artificial-womb technology is no longer purely speculative. The “before” and “after” images released by the biobag team were eerie and briefly ubiquitous. In the first, a floating, pink-skinned, wrinkled lamb fetus sleeps adrift in a transparent bag. In the second, it has grown soft white wool and its body presses against the plastic surface, waiting to be born. These pictures evoke much the same reaction that people once felt when they first encountered incubators: the curious sensation of peering into the future.The Walrus, 3d ago
Prompt #6: How do confirmation bias, stereotyping and other cognitive biases impact how we interpret events, news and information? What are potential consequences of not verifying the accuracy of such information? Analyze a current news event with these multiple issues in mind for your essay.News Literacy Project, 3d ago
Newswise — In today's medical landscape, antibiotics are pivotal in combatting bacterial infections. These potent compounds, produced by bacteria and fungi, act as natural defenses against microbial attacks. A team of researchers delved into the intricate world of glycopeptide antibiotics – a vital resource in countering drug-resistant pathogens – to uncover their evolutionary origins. Dr. Demi Iftime and Dr. Martina Adamek headed this interdisciplinary project, guided by Professors Evi Stegmann and Nadine Ziemert from the “Controlling Microbes to Fight Infections” Cluster of Excellence at the University of Tübingen, with support from Professor Max Cryle and Dr. Mathias Hansen from Monash University in Australia.Using advanced bioinformatics, the team sought to decipher the chemical blueprint of ancient glycopeptide antibiotics. By understanding their evolutionary trajectory, the researchers were looking for insights that could steer the development of future antibiotics for medical applications. The team’s study has been published in the latest edition of Nature Communications.Tracing an Evolutionary Path“Antibiotics emerge from an ongoing evolutionary tug-of-war between different organisms, each striving to outmaneuver or curtail the spread of their adversaries,” explains Evi Stegmann. To explore this, the researchers utilized the glycopeptide antibiotics teicoplanin and vancomycin, along with related compounds sourced from specific bacterial strains. These compounds, built from amino acids and sugars, disrupt bacterial cell wall construction, ultimately leading to bacterial death. Notably, teicoplanin and vancomycin exhibit this potency against numerous human pathogens.In simplified terms, scientists often organize species into an evolutionary tree structure to illustrate their relationships. Similarly, the research team constructed a family tree of known glycopeptide antibiotics, linking their chemical structures via gene clusters that encode their blueprints. Employing bioinformatics algorithms, they deduced a putative ancestral form of these antibiotics – which they dubbed “paleomycin.” By reconstructing the genetic pathways they believed to produce paleomycin, the team successfully synthesized the compound, which displayed antibiotic properties in tests. “Recreating such an ancient molecule was exhilarating, akin to bringing dinosaurs or wooly mammoths back to life,” remarks Ziemert.Connecting Evolution to Practicality“One intriguing finding is that all glycopeptide antibiotics stem from a common precursor,” Stegmann says. “Moreover, the core structure of paleomycin mirrors the complexity seen in teicoplanin, while vancomycin exhibits a simpler core. We speculate that recent evolution streamlined the latter’s structure, yet its antibiotic function remained unchanged,” Ziemert adds. This family of antibiotics – though beneficial for bacteria producing them – demand substantial energy due to their complex chemical composition. Streamlining this complexity while retaining efficacy could confer an evolutionary advantage.The researchers meticulously traced the evolution of these antibiotics and their underlying genetic sequences, investigating pivotal steps required for creating functional molecules. In collaboration with Australian scientists, some of these steps were replicated in laboratory settings. “This journey through time revealed profound insights into the evolution of bacterial antibiotic pathways and nature's optimization strategies, leading to modern glycopeptide antibiotics,” says Ziemert. “This provides us with a solid foundation for advancing this crucial antibiotic group using biotechnology.”...newswise.com, 3d ago

Latest

AI-powered surveillance systems are gradually becoming common and with this most of us have started to worry with respect to privacy and safety. For example, in China, they use facial recognition technology to watch people closely while in the United States the police use algorithms to predict where crimes might happen. These technologies could violate the personal freedoms of people and make inequalities in society even worse. In simple words, they might invade our privacy and make the gaps between different groups of people even bigger.Techiexpert.com, 3d ago
The Middle East augmented analytics in BFSI market is anticipated to witness traction in the future. This is attributed to boost in awareness among organizations regarding the potential of augmented analytics in enhancing decision-making processes and streamlining the operations. Furthermore, augmented analytics technology plays a crucial role in automating compliance processes, reducing risk of violation, and safeguarding financial institutions from regulatory penalties.alliedmarketresearch.com, 3d ago
However, the complexity of integrating augmented analytics into existing workflows is a key factor restraining the growth of the Germany augmented analytics in BFSI market. Moreover, increase in concerns regarding data security is another key factor restraining the market growth. Companies are investing in robust security measures to mitigate the risks associated with data breaches and cyberattacks. On the contrary, augmented analytics provides valuable insights into consumer preferences, which enables financial institutions to offer personalized services. Furthermore, with the help of AI and ML algorithms, companies optimize their investment portfolios, identify emerging market trends, and make informed investment decisions. This offers lucrative opportunities for increased profitability and competitiveness in the financial market.alliedmarketresearch.com, 3d ago
However, there are certain restraints of the South Korea augmented analytics in BFSI market. The stringent laws implemented by the South Korea Government pertaining to lack of transparency and unethical practices in augmented analytics technology present a significant obstacle for market players, as many of them fail to comply with these laws, thus hampering the growth of the market. In addition, concerns over data privacy and security restrict many financial organizations from adopting the latest technology, therefore restraining the market.alliedmarketresearch.com, 3d ago
The issue of security is another thorny challenge, especially in sectors like healthcare, retail, and BFSI, Jain notes. “Businesses that fail to protect their customer's data run the risk of losing their trust, leading to a loss of sales, ruining the company’s reputation, and potentially subjecting them to legal liability. In an environment where consumer loyalty and relationships are everything, businesses must take every measure possible to keep customer data safe and secure,” he says. “Protecting customer trust and mitigating cybersecurity risks are just a few examples of why privacy and security are vital to businesses.”...technologymagazine.com, 3d ago
Beyond the main players, the AI landscape consists of smaller AI-powered apps, commonly referred to as 'Shadow AI'. Largely unnoticed by organisations' security teams, these apps are of concern due to their lack of robust enterprise-level security. "Security teams often don't know which apps their employees use daily, posing significant risks to third-party risk and data security programs," warned Harmonic researchers.ChannelLife New Zealand, 3d ago

Latest

The development of commercial mixed reality platforms and the quick advancement of 3D graphics technology have made the creation of high-quality 3D scenes one of the main challenges in computer vision. This calls for the capacity to convert any input text, RGB, and RGBD pictures, for example, into a variety of realistic and varied 3D scenarios. Although attempts have been made to construct 3D objects and sceneries directly using the diffusion model in voxel, point cloud, and implicit neural representation, the results have shown limited diversity and quality due to the restrictions in training data based on 3D scans. Using a pre-trained picture-generating diffusion model, like Stable Diffusion, to generate a variety of excellent 3D sceneries is one approach to address the problem. With data-driven knowledge gained from the massive training set, such a huge model produces believable images but cannot ensure multi-view consistency among the images it generates.MarkTechPost, 3d ago
Meta routinely exchanges information about coordinated inauthentic behavior (CIB) networks – groups of fraudulent accounts utilized for foreign propaganda and other influence operations. While most of Meta’s CIB removals are not based on government intelligence, the company has depended on such tips to identify attempts to target US politics. These government intelligence tips have proven invaluable in detecting potential threats and mitigating the spread of misinformation in the political landscape. Meta continues to work closely with government agencies to improve its ability to detect and eliminate such networks, thereby maintaining the integrity of political discourse on its platforms.ReadWrite, 3d ago
The opaqueness in the decision-making process of LLMs like GPT-3 or BERT can lead to undetected biases and errors. In fields like healthcare or criminal justice, where decisions have far-reaching consequences, the inability to audit LLMs for ethical and logical soundness is a major concern. For example, a medical diagnosis LLM relying on outdated or biased data can make harmful recommendations. Similarly, LLMs in hiring processes may inadvertently perpetuate gender bi ases. The black box nature thus not only conceals flaws but can potentially amplify them, necessitating a proactive approach to enhance transparency.unite.ai, 3d ago
Our focus lies on understanding the risks and unintended consequences of self-improvements. Thus, the insights obtained will likely enhance the safety of an already existing trend without significantly boosting capabilities. The self-reflective data curation process doesn't appear likely to instill or elicit dramatic, novel capabilities in a model. It yields predictable improvements in each iteration, as opposed to significant leaps from algorithmic advancements (e.g., LSTM to Transformer architecture). Given that our tasks resemble human-performed data curation, we are less concerned about the "threshold" family of threat models. Nonetheless, if it seems likely at any point that our research would significantly advance capabilities on this frontier, we would try to limit its dissemination or avoid releasing it altogether.lesswrong.com, 3d ago
We need high quality, clinical trials in medicine, but we also need to invest in projects wisely. Can polygenic risk scores realistically be expected to meet the aims of the project? This is screening—a complex intervention where a test is one small part of a larger process. Polygenic risk scores rank people at higher or lower-than-middling risk, but one must recall basic statistics on predictive values. I will save you the effort—the bigger the attempt not to miss people with serious disease, the more people have to be screened; the more people are screened, the more false positives. Restrict the test to people at high risk, and the volume of false positives will decline, but since diseases usually falls in the larger group of lower risk people, you will also mainly miss it. This is why effective disease screening is exceptional: it rarely works well enough to do more good than harm.The BMJ, 3d ago
No experiment I could possibly design today is more valuable than preserving the opportunity to pose a new experiment tomorrow, next year, or in a decade. My cohort of scientists has come up inspired by imagining what it was like for contemporaries of Darwin to encounter and compare global wildlife, or during the modern synthesis, as the invisible internal mechanisms of evolutionary genetics unfurled. Now, we stare down the prospect that, during our turn, we will have to watch the biosphere die. I have peers who set out to study ancient mass extinction events only to find that the conditions that precipitated ancient mass extinction events aptly describe events now. I have contemporaries who set out to discover new species by recording sounds in the rainforest, only to capture an eerie transition toward silence. I've done very little field work and I study hardy, laboratory-tractable species that aren't endangered or picky about where they live, but even I stopped finding butterflies at my best collection site after wildfires. In my 10 years in science, I think I've never been to any research conference, on any topic, without hearing my colleagues interject dire warnings into their presentations – and I've never attended a climate-focused conference. So, the most important research question is ‘will the species I hope to study – and a stable international society that can support research activity as I've known it – survive the next 50 years?' With that in mind, with ‘unlimited’ funding, the best thing I can imagine doing for science is to fight. I think of legal support for climate protesters; cultivating honest communication platforms that bypass corporatized media; criminalizing ecocide; eliminating fossil fuels fast; protecting democracy against regulatory capture; buying out and defending the recommended 30% of Earth's surface as nature reserves; facilitating socially just transitions to safely support humans in the remaining land.The Company of Biologists, 3d ago

Latest

Soltani said the agency's board and the public will have opportunities to provide input on the proposed rules starting next month. The guidelines are meant to clarify how the 2020 California Consumer Privacy Act — which addressed a range of electronic and online uses of personal information — should apply to decision-making technology.The proposal also outlines options for how consumers' personal information could be protected when training AI models, which collect massive data sets in order to predict likely outcomes or respond to prompts with text, photo and video.OpenAI and Google already have been sued over their use of personal information found on the internet to train their AI products.GovTech, 3d ago
The Pharma IT transformation yielded impressive outcomes. The implementation of advanced analytics resulted in an impressive 20% reduction in drug development cycles, while a streamlined infrastructure led to a noteworthy 30% reduction in operational costs. The cybersecurity measures improved the company's reputation for data security in addition to complying with regulations. The company not only overcame immediate difficulties by implementing state-of-the-art Pharma IT solutions, but it also established itself as a market leader in India's very competitive pharmaceutical sector. This transformation was a great success and demonstrated how important IT is to driving innovation, efficiency, and compliance in India's ever-expanding pharmaceutical industry.acnnewswire.com, 3d ago
The concept of misinformation has deep historical roots. Throughout various epochs, from ancient civilizations to the modern digital age, misinformation has consistently influenced human communication. This includes the distortion of facts in pre-print societies' oral storytelling (Burkhardt, 2017) through to contemporary digital information warfare conducted between states (Karpf, 2019). In academic research, the two World Wars played a pivotal role in shaping early scholarship around the topic. Following World War I, academic research in propaganda studies analysed the techniques employed during the war and their societal impact (Bernays, 1928; Lasswell, 1927). The post-Second World War era witnessed an increased focus on academic research regarding rumours, acknowledging their significant impact on shaping public perceptions and attitudes towards the war effort. This phenomenon garnered particular attention within the field of social psychology, as researchers sought to gain a deeper understanding of the psychological processes involved in the propagation of rumours (Allport & Postman, 1947). Since the late 20th century, the advent of information and communication technologies has greatly propelled contemporary research on misinformation. Within this burgeoning field, scholars have dedicated significant efforts to investigate the intricate role of digital communication technologies in shaping the multifaceted landscape of misinformation (e.g. Marres, 2018; Napoli, 2019; Tufekci, 2018).Internet Policy Review, 3d ago

Top

Moreover, the deterministic view risks downplaying human agency and the multifaceted interplay between technology and society. It suggests a passive acceptance of technological progress, ignoring the potential for ethical deliberation, cultural influences, and deliberate choices in shaping technological trajectories. The rise of data privacy concerns and the ethical implications of AI and biotechnology underscore the need for a more nuanced understanding that recognizes the role of human values, governance, and societal norms in steering technological development.Psychology Today, 21d ago
...“In the sphere of privacy and transparency, we’re diligent about openly communicating our practices. We believe it’s crucial to be transparent about the collection and use of data in training our AI models. It’s not just about regulatory compliance; it’s about earning the trust of our customers and the community at large. By addressing these issues head-on, we can harness the potential of AI ethically and effectively, fostering innovation that respects individual rights and societal values.”...Dynamic Business, 12d ago
For all its potential use cases, AI can pose huge risks to businesses due to its security vulnerabilities. Organisations who fail to put proper guardrails in place to stop employees from potentially breaching existing privacy regulations through the inappropriate use of generative AI tools are likely to face significant consequences. Over the past 12 months, our research revealed that 43% of organisations in the UAE have been penalised for compliance breaches, with an average fine of $178,000. Many regulatory bodies are currently focused on how existing data privacy laws apply to generative AI. However, as the technology continues to evolve, legislation created specifically for generative AI will be created in 2024, that will apply rules directly to these tools and the data used to train them.TahawulTech.com, 11d ago
A new large-scale, immersive art exhibit is opening in Toronto that promises to reignite the conversation around the most pressing environmental issues of our time. Called Arcadia Earth Toronto, it combines creative art installations and the latest augmented and virtual reality technologies to showcase the beauty of our planet and the impact of human actions on the environment. Previously mounted in New York, Las Vegas and Saudi Arabia, Arcadia Earth will debut on Dec. 1 in Toronto’s newest mixed-use complex, the Well. Mounted in a 17,000-square-foot space, the exhibit takes visitors on a 10-room adventure that teaches them about the challenges facing our planet and suggests ways to incorporate more sustainable practices into our daily lives. (Did you know, for example, that we each ingest the equivalent of one credit card a week in microplastics?) Founded by experiential artist Valentino Vettori, Arcadia Earth was brought to Toronto by Craig Perlmutter who hopes the exhibit (its first permanent location) will “captivate a new audience and inspire them” to find pathways to a greener, more sustainable future. Tickets, ranging from $24 to $39, can be purchased at www.arcadiaearth.ca, with $2 from every ticket going to WWF-Canada. - Gayle MacDonald...The Globe and Mail, 9d ago
AI has possible downsides and restrictions, just like any other new technology. Concerns around false information and data privacy have grown. Its seeming incapacity to forge genuine connections with the world around us raises other issues. The well-known blogging destination Medium has even implemented rules to limit the use of AI on its network, reminding readers that they are a place for human storytelling. For AI to be effectively applied in any industry, including events, businesses need to grasp the notion of “Explainable AI.”...Event-Technology Portal, 4d ago
AI is obviously much talked about, particularly since the emergence of ChatGPT. It has some great applications, but people should be cognisant of its limitations and the risks around its use. The other hot topic is cloud. A lot of customers are embracing the model and moving their data to the big service providers. Unfortunately, the pace of adoption isn’t being matched by commensurate adoption of cybersecurity technology. Many organisations don’t understand the risks sufficiently; they think the cloud provider will protect their data, whereas, in fact, it’s their responsibility.Intelligent CISO, 26d ago

Latest

Most businesses today have many document-heavy or coordination-heavy processes that are manual because automating them has been a daunting task with traditional tools. For example, a person in finance who processes a large variety of invoices, purchase orders and contracts on a daily basis finds it hard to explain all the details of what they do to an RPA developer, especially all the edge-cases. Even after automating such processes, the ballooning maintenance costs have resulted in the failure of many projects.Datanami, 3d ago
Just like technology, policies can quickly become outdated. They must be revised, replaced, or even removed. Although this isn’t the most exciting area of CISO work, creating clear policies that are proactive and empowering, not restrictive, can ensure employees gain the benefits of new technology without the risk. For example, generative AI (GenAI) can offer enormous benefits for a company — improved productivity, efficiency, and creativity. But without appropriate guardrails to govern how the technology is used and what data (or code) can be input into GenAI models, a company could be at extreme risk for compromise. Creating a formal policy with input from stakeholders throughout the company enables employee use of the technology while reducing risk.securitymagazine.com, 3d ago
Third, a larger conversation should be had about the influence of letting digital platforms into the most personal aspects of our lives. Having that data breached or sold with the possibility of being connected to a specific individual can have serious consequences in a society that still stigmatizes people seeking resources to protect their mental health. Although, in many ways, the benefits of these technologies are clear in terms of accessibility, their users must stay cognizant of the fact that private technology companies — not licensed clinical facilities — are facilitating the services that they are using. And these technology companies carry with them a unique ability to surveil mental health and other data at a massive scale for their commercial interests.Brookings, 3d ago
C-suite marketing leaders and CEOs feel significantly more prepared for the phase-out of third-party tracking cookies than their junior employees, according to a report from Fyllo, a provider of contextual targeting solutions combined with audience data. Fyllo’s study found that 88% of the C-Suite and 78% of CEOs feel their current targeting solutions are prepared to operate without cookies, compared to 62% of VP/Director level executives. “While they’re preparing for the future, senior leadership might be more optimistic about the transition than their managers in the trenches,” said Jeff Ragovin, President of Fyllo. “Contextual targeting will almost certainly prove to be the dominant alternative to behavioral targeting, and such widespread experimentation suggests that advertisers of all descriptions are looking to future-proof their digital strategies.”...Cynopsis Media, 3d ago
Training individuals to use AI ethically is essential in order to ensure responsible and unbiased deployment of this powerful technology. Ethical AI training equips individuals and organizations with the knowledge and skills to navigate the challenges and identify risks that arise when working with AI systems. It ultimately boils down to mitigating risk – just like anti-bribery and corruption policies, as well as the importance of data privacy and security. By providing individuals with the necessary training, we can foster a culture of ethical AI use, where technology is harnessed for the benefit of all while mitigating potential harm and ensuring equitable outcomes.RTInsights, 3d ago
This paper delves into the cryptanalysis of QARMAv2 to enhance our understanding of its security. Given that the integral distinguishers of QARMAv2 are the longest concrete distinguishers for this cipher so far, we focus on integral attack. To this end, we first further improve the automatic tool introduced by Hadipour et al., for finding integral distinguishers of TBCs following the TWEAKEY framework. This new tool exploits the MixColumns property of QARMAv2 to find integral distinguishers more suitable for key recovery attacks. Then, we combine several techniques for integral key recovery attacks, e.g., Meet-in-the-middle and partial-sum techniques to build a fine-grained integral key recovery attack on QARMAv2. Notably, we demonstrate how to leverage the low data complexity of the integral distinguishers of QARMAv2 to reduce the memory complexity of the meet-in-the-middle technique. As a result, we managed to propose the first concrete key recovery attacks on reduced-round versions of QARMAv2 by attacking 13 rounds of QARMAv2-64-128 with a single tweak block, 14 rounds of QARMAv2-64-128 with two independent tweak blocks, and 16 rounds of QARMAv2-128-256 with two independent tweak blocks. Our attacks do not compromise the claimed security of QARMAv2, but they shed more light on the cryptanalysis of this cipher.iacr.org, 3d ago

Top

The initiative aligns with Snow’s broader vision of technology intelligence, focusing on the ability to understand and manage all technology data through Snow Atlas. The generative AI in Snow Copilot aims to bridge the visibility gap in IT, helping organizations extract more value from their data and make informed decisions on software, cloud services, and technology assets. The use of Microsoft Azure OpenAI Service ensures secure data processing, addressing concerns of data privacy and integrity. Snow Copilot represents the first of several planned AI capabilities, with future features such as machine learning entitlement ingestion, enhanced data intelligence services, and fine-tuning large language models, scheduled for release on Snow Atlas in 2024.DATAVERSITY, 17d ago
AI Snake Oil Blog: “Foundation models such as GPT-4 and Stable Diffusion 2 are the engines of generative AI. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies like social media. How are these models trained and deployed? Once released, how do users actually use them? Who are the workers that build the datasets that these systems rely on, and how much are they paid? Transparency about these questions is important to keep companies accountable and understand the societal impact of foundation models. Today, we’re introducing the Foundation Model Transparency Index to aggregate transparency information from foundation model developers, identify areas for improvement, push for change, and track progress over time. This effort is a collaboration between researchers from Stanford, MIT, and Princeton. The inaugural 2023 version of the index consists of 100 indicators that assess the transparency of the developers’ practices around developing and deploying foundation models. Foundation models impact societal outcomes at various levels, and we take a broad view of what constitutes transparency…Execution. For the 2023 Index, we score 10 leading developers against our 100 indicators. This provides a snapshot of transparency across the AI ecosystem. All developers have significant room for improvement that we will aim to track in the future versions of the Index…Key Findings...bespacific.com, 19d ago
...50% of respondents prefer third-party product analytics tools for tracking video product performance and user behavior, while 26% leverage in-house tools or experts, and 24% use a combination of both. Only 8% of respondents are highly satisfied with the third-party product analytics tool they use to track video product performance. The dissatisfaction with third-party product analytics tools is mainly due to their inability to track end-user insights across all offered devices (48%), and a lack of video content monitoring capabilities (40%)“In today’s fiercely competitive streaming market, the success of video products largely depends on a business’s ability to make decisions rooted in fact and driven by audience interests and behavior,” said Jordi Bartomeu, NPAW’s Chief Strategy Officer and Head of Product Analytics. “To do so, they need an analytics tool that can capture all the complexities of online video and track users across the entire customer lifecycle, helping them design user journeys that deliver increased engagement and stable growth.”...Cynopsis Media, 18d ago

Latest

Here you go, insider trading robot:We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.John Lothian News, 4d ago
BD: Depending on the year, we interview roughly a quarter to a third of the applicant pool. Interviews are conducted primarily by trained second-year students and recent alumni who were interviewers as students, although members of the Admissions Committee do conduct some as well. Our interviews are “blind,” meaning that the interviewer has reviewed your resume, but has not seen the rest of your application. The idea is for this input to be as independent of the other reviews as possible. The interviews themselves are 30 minutes in length and structured in format – every interviewee receives the same questions in the same order. Research consistently shows that structured interviews are far more predictive than unstructured ones, which is why we adopted this format many years ago. Interviewers also use a highly structured rubric in evaluating candidates, to heighten inter-rater consistency, decrease bias, and increase the fairness of the process.Clear Admit, 3d ago
MRI-based functional brain imaging has been a key tool in attempts to understand how connectivity in the brain changes in people with psychosis. The knowledge gained to date has not, however, yielded biomarkers reliable enough across the full spectrum of patients to be able to predict treatment response or long-term symptom trajectory. Drs. Malhotra, Cao and colleagues developed and tested a method aimed at combining several distinct modalities in which fMRI is used to observe connectivity in the brain. It’s possible, for example, to look at network connections in the brain when the brain is in a “resting state”; as well as in various active states that can be induced in test subjects by asking them, during the scan, to perform various kinds of tasks. Different tasks make demands upon different brain regions, or different networks spanning brain regions.A Connectivity Signature Predicting Response to Antipsychotic Therapy is Identified in First-Episode Psychosis Patients | Brain & Behavior Research Foundation, 4d ago
Before regulatory action is taken to oversee the development and deployment of AI, it is judicious to examine how existing regulations might be extended to AI. These tools heavily rely on a dependable data supply chain. IT and security leaders are already grappling with the challenge of adhering to a slew of data-related legislation, including acronyms such as HIPAA, GLBA, COPPA, CCPA, and GDPR. Since the advent of GDPR in 2018, Chief Information Security Officers (CISOs) and IT leaders have been mandated to provide transparent insights into the data they collect, process, and store, along with specifying the purpose behind these data-handling processes. Furthermore, GDPR empowers individuals with the right to control the use of their data. Understandably, leaders are concerned about the potential impact of deploying AI and ML tools on their ability to comply with these existing regulatory requirements.TechRadar, 4d ago
This session highlighted the ways in which bioanalysts are currently leveraging AI to spur laboratory automation, expedite clinical trial recruitment, optimize sample sizes and create adaptive study designs. AI’s usefulness in data-driven tasks such as report writing, assay development, chromatography verification, etc, was discussed, but Matthew also considered the concerns associated with the implementation of AI and delved into the challenges posed by data privacy, job displacement, perpetuation of biases and data dependence.Bioanalysis Zone, 4d ago
Azam Sahir, Chief Product Officer at MongoDB, reiterated the value that this partnership holds for its customers. "Customers of all sizes from startups to enterprises tell us they want to use generative AI to build next-generation applications and future-proof their businesses," said Azam. "Many customers express concern about ensuring the accuracy of AI-powered systems' outputs whilst also protecting their proprietary data. We're easing this process for our joint-AWS customers with the integration of MongoDB Atlas Vector Search and Amazon Bedrock. This will enable them to use various foundation models hosted in their AWS environments to build generative AI applications, so they can securely use proprietary data to improve accuracy and provide enhanced end-user experiences."...ChannelLife Australia, 4d ago

Top

In essence, the fears and concerns of consumers conflict directly with their desire for individualized, value-driven experiences supported by AI. This means brands must speak out and help consumers make the connection between their data and artificial intelligence so they can fully understand the benefits of its use.Retail TouchPoints, 20d ago
School district officials are eager for policy guidance around how to protect student privacy with AI tools, deal with the possibility of students using the technology to cheat, and a host of other challenges. The federal government plans to step up to the plate to help, but it may take some time. A sweeping White House executive order on AI released Oct. 30 calls on the department to develop AI policies and guidance within a year.In crafting those resources, the department is "not looking for the perfect policy prescription that is a one size fits all for everybody," Rodríguez said.Instead, officials should ask: "What structures do we want to see to help support the responsible use of AI in education? ... I think one of the most important pieces is: How do we think about building the capacity and exposure of our educators around how AI can be of use?"Last May, the Education Department released a report on AI that called for keeping "humans in the loop" when using the technology to help with tasks like creating lesson plans, tutoring students, or making recommendations about how to help individual students grasp a concept.Rodríguez elaborated on that principle at the AEI event. AI tools "need to have an expectation that human judgment and teacher judgment be part of the process of learning," he said.Meanwhile, the federal government needs to ensure its laws keep pace with developments in technology, Rodríguez said, in response to a question about the main law governing student privacy, the Family Educational Rights and Privacy Act, or FERPA. FERPA was signed into law in 1974, almost 50 years ago and well before the birth of the internet.Rodríguez agreed that the law needs to be updated to reflect an environment where technology products and services, including those powered by AI, are collecting a mind-boggling quantity of student data. While rewriting FERPA — or creating new federal privacy laws to supplement it — will be up to Congress, the Education Department has already begun conducting listening sessions to inform a rewriting of the regulations, or rules governing the law, Rodríguez said."How we utilize data, how we collect that data looks so different than it did back" in the 1970s when the law was passed, Rodríguez said. "Think about the average of 148 tech tools that are being used every year by students or by their teachers, many of those tools gathering student data. We need a more modern policy infrastructure to match the technological infrastructure we're seeing."©2023 Education Week (Bethesda, Md.). Distributed by Tribune Content Agency, LLC.GovTech, 17d ago
The ROB-ME tool includes several innovations in the assessment of non-reporting biases. In the original Cochrane tool for assessing risk of bias in randomised trials,29 users were prompted to judge the risk of selective reporting bias at the study level, based on whether any results in the study were selectively reported. In reviews adopting this approach, many studies have been judged at high risk of selective reporting bias11; however, the corresponding risk of bias in meta-analyses affected by selective non-reporting of study results is infrequently acknowledged, because no guidance on how to reach such a judgment was provided. ROB-ME explicitly deals with this gap, directing assessments at the level of the meta-analysis result and outlining what factors need to be considered to determine whether the amount of evidence known or assumed to be missing matters. Furthermore, to our knowledge, ROB-ME is the first tool to help users reach an overall judgment about risk of bias in a meta-analysis result arising from both missing studies and missing results in studies.The BMJ, 14d ago
During the program, CCC’s General Counsel Catherine Zaller Rowland and a panel of international legal experts including Prof. Daniel Gervais, Bruce Rich, and Carlo Scollo Lavizarri considered how voluntary collective licensing is a proven way to use large collections of copyrighted materials with permission, and why AI technologies must address important concerns over equity, transparency, and authenticity.Velocity of Content | A series of recordings from the Copyright Clearance Center, 17d ago
Where risks cannot be mitigated, the provider should be responsible for informing users further down the supply chain of the risks that they and (if applicable) their own users are accepting. They must also advise them on how to use the component securely. “Where system compromise could lead to tangible or widespread physical or reputational damage, significant loss of business operations, leakage of sensitive or confidential information and/or legal implications, AI cyber security risks should be treated as critical,” it added.Industrial Cyber, 6d ago
In the last 5 years, Dr. Gupta has led work on the digital transformation of medical affairs, inclusion and expansion of patient centricity as a concept in Menarini’s pharma strategy, and has devised physician-friendly medical solutions with clinical decision support systems. He is a strong advocate of improving patient outcomes and adopting data analytics to enhance the understanding of real-world experiences and the effects of medications on the larger population. He is a regular commentator on social media in voicing his views on new-age leadership, including how best to embrace artificial intelligence with empathy – ‘digitalise but don’t dehumanise’ is one of his favourite quotes.EuroCham, 28d ago

Latest

In recent years we’ve seen several third-party system OEMs enter the market developing highly modular and standardized solutions that tend to be based around robotics (as opposed to traditional conveyance), such as Exotec, AutoStore, Dexterity, and Berkshire Grey. This is shifting the buy-vs-build decision for system integrators in favor of ‘buy’ due to the rapid rate of innovation surrounding these technologies, coupled with the lack of in-house robotic expertise. Because the market is changing at such a rapid pace, incumbent system integrators don’t want to bet big on a single acquisition target or look to develop technologies in-house because end-customer preferences may have shifted by that time.Material Handling Network, 4d ago
While not an immediate consequence of Moore’s Law, the overarching principles of exponential growth in computational power heighten concerns about the potential impact of quantum computing on cryptography. Quantum computers, when realized, could leverage algorithms like Shor’s algorithm to efficiently break widely used cryptographic schemes, rendering them obsolete. Preparing for the advent of quantum computing requires the development and adoption of quantum-resistant cryptographic methods.Blockchain Magazine, 4d ago
The program only permits warrantless searches on individuals if they are non-Americans and located outside of the United States when the communications occur. However, due to the program’s “upstream” and “downstream” data collection procedures and intelligence agencies’ loose understanding of what it means for a query to be reasonably likely to return foreign-intelligence information, the FBI gets away with this pattern and practice of noncompliance. Even worse, it enables the FBI to discretely continue Hoover and post 9/11 era practices of treating immigrants and communities of color like national security threats. AAPI advocates have attested to the chilling, xenophobic effect of this surveillance. As a result of having friends and family who are not naturalized citizens and travel outside the U.S., immigrant communities in the U.S. have long been overrepresented in Section 702 data collection, and thus are disproportionately deprived of their civil liberties.Fast Company, 4d ago

Latest

The Pharma IT transformation yielded impressive outcomes. The implementation of advanced analytics resulted in an impressive 20% reduction in drug development cycles, while a streamlined infrastructure led to a noteworthy 30% reduction in operational costs. The cybersecurity measures improved the company’s reputation for data security in addition to complying with regulations. The company not only overcame immediate difficulties by implementing state-of-the-art Pharma IT solutions, but it also established itself as a market leader in India’s very competitive pharmaceutical sector. This transformation was a great success and demonstrated how important IT is to driving innovation, efficiency, and compliance in India’s ever-expanding pharmaceutical industry.CoinGenius, 4d ago
Academic progress was evident, as 36% of students demonstrated advancement of over 13 months in their reading age. This aligns with the belief that narratives have the power to shape thoughts and emotions, fostering self-driven behavioural changes.Additionally, the study revealed that two-thirds of students experienced improved wellbeing, emphasising the therapeutic effects of storytelling. The positive impact extended to the staff as well, with 37% reporting increased calmness in the classroom. This underscores the shared emotional experience created through storytelling, benefiting both students and educators.Twinkl, a key education resources provider, has played a significant role in supporting bibliotherapeutic initiatives. By offering a wide range of fiction and non-fiction books - such as Twinkl Originals and Rhino Readers - via a mobile app and online, Twinkl contributes to creating a rich literary environment that enhances the effectiveness of bibliotherapy. Their commitment to providing resources that align with educators' evolving needs fortifies the success of initiatives like the Spring 2023 story time programme. Twinkl continues to develop their book offering to support reading for pleasure. The Reading Framework reinforces the emotional benefits of reading, enabling pupils to express ideas and feelings. Stories serve as a tool for navigating difficult conversations, allowing individuals, especially children, to see themselves in a broader context. Teachers play a crucial role in modelling healthy emotional responses through shared stories."Bibliotherapy is a safe, non-confrontational method of exploring and developing emotions, and can be used to develop an understanding of difficult topics with learners. Children build empathy through their interaction with literature, which in turn has the power to change thoughts and feelings," explained Katie Rose, Subject Leads Segment Manager at Twinkl.Beyond academics, the incorporation of storytelling into daily routines proves to be a potent tool for nurturing well-being, fostering empathy, and creating a positive educational environment. As schools recognize the impact of bibliotherapy, it's clear that the narrative of education is being rewritten, one story at a time.To learn more visit TwinklTwinkl...openPR.com, 4d ago
However, the chatbot has also generated ethical concerns among academicians and authors alike. Both communities have expressed scepticism regarding its misuse and the possibility of plagiarism of their original work. Students are said to have already started using ChatGPT to complete their assignments, prompting some schools and universities to strictly ban its use for academic purposes. Others have integrated it into their curricula with strict riders on how to use it ethically. Mainly, its ability to provide human-like responses makes it difficult for educators to distinguish between what is written, and which is AI-generated.cnbctv18.com, 4d ago
...“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more sceptical than enthused. Across two of the four forums on the dark web we examined, we only found 100 posts on AI. Compare that to cryptocurrency where we found 1,000 posts for the same period. We did see some cybercriminals attempting to create malware or attack tools using LLMs, but the results were rudimentary and often met with scepticism from other users. In one case, a threat actor, eager to showcase the potential of ChatGPT inadvertently revealed significant information about his real identity. We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us”, said Christopher Budd, director, X-Ops research, Sophos.TahawulTech.com, 4d ago
Trust is deeply relational (Scheman 2020, Knudsen et al, 2021, Baier 1986), and has been understood in terms of the vulnerabilities inherent in relationships (Mayer et al 1995). Yet discussions about trust in AI systems often reveal a lack of understanding of the communities whose lives they touch — their particular vulnerabilities, and the power imbalances that further entrench them. Some populations are expected to simply put their trust in large AI systems. Yet those systems only need to prove themselves useful to the institutions deploying them, not trustworthy to the people enmeshed in their decisions (Angwin et. al 2016, O’Neill 2018; Ostherr et. al 2017). At the same time, researchers often stop upon asking whether we can trust algorithms, instead of extending the question of trust to the institutions feeding data into or deploying these algorithms.Data & Society, 4d ago
Fusion of Cloud and Edge Computing Propelling AI: The combination of Cloud and Edge Computing is a response to the growing demand for reduced latency and enhanced real-time data processing. In 2024, organizations are set to experience the benefits of this fusion firsthand, as AI algorithms operate directly on edge devices, delivering unparalleled responsiveness and efficiency. This will revolutionize industries by enabling real-time decision-making on edge devices, addressing security concerns, and leveraging specialized processors, marking a pivotal shift towards autonomy and efficiency.CXOToday.com, 4d ago

Latest

What, he asks, would be the role of book publishers in a fully digitized environment organized in accordance with open access? After the cost of a book’s first copy is covered, an unlimited supply of subsequent copies, provided directly from an electronic repository, would be virtually free. The publisher’s function as a gatekeeper would cease to exist because there would be no more gates—and therefore an end to “the university world’s thralldom to the prestige hierarchy of the established publishing venues.” Acquisitions editors would be superfluous. Already they do little more than weeding out inferior texts and “reading around on company time,” Baldwin claims. Improved search engines would handle the selection process, taking readers right to the works they want, which would be available on the global bulletin board. Peer review could therefore be eliminated, along with unnecessary apparatuses such as professionally designed layout, indexes, dust jackets, blurbs, and sales catalogs. Most bookstores would disappear; libraries would be reduced to storehouses of old-fashioned volumes; and virtually all cultural intermediaries—book reviewers, literary agents, advertisers—would be eliminated, because their functions would be replaced by the all-powerful search engines putting readers in direct communication with texts in the all-encompassing cloud.The New York Review of Books, 4d ago
...“We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.”...John Lothian News, 4d ago
...“XDR is one of the best cybersecurity investments available today,” continued Shaju. “It offers improved, consolidated visibility by ingesting data from siloed security solutions. It offers automated analysis that yields insights that would be unlikely to emerge from manual processes. The security function is therefore empowered to carry out faster, more productive investigations because the platform has already prioritised avenues of inquiry. Here we can see an end to alert fatigue and the beginning of a new era of high morale in the SOC and less risk across the board. and if that is not enough to get security leaders thinking in a new direction, imagine making a business case for XDR in which you can say with confidence that the entire security stack, now consolidated and simplified, will have a lower total cost of ownership.Intelligent CISO, 4d ago
The reported tension between Toner and Altman may smack of personal politics, but it is also a microcosm of a broader tension in the world of AI research as to the field’s goals and the best—or least dangerous—ways to get there. As I wrote recently in this newsletter, there are, broadly, two schools of thought when it comes to the potential dangers of AI research. One focuses on the risk that people will unwittingly give birth to an all-powerful artificial intelligence, with potentially catastrophic results for humanity. (Many believers in effective altruism fall into this camp.) Geoffrey Hinton, seen by many in the field as the godfather of modern AI research, said recently that he left Google specifically so that he could raise the alarm about the dangers of super-intelligent AI. Last month, President Biden issued an executive order in an attempt to set boundaries for the development of AI; this week, sixteen countries including the US agreed to abide by common research standards.Columbia Journalism Review, 4d ago
Due to the current knowledge gap in the area, UNESCO believes it is impossible to fully anticipate the consequences of geoengineering, and its report highlights numerous risks associated with it. UNESCO says the strategy could undermine existing climate policies and divert funding from emissions reduction and adaptation efforts. The high cost of these technologies could further exacerbate global inequalities, especially in terms of the distribution of risks. According to UNESCO, climate engineering tools could also have the potential for military or geopolitical use, accentuating the need for a framework of international governance.The Engineer, 4d ago
NATO’s cyber defense teams and their counterparts in the US have long prepared to defend against nation-state attacks by training in advanced cyber ranges that replicate the real production IT and operational technology environments that that have to defend every day. Security teams are equipped with the same defensive tools, combatting the same tactics, techniques, and procedures implemented in high-profile attacks. Many leading publicly listed companies have followed suit with those best practices, and now, a broad cross section of listed companies need to take on the same best practice of military-grade protections. These best practice environments enable companies to explore and make sure their defenses are as good around key specialty systems, like the billing system that took down the Colonial Pipeline. This ability for companies to rehearse for the unfortunate eventuality that they are hit by a significant cyber event is also helping companies to integrate their financial and disclosure teams right into their incident processes to help them to work the early stages of their materiality determinations in parallel with the incident response teams to help them to make their determinations "without unreasonable delay.”...Global Security Mag Online, 4d ago

Latest

This acquisition positions Advent International strategically in the market, particularly in the context of increasing contactless payment technologies like NFC and RFID. Moreover, the acquisition signifies a commitment to tackling critical challenges in the digital payments sphere, including evolving cybersecurity threats and the complexities of cross-border payments due to a lack of global standards. Through this acquisition, Advent International is poised to leverage the synergies of myPOS’s expertise and its robust investment strategies to capitalize on the rapidly growing digital payment sector .marketsandmarkets.com, 4d ago
In conclusion, the University of Leeds' introduction of a neural network for mapping large Antarctic icebergs represents a significant advancement in environmental monitoring. This technology not only streamlines the process of tracking iceberg size, location, and melt but also contributes to our understanding of climate change and its impact on sea levels. By providing accurate and rapid assessments of iceberg dimensions, this approach aids in the global effort to monitor and respond to environmental changes, highlighting the critical role of technology in addressing climate challenges.electronicspecifier.com, 4d ago
Women experience chronically inferior returns in organizations. One common recommendation is to form instrumental network ties with high-status others in groups. We integrate research on social status, social perceptions, and gender issues in social networks to suggest that, despite the theoretical and empirical appeal of this approach, instrumental ties to high-status network contacts (versus ties to lower-status network contacts) in groups may incur hidden social status costs for women in intragroup status-conferral processes. Instrumental ties to high-status network contacts may be perceived as a sign of agency of the focal person, which violates feminine gender norms. Women with these high-status network contacts in groups may therefore be perceived as less communal, thus subsequently lowering their status in the eyes of other group members compared with women with lower-status network contacts. Studies 1–4, across cross-sectional, longitudinal, and experimental designs, support our model. Study 5 suggests that signaling a group-oriented goal may mitigate the interpersonal, social perceptual costs of instrumental ties to high-status network contacts for women. The effect of ties with high-status network contacts for men is relatively inconsistent. This research reveals a potential social-network dilemma for women: Instrumental ties to high-status network contacts in groups and organizations are necessary for success and should be encouraged, yet they may also create an extra social perceptual hurdle for women. Organizations need to investigate social and structural solutions that harness the benefits of high-status network contacts for women, while minimizing any potential social perceptual costs.nationalaffairs.com, 4d ago

Top

When moving into 2024, Trellix considers the threat of artificial intelligence to be something which organisations should be wary of. One of the biggest concerns is the development of malicious large language models (LLMs), as these AI systems are trained on massive amounts of text data, can generate human-quality text, translate languages, and even write different kinds of creative content. While LLMs have many beneficial applications, they can also be used for malicious purposes, such as spreading misinformation, creating fake news, and conducting cyberattacks.cybermagazine.com, 17d ago
In the quickly developing field of artificial intelligence (AI), moral issues have taken center stage. With each AI breakthrough, questions surrounding data privacy and security, bias and fairness, accountability and responsibility, job displacement, and the economic impact of AI innovations gain prominence. As autonomous systems become more integrated into our daily lives, the need for a robust ethical framework to guide their use becomes increasingly apparent. This article “Navigating Ethical Challenges in AI Advancements” delves into the multifaceted landscape of ethical challenges in AI advancements, exploring how data privacy and security concerns raise questions about the protection of sensitive information in an interconnected world. Examining the critical issues of bias and fairness in AI algorithms and the potential consequences of perpetuating inequalities.AI Time Journal - Artificial Intelligence, Automation, Work and Business, 11d ago
Social media risks go beyond amplified terrorism. The dangers that algorithms designed to maximize attention represent to teens, and particularly to girls, with still-developing brains have become impossible to ignore. Other product design elements, often called “dark patterns,” designed to keep people using for longer also appear to tip young users into social media overuse, which has been associated with eating disorders and suicidal ideation. This is why 41 states and the District of Columbia are suing Meta, the company behind Facebook and Instagram. The complaint against the company accuses it of engaging in a “scheme to exploit young users for profit” and building product features to keep kids logged on to its platforms longer, while knowing that was damaging to their mental health.Scientific American, 24d ago
Important limitations of intraoperative/intracranial studies in humans include a lack of access to healthy control data (hence comparison across disorders), the inability to use pharmacological interventions to verify pathway specificity of elicited responses, and time constraints preventing thorough scrutinization of time courses of LTP-like effects. Future studies, which may record evoked fields via macroelectrodes connected to stimulators with chronic sensing capabilities, could provide the opportunity to substantiate these effects in the chronic setting. Furthermore, chronic monitoring of evoked fields may allow for microcircuit interaction to selectively modulate the efficacy of target synapses. Indeed, optogenetic studies in parkinsonian rodents have demonstrated the ability to achieve lasting therapeutic efficacy via periodic activations of striatal direct pathway projections, likely leveraging LTP-like mechanisms [34]. Conversely, a long-term depression paradigm may instead be beneficial in dystonia. While our neuronal investigations provide cellular level support for closed-loop targeting of disease-related neural oscillations [35,36], future applications of DBS may also benefit from closed loop tuning of basal-ganglia-thalamo-cortical circuit dynamics through modulation of plasticity. An additional limitation of this study is that we did not stringently monitor personal medication intake or levels of intraoperative sedation; however, all patients with PD were asked to withdraw from medications the night before surgery, and the analyses only included patients operated on awake. Finally, it is important to consider that classifications of PD and dystonia as hypo- and hyperkinetic disorders can be considered oversimplifications, as there can be contradictory comorbidities and drug- and DBS-related effects [37,38]. Additionally, our analyses involved pooling of patients with various forms of dystonia; however, we did not observe differences across subtypes (Supplementary Fig. 1).elifesciences.org, 14d ago
In your own words, can you describe the importance of using uncommon / non-model organisms in research?I believe that our current conception of non-model organisms is the result of a recent myopia. Founders of the fields of developmental biology and genetics did not limit themselves to the study of a single organism. Thomas Hunt Morgan and Edwin Conklin all worked on a diverse array of organisms, ranging from ascidians to fiddler crabs to articulate questions about development that we are still addressing today. The Nobel laureate Eric Kandel used the giant snail Aplysia to study neuronal signal transduction and memory storage, a model that at the time was only used in two labs in France. The development of genomic tools has allowed Elaine Ostrander’s lab to use dog breeds to be used as preexisting isolated populations to study genetics of chondrodysplasia and cancer7, 8. The common thread in these is the utility of the organism in answering a given question. Fiddler crabs have one large claw so they can be useful models for left/right asymmetry. Eric Kandel wanted to study neurons, so he chose a model that had large, easily accessible neurons that were invariantly positioned.ASCB, 19d ago
Innovative problem-solving. While it has its flaws, the non-sentient AI we already use has been known to come up with creative solutions to human problems. It is now common for humans to ask ChatGPT for advice for everything from career development to relationship problems. Now, keep in mind that a sentient AI would be the most self-aware tech to ever exist. Not only would it have access to virtually all the information that has ever been present and analyze it at the drop of a hat, but it would also understand first-hand how human feelings work. It’s been suggested that sentient AI could tackle issues like world hunger and poverty because it would have both the technical understanding of such problems and the emotional intelligence needed to navigate human nuances.Coinspeaker, 14d ago

Latest

Teaching is known to be an emotionally taxing profession and recent meta-analytic findings indicate that student behavior and classroom disruptions are key factors contributing to teacher burnout. Diving deeper into student behavior identified as particularly problematic we see that "disruptive, aggressive, and noncompliant behaviors constitute the majority of externalizing problems that negatively affect classroom and learning environments" (Cook et al., 2018). New teachers especially have reported feeling under-equipped to manage disruptive student behavior and "classroom management" remains a sought-after professional development topic for teachers across varied contexts. Arising from this mismatch between teachers' preparedness to address student behavior and rates of disruptive behavior in classrooms, we see teachers experience elevated stress. Chronic occupational stress can result in teachers showing less patience with students, using more punitive behavior-management protocols, and deriving diminished professional satisfaction. Chronic stress, over time, can lead to teachers abandoning the profession entirely.Psychology Today, 4d ago
As organizations continue to navigate the complex landscape of digital transformation, securing workload identities is non-negotiable. The implementation of multi-factor authentication, particularly through mechanisms like mTLS, is a proactive step towards mitigating evolving cyber threats. By understanding the risks, overcoming reluctance, and embracing modern security measures, businesses can fortify their defenses, protecting not only their assets but also their reputation in an era where data breaches are not just a possibility but a harsh reality.Security Boulevard, 4d ago
In an era where the stakes are high and identity fraud poses substantial risks to businesses, background screening companies can stand as the first line of defense in candidate identity verification. While the advantages of collaborating with advanced solutions like Jumio are evident, it’s equally important for background screening companies to take proactive steps in fortifying their systems. Enhanced security, streamlined processes and unwavering compliance with regulatory standards should be at the forefront of their strategies. By embracing these principles, screening providers can effectively authenticate candidates, mitigate risks and maintain the trust of their clients. In doing so, they contribute to a safer, more trustworthy and ultimately reliable hiring process.Jumio: End-to-End ID, Identity Verification and AML Solutions, 4d ago
...“Customers of all sizes tell us they want to take advantage of GenAI to build next-generation applications and future-proof their businesses. However, many customers are concerned about ensuring the accuracy of the outputs from AI-powered systems while protecting their proprietary data,” said Sahir Azam, chief product officer at MongoDB. “With the integration of MongoDB Atlas Vector Search with Amazon Bedrock, we’re making it easier for our joint-AWS customers to use a variety of foundation models hosted in their AWS environments to build GenAI applications that can securely use their proprietary data to improve accuracy and provide enhanced end-user experiences.”...ERP Today, 4d ago
This particular incident also shows that products can still feel cheap even when made by humans — but this doesn’t mean developers aren’t working hard or aren’t talented. More likely, it shows the consequence of other factors, like rushed timelines or expectations from management, and how those can impact the quality of a game, regardless of whether humans or algorithms made a certain component. Developers are constantly facing tight deadlines and often have to crunch, working long hours on rushed timelines. For example, when the King Kong game Skull Island: Rise of Kong was released, players dunked on its graphics. Reporting from The Verge later revealed that the team had under a year to create the game from start to finish.Polygon, 4d ago
It is impossible to detect change at the individual field level cost effectively, so many companies in this space, including Boomitra, Indigo Ag and Regrow, use AI modelling to estimate carbon sequestration, aggregating data from thousands of fields. There are different mechanisms to handle the statistical uncertainty in modelling of this kind. Verra’s recommendation – VM0042 – was developed by Indigo Ag and Terra carbon, both soil carbon companies themselves. However, they work primarily with North American farmers, who farm large tracts of land, so the mechanism is ill-suited to measuring soil carbon sequestration across multiple small farms.Verdict, 5d ago

Top

We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.lesswrong.com, 18d ago
Albeit somewhat belatedly, the United States government is taking the threat of deepfakes seriously, with the National Security Agency and several federal agencies issuing guidance on it in September. Although the government should not interfere with the rights of candidates to express themselves, they can and should place limitations on the generation and dissemination of misleading deepfakes. California’s state law against the distribution of “materially deceptive audio or visual media” of a candidate for elective office may serve as a model for this type of action. Nonprofit organizations have trained observers to spot deepfakes ahead of one of the biggest global election years in history; these efforts in digital literacy are of great importance. We know that the likelihood of encountering synthetic content on social media is virtually certain, and we all share the burden of consuming and sharing content responsibly. Technical controls such as detection and watermarking, regulatory controls, and digital literacy efforts should all be viewed as components of a robust detection toolkit in order to protect the democratic process from manipulation.Tech Policy Press, 24d ago
Juan: To stay, as well, with your AI question above, there’s going to be a growing expectation of smarter infrastructure. If cars and trucks can pretty well go driverless, at this point, what about publishing workflows, quality control checks, and knowledge discovery and translation? This increased responsibility will need to be accompanied by a recognition for the need to support research infrastructure, with multiple overlapping efforts underway to make this a reality. I remain optimistic that all these conversations and efforts will lead to clearer pathways for projects, like PKP and many others, to receive funding from a wide range of stakeholders who indirectly benefit from our work. As infrastructure grows in importance and, I’d hope, support, then I suspect we’ll also see a growing role for these infrastructure developers as actors in the academic community and a corresponding representation of that community on infrastructure projects. We’ve seen some of this with PKP — I find myself participating on the boards of international organizations and projects and, similarly, PKP has community members play a role in its governance. But it will require growing community engagement and outreach teams so that, as infrastructure is less under the hood and more in the driver’s seat, we can ensure that it more than adequately represents the perspectives of our community of users. And with that we both thank you for such an interesting set of questions for reflecting on our work together.The Scholarly Kitchen, 27d ago