Book Volume 4
Preface
Page: i-i (1)
Author: Rupa Rani, Prashant Upadhyay, Rohit Sahu, Satya Prakash Yadav and Hardeo Kumar Thakur
DOI: 10.2174/9798898810030125040001
AI Deception Detection: Behavior Model and Techniques
Page: 1-15 (15)
Author: Manu* and Neha Varshney
DOI: 10.2174/9798898810030125040003
Abstract
Nowadays, it has become common for messages to reach receivers in distorted form through the spread of false information. In every field, passing off wrong information as right has become easy because recent technologies are so widely used. It is often seen that a person is not directly involved, yet his credentials and personal details are shared indirectly. One technology behind this is artificial intelligence, which is not based on emotions but can cause harm through deceptive methods. Experts caution against giving artificial intelligence (AI) executive control because its lack of emotions can lead to unthinkable harm. Due to the absence of understanding and an ethical compass, it can ultimately make choices that have terrible emotional repercussions. This chapter gives due attention to the implications of AI, covering the vulnerabilities of social networks, possible ethical issues around privacy, and case studies of automated cyberattacks. The effects of a breach can reverberate for years as cybercriminals exploit the information they have stolen; the potential risk is constrained only by the creativity and technical ability of malevolent persons. Sophisticated AI systems are capable of deceiving on their own to avoid human oversight, for example, by evading safety tests that regulators have mandated. Despite recent developments, however, social media platform administration will continue to face a number of ethical difficulties.
Navigating the Digital Frontier: An In-depth Exploration of Social Network Vulnerabilities and the Impact of Artificial Intelligence
Page: 16-33 (18)
Author: Archana Sharma*, Ayush Gupta, Sushant Sharma and Tripti Singh
DOI: 10.2174/9798898810030125040004
Abstract
As digital interactions come to dominate our lives, social networks have become both a cornerstone of daily life and a breeding ground for vulnerabilities. “Understanding Social Network Vulnerabilities” dives into this complex landscape, exposing the weak points we face and the double-edged nature of artificial intelligence (AI), a technology that can both amplify and mitigate these risks. This study goes beyond mere identification, delving into the intricate web of vulnerabilities, from privacy concerns to cybersecurity threats, while exploring the impact of AI on both sides of the equation. Utilizing a comprehensive approach that blends expert insights, real-world case studies, and thorough research, it unveils practical solutions and underscores the urgency of addressing these vulnerabilities. Ultimately, this work serves as a roadmap for individuals, companies, and governments alike as they navigate an evolving digital landscape in which AI's transformative potential holds the key to both the challenges and their solutions.
Federated Learning for Enhanced Intrusion Detection: Combating Digital Deception in IoT-Enabled Social Networks
Page: 34-56 (23)
Author: Shivansh Soni*, Ritika Binjola and Kajol Mittal
DOI: 10.2174/9798898810030125040005
Abstract
The Internet of Things (IoT) has completely changed the way we interact with our surroundings, and the upsurge in IoT devices has made it easy to perform and monitor day-to-day tasks. With this increasing use, however, comes a set of security issues that must be considered. IoT devices generate vast amounts of data, much of which is shared with or connected to social networking sites and other online platforms, and malicious activity or attacks in the network can manipulate that data. Digital deception poses a significant threat in the era of AI-driven social networks, enabling the spread of misinformation and cyber intrusions at an unprecedented scale. This is where the intrusion detection system (IDS) comes in, helping to detect and prevent security breaches in IoT. Securing IoT systems has become a critical issue in recent years, as they are especially prone to being hacked, so it is important to improve and optimize traditional intrusion detection systems. Earlier intrusion detection systems suffered from several limitations, such as centralized data storage and the privacy and security concerns it raises, and they require a significant amount of processed data to work effectively, which is challenging when the few recorded historical attacks are not enough to train the system. To overcome these issues, a federated learning-based IDS for IoT systems has been proposed. Federated learning is a machine learning technique in which participants collaboratively learn a model without sharing their data: the model is trained on each device itself, and only the updates are pushed to a central server to improve the central model. This preserves the privacy of the device while enabling a central model that can detect intrusion attacks, and it can also reduce the computational cost of training by distributing the work across several devices. Federated learning models can adapt to changes in the environment, as they are designed to continuously learn from data recently generated by the servers and devices in the network; in this way, the model improves over time at detecting changes in network traffic.
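The train-locally, average-centrally loop described in this abstract can be made concrete with a short sketch. The following toy example is illustrative only, not the chapter's implementation: simulated clients fit a logistic-regression intrusion classifier on synthetic per-device traffic features and share only their updated weights, which the server averages (plain federated averaging), so raw data never leaves a device.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.5, epochs=20):
    """One client's local training round: only the weights are shared."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # logistic-loss gradient step
    return w

# Synthetic per-device traffic features (e.g., packet rate, payload entropy);
# label 1 marks an intrusion. Purely illustrative data.
true_w = np.array([2.0, -3.0, 0.5])
clients = []
for _ in range(5):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(3)
for rnd in range(10):                          # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)       # FedAvg: average the weights

# Evaluate the aggregated model on one client's local data.
X, y = clients[0]
acc = (((1.0 / (1.0 + np.exp(-X @ w_global))) > 0.5) == y).mean()
print(f"accuracy of the averaged model on client 0: {acc:.2f}")
```

Because only weight vectors cross the network, the central server never observes the per-device traffic itself, which is the privacy property the abstract emphasizes.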
AI-Driven Identity Manipulation
Page: 57-75 (19)
Author: Sana Anjum* and Deepti Sahu
DOI: 10.2174/9798898810030125040006
Abstract
Identity theft refers to the illegal act of using someone else's personal information for fraudulent transactions. Perpetrators employ various techniques, ranging from sifting through discarded materials like credit cards and bank statements to more sophisticated methods like hacking into organizational databases to access consumer data. While identity thieves continuously develop new tactics, individuals can significantly mitigate the risk by exercising vigilance on social media platforms and practicing caution when dealing with unfamiliar emails. Identity theft remains a persistent and escalating issue, impacting a growing number of individuals and inflicting direct and indirect harm on victims. This chapter highlights some common types of identity theft and the role of artificial intelligence in this manipulation. We also review various research papers for solutions to problems related to identity theft, such as fake profile identification using ML-based algorithms, and discuss future possibilities in this field.
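As a rough illustration of the kind of ML-based fake-profile identification the chapter surveys, the sketch below trains a random-forest classifier on synthetic, assumed profile features (follower ratio, account age, posting rate, profile-photo flag). The features and data are hypothetical stand-ins, not drawn from the reviewed papers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Assumed features: followers/following ratio, account age (days),
# posts per day, has-profile-photo flag. Label 1 = fake. Synthetic data.
X_real = np.column_stack([rng.uniform(0.5, 5, 300), rng.uniform(100, 3000, 300),
                          rng.uniform(0, 5, 300), np.ones(300)])
X_fake = np.column_stack([rng.uniform(0, 0.2, 300), rng.uniform(1, 60, 300),
                          rng.uniform(20, 100, 300), rng.integers(0, 2, 300)])
X = np.vstack([X_real, X_fake])
y = np.array([0] * 300 + [1] * 300)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new account: few followers, brand-new, posting heavily, no photo.
suspect = [[0.05, 3.0, 70.0, 0.0]]
print("fake-profile probability:", clf.predict_proba(suspect)[0, 1])
```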
Understanding the Dynamics of Misinformation and Disinformation: A Comprehensive Review
Page: 76-94 (19)
Author: Veena Bharti*, Chitra and Shikha Agarwal
DOI: 10.2174/9798898810030125040007
Abstract
The quick spread of information in the digital era can be both a benefit and a drawback. The production of misinformation and disinformation has emerged as a significant societal challenge, facilitated by the interconnected world of the internet and social media. Misinformation, often unintentional, spreads due to errors or misunderstandings, while disinformation, driven by deceitful intent, aims to manipulate or deceive. The propagation of false information is fueled by the dynamics of the digital landscape: the internet community and digital platforms, in particular, have developed into conduits for the swift spread of both misinformation and disinformation. These platforms offer an unparalleled level of connectivity and engagement, making it easier for misleading content to reach a vast audience quickly. The virality of such content is heightened by its ability to evoke strong emotions, playing on fear, anger, or excitement, which drives users to share and engage. Echo chambers, wherein communities interact and shore up one another's beliefs, further perpetuate the spread of deceptive information and hinder efforts to discern fact from fiction. Polarized societies can exacerbate the issue, with individuals being more receptive to information that aligns with their existing viewpoint. This paper delves into the key aspects of the propagation of misinformation and disinformation.
Privacy Concerns in AI-Powered Social Networks
Page: 95-111 (17)
Author: Garima Srivastava* and Soniya Sharma
DOI: 10.2174/9798898810030125040008
Abstract
Artificial Intelligence (AI) is a field within computer science that studies a machine's capacity to mimic intelligent human behavior, and it shows significant potential for tackling present-day socioeconomic challenges. Nowadays, social media, also referred to as social networking, includes platforms such as Facebook, YouTube, Pinterest, Instagram, and Twitter, and AI plays a major role in the way these networks function. This chapter covers the privacy issues that arise when AI is integrated with social networking platforms, specifically in user data gathering, profiling, and algorithmic decision-making. While AI technologies like facial recognition, machine learning, and natural language processing allow social networks to provide individualized experiences, they also raise serious concerns about data privacy, consent, and surveillance, since AI systems often rely on large-scale personal data sets. As AI becomes more and more prevalent in social networks, it is radically changing social media while raising data-related issues. The legal implications of AI in social media are also explored, including the possibility of bias, manipulation, and loss of control over personal information. The chapter examines how AI-powered services, such as content recommendation engines, automated content moderation, and targeted advertising, may unintentionally jeopardize user privacy by gathering and using private data. We also discuss a related real-world case study, the broader usage of AI in social networks, the kinds of privacy issues that can develop, and how to address them.
AI Algorithmic Bias and Manipulation in Social Networks
Page: 112-125 (14)
Author: Rupa Rani* and Harnit Saini
DOI: 10.2174/9798898810030125040009
Abstract
Algorithms are increasingly employed to make decisions in our daily lives, and the manipulation of algorithmic bias is becoming a serious worry. Because these decisions greatly impact people and society, they must be neutral and fair. The purpose of this study is to raise awareness of the potential harms of algorithmic bias manipulation, which can have a wide range of negative social consequences: it can discriminate against certain groups of individuals or favor specific outcomes for them, resulting in inequity and social injustice. Such manipulation has a wide range of applications; it can be used to target particular racial or ethnic groups, or to promote specific results, such as higher earnings. A key difficulty is that algorithmic bias manipulation is hard to identify and prevent: because algorithms are often highly complicated and advanced, it can be difficult to determine whether or not they are prejudiced. The manipulation of algorithms to introduce bias has serious ramifications for people and society, and mitigation measures must be implemented. Understanding and protecting against the possibility of manipulated algorithmic bias is therefore crucial.
Automated Cyberattacks and Social Engineering
Page: 126-143 (18)
Author: Gunjan Aggarwal*, Aryan Kumar Pandey and Swati Singal
DOI: 10.2174/9798898810030125040010
Abstract
Social engineering attacks are a prevalent and cunning method employed by cybercriminals to exploit the very essence of human psychology and behavior. These attacks are becoming increasingly common and prey on human vulnerabilities; because they follow no specific methodology, they are difficult to identify, which makes them highly efficient, easy to execute, and capable of compromising any organization. Scams based on social engineering are built around how people behave and react to situations of fear, excitement, curiosity, and the like. Once attackers understand a person's psychology, they can plan effectively and influence the user into believing fake news, messages, and so on. Attackers also exploit people's lack of knowledge and awareness about cyber security and attacks. The attacker's goal is generally financial gain or access to restricted areas or confidential documents. As a preventive measure, it is important to be aware of cyberattacks and how they work. To combat social engineering attacks, it is crucial to educate individuals and employees about the risks, enhance their awareness, and encourage healthy skepticism when dealing with unsolicited requests for information or actions. Technical security measures, like multi-factor authentication, updated software and antivirus tools, and email filtering, may help protect individuals and organizations from social engineering attacks, making it harder for cybercriminals to succeed in their manipulative endeavors.
Deepfake and Erosion of Trust
Page: 144-159 (16)
Author: Preeti Dubey*, Hoor Fatima and Pushpendra Kumar Rajput
DOI: 10.2174/9798898810030125040011
Abstract
Deepfake technology's rapid growth has ushered in a new era of digital manipulation, raising serious worries about how it may affect people's ability to trust various aspects of modern society. This abstract provides a thorough examination of the effects of deepfakes on trust, including how they affect media consumption, interpersonal relationships, and the integrity of social institutions. The emergence of deepfake technology has changed the digital content ecosystem and sparked concerns about the decline of trust across a number of domains. Using complex machine-learning algorithms, deepfakes can produce incredibly lifelike and misleading multimedia content, including images, audio, and video. This study examines the complex effects of deepfakes on media consumption, trust in interpersonal relationships, and the institutions of society. The authenticity of human communication in interpersonal relationships is challenged by the ease with which deepfakes can modify auditory and visual cues; relationships are built on trust, which is compromised when people struggle with uncertainty about whether communication is real. The psychological and social effects of deepfakes on interpersonal trust are investigated in this research. Media consumption is another area where deepfakes have a significant impact: the legitimacy of information sources is called into doubt by the blurring of the boundary between reality and fiction, and the public's trust in both internet and conventional media channels is eroded by misinformation spread through altered content. The study looks into how deepfakes affect media literacy, how misinformation spreads, and how this undermines society's confidence in information.
Ethical Implications of AI in Social Networks
Page: 160-174 (15)
Author: Atul Kumar Rai* and Neelaksh Sheel
DOI: 10.2174/9798898810030125040012
Abstract
The ethical implications of artificial intelligence (AI) in social networks encompass a spectrum of complex issues. Key concerns involve user privacy and data security, algorithmic bias leading to discrimination, transparency in AI operations, and the spread of misinformation. Additional ethical considerations include user manipulation, striking a balance between freedom of speech and content moderation, and addressing the impact of AI on emotional well-being. Inequities in AI access, user addiction, and the treatment of AI-driven entities are also pressing issues. These ethical challenges necessitate ongoing dialogue, transparency, and the development of ethical guidelines to ensure AI benefits users while mitigating harm and promoting fairness in social network interactions. Artificial intelligence machines demonstrate the remarkable capability not just to perceive, articulate, listen, process, and transcribe information but also to acquire these skills at a pace surpassing that of their human counterparts. These advanced tools find application across industries, enhancing and automating various activities on the internet to improve overall efficacy. The main objective of the chapter is to highlight privacy, algorithmic bias, user manipulation, misinformation, and emotional well-being, and to emphasize the need for ongoing dialogue, transparency, and ethical guidelines to ensure that AI in social networks promotes fairness.
Regulating Artificial Intelligence in Social Networks
Page: 175-187 (13)
Author: Abha Kiran Rajpoot* and Sana Anjum
DOI: 10.2174/9798898810030125040013
Abstract
The growth of artificial intelligence (AI) in social media has brought significant improvements in user experience, content delivery, and personalization, but it also raises privacy concerns, ethical considerations, and the potential for algorithmic bias. This chapter highlights the urgent need to regulate AI in social networks by striking a balance between innovation and ethical considerations. While regulation is necessary, finding the right balance between supporting AI innovation and addressing ethical issues is critical: regulators should promote creativity and technological advancement while supporting responsible AI development and deployment. Governing AI in social networks is a complex task that requires thoughtfulness and balance, weighing new technologies against ethical considerations to make social platforms safe, fair, and beneficial for all. Lawmakers, tech companies, and civil society must work together to create regulatory frameworks that support responsible AI while preserving the capacity for innovation. Such frameworks encourage greater transparency by requiring platforms to explain their algorithmic strategies, improving user understanding and creating accountability. Ethical rules are important for fairness, diversity, and inclusion, and aim to reduce the bias inherent in AI-driven content selection and recommendation. Privacy measures must be in place to protect user data from unlawful AI-powered surveillance and to ensure compliance with strict data protection laws. Fighting misinformation requires algorithms that can quickly and effectively counter false content generated by AI. The effectiveness of any such framework must be continuously evaluated and revised, enabling governance models to keep pace with the rapid development of AI technology. Collaboration among governments, science and technology organizations, researchers, and civil society is essential to craft policies that encourage innovation while upholding responsible practice. Ultimately, this policy approach aims to support an AI ecosystem grounded in dialogue and improved relations while protecting people's interests and rights.
User Empowerment and Digital Literacy
Page: 188-202 (15)
Author: Pooja Chaudhary* and Kajal Gupta
DOI: 10.2174/9798898810030125040014
Abstract
Youth must be encouraged to take the lead in community development through programs that build their competence and capacity in response to the demands of the digital age. Youth empowerment through instruction in digital literacy was the goal of the PPM program. Its objectives were to raise youth knowledge and comprehension of the ITE Law, enhance youth proficiency in utilizing information technology for educational purposes, and elevate youth aptitude in utilizing information technology for conducting online business. The three strategies used in implementing this PPM program were youth capacity building through seminars, digital technology training, and web-based business advising. Through training sessions, seminars, and mentorship in the application of computer technologies, the PPM strategy was executed effectively. PPM exercises improved the youth's comprehension and knowledge of the ITE Law, as well as their capacity to use a variety of information-technology resources for online business and learning.
AI-Driven Solutions for Filtering Unwanted Posts from Online Social Networks (OSN)
Page: 203-220 (18)
Author: Sachin Jain*, Sudeep Varshney and Tejaswi Khanna
DOI: 10.2174/9798898810030125040015
Abstract
There has been an explosion in the popularity of online social networks (OSNs) in recent years. Users can communicate and share any data through these services; the primary drawback of these OSN services is the invasion of the user's privacy. For precise filtering outcomes, we employ pattern matching and text classification algorithms, and we advocate for a system that gives OSN users complete editorial control over the content of their wall posts. Rule-based customization allows users to personalize the filtering process applied to their profiles, and a learning system can automatically label messages to aid content-based filtering. In this study, we propose a more robust filter in PHP, based on the Laravel validation framework, to circumvent the insufficient protections offered by OSNs. We sort messages into desirable and unwanted groups in the first stage; in the second stage, unwanted messages are further sorted by type. Both messages and users can be blocked, and a blocked user cannot post again until they are removed from the blocklist. Keywords: online social networks, filtering rules, content-based filtering, machine learning.
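To illustrate the two-stage pipeline outlined above, here is a minimal sketch, written in Python rather than the authors' PHP/Laravel stack, that combines a per-user blocklist rule with a content-based classifier over toy wall posts. The posts, labels, and sender names are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training posts, standing in for a labelled OSN wall-post corpus.
posts = [
    "congrats on the new job, well deserved",
    "great photo from the trip, thanks for sharing",
    "click here to win a free phone now",
    "send your bank details to claim your prize",
]
labels = ["desirable", "desirable", "unwanted", "unwanted"]

# Stage 1: content-based classifier (TF-IDF features + Naive Bayes).
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(posts, labels)

# Stage 2: per-user filtering rules, e.g. a blocklist of senders.
blocklist = {"spam_bot_42"}

def filter_post(sender, text):
    if sender in blocklist:                 # rule-based check runs first
        return "blocked sender"
    return clf.predict([text])[0]           # then the content-based class

print(filter_post("alice", "win a free phone, click here"))  # expected: unwanted
print(filter_post("spam_bot_42", "hello"))                    # blocked sender
```

Running the rule-based check before the classifier mirrors the editorial-control idea in the abstract: a user's own filtering rules always take precedence over the learned model.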
The Dark Side of Digital Surveillance: India’s Cybersecurity in the Age of Artificial Intelligence
Page: 221-230 (10)
Author: Ruchi Patira*, Rajani Singh and Manoj Singhal
DOI: 10.2174/9798898810030125040016
Abstract
This article provides an overview of cyber security, a topic that has risen in importance since the end of the Cold War due to a confluence of technological advancements and shifts in geopolitical dynamics. The paper uses securitization theory to conceptualize cybersecurity as a distinct sector with its own unique set of threats and referent objects. It is the collective referent objects of “the state”, “society”, “the nation”, and “the economy” that give “network security” and “individual security” their political significance, and these referent objects are tied to threats through hypersecuritization, everyday security practices, and technifications. A case study of the 2007 “first cyber war” against Estonian governmental and commercial organizations is then used to demonstrate the framework's practical applicability. Within IT, cyber security is a crucial component, and ensuring the safety of sensitive data is one of today's greatest difficulties. Although the concept of cyber security has grown in importance, it remains elusive, and it is often improperly muddled with the notions of privacy, information sharing, intelligence collection, and monitoring. This study argues that proper risk management of information systems is essential for adequate cyber security. The risk involved in any attack is determined by three factors: threats (who is attacking), vulnerabilities (how the attack is carried out), and impacts (what the attack damages). In terms of cyber security, the government's responsibility extends beyond safeguarding its own networks to helping safeguard private networks as well.
Framework to Uncover Threats in Social Networks Through Network Packet Visualisation
Page: 231-255 (25)
Author: Prashant Upadhyay*, Preeti Dubey, Amit Upadhyay and Nikiema Flavio
DOI: 10.2174/9798898810030125040017
Abstract
With the increasing volume and complexity of network traffic in social networks, extracting meaningful insights from this data has become increasingly challenging. This paper presents a lightweight approach for analysing network traffic for social networks that enables the identification of patterns and anomalies that may indicate malicious activity. The paper starts by discussing the importance of network traffic visualisation and the challenges associated with it. It then provides an overview of the key components of network traffic data and various visualisation techniques that can be used to gain insights into network behaviour. The focus is on lightweight visualisation techniques that can be used to analyse network packet data for threat detection. Time series plots, scatter plots, heatmaps, and network graphs are some of the visualisation techniques that can be used to identify patterns and anomalies in network traffic for social networks. The lightweight nature of this approach enables efficient processing and analysis of large and complex datasets. In conclusion, analysing and visualising network packet data is a crucial technique for identifying potential security threats, and a lightweight approach enables efficient processing and analysis of the large and complex network traffic of social networks. By using the techniques and tools presented in this paper, network administrators and researchers can gain valuable insights into network behaviour and identify potential security threats.
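One of the lightweight techniques listed, a time-series view of packet volume with a simple rolling threshold, can be sketched as follows on synthetic traffic. The threshold rule (rolling mean plus three standard deviations) and the field choices are our assumptions for illustration, not the framework's exact method.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Synthetic per-second packet counts with an injected burst (e.g., a flood).
counts = rng.poisson(lam=50, size=600).astype(float)
counts[300:320] += 400                       # simulated attack burst
ts = pd.Series(counts, index=pd.date_range("2024-01-01", periods=600, freq="s"))

# Rolling baseline and a simple mean + 3*std anomaly threshold.
roll = ts.rolling(window=60, min_periods=30)
threshold = roll.mean() + 3 * roll.std()
anomalies = ts[ts > threshold]

ax = ts.plot(figsize=(9, 3), label="packets/s")
threshold.plot(ax=ax, style="--", label="rolling 3-sigma threshold")
ax.scatter(anomalies.index, anomalies.values, color="red", label="anomaly")
ax.legend()
plt.tight_layout()
plt.savefig("packet_timeseries.png")         # time-series view of the traffic
```

The same rolling statistics could feed a heatmap or scatter view, which keeps the approach light enough for large captures since only aggregated counts, not raw packets, are plotted.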
AI’s Psychological Impact on Users in Social Networks
Page: 256-274 (19)
Author: Pawan Kumar*, Rupa Rani, Deepika Yadav and Mandeep Singh
DOI: 10.2174/9798898810030125040018
Abstract
Artificial intelligence (AI) is spreading rapidly through our daily communication, with both positive and negative effects. The inclusion of AI in social media has several advantages and disadvantages and has a psychological impact on users' lives. This research study investigates the harmful psychological impacts of AI on social media users. Social media is meant to enhance and improve our social lives through user interaction and the sharing of personal experiences and events; despite these positive impacts, however, there are negative concerns as well. The study examines the psychological impacts of AI on society, users, and social networks, and outlines the challenges and applications of using AI in social media. AI algorithms deliver personalized content based on user recommendations and enhance connection and the time users spend on social media, but excessive use of social media takes a significant toll on users' mental health, producing more stress and pressure to compare themselves to others, and thus increasing sadness and isolation.
Confronting the Dark Side of AI and Its Impact on Social Networks
Page: 275-287 (13)
Author: Pawan Kumar* and Amit Upadhyay
DOI: 10.2174/9798898810030125040019
Abstract
In a world where virtual interactions and digital connections rule the roost, social landscapes are being profoundly, and often frighteningly, altered by the hidden complexity of artificial intelligence (AI). “Uncovering the Dark Side of AI in Social Networks” takes readers on a trip through the hidden stories of the digital world. The book provides a thorough understanding of the intricate interactions between AI, social networks, and human behavior by fusing theoretical analysis with real-world case studies. It also provides practical advice on how individuals, governments, and tech corporations can encourage a more beneficial and ethical use of AI in the context of social networks. The book aims to create a meaningful dialogue and increase public awareness of the risks related to artificial intelligence in social networks. By exposing the negative aspects of AI, it hopes to inspire people, decision-makers, and tech innovators to critically assess present practices and take proactive steps to resolve the problems found. The authors intend to foster a more ethical and responsible approach to the development and deployment of AI algorithms within social networks.
Subject Index
Page: 288-296 (9)
Author: Rupa Rani, Prashant Upadhyay, Rohit Sahu, Satya Prakash Yadav and Hardeo Kumar Thakur
DOI: 10.2174/9798898810030125040020
Introduction
Digital Deception: Uncovering the Dark Side of AI in Social Networks is a critical investigation into how artificial intelligence silently influences, manipulates, and, at times, undermines digital interactions across social platforms. Bridging disciplines such as computer science, sociology, and ethics, the book exposes how AI technologies contribute to misinformation, surveillance, identity manipulation, and psychological exploitation in the digital sphere. With chapters on algorithmic bias, deepfakes, federated learning, and intrusion detection, the book reveals the hidden mechanisms that shape user behavior and societal discourse. It explores the ethical implications of AI-powered content curation, privacy violations, and the rise of automated cyberattacks, while proposing regulatory and technological countermeasures. Case studies and real-world examples illustrate the consequences of unchecked AI deployment and the erosion of trust in online spaces.
Key features:
- Examines misinformation, digital surveillance, and algorithmic bias
- Presents real-world case studies and AI behavior models
- Highlights privacy concerns and ethical frameworks
- Proposes AI-driven defenses and user empowerment strategies

