Book Volume 1
Page: i-ii (2)
Author: Michael N. Huhns
Page: iii-iv (2)
Author: Faria Nassiri-Mofakham
Page: v-v (1)
Author: Faria Nassiri-Mofakham
Page: 1-58 (58)
Author: Nasser Ghasem-Aghaee, Tuncer Ören and Levent Yilmaz
Simulation is applied to show the extent to which its valuable functionalities can be exploited for the issues at hand. A systematic glossary of about twenty types of intelligence provides a synoptic background for intelligent behavior that can be represented by agents. The three categories of the synergy of simulation and software agents are discussed in the following three sections: agent simulation, agent-supported simulation, and agent-monitored simulation. An extensive bibliographic analysis, based on about 440 references, supports each category of this synergy. A discussion of some desirable research directions and a concluding section close the article.
Page: 59-92 (34)
Author: Helder Coelho
In recent years, Artificial Intelligence (AI) has been in the news again due to advances in machine learning, speech processing, natural language generation, robotics and agent technology, not to mention big investments. Examples such as Google Translate, Microsoft’s Skype Translator, Google’s RankBrain, IBM’s Watson, Apple’s Siri, Honda’s ASIMO and new startups mix with movies, healthcare, social and political simulation and energy management. AI has achieved some success in education, medicine, and the cognitive and complexity sciences thanks to better interfaces, algorithms and mechanisms, and to interdisciplinary research into techniques that mirror biological behaviour. A closer look at animal and human behaviour (mimicking nature) and at how brains store and process information may help AI continue to innovate and develop further. Today, there is a different stance among the schools of AI and a cooperative way of viewing different technologies working together in an integrative way, including an understanding of limitations and dangers (e.g. the Facebook emotion study). Furthermore, this is a strong signal of more democracy and respect for approaches that consider intelligence at large, thus also indicating a more mature field. In this chapter, I give an overview of the field of Artificial Intelligence to explain why recent breakthroughs and better algorithms have enabled it to tackle big data.
A Baseline for Nonlinear Bilateral Negotiations: The Full Results of the Agents Competing in ANAC 2014
Page: 93-121 (29)
Author: Reyhan Aydoğan, Catholijn M. Jonker, Katsuhide Fujita, Tim Baarslag, Takayuki Ito, Rafik Hadfi and Kohei Hayakawa
In the past few years, there has been growing interest in automated negotiation, in which software agents negotiate on behalf of their users and try to reach joint agreements. The potential value of developing such mechanisms becomes enormous when the negotiation domain is too complex for humans to find agreements (e.g. e-commerce) and when software components need to reach agreements to work together (e.g. web-service composition). Here, one of the major challenges is to design agents that can deal with incomplete information about their opponents while negotiating effectively on their users’ behalf. To facilitate research in this field, an automated negotiating agent competition is organized yearly. This paper introduces the research challenges of the Automated Negotiating Agent Competition (ANAC) 2014 and explains the competition setup and results. Furthermore, a detailed analysis of the five best-performing agents is presented.
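The alternating-offers protocol with time-based concession that underlies many negotiating agents can be sketched in a few lines. Everything below is illustrative: the outcome space, utility functions, concession curve and deadline are invented for this example and are not taken from any ANAC 2014 agent.

```python
# Minimal alternating-offers bilateral negotiation with time-based
# concession. All utilities, rates and the deadline are invented for
# illustration and are not taken from any ANAC 2014 agent.

def target(t, deadline, u_max=1.0, u_min=0.4, beta=1.0):
    """Time-based concession: the acceptable utility decays from u_max
    toward u_min as the deadline approaches (beta=1 is linear)."""
    frac = min(t / deadline, 1.0)
    return u_max - (u_max - u_min) * frac ** beta

def negotiate(u_a, u_b, outcomes, deadline=20):
    """Agents alternate offers; each proposes the outcome best for its
    opponent among those meeting its own current target, and accepts an
    incoming offer whose utility meets that target."""
    for t in range(deadline):
        proposer, responder = (u_a, u_b) if t % 2 == 0 else (u_b, u_a)
        th = target(t, deadline)
        acceptable = [o for o in outcomes if proposer(o) >= th]
        if not acceptable:
            continue
        offer = max(acceptable, key=responder)
        if responder(offer) >= th:
            return offer, t
    return None, deadline  # disagreement at the deadline

# One price-like issue with strictly opposing preferences.
outcomes = [i / 10 for i in range(11)]
u_seller = lambda x: x        # seller prefers a high price
u_buyer = lambda x: 1.0 - x   # buyer prefers a low price
deal, t_agree = negotiate(u_seller, u_buyer, outcomes)
```

With symmetric linear concession the agents meet in the middle of the outcome space shortly before the deadline; a larger `beta` (a Boulware stance) would delay concession and push the agreement even closer to the deadline.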
Page: 122-153 (32)
Author: Nuno Trindade Magessi and Luis Antunes
One of the most intriguing and challenging problems in cognitive science is the difference between individuals’ perception of reality and reality itself. Until the last decade, the main focus was on the behaviour of individuals and the evidence of their biases in response to given stimuli. Now, researchers have centred their focus on the main reasons why these biases occur. Consequently, it is important to analyse in depth the consequences of these biases for society. For that purpose, we tackle the problem using the case study of Acquired Immune Deficiency Syndrome (AIDS) and how people perceive it. In this article, we build a multi-agent model to understand the consequences for society when individuals do not perceive reality correctly. The obtained results reveal that the agents who perceive the danger of HIV more accurately are the ones with the greater propensity to become infected and to contaminate the rest of the population. This is the opposite of what is expected from perception, demonstrating the existence of a reverse effect.
Page: 154-183 (30)
Author: Mohammad Ehsan Basiri, Nasser Ghasem-Aghaee and Ahmad Reza Naghsh-Nilchi
Sentiment analysis is a field of study concerning the extraction of people’s opinions and attitudes from their writings on the Web. Most research efforts in the area of sentiment analysis have focused on English texts, and few works have considered the problem of Persian sentiment analysis. Persian is spoken by more than a hundred million speakers around the world and is the official language of Iran, Tajikistan, and Afghanistan. From a computational point of view, Persian is a challenging language due to its derivational nature, the use of Arabic words, an informal style of writing, and different forms of writing for compound words. In this chapter, we present a lexicon-based framework for sentiment analysis in Persian. Specifically, we develop a Persian lexicon which associates sentiment words with their sentiment strengths. Furthermore, in the proposed framework, we address several problems of sentiment analysis in Persian, such as misspelling, word spacing, and stemming. We use the proposed framework for polarity detection and rating prediction on cellphone reviews. The results show that our approach outperforms supervised machine learning techniques in terms of accuracy and mean absolute error.
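The core of a lexicon-based scorer is simple enough to sketch. The tiny English lexicon and the one-token negation rule below are illustrative stand-ins for the chapter's Persian lexicon and its preprocessing steps (spell correction, word spacing, stemming).

```python
# Lexicon-based polarity sketch: sum the sentiment strengths of known
# words; a negator immediately before a word flips its sign. The lexicon
# entries are invented examples, not from the chapter's Persian lexicon.

LEXICON = {"good": 2, "great": 3, "excellent": 4,
           "bad": -2, "terrible": -3, "slow": -1}
NEGATORS = {"not", "no", "never"}

def polarity(text):
    """Return the summed sentiment strength of a whitespace-tokenized text."""
    score, negate = 0, False
    for token in text.lower().split():
        if token in NEGATORS:
            negate = True          # flip the next sentiment word
            continue
        strength = LEXICON.get(token, 0)
        score += -strength if negate else strength
        negate = False
    return score

print(polarity("the camera is great but the battery is slow"))  # 3 - 1 = 2
print(polarity("not good at all"))                               # -2
```

A positive score indicates positive polarity; the magnitude of the score is what a rating-prediction step would map onto a review scale.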
The Age of the Connected World of Intelligent Computational Entities: Reliability Issues including Ethics, Autonomy and Cooperation of Agents
Page: 184-213 (30)
Author: Tuncer Ören and Levent Yilmaz
In this chapter, the Internet of Things is perceived as a special case of the Connected World, which in turn is considered a step in tool making. Five major stages of tool making in the history of civilization are outlined, and it is pointed out that passing from one era to the next requires a new enabling element: energy, knowledge-processing ability, intelligence, connectedness, and superintelligence, respectively. The eras thus identified are: the hunter-gatherer and agriculture age, the industrial age, the information age (also called the knowledge age or informatics age), the cybernetics age, the connected age, and the post-human era. Then, what may go wrong in a connected world is elaborated on; some counter-intuitive views about cooperation and autonomy are expressed, and the role of ethics is stressed, especially in a connected world.
Page: 214-267 (54)
Author: Majid Esmaelian, Hadi Shahmoradi and Fateme Nemati
In this chapter, a new multi-criteria classification technique is presented. This method extends the UTilités Additives DIScriminantes (UTADIS) method by applying a polynomial utility function for each attribute rather than a piecewise linear approximation. The method, named P-UTADIS, applies to both nominal and ordinal groups and, by calculating the coefficients of the polynomials, the threshold limits of the classes, and the weights of the attributes, aims to minimize the classification errors. The unknown parameters of a classification problem are estimated through a hybrid algorithm combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The results of applying P-UTADIS to different data sets and comparing it with several previous methods indicate its high efficiency.
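The core loop of such a method — score an alternative with per-attribute polynomial utilities, compare the sum against class thresholds, and tune the free parameters to minimize misclassification — can be sketched as follows. Note the simplifications: a plain random search stands in for the chapter's PSO/GA hybrid, and the two-attribute synthetic data, polynomial degree and parameter ranges are invented for the example.

```python
import random

# P-UTADIS-style classification sketch: each attribute contributes a
# polynomial utility, the summed utility is compared with class
# thresholds, and the coefficients/thresholds are tuned to minimize
# classification errors. Random search stands in for the PSO/GA hybrid.

random.seed(0)
DEGREE = 2

def utility(x, coeffs):
    """Sum of per-attribute polynomials (no constant term)."""
    return sum(coeffs[j * DEGREE + d - 1] * xj ** d
               for j, xj in enumerate(x)
               for d in range(1, DEGREE + 1))

def classify(x, coeffs, thresholds):
    """Class 0 is the best class; fall through the thresholds downward."""
    u = utility(x, coeffs)
    for k, th in enumerate(thresholds):
        if u >= th:
            return k
    return len(thresholds)

def n_errors(data, coeffs, thresholds):
    return sum(classify(x, coeffs, thresholds) != y for x, y in data)

# Synthetic two-class data: class 0 iff x1 + x2 >= 1.
points = [(random.random(), random.random()) for _ in range(200)]
data = [(p, 0 if p[0] + p[1] >= 1 else 1) for p in points]

best_err = len(data)
for _ in range(2000):  # random search over coefficients and one threshold
    coeffs = [random.uniform(-2, 2) for _ in range(2 * DEGREE)]
    thresholds = [random.uniform(-2, 2)]
    best_err = min(best_err, n_errors(data, coeffs, thresholds))
```

A faithful implementation would replace the random search with PSO particles (or GA chromosomes) whose positions encode the same parameter vector and whose fitness is `n_errors`.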
Page: 268-293 (26)
Author: Abdolreza Nazemi and Konstantin Heidenreich
To calculate the expected loss, two measures besides the exposure at default have to be taken into account, namely the probability of default and the loss given default (LGD). While much attention has been paid in the literature to the default rate, the loss given default is still comparatively under-investigated. Especially as a consequence of the enhanced regulation of the Basel II accord, loss given default has become a much more critical measure for banks and other financial institutions than it was before. Therefore, in this study, artificial intelligence and statistical techniques are used to predict the recovery rate of corporate bonds that defaulted between 2002 and 2012. Macroeconomic factors, bond characteristics and industry-specific factors are taken into account as covariates. Starting from the base case of a plain-vanilla Least-Squares Support Vector Machine (LS-SVM), two further modifications of the LS-SVM are presented. The performance of the LS-SVM is significantly better than that of a standard linear regression approach. It is thus empirically shown that support vector regression is an approach to LGD modeling with significant potential for forecasting the recovery rate, both for banks and other financial institutions and for investors in distressed debt.
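The plain-vanilla LS-SVM base case boils down to solving one linear (KKT) system, which a short script can illustrate. The RBF kernel width, the regularization value and the toy one-dimensional data below are invented; a real LGD model would use the macroeconomic, bond and industry covariates described above.

```python
import math

# LS-SVM regression sketch: fitting solves the KKT system
#   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
# and predicts f(x) = b + sum_i alpha_i k(x, x_i). Kernel width,
# gamma and the toy data are illustrative choices.

def rbf(x, z, sigma=0.3):
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (the system is small)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(xs, ys, gamma=1000.0):
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [rbf(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]          # toy target standing in for recovery rates
predict = lssvm_fit(xs, ys)
```

Because LS-SVM replaces the SVM's inequality constraints with equality constraints, training reduces to this single linear solve rather than a quadratic program, which is what makes the base case so cheap to modify.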
Page: 294-334 (41)
Author: Amirali K. Gostar, Reza Hoseinnezhad and Alireza Bab-Hadiashar
Multi-object estimation refers to applications in which there is an unknown number of objects with unknown states, and the problem is to estimate both the number of objects and their individual state vectors from observations acquired by sensors. The solution is usually called a multi-object filter. In many modern complex systems, multi-object estimation is one of the most challenging problems that must be solved for the system to perform its dedicated tasks satisfactorily. A wide range of practical applications involve multi-object estimation, from multi-target tracking in radar, to visual tracking in sport, to cell tracking in biomedicine, to data clustering in big-data analytics. In the past decade, a new generation of multi-object filters based on stochastic geometric models and approximations has been developed and rapidly adopted by researchers in various fields. In such methods, the multi-object entity is treated as a random finite set (RFS) variable (with random variations in its cardinality and elements), and the stochastic geometric notions of density and integration, developed in the new theory of finite set statistics (FISST), are used to formulate Bayesian filters for estimating the cardinality (number of objects) and state of the multi-object RFS variable. Examples of such solutions include the PHD filter, the CPHD filter and the recent trend of multi-Bernoulli filters. In many applications, the observations are acquired through a controlled sensing procedure, either by controlling a mobile sensor (e.g. in radar and visual surveillance) or by selecting a sensor node (e.g. in sensor networks). This chapter reviews the most recent developments in sensor-management (control or selection) solutions devised for multi-Bernoulli filters in various applications. It first presents the basics of random set theory and the formulation of the cardinality-balanced and labeled multi-Bernoulli filters.
The most recent sensor-control and sensor-selection solutions that have been proposed by the authors and other researchers active in the field are then presented and comparative simulation results are discussed.
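At its simplest, sensor selection means scoring each admissible sensing action against the predicted posterior and picking the best one. The sketch below keeps that structure but replaces the multi-Bernoulli posterior with a single scalar Kalman posterior; the sensor positions and the distance-dependent noise model are invented for the example.

```python
# Sensor-selection sketch: evaluate each candidate sensor by the posterior
# uncertainty it would leave and pick the minimizer. A scalar Kalman
# update stands in for the multi-Bernoulli posterior used in the chapter.

def posterior_variance(prior_var, meas_var):
    """Scalar Kalman/information update: 1/var' = 1/var + 1/R."""
    return 1.0 / (1.0 / prior_var + 1.0 / meas_var)

def select_sensor(prior_mean, prior_var, sensor_positions, noise_floor=0.1):
    """Measurement noise grows with distance from the predicted object,
    so the reward (variance reduction) favours nearby sensors."""
    def meas_var(pos):
        return noise_floor + 0.5 * (pos - prior_mean) ** 2
    return min(sensor_positions,
               key=lambda p: posterior_variance(prior_var, meas_var(p)))

sensors = [-3.0, 0.5, 4.0]
chosen = select_sensor(prior_mean=1.0, prior_var=2.0, sensor_positions=sensors)
```

The multi-Bernoulli solutions surveyed in the chapter replace this variance score with rewards defined over the full RFS posterior (e.g. expected cardinality-and-state uncertainty), but the select-the-action-that-most-sharpens-the-posterior structure is the same.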
Page: 335-369 (35)
Author: Imane Basiry and Nasser Ghasem-Aghaee
Scheduling is a basic activity in large-scale systems with unexpected and highly complex demands. Instances of such complex systems are found in manufacturing, logistics, economics, traffic control, and biology. The number of entities and their interconnections are the reasons that motivate researchers to find solutions that are not based on central control structures. A multi-agent architecture is a distributed collection of interacting entities that function without a supervisor. The advantage of holonic self-organization concepts lies in the fact that they contribute to more efficient performance. Several approaches have been and are being designed according to these principles, but they are considered weak in handling emergency demands in an industrial environment. The concepts of multi-agent and holonic systems are addressed and discussed in this chapter, where their advantages and weak points are revealed, with a focus on a holonic control architecture that overcomes the weak points. The main objective of this architecture is to reduce time and complexity overload. The concepts of parallel processing and task priority are of concern here: task priority reduces the delay caused when an unexpected situation requires critical tasks to be handled. Techniques such as self-organization methods, a high degree of autonomy for controller holons, the use of a common data source, and increased parallel processing are applied to reduce output delivery time. The newly proposed architecture is tested in a simulation environment.
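The task-priority idea — an emergency task submitted late still gets dispatched first — is easy to sketch with a priority queue; in the chapter's architecture this logic would be distributed across controller holons rather than run centrally. Task names and priority values below are invented.

```python
import heapq

# Priority dispatch sketch: tasks are ordered by (priority, arrival), so a
# late-arriving emergency task (lower value = more urgent) preempts the
# queue of routine work instead of waiting its turn.

def dispatch(tasks):
    """tasks: list of (priority, arrival, name); returns execution order."""
    heap = list(tasks)
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

jobs = [(5, 0, "routine-A"), (5, 1, "routine-B"), (1, 2, "emergency")]
sequence = dispatch(jobs)   # emergency first despite arriving last
```

Breaking ties on arrival order keeps equal-priority tasks first-come-first-served, which is the behaviour a controller holon would want for routine work.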
Page: 370-396 (27)
Author: Dara Tafazoli and M Elena Gómez-Parra
The widespread development of technologies in our daily lives provides many opportunities for language teachers and learners to benefit from, though it may also create some pedagogical difficulties. This chapter first introduces Computer-Assisted Language Learning (CALL) as the first step in applying Artificial Intelligence (AI) to language learning and teaching; then, the new concept of Robot-Assisted Language Learning (RALL) is defined both theoretically and practically to show the new trends in the educational uses of AI.
Page: 397-406 (10)
Author: Faria Nassiri-Mofakham
Intelligent Computational Systems presents current and future developments in intelligent computational systems in a multi-disciplinary context. Readers will learn about the pervasive and ubiquitous roles of artificial intelligence (AI) and gain a perspective on the need for intelligent systems to behave rationally when interacting with humans in complex and realistic domains. This reference covers widespread applications of AI in 11 chapters, on topics such as AI and behavioural simulation, schools of AI, automated negotiation, language analysis and learning, financial prediction, sensor management, multi-agent systems, and much more. It will assist researchers, advanced-level students and practitioners in information technology and computer science who are interested in the broad applications of AI.