- Submission deadline: July 15th, 2020
- Acceptance notification: August 15th, 2020
The wide adoption of social media has resulted in an ocean of data, which presents an interesting opportunity for data mining and knowledge discovery in a real-world context. The enormity and high variance of the information that propagates through large user communities influence public discourse and set trends and agendas in topics ranging from marketing, education, business and medicine to politics, technology and the entertainment industry. This influence can, however, act as a double-edged sword, since it can also introduce threats to the community when it is rooted in the dissemination of disinformation, i.e. purposefully manipulated news and information, or misinformation, i.e. false and incorrect information, on social media.

In recent years, the potential threats of dis/misinformation have been the subject of considerable controversy in domains such as public healthcare, socio-economics, business and politics. For instance, the circulation of scientifically invalid information and news can negatively affect how the public responds to the outbreak of a pandemic disease such as COVID-19. Threats can also be posed to the legitimacy of an election system when opposing campaigns shape public opinion with conspiracy theories stemming from false information. Mining the contents of social media to recognize instances of misinformation and disinformation is a crucial first step towards immunizing society against the negative impacts they can introduce.
In this workshop, we invite researchers and practitioners from different disciplines such as computer science, big data mining, machine learning, social network analysis and other related areas to share their ideas and research achievements in order to deliver technology and solutions for mining dis/misinformation from social media.
Topics of interest include, but are not limited to:
- Descriptive models for fake news and malicious bot detection.
- Explainable AI for detection of dis/misinformation.
- User behavior analysis and susceptibility prediction with regard to dis/misinformation in social media.
- Trust and reputation in social media.
- Dis/misinformation propagation modeling and trace analysis.
- Prescriptive countermeasure methods against the formation and circulation of misinformation.
- Predicting misinformation and bias in news on social media.
- Predictive models for early detection of hoax spread in social media.
- Social influence analysis in online social media, including discovering influential users and social influence maximization.
- Assessing the influence of fake news on advertising and viral marketing in social media.
- New datasets and evaluation methodologies to help predict dis/misinformation in social media.
- User modeling in social media, including prediction of daily activities and recurring events.
- Determining user similarities, trustworthiness and reliability.
- Social networks and information/knowledge dissemination such as topic and trend prediction, prediction of information diffusion patterns, and identification of causality and correlation between events/topics/communities.
We invite the submission of regular research papers (4-6 pages) as well as position papers (2 pages). We recommend that papers be formatted according to the CEUR style guidelines in the two-column style (no page numbers). See details: http://ceur-ws.org/Vol-XXX/samplestyles/. All papers will be peer-reviewed by two reviewers.
All submissions must be in PDF format, prepared according to the guidelines, and made through the EasyChair installation: https://easychair.org/conferences/?conf=maison2020.
Ebrahim Bagheri, Ryerson University, [log in to unmask]
Huan Liu, Arizona State University, [log in to unmask]
Kai Shu, Arizona State University, [log in to unmask]
Fattane Zarrinkalam, Ryerson University, [log in to unmask]