Novel Directions for Multiagent Trust Modeling in Online Social Networks

dc.contributor.advisor: Cohen, Robin
dc.contributor.author: Parmentier, Alexandre
dc.date.accessioned: 2020-05-14T01:07:35Z
dc.date.available: 2020-05-14T01:07:35Z
dc.date.issued: 2020-05-13
dc.date.submitted: 2020-05-07
dc.description.abstract: This thesis presents two works with the shared goal of improving the applicability of multiagent trust modeling to social networks. The first demonstrates how analyzing the responses to content on a discussion forum can be used to detect certain types of undesirable behaviour. This technique can be used to extract quantified representations of the impact agents are having on the community, a critical component for trust modeling. The second work expands on the technique of multi-faceted trust modeling, determining whether a clustering step designed to group agents by similarity can improve the performance of trust link predictors. Specifically, we hypothesize that learning a distinct model for each cluster of similar users will result in more personalized, and therefore more accurate, predictions. Online social networks have exploded in popularity over the last decade, becoming a central source of information and entertainment for millions of users. This radical democratization of the flow of information, while offering many benefits, also raises a raft of new issues. These networks have proven to be a potent medium for the spread of misinformation and rumours, may contribute to the radicalization of communities, and are vulnerable to deliberate manipulation by bad actors. In this thesis, our primary aim is to examine content recommendation on social media through the lens of trust modeling. The central supposition is that the behaviours of content creators and the consumers of their content can be fit into the trust modeling framework, supporting recommendations of content from creators who are not only popular, but also supported by trustworthy users and trustworthy themselves. This research direction shows promise for tackling many of the issues mentioned above. Our work shows that a machine learning model can predict certain types of anti-social behaviour in a discussion-starting comment, solely by analyzing replies to that comment, with accuracy in the range of 70% to 80%. Further, we show that a clustering-based approach to personalization for multi-faceted trust models can increase accuracy on a downstream trust-aware item recommendation task, evaluated on a large dataset of Yelp users.
dc.identifier.uri: http://hdl.handle.net/10012/15848
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.subject: artificial intelligence
dc.subject: multiagent systems
dc.subject: trust modeling
dc.subject.lcsh: Artificial intelligence
dc.subject.lcsh: Multiagent systems
dc.subject.lcsh: Social networks
dc.subject.lcsh: Computer network resources
dc.title: Novel Directions for Multiagent Trust Modeling in Online Social Networks
dc.type: Master Thesis
uws-etd.degree: Master of Mathematics
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws.contributor.advisor: Cohen, Robin
uws.contributor.affiliation1: Faculty of Mathematics
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text
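To make the personalization step described in the abstract concrete, the following is a minimal sketch of the per-cluster modeling idea: group users by similarity, then fit a separate trust-link predictor for each cluster and route each prediction through the matching cluster's model. It is illustrative only, not the thesis implementation: the synthetic feature vectors stand in for the Yelp-derived user data, and the choice of KMeans with a logistic-regression predictor, the number of clusters, and the predict_trust helper are all assumptions.

```python
# Illustrative sketch (not the thesis implementation): per-cluster
# personalization for trust-link prediction.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: one feature vector per user, plus a binary label
# indicating whether a trust link is present (placeholder for real data).
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 2, size=1000)

# Step 1: group agents by similarity.
clusterer = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
labels = clusterer.labels_

# Step 2: learn a distinct predictor for each cluster of similar users.
models = {
    c: LogisticRegression(max_iter=1000).fit(X[labels == c], y[labels == c])
    for c in np.unique(labels)
}

# Prediction routes a user through their cluster's personalized model.
def predict_trust(x):
    c = clusterer.predict(x.reshape(1, -1))[0]
    return models[c].predict_proba(x.reshape(1, -1))[0, 1]

print(predict_trust(X[0]))
```

The intuition matches the hypothesis stated in the abstract: a model trained only on users similar to the target user can specialize to that subpopulation rather than averaging over the behaviour of all users.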

Files

Original bundle

Name: Parmentier_Alexandre.pdf
Size: 3.21 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission