TrustCon 2023 Recap

Protecting Communities, Enhancing Safety, Navigating Legislation, and Embracing Opportunities

As TrustCon 2023 wrapped up in San Francisco last week, it was clear the conference was a huge success, drawing over 800 attendees (double the previous event’s turnout!) and a robust speaker lineup featuring experts from Google, Airbnb, Pinterest, Meta, Bumble, Discord, and TikTok.

Let’s explore the key themes from TrustCon and discuss potential opportunities arising from the challenges that were identified.

Key Themes and Opportunities

Child Safety

A strong current running through the conference was the emphasis on child safety and protection around Child Sexual Abuse Material (CSAM). Many companies, including Discord, proudly showcased their efforts to increase child protection measures and proactively deactivate accounts associated with CSAM.

The opportunity here is one we can all stand behind: protection first and foremost for underage users. Key actions include exploring new methods to keep bad actors from ever gaining access to a platform, requesting minimal personally identifiable information (PII) upfront while applying maximum protection. We also heard from many speakers and sponsors about their efforts and new technology initiatives to streamline content moderation and escalations.

Legislation

All eyes are on Europe, with the enactment of the EU’s Digital Services Act (DSA) and the UK’s Age Appropriate Design Code (AADC). Although both are aimed primarily at European users, each could affect US online marketplaces and communities in several ways:

  1. Global Compliance: US-based online marketplaces and communities that operate or have users within the EU need to comply with the DSA and AADC. We anticipate companies with any EU affiliations will build a solution that benefits all users, even those based in the US or other parts of the world.
  2. Data Privacy and Collection: US marketplaces are already moving to adopt the AADC standards for age-appropriate content. This may mean platforms begin collecting more PII than before while monitoring the impact on user sign-up friction.
  3. Content Moderation: DSA requires increased transparency and accountability, but we’ll tackle this important topic a bit later.
  4. Enhanced User Protection: Both would push US online marketplaces to invest in advanced systems for user and content verification. 

The Challenge of Collecting Additional PII

The conference highlighted the trend of collecting less personal information at sign-up, which forces platforms to be more diligent about the checks they run. The challenge is how to run effective checks on minimal data while still protecting users and avoiding disruption to the user experience.

Wellbeing of Content Moderators

Perhaps most interesting were the discussions centered on content moderators’ wellbeing, an area rising to the surface as employee care becomes ever more important to companies.

Several sessions highlighted the need to protect content moderators’ mental health while balancing productivity goals around flagging and monitoring unsafe content. US online marketplaces should take a page from the EU playbook and improve their content moderation policies with clearer measures and empathy for those working in the field.

AI will continue to play an integral role here as the technology matures. By reducing the volume of material that requires human review, AI lets moderators focus on higher-risk content.
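As a rough illustration of how model-assisted triage can shrink the human review queue, here is a minimal sketch. The scoring function, thresholds, and queue names are all hypothetical assumptions for the example, not any vendor’s actual system:

```python
# Sketch: route content by a model risk score so human moderators only see
# ambiguous, higher-risk items. model_risk_score is a stand-in for a real
# trained classifier.

from dataclasses import dataclass

@dataclass
class ContentItem:
    item_id: str
    text: str

def model_risk_score(item: ContentItem) -> float:
    """Stand-in for an ML classifier returning a 0.0-1.0 risk score."""
    # A production system would call a trained model here.
    return 0.9 if "unsafe" in item.text.lower() else 0.1

def triage(items: list[ContentItem],
           auto_clear_below: float = 0.2,
           auto_remove_above: float = 0.95) -> dict[str, list[ContentItem]]:
    """Split content into auto-cleared, auto-removed, and human-review queues."""
    queues: dict[str, list[ContentItem]] = {"cleared": [], "removed": [], "human_review": []}
    for item in items:
        score = model_risk_score(item)
        if score < auto_clear_below:
            queues["cleared"].append(item)       # low risk: never reaches a moderator
        elif score > auto_remove_above:
            queues["removed"].append(item)       # near-certain violation: auto-actioned
        else:
            queues["human_review"].append(item)  # ambiguous: human judgment needed
    return queues
```

The thresholds are the policy lever: tightening `auto_clear_below` trades moderator workload against the risk of missed violations.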

AI is Everywhere

You can’t attend a tech conference these days without some highly charged discussions about generative AI. Participants were genuinely concerned that technological advances could let bad actors quickly scale the reach of malicious content, particularly CSAM. Most of the AI discussion, however, focused on its role in content moderation, as outlined above.

“Human review is no longer the gold standard, and in 5 years, the vast majority of content moderation will be performed by models like GPT-4.” – Alex Rosenblatt, TrustCon speaker and digital communications and social media strategist, professor, and thought leader. There’s still a lot to discover as the scope of AI evolves, but all agree it will be part of the Trust and Safety world, however its use is ultimately defined.

How Tessera’s Criminal Data Solutions Can Help 

Our Criminal Database proactively prevents bad actors from ever gaining access to marketplace and community platforms, focusing on preempting harm rather than reacting after it has already been caused.

Our nationwide criminal database gives online platform providers a smooth, effective vetting process. User onboarding flows draw on our criminal databases, sex offender registries, and security watch lists to reduce the chances of harmful users entering their communities.

For example, an online community like TikTok can easily partner with Tessera to run a “safety screening” during user onboarding, validating in milliseconds whether the user can positively contribute to the platform.

PII presents little challenge for the Tessera team. We’ve developed a process to extract criminal data using the minimal PII platform providers collect at onboarding, so the user experiences no added friction.
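To make the onboarding flow concrete, here is a minimal sketch of what a minimal-PII safety screening could look like from a platform’s side. The function names, fields, and return shape are illustrative assumptions for this post, not Tessera’s actual API:

```python
# Sketch: screen a new user against criminal-record sources using only
# name and date of birth, then gate account activation on the result.

from dataclasses import dataclass

@dataclass
class ScreeningResult:
    clear: bool
    matched_sources: list[str]  # e.g. ["sex_offender_registry"]

def lookup_records(first: str, last: str, dob: str) -> list[str]:
    """Stub standing in for a low-latency lookup against criminal databases,
    sex offender registries, and watch lists."""
    return []

def safety_screen(first: str, last: str, dob: str) -> ScreeningResult:
    """Screen a prospective user with minimal PII collected at sign-up."""
    hits = lookup_records(first, last, dob)
    return ScreeningResult(clear=not hits, matched_sources=hits)

def onboard_user(first: str, last: str, dob: str) -> str:
    """Activate clear users immediately; hold matches for manual review."""
    result = safety_screen(first, last, dob)
    return "active" if result.clear else "held_for_review"
```

Because the check runs on data the platform already collects, the happy path adds no extra steps for the user.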

We empower content moderators and Trust and Safety teams to quickly make informed people risk decisions while also protecting their valued users, especially minors, from malicious activities. 

For example, suppose content is flagged for CSAM, other malicious activity, or a terms-of-service violation. A trigger queues a “safety screening” to validate the behavior, quickly giving content moderation teams the information they need to act (e.g., deactivate the account). As a case evolves, our solutions offer a waterfall approach with varying levels of screening depth.
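The trigger-and-waterfall flow described above can be sketched roughly as follows; the level names, stubbed checks, and actions are hypothetical assumptions, not the actual product behavior:

```python
# Sketch: a content flag triggers a screening that escalates through deeper
# (and costlier) levels only while shallower checks keep raising concerns.

SCREENING_LEVELS = ["watchlist_check", "criminal_record_check", "full_background_check"]

def run_level(user_id: str, level: str) -> bool:
    """Return True if this screening level found a concern (stubbed here)."""
    return False  # stand-in for a real screening call

def waterfall_screen(user_id: str) -> dict:
    """Run levels shallow-to-deep; stop early once a level comes back clean."""
    results: dict[str, bool] = {}
    for level in SCREENING_LEVELS:
        concern = run_level(user_id, level)
        results[level] = concern
        if not concern:
            break  # shallow check is clean; no need for deeper screening
    action = "deactivate" if all(results.values()) else "monitor"
    return {"levels_run": results, "action": action}

def handle_flagged_content(user_id: str, reason: str) -> dict:
    """Trigger queued when content is flagged for CSAM or a ToS violation."""
    outcome = waterfall_screen(user_id)
    outcome["flag_reason"] = reason
    return outcome
```

The design choice is cost control: most flags resolve at the cheapest level, and only corroborated concerns pay for a full background check.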

Final Thoughts

Our three days at TrustCon highlighted essential themes, offered valuable insights, and forged a tight-knit community of T&S professionals. We’re proud to be a pioneering contributor to safer, more secure online communities and marketplaces.

Interested in connecting with our team and learning more about how Tessera partners with leaders in the Trust & Safety space? Get in touch with our Sales team to learn more about our solutions for online communities and two-sided marketplaces.

Related resources


How to Prevent Fraud with Identity Verification Solutions

As identity fraud continues to grow in volume and sophistication, companies should always try to stay one step ahead in the ID verification tactics they use to tackle this issue.

Webinar Recording & Recap | Trust & Safety by Design

Catch the recording from our virtual event, "Safety by Design." Learn how to optimize Trust & Safety practices throughout your user journey.

Webinar Recording & Recap | New Ways for Trust & Safety Leaders to Keep Communities Safe

Trust & Safety professionals agree that it’s becoming more challenging than ever to proactively control the safety of your communities. Generative AI and other tactics are just some of the ways that bad actors are impacting your platform. And no matter how tightly you plan your processes and policies, we continue to live in a reactive world.