Moderation and Reporting Policy

Effective Date: January 01, 2026
Last Updated: January 01, 2026

GAGE Technology Inc.
St. Simons, GA
support@gagework.com


Introduction

This Moderation and Reporting Policy explains how GAGE TECHNOLOGY, INC. monitors, evaluates, and enforces safety and conduct standards across the GAGE platform. To maintain a safe and inclusive environment for all users, GAGE implements a layered system to identify, review, and act on objectionable or abusive content after it has been posted, ensuring it is removed within a reasonable, clearly defined timeframe.

This policy describes automated detection, human review, reporting tools, prohibited content classes, enforcement actions, and safety escalation procedures.


Moderation Philosophy

GAGE aims to provide a safe, respectful, and productive environment for workplace engagement and peer social interaction. We use a layered moderation approach consisting of:

  • Automated filters and detection systems.

  • Video content review.

  • User reporting.

  • Safety escalation and legal compliance processes.


Automated Detection of Inappropriate Text Content

GAGE uses automated filtering tools that analyze text contained in post descriptions and comments. If the system detects offensive or inappropriate language, the content will be flagged after being posted and sent to the GAGE Support Team for review.

Flagged categories may include, but are not limited to:

  • Insults and abusive expressions.

  • Racial or ethnic slurs.

  • Homophobic or transphobic slurs.

  • Sexist or misogynistic language.

  • Hate speech or extremism-related expressions.

  • Variations of the above using symbols, numbers, or altered spelling.

Once flagged, the content is reviewed by the GAGE Support Team, which issues a decision within 24 hours.

If a user reports the content, the reporting party will be notified of the outcome via email.

If the content is removed, the user who posted it will receive an email notification explaining the reason for removal, a reminder of the platform’s Terms and Conditions, and a warning regarding repeated violations, which may result in temporary or permanent account restrictions.


Human Review of Uploaded Video Content

Because GAGE currently limits uploads to videos of one (1) minute or less, our moderation team is able to manually review videos after they have been posted, ensuring ongoing compliance with community standards.

This moderation approach includes:

  • Continuous review of newly posted videos.

  • Identification of objectionable or harmful content following publication.

  • Immediate action on direct user reports related to video content.

Our moderation team may remove or restrict any video found to violate this policy or the platform’s Terms and Conditions, and may take further action as necessary to protect the community.


User Reporting Tools

Users may report content or accounts for:

  • Harassment, bullying, or hate speech.

  • Sexual content or sexual advances.

  • Underage users (under 16).

  • Violent or dangerous content.

  • Spam or scams.

  • Impersonation or fake accounts.

  • Workplace safety concerns.

Reports may be submitted via in-app tools or by email at:

reportabuse@gagework.com


Blocking Tools

Users may block others at any time. Blocking prevents:

  • Messaging.

  • Commenting.

  • Profile viewing (where applicable).

Blocking is an essential safety function and is required by Apple for applications that host user-generated content (UGC).


Enforcement Actions

Depending on the severity and frequency of violations, GAGE may apply:

  • Content removal.

  • Content visibility reduction.

  • Temporary account restrictions.

  • Feature limitations (e.g., messaging bans).

  • Warning notifications.

  • Permanent account suspensions.

  • Device bans for repeated or severe violations.


Zero-Tolerance Policies

GAGE immediately removes content and permanently disables accounts involved in:

  • Sexual content involving minors.

  • Child exploitation.

  • Human trafficking.

  • Serious threats of harm or violence.

  • Distribution of illegal content.

  • Coordinated harassment campaigns.


Workplace-Related Misconduct

When interacting in workplace spaces on the platform, users may not:

  • Retaliate against coworkers.

  • Share confidential employer information.

  • Falsify evaluations or recognition posts.

  • Harass coworkers or supervisors.


Escalation to Law Enforcement

GAGE cooperates with law enforcement when required by law or when credible threats, child safety concerns, or illegal activity are detected.
We may preserve or share data when legally compelled or necessary to protect user safety.


Appeals

Users may appeal moderation decisions by emailing support@gagework.com. Not all decisions are eligible for appeal; violations involving child safety or severe violence are final.


Transparency

GAGE may provide periodic transparency updates regarding moderation trends, enforcement actions, and platform safety.


Updates to This Policy

This policy may be updated to reflect evolving safety standards, legal requirements, or new platform features.


Contact Information

support@gagework.com

Let's do this shift.

Copyright © 2025 Gage Technology, Inc. All rights reserved.

Gage and Gagework are registered trademarks of Gage Technology, Inc.