Social Media Parental Controls Guide



CNN

More than a year ago, social media companies came under fire for the way they protected, or failed to protect, their youngest users.

In a series of congressional hearings, executives from Facebook (FB), TikTok, Snapchat and Instagram faced tough questions from lawmakers about how their platforms exposed young users to harmful content, damaged mental health and body image (particularly among teenage girls), and lacked adequate parental controls and protections to guard against those harms.

The hearings came after whistleblower Frances Haugen revealed Instagram’s influence on teens in what became known as the “Facebook Files,” prompting the companies to vow to make changes. The four social networks have since rolled out more tools and parental control options aimed at better protecting young users. Some have also made changes to their algorithms, such as showing teens less sensitive content by default, and have ramped up their moderation. But some lawmakers, social media experts and psychologists say the new solutions are still limited and more needs to be done.

“More than a year after Facebook documents dramatically exposed Big Tech’s abuses, social media companies have taken only small, slow steps to clean up their practices,” Sen. Richard Blumenthal, chairman of the Senate consumer protection subcommittee, told CNN Business. “Trust in Big Tech is long gone and we need real rules to keep kids safe online.”

Michela Menting, director of digital safety at market research firm ABI Research, agrees that social media platforms “provide little substance to address the ills that their platforms create.” Their solutions put the onus on guardians to activate various parental controls, such as those designed to filter, block and restrict access, as well as more passive options, such as monitoring and surveillance tools that run in the background.

Alexandra Hamlet, a clinical psychologist in New York City, recalls being invited to a roundtable discussion roughly 18 months ago on ways to improve Instagram, particularly for its youngest users. “I don’t see many of our ideas being implemented,” she said. She added that social media platforms still need to work to “continue to improve parental controls, protect young people from targeted advertising, and remove objectively harmful content.”

The social media companies mentioned in this article either declined to comment or did not respond to requests for comment on criticism that more needs to be done to protect young users.

Currently, guardians must learn how to use parental controls, while also being aware that teens can often circumvent these tools. Below is a breakdown of what parents can do to help keep their children safe online.

In the aftermath of the leaked documents, Meta-owned Instagram paused its much-criticized plans to release a version of Instagram for children under 13 and focused instead on making its main service safer for younger users.

It has since launched an education center for parents with resources, tips and articles from user safety experts, along with a tool that lets guardians see how much time their kids spend on Instagram and set time limits. Parents can also receive updates on which accounts their teens follow and which accounts follow them, and can view and receive notifications when their teens update their privacy and account settings. Parents can also see which accounts their kids have blocked. The company also offers video tutorials on how to use the new supervision tools.

Another feature encourages users to take a break from the app, such as by suggesting, after a predetermined amount of time, that they take a deep breath, write something down, check a to-do list or listen to a song. Instagram also said it is taking a “stricter approach” to the content it recommends to teens, actively nudging them toward different topics, such as architecture and travel destinations, if they have been dwelling on any one type of content for too long.

Facebook’s Safety Center offers oversight tools and resources, such as articles and advice from leading experts. “Our vision for Family Center is to eventually allow parents and guardians to help their teens manage experiences across Meta technologies, all from one place,” Meta spokeswoman Liza Crenshaw told CNN Business.

The center also offers a guide to Meta’s VR parental supervision tools from ConnectSafely, a nonprofit that helps kids stay safe online, to help parents discuss virtual reality with their teens. Guardians can see which accounts their teens have blocked and access supervision tools, approve a teen’s download or purchase of an app that is blocked by default based on its rating, or block specific apps that may not be appropriate for their teen.

In August, Snapchat launched a parent guide and hub designed to give guardians more insight into how their teens use the app, including who they’ve talked to in the last week (without revealing the content of those conversations). To use the feature, parents must create their own Snapchat account, and teens must opt in and give permission.

While this is Snapchat’s first official foray into parental controls, it did previously have some safety measures in place for younger users, such as requiring teens to be mutual friends before they can start communicating with each other and prohibiting them from having public profiles. Teen users have the Snap Map location-sharing tool turned off by default, but can use it to share their real-time location with a friend or family member, even while the app is closed, as a safety measure. Meanwhile, the Friend Check Up tool encourages Snapchat users to review their friend lists and make sure they still want to be in touch with the people on them.

Snap has previously said it was working on more features, such as enabling parents to see what new friends their kids have added and allowing them to privately report accounts that their kids may be interacting with. It’s also working on a tool that would give young users the option to notify their parents when they report an account or content.

The company told CNN Business that it will continue to build out its safety features and consider feedback from the community, policymakers, safety and mental health advocates, and other experts to improve the tool over time.

In July, TikTok announced new ways to filter out mature or “potentially questionable” videos. The new safeguard assigns a “maturity score” to videos detected as likely to contain mature or complex themes. It also launched a tool designed to help people decide how much time they want to spend on TikTok. The tool allows users to set regular screen time breaks and provides a dashboard detailing how many times they have opened the app, a breakdown of daytime and nighttime usage, and more.

The popular short-form video app currently offers a Family Pairing hub, which allows parents and teens to customize their safety settings. A parent can link their TikTok account to their teen’s app and set parental controls, including how much time the teen can spend on the app each day; limit exposure to certain content; decide whether the teen can search for videos, hashtags or live content; and choose whether their account is private or public. TikTok also offers a guardian’s guide that highlights how parents can best protect their kids on the platform.

In addition to parental controls, the app restricts younger users from accessing certain features, such as livestreaming and direct messaging. When teens under 16 are ready to post their first video, a pop-up also appears asking them to choose who can watch it. Push notifications are curbed after 9pm for account users aged 13 to 15 and after 10pm for users aged 16 to 17.

The company says it will do more to raise awareness of the parental controls feature in the coming days and months.

Discord did not appear before the Senate last year, but the popular messaging platform has faced criticism over the difficulty of reporting questionable content and the ease with which strangers can get in touch with young users.

In response, the company recently updated its Safety Center, where parents can find guidance on how to turn on safety settings, answers to frequently asked questions about how Discord works, and tips on how to talk to teens about online safety. Some existing parental control tools include an option to prevent minors from receiving friend requests or direct messages from people they don’t know.

Still, it is possible for a minor to connect with a stranger on a public server or in a private chat if the stranger is invited by someone else in the room, or if a channel link is dropped into a public group that the user accesses. By default, all users (including those aged 13 to 17) can receive friend invitations from anyone in the same server, which then opens up the ability to exchange private messages.


