Facebook chief Mark Zuckerberg said the firm took too long to flag a doctored video of US House Speaker Nancy Pelosi, describing it as an “execution mistake”.
The firm was criticised for not taking down the altered clip, which made Ms Pelosi appear incoherent.
Mr Zuckerberg also addressed the firm’s wider struggle with “deepfake” videos.
Deepfakes are made by AI software that uses photos of a person to generate video of them appearing to speak or act.
The controversy surrounding the clip of Ms Pelosi erupted in May, when Facebook said it would not remove the doctored video, which had been slowed down to make the US politician appear to slur her words.
One version of the clip has been viewed more than 2.5 million times.
Speaking at a conference in the US, Mr Zuckerberg said the social media giant took too long to remove the video.
“One of the issues in the example of the Pelosi video… which was an execution mistake on our side, was it took a while for our systems to flag that and for fact-checkers to rate it as false,” he said.
Mr Zuckerberg said there was a case for considering deepfakes as different from traditional misinformation, adding that it was necessary to proceed cautiously so as not to compromise freedom of speech.
“I think that what we want to be doing is improving execution, but I do not think we want to go so far towards saying that a private company prevents you from saying something that it thinks is factually incorrect to another person.
“That to me just feels like it’s too far and goes away from the tradition of free expression.”
Facebook’s policy on how to handle false content was put to the test recently when a deepfake video of Mr Zuckerberg was created.
Made for an art installation, the clip was designed to draw attention to how people can be monitored and manipulated via social media.
Facebook said it would not remove the manipulated video of Mr Zuckerberg from Instagram, in which he appears to confess to controlling the stolen data of billions of people.
It said it followed the same policy as with the altered video of Ms Pelosi and other misinformation on its services – to let third-party fact-checkers determine whether it was fake, and then make it less visible in users’ feeds rather than taking it down.