Property development firm Argent, which is overseeing a regeneration project on the 67-acre site, said its use of the cameras was “in the interest of public safety”.
“These cameras use a number of detection and tracking methods, including facial recognition, but also have sophisticated systems in place to protect the privacy of the general public,” a spokesperson said.
It is unclear how many facial recognition cameras have been installed at the site, or how long they have been in use.
It is understood Canary Wharf Group is also looking to introduce facial recognition technology on its property as an additional security measure.
The deployment of the technology, first reported by the Financial Times, has sparked fears over the use of personal data.
The Information Commissioner’s Office (ICO) said it has concerns about the potential for inappropriate use of facial recognition in a way that could undermine privacy.
An ICO spokesperson said: “The ICO is currently looking at the use of facial recognition technology by law enforcement in public spaces and by private sector organisations, including where they are partnering with police forces.
“We’ll consider taking action where we find non-compliance with the law.”
The government’s Office for Science last year found that in optimal conditions, facial recognition cameras have an error rate of around 0.2%.
But low-resolution footage, poor lighting and poor viewing angles can raise the error rate to over 10%.
It also noted that some studies have found the algorithms are less accurate at recognising people from certain ethnic groups.
Critics of the technology say the laws underpinning its use are too weak, and that it disproportionately encroaches on the right to privacy.
Both the government and police argue the law is sufficiently up to date, according to a House of Commons Library briefing note published last month.