Apple is set to launch its private AI cloud, "Private Cloud Compute," next week, and in preparation, it has announced a reward of up to $1 million for security researchers who can identify vulnerabilities in this new service.
In a recent post on its security blog, Apple shared details of the bounty program, which includes a maximum payout of $1 million for anyone who reports exploits allowing remote malicious code execution on Private Cloud Compute servers. Additionally, researchers may receive up to $250,000 for reporting vulnerabilities that can access sensitive user information or user prompts stored within the private cloud.
Apple stated that it will consider "any security issue that has a significant impact" even if it falls outside the predefined categories, and it will pay up to $150,000 for exploits that retrieve sensitive user data from a privileged network position.
The company emphasized that the maximum bounties are reserved for vulnerabilities that compromise user data or request data outside the trust boundary of the Private Cloud Compute environment.
This initiative is part of Apple’s broader bug bounty program, which incentivizes ethical hackers to report security flaws that could threaten the integrity of its devices or user accounts.
In recent years, Apple has also expanded security measures for its flagship iPhone, a frequent target of spyware, by offering researchers a special version of the device that is easier to hack, helping them probe and strengthen its defenses.
The blog post also provides further information about the service's security protocols, along with source code and documentation for Private Cloud Compute. Apple describes the service as a secure online extension of Apple Intelligence, its customers' on-device AI, designed to handle more computationally intensive AI tasks while safeguarding user privacy.