Picture this scenario: A "cloud" filesharing service, backed by some well-known investors and in business for a few years, allows its clients to share files and folders with others.
UserA sends a folder to UserB with no password security (default setting). The link arrives in an email as:
UserB visits the link. The parameters for the POST following the GET for that link look like:
POST /publicPage.json HTTP/1.1
Using an intercepting proxy, UserB changes the 'pubFolderPath=' value to '%2FPrivate' (or just '%2F' for a listing of the user's root directory).
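The tampering itself is trivial: the path travels as a URL-encoded form value, so '/' becomes '%2F'. A minimal sketch of how the tampered body is built (the parameter name 'pubFolderPath' comes from the write-up; the original folder value and any other parameters are hypothetical, since the real ones were not disclosed):

```python
from urllib.parse import urlencode, quote

# Hypothetical original body -- only 'pubFolderPath' is known from the write-up.
original = {"pubFolderPath": "/Shared/ProjectDocs"}

# What the proxy sends after tampering: point the path at /Private instead.
tampered = dict(original, pubFolderPath="/Private")

print(urlencode(tampered))   # pubFolderPath=%2FPrivate
print(quote("/", safe=""))   # %2F -- the encoding of a bare '/' for a root listing
```

Because the server trusted this client-supplied path outright, no session or credential was ever checked against it.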
UserB can now view and traverse every folder that UserA has access to (Shared and Private) without ever logging in. If UserA is an admin, UserB can now reach *every* subdirectory in the given domain and the files inside them, including the data in all other users' private directories.
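The underlying bug is a classic failure to authorize the requested path against what the share link actually grants. A minimal sketch of the class of server-side check that prevents it, assuming hypothetical names (this is not the vendor's actual fix):

```python
import posixpath

def is_path_allowed(shared_root: str, requested: str) -> bool:
    """Reject any pubFolderPath outside the folder the public link shares.

    Normalizes both paths so '..' segments and duplicate slashes
    cannot be used to escape the shared root.
    """
    resolved = posixpath.normpath(posixpath.join("/", requested))
    root = posixpath.normpath(posixpath.join("/", shared_root))
    return resolved == root or resolved.startswith(root + "/")

# The link shared /Shared/ProjectDocs; anything under it is fine,
# anything outside it -- including traversal tricks -- is refused.
print(is_path_allowed("/Shared/ProjectDocs", "/Shared/ProjectDocs/specs"))          # True
print(is_path_allowed("/Shared/ProjectDocs", "/Private"))                           # False
print(is_path_allowed("/Shared/ProjectDocs", "/Shared/ProjectDocs/../../Private"))  # False
```

The key design point is that the allowed root comes from the server-side record of the share link, never from anything the client sends.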
A simple Google dork turns up many similar "companyXYZ" links for other subdomains.
While the company name is fake, the scenario was very real and jeopardized client data that was entrusted to be secure.
Once the problem was reported to unknown-filesharing-company, it was taken seriously and a fix was rolled out later that same day. They were sure to mention that they use the services of a "3rd party penetration testing company" to keep their application secure, but as chance would have it, this public-facing piece of the app was not being tested because it was a new feature…a new feature that had been out in the wild for months.