The startup Payman, founded on April 25th this year, had already raised its first $2 million in investment by May 1st.

Payman is developing a platform through which AI agents can pay people for completing tasks the agents need done.

The budget for these AI agents is allocated by their owners; however, distributing tasks, monitoring their execution, and making payments are handled autonomously by the agents themselves.

The platform is currently in closed beta, onboarding the first AI agent developers ready to integrate with Payman.

From a technical perspective, the startup's idea is clear enough. What is particularly intriguing is why it is needed, and why the startup was able to raise $2 million in investment so quickly.

What's Interesting

Soon, AI agents will begin to perform a variety of tasks currently done by humans. These agents include digital employees hired by companies and AI assistants working with individuals.

However, AI agents are unlikely ever to completely replace humans in a range of tasks, especially those that are complex or creative. For these tasks, human-AI interaction is necessary—where the AI handles what it can manage on its own, and humans do what only they can.

Typically, it is assumed that human-AI interaction will be "human-driven": a person plans the execution of a task, performs parts of it themselves or through human contractors, and delegates the remaining parts to AI agents.

But why can't this scenario be "agent-driven," where the AI agent itself devises a plan for the assigned task and handles the bulk of the work independently? For the parts it cannot manage alone, it could autonomously hire human contractors or seek a "second opinion" from humans to evaluate its work.

Where This Could Be Useful

Here are some scenarios where this could be useful, though this list could certainly be expanded:

  • When an AI designer is creating designs meant for human use (websites, interfaces, advertisements, products)—and gathers feedback from those who will use them.
  • When an AI lawyer handles complex legal cases, where an experienced human lawyer might offer an unexpected strategy based on their expertise.
  • When an AI diagnostician consults experienced doctors for a "second opinion" on a diagnosis made by the agent.
  • When an AI sales agent develops a product sales strategy but wants to bring in human salespeople to handle the actual selling within that strategy, since conversion rates may be higher with humans.
  • When an AI scriptwriter hires live performers or well-known bloggers to film and appear in ad campaigns the agent has designed, because the returns on such campaigns may be higher than on fully AI-generated ones.

To make this possible, the AI agent first needs access, via Payman, to funds it can independently pay out to the people it hires, along with an interface for transferring those funds to performers. The process must be quick, guaranteed, and must not consume the agent owner's time.

Second, the AI agent needs access to a database of verified people it can call on for particular tasks. It makes no sense for humans to spend time contacting candidates to find out whether they are available and willing to do a task at a given time for a given price.

Presumably, Payman itself could assemble this database, initially drawing on freelancers from popular platforms such as Fiverr or Upwork, possibly with human involvement at the final approval stage before inclusion.

Third, the AI agent must be able to verify the quality of the tasks completed by humans. For this, the platform will need a purpose-built set of scenarios and rules for checking task execution.
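The three capabilities above can be sketched in code. Payman has not published an API, so every name below is hypothetical; the sketch only illustrates the flow: an owner-capped budget, a directory of vetted performers, and a rule-based quality check before payment.

```python
from dataclasses import dataclass

# Hypothetical sketch only: none of these names come from Payman's
# actual (unpublished) API. It illustrates the agent-driven flow:
# budgeted payouts, a vetted-performer directory, quality checks.

@dataclass
class Performer:
    name: str
    skills: set
    rate: float          # price per task, in USD
    available: bool = True

@dataclass
class AgentWallet:
    budget: float        # allocated by the agent's owner

    def pay(self, performer: Performer, amount: float) -> bool:
        if amount > self.budget:
            return False  # the owner-set budget is a hard cap
        self.budget -= amount
        return True

def hire_and_pay(wallet, directory, skill, quality_check, task_result_of):
    """Pick the cheapest available performer with the needed skill,
    accept the work only if it passes the quality rules, then pay."""
    candidates = [p for p in directory if p.available and skill in p.skills]
    if not candidates:
        return None
    performer = min(candidates, key=lambda p: p.rate)
    result = task_result_of(performer)   # the human does the work
    if quality_check(result) and wallet.pay(performer, performer.rate):
        return performer, result
    return None

# Example: an AI designer hires a human reviewer for feedback
wallet = AgentWallet(budget=100.0)
directory = [Performer("ana", {"design"}, 40.0),
             Performer("bo", {"design"}, 60.0)]
hired = hire_and_pay(
    wallet, directory, skill="design",
    quality_check=lambda r: "feedback" in r,
    task_result_of=lambda p: {"feedback": "looks good"},
)
```

In a real system the quality check would be the hard part; here it is a stand-in predicate, whereas the article suggests it would be a catalog of per-task-type verification scenarios.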

Thus, in time, Payman should transform into a comprehensive platform:

  • Through which AI agents can receive and transfer money,
  • Featuring a marketplace of performers ready to work on AI agent orders,
  • With a built-in catalog of procedures for verifying the quality of human-performed work on the platform.

The idea that AI will set tasks for humans, rather than the other way around, may seem unnatural. But this is a new concept that could have many practical implications in various fields.

Therefore, the general direction is to seek areas where such operations can be organized and appropriate platforms created.

What are the fields where a) AI can already produce a high-quality task execution plan, but b) executing specific steps will require human skills and expertise? How can these steps be identified? Where and how can people capable of performing them be found? How is bidding over terms and prices best organized to attract them? And what simple, automatable methods can be used to check the quality of their work?

The future is not about AI completely replacing humans. The future lies in platforms where people and AI can interact most effectively.

Accordingly, creating platforms for such interactions is a promising direction. Today's review showed one more principle for organizing them, one that could well succeed in certain areas. All that remains is to figure out where and how.