Senior AI Security Engineer
Hello! We are Cashea 👋 and our mission is to give Venezuelans back the opportunity to access credit through a BNPL business model. Since our launch in 2022, we have been dedicated to promoting financial inclusion. Today we have more than 9 million active users, both consumers and merchants, and we have become a trusted brand in Venezuela, winning hearts and minds.
About the role
The Senior AI Security Engineer is responsible for testing, attacking, and defending the company's generative AI implementations: proprietary AI products, internal platforms, agents, integrations, and coding assistants. The role combines an offensive profile (prompt injection, jailbreaking, LLM and agent red teaming) with a defensive profile (guardrails, input/output controls, best practices and guidance to teams), and works closely with AppSec, DevSecOps, product and engineering teams.
Responsibilities:
Perform red teaming and penetration testing on AI deployments: proprietary products using LLMs, agents, chatbots, coding assistants and any generative interface. This includes direct and indirect prompt injection, jailbreaking, data exfiltration via outputs, system prompt leakage, excessive agency and tool misuse.
Assess the security of AI agents in production: permissions, tool use, function calling, plugins and autonomous action capabilities — identify abuse paths and confused-deputy scenarios.
Design and implement guardrails and defensive controls based on offensive findings: input/output validation, content filtering, hardening of system prompts, context-based access controls and action limits for agents.
Apply and operationalize controls from the OWASP Top 10 for LLM Applications, OWASP Top 10 for Agentic Applications and MITRE ATLAS across the organization’s products and platforms.
Test and validate the robustness of existing guardrails in internal platforms and in products that expose AI to end users.
Develop and maintain tools, scripts and automations for LLM red teaming, focusing on continuous and repeatable testing.
Document findings with actionable remediation playbooks and support development teams in fixing and preventing AI vulnerabilities.
Create and maintain best-practice standards for secure development with generative AI: how to build secure agents, how to integrate LLMs without exposing data, how to perform code review of AI-generated code.
Collaborate with the SOC to define alerts and detection based on attack patterns against LLMs and agents.
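The guardrail and leakage-testing work described above can be sketched minimally: pattern-based screening of user input for common direct prompt-injection phrasings, plus a canary token planted in the system prompt so that its appearance in model output signals system-prompt leakage. All names, patterns, and the canary value below are illustrative assumptions, not Cashea's actual controls.

```python
import re

# Assumption: a placeholder canary token embedded in the system prompt.
# If it ever appears in model output, the system prompt has leaked.
CANARY = "EXAMPLE-CANARY-7f3a"

# Illustrative patterns for common direct prompt-injection phrasings;
# a production filter would be far broader and likely model-assisted.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
]

def screen_input(user_text: str) -> list:
    """Return the patterns matched in user input (empty list = no findings)."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(user_text)]

def screen_output(model_text: str) -> list:
    """Flag model output containing the canary (system-prompt leakage)."""
    findings = []
    if CANARY in model_text:
        findings.append("system_prompt_leak")
    return findings
```

Running such checks over a corpus of adversarial prompts on every deployment is one way to make the "continuous and repeatable testing" goal concrete.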
Requirements:
4+ years in Application Security, Penetration Testing or Security Engineering with demonstrable offensive testing experience.
Strong knowledge of OWASP: Top 10 Web, LLM Applications, API Security.
Hands-on experience attacking and/or defending systems with LLMs and generative AI: prompt injection, jailbreaking, data exfiltration, output manipulation, indirect prompt injection.
Understanding of generative AI architectures: how an LLM works, RAG, embeddings, function calling/tool use, context windows, system prompts, autonomous agents.
Experience in web application and API security.
Python for tooling development, testing scripts and automation.
Ability to document technical findings clearly and translate them into concrete actions for development teams.
Advanced English.
Why you'll love working at Cashea
At Cashea, we have a work culture based on trust and purpose. If you need a clue as to why we are a good choice, these are our core values:
We don't work on autopilot. Everything we do is intentional. We love to develop ideas with full awareness of the impact they can have on our users.
Your creativity and curiosity are our most important assets.
Your voice matters. We listen and make space for ideas and feedback. Everyone belongs, and what's important to you is important to us.
We value transparency. Clarity keeps us connected and grounded.
Last but not least, we focus on real impact.
If you want to work with us, fill out the application. We'd love to meet you!
- Department
- Engineering
- Locations
- Buenos Aires
- Remote status
- Fully remote