{"id":339,"date":"2024-07-22T16:52:26","date_gmt":"2024-07-22T20:52:26","guid":{"rendered":"https:\/\/www.econai.tech\/?page_id=339"},"modified":"2024-09-07T22:17:53","modified_gmt":"2024-09-08T02:17:53","slug":"adversarial-attacks","status":"publish","type":"page","link":"https:\/\/tomomitanaka.ai\/?page_id=339","title":{"rendered":"Gen AI: Adversarial Attacks"},"content":{"rendered":"\n<p>Adversarial attacks represent a significant challenge in the field of AI safety, particularly for generative models. These attacks involve manipulating input data to cause AI systems to produce unexpected or undesired outputs. <\/p>\n\n\n\n<p>In this post, we\u2019ll explore the concept of adversarial attacks, their implications for generative AI, and some real-world examples.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Understanding Adversarial Attacks<\/h3>\n\n\n\n<p>Adversarial attacks involve subtly altering the input data to deceive an AI model into making incorrect predictions or generating unintended outputs. These modifications are often imperceptible to humans but can cause AI models to make significant errors.<\/p>\n\n\n\n<p>There are four main types of adversarial attacks:<\/p>\n\n\n\n<p><strong>Evasion Attacks<\/strong>: These attacks aim to fool a model at test time, causing misclassification or unexpected generation. <\/p>\n\n\n\n<p><strong>Poisoning Attacks<\/strong>: These attacks target the training data, introducing malicious examples to influence the model&#8217;s behavior. <\/p>\n\n\n\n<p><strong>Model Extraction<\/strong>: Attackers attempt to steal model parameters or architecture through repeated queries. 
<\/p>\n\n\n\n<p><strong>Prompt Injection<\/strong>: In language models, carefully crafted prompts can manipulate the model&#8217;s output in unintended ways.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Real-World Examples<\/h3>\n\n\n\n<h4 class=\"wp-block-heading\">Prompt Injection Attacks<\/h4>\n\n\n\n<p>A <a href=\"https:\/\/arxiv.org\/abs\/2311.11538\">recent study<\/a> from Northwestern University uncovered serious vulnerabilities in custom GPT models, specifically related to prompt injection attacks. The researchers tested more than 200 of these customizable models, which are widely used for a range of tasks, and found that nearly all of them were vulnerable.<\/p>\n\n\n\n<p>The study revealed a 97.2% success rate in extracting system prompts\u2014essentially the instructions that guide the GPT\u2019s behavior\u2014and a 100% success rate in accessing user-uploaded files. <\/p>\n\n\n\n<p>These vulnerabilities allow attackers to steal sensitive information and intellectual property, raising significant security concerns.<\/p>\n\n\n\n<p>Despite existing defenses, the researchers were able to bypass security measures in nearly every case, particularly when the custom GPTs had code interpreters enabled. <\/p>\n\n\n\n<p>This case study highlights the urgent need for stronger security frameworks to protect custom GPTs from exploitation, especially as these models become integral to critical applications.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Backdoor Attacks in Text-to-Image AI Models<\/h4>\n\n\n\n<p>AI-powered tools like Stable Diffusion are transforming the way we create art, but they come with hidden risks. 
<\/p>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2305.04175\">Researchers have discovered that these models can be easily compromised through &#8220;backdoor attacks&#8221;<\/a>.<\/p>\n\n\n\n<p>In such attacks, subtle alterations are made during the training phase of the AI, allowing the model to generate unintended images when triggered by specific prompts. <\/p>\n\n\n\n<p>For example, a user might intend to create an image of a peaceful landscape, but the backdoor could cause the AI to insert inappropriate elements or drastically change the content.<\/p>\n\n\n\n<p>These backdoors can remain hidden and active even after further training, posing a significant security threat as AI tools become more widely used. This highlights the critical need for robust security measures in AI development to protect the integrity of these creative tools.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Adversarial Attacks with Python: Practical Examples<\/h3>\n\n\n\n<p>Adversarial attacks pose significant challenges in the domain of generative AI, where seemingly minor perturbations to input data can lead to vastly different outputs from the model. <\/p>\n\n\n\n<p>In this section, we&#8217;ll explore how adversarial attacks work, focusing on practical examples using Python. These examples will help illustrate the vulnerability of generative models and underscore the importance of robust defenses.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Example 1: Crafting Adversarial Images for a Generative Model<\/h4>\n\n\n\n<p>In this example, we\u2019ll use a Variational Autoencoder (VAE) to demonstrate how an adversarial attack can subtly alter the latent space of the model, leading to a completely different output.<\/p>\n\n\n\n<p>Instead of manipulating the input image directly, we will perturb the latent representation within the VAE. 
This approach highlights how adversarial attacks can affect the generative process in models designed to create new content, such as images.<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span style=\"display:flex;align-items:center;padding:10px 0px 10px 16px;margin-bottom:-2px;width:100%;text-align:left;background-color:#2b2b2b;color:#c7c7c7\">Python<\/span><span role=\"button\" tabindex=\"0\" data-code=\"import torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\nimport matplotlib.pyplot as plt\n\n# Define the VAE model\nclass VAE(nn.Module):\n    def __init__(self, latent_dim=20):\n        super(VAE, self).__init__()\n        self.fc1 = nn.Linear(784, 400)\n        self.fc21 = nn.Linear(400, latent_dim)\n        self.fc22 = nn.Linear(400, latent_dim)\n        self.fc3 = nn.Linear(latent_dim, 400)\n        self.fc4 = nn.Linear(400, 784)\n\n    def encode(self, x):\n        h1 = torch.relu(self.fc1(x))\n        return self.fc21(h1), self.fc22(h1)\n\n    def reparameterize(self, mu, logvar):\n        std = torch.exp(0.5*logvar)\n        eps = torch.randn_like(std)\n        return mu + eps*std\n\n    def decode(self, z):\n        h3 = torch.relu(self.fc3(z))\n        return torch.sigmoid(self.fc4(h3))\n\n    def forward(self, x):\n        mu, logvar = self.encode(x.view(-1, 784))\n        z = self.reparameterize(mu, logvar)\n        return self.decode(z), mu, logvar\n\n# Loss function\ndef loss_function(recon_x, x, mu, logvar):\n    BCE = nn.functional.binary_cross_entropy(recon_x, x.view(-1, 784), reduction='sum')\n    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())\n    return BCE + KLD\n\n# 
Load dataset\ntransform = transforms.ToTensor()  # keep pixels in [0, 1]; Normalize would break the BCE loss\ntrainset = datasets.MNIST(root='.\/data', train=True, download=True, transform=transform)\ntrainloader = DataLoader(trainset, batch_size=128, shuffle=True)\n\n# Initialize model, optimizer, and train\nlatent_dim = 20\nmodel = VAE(latent_dim=latent_dim)\noptimizer = optim.Adam(model.parameters(), lr=1e-3)\n\n# Train the VAE\nepochs = 5\nfor epoch in range(epochs):\n    model.train()\n    train_loss = 0\n    for batch_idx, (data, _) in enumerate(trainloader):\n        data = data.to(torch.float32)\n        optimizer.zero_grad()\n        recon_batch, mu, logvar = model(data)\n        loss = loss_function(recon_batch, data, mu, logvar)\n        loss.backward()\n        train_loss += loss.item()\n        optimizer.step()\n\n    print(f'Epoch {epoch+1}, Loss: {train_loss\/len(trainloader.dataset):.4f}')\n\n# Generate an image\nmodel.eval()\nwith torch.no_grad():\n    sample = torch.randn(1, latent_dim)\n    generated_image = model.decode(sample).view(28, 28).cpu().numpy()\n\n# Display the generated image\nplt.figure(figsize=(5, 5))\nplt.title(&quot;Generated Image&quot;)\nplt.imshow(generated_image, cmap='gray')\nplt.axis('off')\nplt.show()\n\n# Perturb the latent space to generate an adversarial image\nepsilon = 0.3\nadversarial_sample = sample + epsilon * torch.sign(torch.randn_like(sample))\n\nwith torch.no_grad():\n    adversarial_image = model.decode(adversarial_sample).view(28, 28).cpu().numpy()\n\n# Display the adversarial image\nplt.figure(figsize=(5, 5))\nplt.title(&quot;Adversarial Image&quot;)\nplt.imshow(adversarial_image, cmap='gray')\nplt.axis('off')\nplt.show()\n\" style=\"color:#D4D4D4;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" 
stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki dark-plus\" style=\"background-color: #1E1E1E\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #C586C0\">import<\/span><span style=\"color: #D4D4D4\"> torch<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">import<\/span><span style=\"color: #D4D4D4\"> torch.nn <\/span><span style=\"color: #C586C0\">as<\/span><span style=\"color: #D4D4D4\"> nn<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">import<\/span><span style=\"color: #D4D4D4\"> torch.optim <\/span><span style=\"color: #C586C0\">as<\/span><span style=\"color: #D4D4D4\"> optim<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">from<\/span><span style=\"color: #D4D4D4\"> torchvision <\/span><span style=\"color: #C586C0\">import<\/span><span style=\"color: #D4D4D4\"> datasets, transforms<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">from<\/span><span style=\"color: #D4D4D4\"> torch.utils.data <\/span><span style=\"color: #C586C0\">import<\/span><span style=\"color: #D4D4D4\"> DataLoader<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">import<\/span><span style=\"color: #D4D4D4\"> matplotlib.pyplot <\/span><span style=\"color: #C586C0\">as<\/span><span style=\"color: #D4D4D4\"> plt<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Define the VAE model<\/span><\/span>\n<span class=\"line\"><span style=\"color: #569CD6\">class<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: 
#4EC9B0\">VAE<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #4EC9B0\">nn<\/span><span style=\"color: #D4D4D4\">.<\/span><span style=\"color: #4EC9B0\">Module<\/span><span style=\"color: #D4D4D4\">):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">def<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">__init__<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">self<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">latent_dim<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #B5CEA8\">20<\/span><span style=\"color: #D4D4D4\">):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #4EC9B0\">super<\/span><span style=\"color: #D4D4D4\">(VAE, <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">).<\/span><span style=\"color: #DCDCAA\">__init__<\/span><span style=\"color: #D4D4D4\">()<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc1 = nn.Linear(<\/span><span style=\"color: #B5CEA8\">784<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">400<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc21 = nn.Linear(<\/span><span style=\"color: #B5CEA8\">400<\/span><span style=\"color: #D4D4D4\">, latent_dim)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc22 = nn.Linear(<\/span><span style=\"color: #B5CEA8\">400<\/span><span style=\"color: #D4D4D4\">, latent_dim)<\/span><\/span>\n<span class=\"line\"><span 
style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc3 = nn.Linear(latent_dim, <\/span><span style=\"color: #B5CEA8\">400<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc4 = nn.Linear(<\/span><span style=\"color: #B5CEA8\">400<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">784<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">def<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">encode<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">self<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">x<\/span><span style=\"color: #D4D4D4\">):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        h1 = torch.relu(<\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc1(x))<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc21(h1), <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc22(h1)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">def<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">reparameterize<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">self<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">mu<\/span><span style=\"color: #D4D4D4\">, <\/span><span 
style=\"color: #9CDCFE\">logvar<\/span><span style=\"color: #D4D4D4\">):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        std = torch.exp(<\/span><span style=\"color: #B5CEA8\">0.5<\/span><span style=\"color: #D4D4D4\">*logvar)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        eps = torch.randn_like(std)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> mu + eps*std<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">def<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">decode<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">self<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">z<\/span><span style=\"color: #D4D4D4\">):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        h3 = torch.relu(<\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc3(z))<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> torch.sigmoid(<\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.fc4(h3))<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">def<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">forward<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">self<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">x<\/span><span style=\"color: #D4D4D4\">):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        mu, logvar = <\/span><span style=\"color: 
#569CD6\">self<\/span><span style=\"color: #D4D4D4\">.encode(x.view(-<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">784<\/span><span style=\"color: #D4D4D4\">))<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        z = <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.reparameterize(mu, logvar)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">self<\/span><span style=\"color: #D4D4D4\">.decode(z), mu, logvar<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Loss function<\/span><\/span>\n<span class=\"line\"><span style=\"color: #569CD6\">def<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">loss_function<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">recon_x<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">x<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">mu<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">logvar<\/span><span style=\"color: #D4D4D4\">):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    BCE = nn.functional.binary_cross_entropy(recon_x, x.view(-<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">784<\/span><span style=\"color: #D4D4D4\">), <\/span><span style=\"color: #9CDCFE\">reduction<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #CE9178\">&#39;sum&#39;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    KLD = -<\/span><span style=\"color: #B5CEA8\">0.5<\/span><span style=\"color: #D4D4D4\"> * 
torch.sum(<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\"> + logvar - mu.pow(<\/span><span style=\"color: #B5CEA8\">2<\/span><span style=\"color: #D4D4D4\">) - logvar.exp())<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> BCE + KLD<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Load dataset<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">transform = transforms.ToTensor()  <\/span><span style=\"color: #6A9955\"># keep pixels in [0, 1]; Normalize would break the BCE loss<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">trainset = datasets.MNIST(<\/span><span style=\"color: #9CDCFE\">root<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #CE9178\">&#39;.\/data&#39;<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">train<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #569CD6\">True<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">download<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #569CD6\">True<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">transform<\/span><span style=\"color: #D4D4D4\">=transform)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">trainloader = DataLoader(trainset, <\/span><span style=\"color: #9CDCFE\">batch_size<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #B5CEA8\">128<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">shuffle<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #569CD6\">True<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span 
class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Initialize model, optimizer, and train<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">latent_dim = <\/span><span style=\"color: #B5CEA8\">20<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">model = VAE(<\/span><span style=\"color: #9CDCFE\">latent_dim<\/span><span style=\"color: #D4D4D4\">=latent_dim)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">optimizer = optim.Adam(model.parameters(), <\/span><span style=\"color: #9CDCFE\">lr<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #B5CEA8\">1e-3<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Train the VAE<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">epochs = <\/span><span style=\"color: #B5CEA8\">5<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> epoch <\/span><span style=\"color: #C586C0\">in<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">range<\/span><span style=\"color: #D4D4D4\">(epochs):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    model.train()<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    train_loss = <\/span><span style=\"color: #B5CEA8\">0<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> batch_idx, (data, _) <\/span><span style=\"color: #C586C0\">in<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">enumerate<\/span><span style=\"color: #D4D4D4\">(trainloader):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        data = data.to(torch.float32)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        
optimizer.zero_grad()<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        recon_batch, mu, logvar = model(data)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        loss = loss_function(recon_batch, data, mu, logvar)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        loss.backward()<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        train_loss += loss.item()<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        optimizer.step()<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #DCDCAA\">print<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">f<\/span><span style=\"color: #CE9178\">&#39;Epoch <\/span><span style=\"color: #569CD6\">{<\/span><span style=\"color: #D4D4D4\">epoch+<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #569CD6\">}<\/span><span style=\"color: #CE9178\">, Loss: <\/span><span style=\"color: #569CD6\">{<\/span><span style=\"color: #D4D4D4\">train_loss\/<\/span><span style=\"color: #DCDCAA\">len<\/span><span style=\"color: #D4D4D4\">(trainloader.dataset)<\/span><span style=\"color: #569CD6\">:.4f}<\/span><span style=\"color: #CE9178\">&#39;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Generate an image<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">model.eval()<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">with<\/span><span style=\"color: #D4D4D4\"> torch.no_grad():<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    sample = torch.randn(<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">, latent_dim)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    generated_image = 
model.decode(sample).view(<\/span><span style=\"color: #B5CEA8\">28<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">28<\/span><span style=\"color: #D4D4D4\">).cpu().numpy()<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Display the generated image<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.figure(<\/span><span style=\"color: #9CDCFE\">figsize<\/span><span style=\"color: #D4D4D4\">=(<\/span><span style=\"color: #B5CEA8\">5<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">5<\/span><span style=\"color: #D4D4D4\">))<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.title(<\/span><span style=\"color: #CE9178\">&quot;Generated Image&quot;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.imshow(generated_image, <\/span><span style=\"color: #9CDCFE\">cmap<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #CE9178\">&#39;gray&#39;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.axis(<\/span><span style=\"color: #CE9178\">&#39;off&#39;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.show()<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Perturb the latent space to generate an adversarial image<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">epsilon = <\/span><span style=\"color: #B5CEA8\">0.3<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">adversarial_sample = sample + epsilon * torch.sign(torch.randn_like(sample))<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">with<\/span><span style=\"color: #D4D4D4\"> torch.no_grad():<\/span><\/span>\n<span 
class=\"line\"><span style=\"color: #D4D4D4\">    adversarial_image = model.decode(adversarial_sample).view(<\/span><span style=\"color: #B5CEA8\">28<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">28<\/span><span style=\"color: #D4D4D4\">).cpu().numpy()<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Display the adversarial image<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.figure(<\/span><span style=\"color: #9CDCFE\">figsize<\/span><span style=\"color: #D4D4D4\">=(<\/span><span style=\"color: #B5CEA8\">5<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">5<\/span><span style=\"color: #D4D4D4\">))<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.title(<\/span><span style=\"color: #CE9178\">&quot;Adversarial Image&quot;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.imshow(adversarial_image, <\/span><span style=\"color: #9CDCFE\">cmap<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #CE9178\">&#39;gray&#39;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.axis(<\/span><span style=\"color: #CE9178\">&#39;off&#39;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">plt.show()<\/span><\/span>\n<span class=\"line\"><\/span><\/code><\/pre><\/div>\n\n\n\n<p><\/p>\n\n\n\n<p>This example highlights how adversarial attacks can affect the generative process in models designed to create new content, such as images. 
<\/p>\n\n\n\n<p>By manipulating the latent space, we can cause the model to generate images that are significantly different from what was intended, demonstrating the potential risks in applications like image generation or style transfer.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Example 2: Adversarial Text Generation for a Language Model<\/h4>\n\n\n\n<p>This example illustrates how to craft an adversarial prompt to manipulate the output of a GPT-2 model. It demonstrates the risks of prompt manipulation in generative AI, where appending a few loaded words to an otherwise ordinary prompt can steer the model toward a dramatically different output.<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span style=\"display:flex;align-items:center;padding:10px 0px 10px 16px;margin-bottom:-2px;width:100%;text-align:left;background-color:#2b2b2b;color:#c7c7c7\">Python<\/span><span role=\"button\" tabindex=\"0\" data-code=\"import torch\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\n# Load the pre-trained GPT-2 model and tokenizer\nmodel_name = &quot;gpt2&quot;\nmodel = GPT2LMHeadModel.from_pretrained(model_name)\ntokenizer = GPT2Tokenizer.from_pretrained(model_name)\n\n# Original prompt\noriginal_prompt = &quot;The AI revolution is&quot;\n\n# Generate text from the original prompt\ninput_ids = tokenizer.encode(original_prompt, return_tensors='pt')\noriginal_output = model.generate(input_ids, max_length=50, num_return_sequences=1)\n\nprint(&quot;Original Output:&quot;)\nprint(tokenizer.decode(original_output[0], skip_special_tokens=True))\n\n# Adversarial prompt to manipulate the output\nadversarial_prompt = &quot;The AI revolution is going to fail miserably because&quot;\n\n# Generate text from the adversarial prompt\ninput_ids = 
tokenizer.encode(adversarial_prompt, return_tensors='pt')\nadversarial_output = model.generate(input_ids, max_length=50, num_return_sequences=1)\n\nprint(&quot;\\nAdversarial Output:&quot;)\nprint(tokenizer.decode(adversarial_output[0], skip_special_tokens=True))\n\" style=\"color:#D4D4D4;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki dark-plus\" style=\"background-color: #1E1E1E\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #C586C0\">import<\/span><span style=\"color: #D4D4D4\"> torch<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">from<\/span><span style=\"color: #D4D4D4\"> transformers <\/span><span style=\"color: #C586C0\">import<\/span><span style=\"color: #D4D4D4\"> GPT2LMHeadModel, GPT2Tokenizer<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Load the pre-trained GPT-2 model and tokenizer<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">model_name = <\/span><span style=\"color: #CE9178\">&quot;gpt2&quot;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">model = GPT2LMHeadModel.from_pretrained(model_name)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">tokenizer = GPT2Tokenizer.from_pretrained(model_name)<\/span><\/span>\n<span 
class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Original prompt<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">original_prompt = <\/span><span style=\"color: #CE9178\">&quot;The AI revolution is&quot;<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Generate text from the original prompt<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">input_ids = tokenizer.encode(original_prompt, <\/span><span style=\"color: #9CDCFE\">return_tensors<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #CE9178\">&#39;pt&#39;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">original_output = model.generate(input_ids, <\/span><span style=\"color: #9CDCFE\">max_length<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #B5CEA8\">50<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">num_return_sequences<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #DCDCAA\">print<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #CE9178\">&quot;Original Output:&quot;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #DCDCAA\">print<\/span><span style=\"color: #D4D4D4\">(tokenizer.decode(original_output[<\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">], <\/span><span style=\"color: #9CDCFE\">skip_special_tokens<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #569CD6\">True<\/span><span style=\"color: #D4D4D4\">))<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Adversarial prompt to manipulate the output<\/span><\/span>\n<span 
class=\"line\"><span style=\"color: #D4D4D4\">adversarial_prompt = <\/span><span style=\"color: #CE9178\">&quot;The AI revolution is going to fail miserably because&quot;<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"># Generate text from the adversarial prompt<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">input_ids = tokenizer.encode(adversarial_prompt, <\/span><span style=\"color: #9CDCFE\">return_tensors<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #CE9178\">&#39;pt&#39;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">adversarial_output = model.generate(input_ids, <\/span><span style=\"color: #9CDCFE\">max_length<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #B5CEA8\">50<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">num_return_sequences<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #DCDCAA\">print<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #CE9178\">&quot;<\/span><span style=\"color: #D7BA7D\">\\n<\/span><span style=\"color: #CE9178\">Adversarial Output:&quot;<\/span><span style=\"color: #D4D4D4\">)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #DCDCAA\">print<\/span><span style=\"color: #D4D4D4\">(tokenizer.decode(adversarial_output[<\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">], <\/span><span style=\"color: #9CDCFE\">skip_special_tokens<\/span><span style=\"color: #D4D4D4\">=<\/span><span style=\"color: #569CD6\">True<\/span><span style=\"color: #D4D4D4\">))<\/span><\/span>\n<span class=\"line\"><\/span><\/code><\/pre><\/div>\n\n\n\n<p><\/p>\n\n\n\n<p>This example showcases how vulnerable language models can be to 
adversarial inputs. A small change in the prompt can dramatically alter the sentiment and content of the generated text, which could have serious implications in applications like chatbots, content generation, or automated writing assistance.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Implications and Mitigation Strategies<\/h3>\n\n\n\n<p>The vulnerabilities demonstrated in these examples have far-reaching implications for AI safety:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Misinformation Spread<\/strong>: Adversarial attacks on generative models could be used to create and disseminate false or misleading information at scale.<\/li>\n\n\n\n<li><strong>Privacy Breaches<\/strong>: As seen in the prompt injection attacks on custom GPTs, these vulnerabilities can lead to unauthorized access to sensitive information.<\/li>\n\n\n\n<li><strong>Copyright and Intellectual Property Issues<\/strong>: Manipulated generative models might produce content that infringes on copyrights or misuses intellectual property.<\/li>\n\n\n\n<li><strong>Trust in AI Systems<\/strong>: Frequent successful attacks could erode public trust in AI-generated content and AI systems in general.<\/li>\n<\/ol>\n\n\n\n<p>To address these challenges, several mitigation strategies can be employed:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Adversarial Training<\/strong>: Incorporating adversarial examples into the training process can make models more robust to these types of attacks.<\/li>\n\n\n\n<li><strong>Input Validation and Sanitization<\/strong>: Implementing strong input validation techniques can help prevent malicious prompts or data from reaching the model.<\/li>\n\n\n\n<li><strong>Ensemble Methods<\/strong>: Using multiple models with different architectures can provide more resilient outputs.<\/li>\n\n\n\n<li><strong>Continuous Monitoring<\/strong>: Implementing systems to detect unusual patterns in model inputs and outputs can help identify potential 
attacks.<\/li>\n\n\n\n<li><strong>Explainable AI<\/strong>: Developing more interpretable models can make it easier to detect and understand when a model is behaving unexpectedly due to adversarial inputs.<\/li>\n<\/ol>\n\n\n\n<p>As generative AI continues to advance and find new applications, addressing these security concerns will be crucial for ensuring the safe and responsible deployment of these powerful technologies.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Conclusion<\/h3>\n\n\n\n<p>Adversarial attacks on generative AI models represent a significant challenge in AI safety, with far-reaching implications for information integrity, privacy, and public trust. As real-world cases and practical demonstrations show, these attacks can subtly yet profoundly manipulate AI outputs, underscoring the vulnerabilities in current systems.<\/p>\n\n\n\n<p>As generative AI becomes increasingly integrated into our digital landscape, the development of robust defense mechanisms is crucial. This will require a combination of technical solutions, such as adversarial training and enhanced monitoring, alongside broader strategies to ensure the responsible deployment of AI technologies.<\/p>\n\n\n\n<p>By addressing these challenges head-on, we can work towards harnessing the full potential of generative AI while mitigating associated risks, paving the way for safer and more reliable AI systems in the future.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Adversarial attacks represent a significant challenge in the field of AI safety, particularly for generative models. These attacks involve manipulating input data to cause AI systems to produce unexpected or undesired outputs. In this post, we\u2019ll explore the concept of adversarial attacks, their implications for generative AI, and some real-world examples. 
Understanding Adversarial Attacks Adversarial<\/p>\n","protected":false},"author":1,"featured_media":6278,"parent":319,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-339","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages\/339","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=339"}],"version-history":[{"count":54,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages\/339\/revisions"}],"predecessor-version":[{"id":6314,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages\/339\/revisions\/6314"}],"up":[{"embeddable":true,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages\/319"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/media\/6278"}],"wp:attachment":[{"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=339"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}