{"id":6486,"date":"2025-07-06T19:18:49","date_gmt":"2025-07-06T23:18:49","guid":{"rendered":"https:\/\/www.econai.tech\/?page_id=6486"},"modified":"2026-05-06T20:32:40","modified_gmt":"2026-05-07T00:32:40","slug":"genai-vs-crypto-scammers-which-llm-wins","status":"publish","type":"page","link":"https:\/\/tomomitanaka.ai\/?page_id=6486","title":{"rendered":"GenAI vs Crypto Scammers: Which LLM Wins"},"content":{"rendered":"\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Growing Threat of Cryptocurrency Romance Scams<\/h3>\n\n\n\n<p>Cryptocurrency romance scams have evolved into a sophisticated, multi-billion dollar criminal enterprise. These operations combine emotional manipulation with financial fraud, often lasting weeks or months before the final money extraction.<\/p>\n\n\n\n<p>The perpetrators follow detailed scripts, systematically building trust through fake personas before introducing \u201cinvestment opportunities.\u201d<\/p>\n\n\n\n<p>To combat this threat, I embarked on a unique research project: infiltrating scammer networks to collect real conversation data and using LLMs to automatically classify their tactics.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Methodology<\/h3>\n\n\n\n<h4 class=\"wp-block-heading\">Data Collection: 175 Scammer Conversations<\/h4>\n\n\n\n<p>Over 3 years, I personally interacted with&nbsp;<strong>175 cryptocurrency scammers<\/strong>&nbsp;across multiple dating platforms, collecting&nbsp;<strong>15,913 messages<\/strong>&nbsp;of real scam conversations.<\/p>\n\n\n\n<p>This unprecedented dataset captures the full spectrum of scam tactics, from initial contact to final money extraction attempts.<\/p>\n\n\n\n<p>For this analysis, I focused on the first 50 scammer conversations, totaling 3,946 messages (3,697 Japanese, 249 English, excluding emoji-only messages). 
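<\/p>\n\n\n\n<p><em>To make the classification setup concrete, here is a minimal sketch of the leakage-free prompting logic (illustrative only &#8211; the function and variable names are mine, not the exact code from the repository): each prompt contains only the fixed category list and the raw message text.<\/em><\/p>

```python
# Sketch of the leakage-free classification setup. Names are
# illustrative, not the actual script from the repository.
CATEGORIES = [
    'Emotional Bonding', 'Financial Baiting', 'Fake Persona Building',
    'Manipulative Care', 'Excuse Avoidance', 'Personal Profiling',
    'Financial Inquiry', 'Financial Education', 'Investment Pitch',
    'Urgency or Pressure', 'Money Extraction', 'none',
]

def build_prompt(message_text):
    # The prompt contains ONLY the message text and the fixed label
    # set. Manual labels and other LLM outputs are never included,
    # which is what prevents data leakage during classification.
    labels = ', '.join(CATEGORIES)
    return (
        'Classify the following message into exactly one of these '
        'scam strategy categories: ' + labels + '. '
        'Reply with the category name only. Message: ' + message_text
    )

def normalize_label(raw_reply):
    # Model replies vary in punctuation and casing; map them back
    # onto the fixed label set, defaulting to legitimate chat.
    cleaned = raw_reply.strip().strip('.').lower()
    for category in CATEGORIES:
        if category.lower() == cleaned:
            return category
    return 'none'
```

<p><em>Each model call sees one message at a time with a prompt like this, and the reply is normalized back onto the fixed label set before scoring.<\/em><\/p>\n\n\n\n<p>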
Each message was manually reviewed and assigned one of 11 distinct scam strategy categories.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">The 11 Scam Categories<\/h4>\n\n\n\n<p>Based on analysis of scammer training materials and recurring behavioral patterns, I identified 11 scam strategy categories:<\/p>\n\n\n\n<h5 class=\"wp-block-heading\"><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\"><strong>Grooming Phase<\/strong><\/mark><\/h5>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\"><strong>1. Emotional Bonding<\/strong><\/mark>&nbsp;\u2013 Building romantic connections, isolation tactics<\/p>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\"><strong>2. Financial Baiting<\/strong><\/mark>&nbsp;\u2013 Displaying wealth to generate interest<\/p>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\"><strong>3. Fake Persona Building<\/strong><\/mark>&nbsp;\u2013 Creating believable background stories<\/p>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\"><strong>4. Manipulative Care<\/strong><\/mark> &#8211; Fake concern and compliments<\/p>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\"><strong>5. 
Excuse Avoidance<\/strong><\/mark> &#8211; Avoiding video calls and meetings<\/p>\n\n\n\n<h5 class=\"wp-block-heading\"><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-cyan-blue-color\"><strong>Profiling Phase<\/strong><\/mark><\/h5>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-cyan-blue-color\"><strong>6. Personal Profiling<\/strong><\/mark> &#8211; Gathering lifestyle and family information<\/p>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-cyan-blue-color\"><strong>7. Financial Inquiry<\/strong><\/mark> &#8211; Probing income, assets, and financial capacity<\/p>\n\n\n\n<h5 class=\"wp-block-heading\"><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-purple-color\"><strong>Persuasion Phase<\/strong><\/mark><\/h5>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-purple-color\"><strong>8. Financial Education<\/strong><\/mark> &#8211; Teaching crypto and investment &#8220;lessons&#8221;<\/p>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-purple-color\"><strong>9. Investment Pitch<\/strong><\/mark> &#8211; Promoting fake investment schemes<\/p>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-purple-color\"><strong>10. Urgency or Pressure<\/strong><\/mark> &#8211; Creating time-sensitive manipulation<\/p>\n\n\n\n<h5 class=\"wp-block-heading\"><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\"><strong>Exploitation Phase<\/strong><\/mark><\/h5>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\"><strong>11. 
Money Extraction<\/strong><\/mark> &#8211; Direct requests for funds<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Model Testing Setup<\/h4>\n\n\n\n<h5 class=\"wp-block-heading\">Automated LLM Classification with Secure Protocols<\/h5>\n\n\n\n<p>To evaluate the LLMs, I developed a consistent and secure script that allowed each of the four LLMs (GPT-3.5, GPT-4, Claude 3 Haiku, and Gemini 1.5 Pro) to categorize the scam messages independently. <\/p>\n\n\n\n<p>A critical aspect of this step was ensuring <strong>no data leakage<\/strong>. This means that when an LLM was classifying a message, it <em>only<\/em> had access to the message text itself. It was explicitly prevented from seeing any manual labels or the categorization results from other LLMs.<\/p>\n\n\n\n<p>You can find&nbsp;<a href=\"https:\/\/github.com\/tomomitanaka00\/LLM_Data_Analysis\">the complete Python script<\/a> in my GitHub repository.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Results<\/h3>\n\n\n\n<h4 class=\"wp-block-heading\">Model Accuracy<\/h4>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td>Model<\/td><td class=\"has-text-align-center\" data-align=\"center\">Japanese \ud83c\uddef\ud83c\uddf5 <br>(3,697 messages)<\/td><td class=\"has-text-align-center\" data-align=\"center\">English \ud83c\uddec\ud83c\udde7<br>(249 messages)<\/td><\/tr><tr><td>GPT-4<\/td><td class=\"has-text-align-center\" data-align=\"center\">1\ufe0f\u20e3 <span class=\"marker2\"><strong>70.8%<\/strong><\/span><\/td><td class=\"has-text-align-center\" data-align=\"center\">2\ufe0f\u20e3 76.7%<\/td><\/tr><tr><td>GPT-3.5<\/td><td class=\"has-text-align-center\" data-align=\"center\">2\ufe0f\u20e3 66.9%<\/td><td class=\"has-text-align-center\" data-align=\"center\">4\ufe0f\u20e3 70.3%<\/td><\/tr><tr><td>Claude 3 Haiku<\/td><td class=\"has-text-align-center\" data-align=\"center\">3\ufe0f\u20e3 66.2%<\/td><td class=\"has-text-align-center\" data-align=\"center\">1\ufe0f\u20e3<strong> 
<\/strong><span class=\"marker2\"><strong>78.3%<\/strong><\/span><\/td><\/tr><tr><td>Gemini 1.5 Pro<\/td><td class=\"has-text-align-center\" data-align=\"center\">4\ufe0f\u20e3 62.0%<\/td><td class=\"has-text-align-center\" data-align=\"center\">3\ufe0f\u20e3 72.3%<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 class=\"wp-block-heading\">Accuracy by Category (Japanese)<\/h4>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td>Category<\/td><td class=\"has-text-align-center\" data-align=\"center\">Messages<\/td><td class=\"has-text-align-center\" data-align=\"center\">GPT-4<\/td><td class=\"has-text-align-center\" data-align=\"center\">GPT-3.5<\/td><td class=\"has-text-align-center\" data-align=\"center\">Claude<\/td><td class=\"has-text-align-center\" data-align=\"center\">Gemini<\/td><\/tr><tr><td>none<\/td><td class=\"has-text-align-center\" data-align=\"center\">2,504<\/td><td class=\"has-text-align-center\" data-align=\"center\"><span class=\"marker2\"><strong>75.8%<\/strong><\/span><\/td><td class=\"has-text-align-center\" data-align=\"center\">75.0%<\/td><td class=\"has-text-align-center\" data-align=\"center\">75.1%<\/td><td class=\"has-text-align-center\" data-align=\"center\">62.6%<\/td><\/tr><tr><td>Personal Profiling<\/td><td class=\"has-text-align-center\" data-align=\"center\">417<\/td><td class=\"has-text-align-center\" data-align=\"center\">69.5%<\/td><td class=\"has-text-align-center\" data-align=\"center\">56.1%<\/td><td class=\"has-text-align-center\" data-align=\"center\">53.5%<\/td><td class=\"has-text-align-center\" data-align=\"center\"><span class=\"marker2\"><strong>76.0%<\/strong><\/span><\/td><\/tr><tr><td>Emotional Bonding<\/td><td class=\"has-text-align-center\" data-align=\"center\">273<\/td><td class=\"has-text-align-center\" data-align=\"center\"><span class=\"marker2\"><strong>75.5%<\/strong><\/span><\/td><td class=\"has-text-align-center\" data-align=\"center\">72.5%<\/td><td 
class=\"has-text-align-center\" data-align=\"center\">63.7%<\/td><td class=\"has-text-align-center\" data-align=\"center\">51.6%<\/td><\/tr><tr><td>Fake Persona Building<\/td><td class=\"has-text-align-center\" data-align=\"center\">194<\/td><td class=\"has-text-align-center\" data-align=\"center\">57.2%<\/td><td class=\"has-text-align-center\" data-align=\"center\">18.6%<\/td><td class=\"has-text-align-center\" data-align=\"center\">23.7%<\/td><td class=\"has-text-align-center\" data-align=\"center\"><span class=\"marker2\"><strong>68.0%<\/strong><\/span><\/td><\/tr><tr><td>Investment Pitch<\/td><td class=\"has-text-align-center\" data-align=\"center\">96<\/td><td class=\"has-text-align-center\" data-align=\"center\">5.2%<\/td><td class=\"has-text-align-center\" data-align=\"center\">13.5%<\/td><td class=\"has-text-align-center\" data-align=\"center\"><span class=\"marker2\"><strong>49.0%<\/strong><\/span><\/td><td class=\"has-text-align-center\" data-align=\"center\">11.5%<\/td><\/tr><tr><td>Manipulative Care<\/td><td class=\"has-text-align-center\" data-align=\"center\">59<\/td><td class=\"has-text-align-center\" data-align=\"center\">78.0%<\/td><td class=\"has-text-align-center\" data-align=\"center\">49.2%<\/td><td class=\"has-text-align-center\" data-align=\"center\">45.8%<\/td><td class=\"has-text-align-center\" data-align=\"center\"><span class=\"marker2\"><strong>98.3%<\/strong><\/span><\/td><\/tr><tr><td>Financial Education<\/td><td class=\"has-text-align-center\" data-align=\"center\">38<\/td><td class=\"has-text-align-center\" data-align=\"center\"><span class=\"marker2\"><strong>71.1%<\/strong><\/span><\/td><td class=\"has-text-align-center\" data-align=\"center\">63.2%<\/td><td class=\"has-text-align-center\" data-align=\"center\">44.7%<\/td><td class=\"has-text-align-center\" data-align=\"center\">60.5%<\/td><\/tr><tr><td>Financial Baiting<\/td><td class=\"has-text-align-center\" data-align=\"center\">35<\/td><td class=\"has-text-align-center\" 
data-align=\"center\">28.6%<\/td><td class=\"has-text-align-center\" data-align=\"center\"><span class=\"marker2\"><strong>71.4%<\/strong><\/span><\/td><td class=\"has-text-align-center\" data-align=\"center\">31.4%<\/td><td class=\"has-text-align-center\" data-align=\"center\">31.4%<\/td><\/tr><tr><td>Money Extraction<\/td><td class=\"has-text-align-center\" data-align=\"center\">35<\/td><td class=\"has-text-align-center\" data-align=\"center\">2.9%<\/td><td class=\"has-text-align-center\" data-align=\"center\">2.9%<\/td><td class=\"has-text-align-center\" data-align=\"center\">5.7%<\/td><td class=\"has-text-align-center\" data-align=\"center\">2.9%<\/td><\/tr><tr><td>Financial Inquiry<\/td><td class=\"has-text-align-center\" data-align=\"center\">32<\/td><td class=\"has-text-align-center\" data-align=\"center\">62.5%<\/td><td class=\"has-text-align-center\" data-align=\"center\"><span class=\"marker2\"><strong>81.2%<\/strong><\/span><\/td><td class=\"has-text-align-center\" data-align=\"center\">46.9%<\/td><td class=\"has-text-align-center\" data-align=\"center\">71.9%<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 class=\"wp-block-heading\">Key Insights and Discoveries<\/h4>\n\n\n\n<h5 class=\"wp-block-heading\">1. GPT-4 Wins Overall, But Category Performance Varies Dramatically<\/h5>\n\n\n\n<p>While GPT-4 achieved the highest overall accuracy on Japanese messages, no single model dominated all categories. 
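<\/p>\n\n\n\n<p><em>The per-category numbers in the tables above reduce to a simple grouping of manual labels against model predictions. A sketch (my own helper, assuming parallel lists of gold labels and predictions):<\/em><\/p>

```python
from collections import defaultdict

def accuracy_by_category(gold_labels, predictions):
    # Per-category accuracy: of the messages manually assigned to a
    # category, what fraction did the model label identically?
    totals = defaultdict(int)
    hits = defaultdict(int)
    for gold, pred in zip(gold_labels, predictions):
        totals[gold] += 1
        if gold == pred:
            hits[gold] += 1
    return {cat: hits[cat] / totals[cat] for cat in totals}

# Toy illustration (not the real data):
gold = ['none', 'none', 'Investment Pitch', 'Money Extraction']
pred = ['none', 'Emotional Bonding', 'Investment Pitch', 'none']
result = accuracy_by_category(gold, pred)
```

<p>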
Each model showed distinct strengths.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>GPT-4<\/strong>: Best at <strong>emotional bonding<\/strong><\/li>\n\n\n\n<li><strong>GPT-3.5<\/strong>: Excelled at <strong>financial baiting and financial inquiry<\/strong> detection<\/li>\n\n\n\n<li><strong>Claude<\/strong>: Best performer on <strong>investment pitches<\/strong><\/li>\n\n\n\n<li><strong>Gemini<\/strong>: Surprisingly strong at <strong>personal profiling<\/strong>, <strong>fake persona building<\/strong>, and <strong>manipulative care<\/strong><\/li>\n<\/ul>\n\n\n\n<h5 class=\"wp-block-heading\">2. All Models Struggle with Money Extraction<\/h5>\n\n\n\n<p><strong>All models performed poorly at detecting direct money extraction attempts<\/strong> &#8211; the final and most critical scam phase. The best performer, Claude, reached only 5.7% accuracy.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">3. Language Matters: English vs Japanese Performance<\/h5>\n\n\n\n<p>Claude showed a remarkable 12.1-point accuracy gap between English and Japanese messages (78.3% vs 66.2%), while the other models showed smaller gaps. This suggests significant language-specific bias in model training and performance.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">4. The &#8220;None&#8221; Category Challenge<\/h5>\n\n\n\n<p>With 67.7% of messages manually labeled as legitimate conversation, accurately distinguishing between scam tactics and normal chat proved crucial. <\/p>\n\n\n\n<p>GPT-4 and GPT-3.5 performed best here, while Gemini struggled, over-classifying innocent messages as scam attempts.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">5. 
Cultural Context Blindness: The Japanese Politeness Problem<\/h5>\n\n\n\n<p>LLMs demonstrated a critical weakness in understanding Japanese cultural communication norms, with <strong>Gemini suffering most severely from this issue<\/strong>.<\/p>\n\n\n\n<p>In Japanese culture, expressions of care, concern, and attentiveness are standard politeness markers, not necessarily signs of manipulation.<\/p>\n\n\n\n<p>Gemini&#8217;s 62.6% accuracy on legitimate conversation (&#8220;none&#8221; category) compared to ~75% for the other models reveals systematic over-classification of polite Japanese phrases. <\/p>\n\n\n\n<p>The model frequently misclassified cultural politeness like &#8220;\u304a\u75b2\u308c\u69d8\u3067\u3057\u305f&#8221; (a routine &#8220;thank you for your hard work&#8221; pleasantry) as &#8220;Manipulative Care.&#8221;<\/p>\n\n\n\n<p>This cultural blindness explains why Gemini paradoxically achieved the highest accuracy on actual &#8220;Manipulative Care&#8221; detection (98.3%) &#8211; it was flagging both genuine manipulation <em>and<\/em> normal Japanese politeness as the same category.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Conclusion<\/h3>\n\n\n\n<p>Large language models (LLMs) have emerged as powerful tools in the fight against digital deception\u2014but this study reveals both their potential and their current limitations.<\/p>\n\n\n\n<p>Through the classification of nearly 4,000 real scam messages from 50 cryptocurrency romance scams, I found that <strong>no single model consistently outperformed the others across all scam tactics<\/strong>. <\/p>\n\n\n\n<p>While <strong>GPT-4 achieved the highest overall accuracy<\/strong>, other models like <strong>Claude<\/strong> and <strong>Gemini<\/strong> demonstrated unique strengths in niche categories like investment pitches or fake persona detection.<\/p>\n\n\n\n<p>However, the most sobering insight is this: <strong>every model performed poorly on detecting direct money extraction attempts<\/strong>, the final and most dangerous step in the scam process. 
<\/p>\n\n\n\n<p>Even the best model (Claude) only achieved 5.7% accuracy in this critical category\u2014underscoring the difficulty of identifying explicit fraud when cloaked in emotionally manipulative language.<\/p>\n\n\n\n<p>I also uncovered significant <strong>language-specific performance gaps<\/strong>, with all models performing better in English than Japanese. This points to the need for more diverse training data and language-aware model tuning.<\/p>\n\n\n\n<p>Ultimately, this benchmark serves as both a <strong>progress report<\/strong> and a <strong>call to action<\/strong>. GenAI tools are promising allies in scam detection, but <strong>relying on them blindly is not enough<\/strong>. Human oversight, diverse training data, and continued evaluation are essential for building safer systems.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Real-World Implications<\/h4>\n\n\n\n<p>The findings have immediate practical applications:<\/p>\n\n\n\n<p><strong>For Platforms:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Deploy hybrid detection<\/strong>: No single model works\u2014combine GPT-4&#8217;s emotional detection with Claude&#8217;s investment pitch recognition.<\/li>\n\n\n\n<li><strong>Address cultural misinterpretation<\/strong>: Current models often confuse standard Japanese politeness with manipulation, leading to false positives.<\/li>\n\n\n\n<li><strong>Rethink money extraction detection<\/strong>: 5.7% accuracy means current approaches are fundamentally broken.<\/li>\n<\/ul>\n\n\n\n<p><strong>For AI Researchers:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Train on real adversarial data<\/strong>: Synthetic scam data clearly isn&#8217;t cutting it.<\/li>\n\n\n\n<li><strong>Build conversation-aware models<\/strong>: Message-level classification misses scammer progression patterns.<\/li>\n\n\n\n<li><strong>Address cross-cultural training gaps<\/strong>: English-centric training creates dangerous blind 
spots.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">What&#8217;s Next: Fine-Tuning for Superior Performance<\/h3>\n\n\n\n<p>The performance of general-purpose LLMs in this study is just the starting point. The real opportunity lies in <strong>transforming these models through fine-tuning<\/strong>\u2014training them further on a domain-specific dataset to boost their precision, context awareness, and reliability.<\/p>\n\n\n\n<p>With 15,913 manually reviewed messages from 175 real cryptocurrency scammers, this dataset offers an unparalleled foundation. Unlike synthetic or simulated text, these conversations capture authentic scammer psychology, cultural nuances, and evolving fraud tactics\u2014elements no model has been exposed to at scale.<\/p>\n\n\n\n<p>In my upcoming blog post, I\u2019ll explore how to fine-tune open-source LLMs using this dataset. <\/p>\n\n\n\n<p>Fine-tuning offers three key advantages:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Boost Accuracy in Critical Categories<\/strong>: Address persistent weaknesses\u2014especially in detecting \u201cMoney Extraction\u201d\u2014by training models on more representative, labeled examples.<\/li>\n\n\n\n<li><strong>Reduce Cultural Misclassification<\/strong>: Use culturally grounded Japanese data to help models distinguish between standard politeness and manipulative grooming, minimizing false positives.<\/li>\n\n\n\n<li><strong>Build Real-World-Ready Models<\/strong>: Adapt models for practical deployment, moving from generic language understanding to nuanced threat recognition in scam detection systems.<\/li>\n<\/ul>\n\n\n\n<p>This next phase will detail dataset preparation, architecture selection, and performance evaluation. 
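<\/p>\n\n\n\n<p><em>For concreteness, one plausible shape for such a dataset is an instruction-tuning JSONL file, one record per manually labeled message. This is a hypothetical sketch, not the finalized format:<\/em><\/p>

```python
import json

def to_training_record(message_text, manual_label):
    # One supervised fine-tuning example per labeled message. The
    # field names follow a common instruction-tuning convention and
    # are placeholders, not the finalized dataset schema.
    return json.dumps({
        'instruction': 'Classify this message into one of the 11 scam strategy categories, or none.',
        'input': message_text,
        'output': manual_label,
    }, ensure_ascii=False)

# Hypothetical example record (not real scam data):
record = to_training_record('Have you ever tried crypto investing?', 'Investment Pitch')
```

<p>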
The goal is to show how targeted fine-tuning can turn general LLMs into expert-level fraud detectors\u2014helping us build safer, smarter tools in the GenAI era.<\/p>\n\n\n\n<p>By moving beyond off-the-shelf models and toward culturally-aware, tactic-specific fine-tuning, we can take meaningful steps to reduce harm from AI-assisted scams worldwide.<\/p>\n\n\n\n<p><strong>Stay tuned<\/strong> for the next post, where I\u2019ll share early results\u2014and whether fine-tuned open-source LLMs can finally outsmart the scammers.<\/p>\n\n\n\n<div class=\"wp-block-jin-gb-block-icon-box jin-icon-caution jin-iconbox\"><div class=\"jin-iconbox-icons\"><i class=\"jic jin-ifont-caution jin-icons\"><\/i><\/div><div class=\"jin-iconbox-main\">\n<p><em>This research was conducted ethically with proper security measures. No real victims were involved, and all scammer interactions were conducted safely with appropriate protections.<\/em><\/p>\n\n\n\n<p><strong>Data Note<\/strong>: The complete dataset of 15,913 messages from 175 scammers represents one of the largest collections of real scam conversations available for research. Anonymized subsets will be made available once I complete manual review of all messages. <\/p>\n\n\n\n<p><strong>Technical Note<\/strong>: All model testing was conducted with identical prompts, security measures to prevent data leakage, and consistent evaluation criteria. Full methodology and code are available <a href=\"https:\/\/github.com\/tomomitanaka00\/LLM_Data_Analysis\">here<\/a>. <\/p>\n<\/div><\/div>\n","protected":false},"excerpt":{"rendered":"<p>The Growing Threat of Cryptocurrency Romance Scams Cryptocurrency romance scams have evolved into a sophisticated, multi-billion dollar criminal enterprise. These operations combine emotional manipulation with financial fraud, often lasting weeks or months before the final money extraction. 
The perpetrators follow detailed scripts, systematically building trust through fake personas before introducing \u201cinvestment opportunities.\u201d To combat this<\/p>\n","protected":false},"author":1,"featured_media":6883,"parent":6413,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-6486","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages\/6486","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6486"}],"version-history":[{"count":236,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages\/6486\/revisions"}],"predecessor-version":[{"id":6884,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages\/6486\/revisions\/6884"}],"up":[{"embeddable":true,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/pages\/6413"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=\/wp\/v2\/media\/6883"}],"wp:attachment":[{"href":"https:\/\/tomomitanaka.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6486"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}