{"id":86,"date":"2023-02-02T15:28:28","date_gmt":"2023-02-02T20:28:28","guid":{"rendered":"https:\/\/carleton.ca\/xlab\/?p=86"},"modified":"2023-05-24T14:43:26","modified_gmt":"2023-05-24T18:43:26","slug":"knowledge-graphs-and-gpt-3","status":"publish","type":"post","link":"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/","title":{"rendered":"Knowledge Graphs and GPT-3"},"content":{"rendered":"<p>One of the things we&#8217;re working on in our group is extracting structured knowledge from lots of unstructured text. In an article coming out soon in <a href=\"https:\/\/www.cambridge.org\/core\/journals\/advances-in-archaeological-practice\/latest-issue?sort=canonical.position%3Aasc\">Advances in Archaeological Practice<\/a>, we demonstrate some of the things that we can accomplish when we have statements about a domain (in this case, the antiquities trade) arranged as subject, verb, object triples and then knit into a graph. The graph can be turned into a series of numerical vectors (think of each vector as a line shooting out in a different direction in a multidimensional space; call this a knowledge graph embedding model), and then we can make predictions about things we <em>don&#8217;t<\/em> know based on measuring the distances between those vectors. 
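The intuition can be sketched with a toy example. This is a hand-rolled illustration of a TransE-style scoring function, not the AmpliGraph API we actually used; the entities, relation, and 2-d vectors below are invented for demonstration.

```python
# Toy sketch of the idea behind knowledge graph embeddings: every entity
# and relation gets a vector, and a (subject, relation, object) triple is
# scored by how close subject + relation lands to the object.
# Higher (less negative) score = more plausible triple.
import numpy as np

def transe_score(h, r, t):
    """Negative Euclidean distance between (h + r) and t."""
    return -np.linalg.norm(h + r - t)

# Hypothetical 2-d embeddings, chosen by hand so the known fact scores well.
entities = {
    "Giacomo Medici": np.array([0.0, 0.0]),
    "Robert Hecht":   np.array([1.0, 1.0]),
    "Italy":          np.array([3.0, -2.0]),
}
relations = {
    "BECAME_SUPPLIER_TO": np.array([1.0, 1.0]),  # Medici + r lands on Hecht
}

known = transe_score(entities["Giacomo Medici"],
                     relations["BECAME_SUPPLIER_TO"],
                     entities["Robert Hecht"])
implausible = transe_score(entities["Giacomo Medici"],
                           relations["BECAME_SUPPLIER_TO"],
                           entities["Italy"])
print(known > implausible)  # True: the model "predicts" the known link
```

In a trained model the vectors are learned from the triples rather than set by hand, but the prediction step is the same: rank candidate triples by score.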
(We used a package called <a href=\"https:\/\/github.com\/Accenture\/AmpliGraph\">Ampligraph<\/a>)<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-107 aligncenter\" src=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl1.png\" alt=\"\" width=\"457\" height=\"266\" srcset=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl1.png 908w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl1-240x140.png 240w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl1-400x233.png 400w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl1-160x93.png 160w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl1-768x447.png 768w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl1-360x210.png 360w\" sizes=\"(max-width: 457px) 100vw, 457px\" \/><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-108 aligncenter\" src=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl2.png\" alt=\"\" width=\"642\" height=\"230\" srcset=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl2.png 1639w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl2-240x86.png 240w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl2-400x143.png 400w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl2-160x57.png 160w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl2-768x275.png 768w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl2-1536x550.png 1536w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampl2-360x129.png 360w\" sizes=\"(max-width: 642px) 100vw, 642px\" \/><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-109 aligncenter\" src=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampli3.png\" alt=\"\" width=\"571\" height=\"202\" srcset=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampli3.png 1318w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampli3-240x85.png 240w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampli3-400x141.png 400w, 
https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampli3-160x57.png 160w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampli3-768x272.png 768w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/ampli3-360x127.png 360w\" sizes=\"(max-width: 571px) 100vw, 571px\" \/><\/p>\n<p>(illustrations of that process from Ampligraph)<\/p>\n<p>But getting the knowledge <em>out<\/em> in the first place, from the text, so we can do something with it&#8230; that&#8217;s the hard part. We annotated 129 articles by hand (using a tool that used XML to indicate the relationship between the annotations and the text) and then Chantal wrote a script to turn those annotations into a CSV file. That took us several months. Surely there&#8217;s got to be a faster way?<\/p>\n<p>We started playing with <a href=\"https:\/\/openai.com\/api\/\">GPT-3<\/a>, the large language model that some fear will ruin education, others fear will put people out of jobs, and still others figure will create jobs that don&#8217;t exist yet. We just wanted to know if it could identify subjects, verbs, and objects from an article, selecting the most important bits. It can, and yes, it does feel spooky when it works. But it can make mistakes.<\/p>\n<p>We <a href=\"https:\/\/www.seotraininglondon.org\/gpt3-google-sheets-free-tutorial\/\">adapted this tutorial<\/a> so that we could pass one cell of data from a spreadsheet through the OpenAI API. Each cell contains the full text of an article we want to analyze. The script prepends a prompt to that text, and the prompt steers the generative power of the model in a particular direction. The model has seen untold numbers of internet texts that summarize movies, books, whatever, so if you prompted it with &#8216;Summarize this article&#8217;, you&#8217;d duly get a three- or four-line summary that captures the main point. 
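The prepend-and-send step looks roughly like this. Our actual script is a Google Sheets adaptation of the tutorial linked above; this standalone Python version is only illustrative, and the model name and parameter values are assumptions, not what the tutorial prescribes.

```python
# Sketch: prepend a steering prompt to an article's full text and build a
# request for the OpenAI completions endpoint. The request is assembled
# but not sent here (sending requires a real OPENAI_API_KEY).
import json
import os
import urllib.request

PROMPT = "Summarize this article:\n\n"

def build_request(article_text, model="text-davinci-003", max_tokens=256):
    """Assemble the completions payload; the prompt steers the generation."""
    payload = {
        "model": model,                    # assumed engine name
        "prompt": PROMPT + article_text,   # prompt prepended to the cell text
        "max_tokens": max_tokens,
        "temperature": 0.0,
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
        },
    )

req = build_request("The Sarpedon krater was returned to Italy in 2008.")
# urllib.request.urlopen(req) would send it; here we only build it.
print(req.full_url)
```

Swapping the prompt string is all it takes to change what comes back, which is what the experiments below do.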
What&#8217;s more, if you told it, &#8216;Summarize the text, but do it in the style of a 1940s gangster film where two gangsters are talking&#8217; and then gave it the <a href=\"https:\/\/traffickingculture.org\/encyclopedia\/case-studies\/euphronios-sarpedon-krater\/\">text of this article about the Sarpedon Krater,<\/a> you&#8217;d get:<\/p>\n<blockquote><p>Gangster 1: &#8220;Did ya hear &#8217;bout that krater they got at the Metropolitan Museum? Word is it was made in Athens like 500 years ago. They paid a million bucks for it!&#8221;<\/p>\n<p>Gangster 2: &#8220;Yeah, I heard. Rumor has it they got it illegally in Italy, and then had to give it back. But it was all worth it &#8211; it made history!&#8221;<\/p><\/blockquote>\n<p>While fun, gangster-speak isn&#8217;t what we&#8217;re after. Instead, we wondered, could we get GPT-3 to summarize these articles about the antiquities trade as a knowledge graph by generating the necessary code directly? 
Turns out, yes, yes you can.<\/p>\n<p>We started with this prompt:<\/p>\n<pre>Write correctly formatted Cypher create and match queries \r\nto represent the knowledge graph of this text.\r\n<\/pre>\n<p>and it duly wrote,<\/p>\n<pre>CREATE (medici:Person {name:\"Giacomo Medici\"}), \r\n(hecht:Person {name:\"Robert Hecht\"}), \r\n(krater:Artifact {name:\"Euphronios (Sarpedon) krater\"}), \r\n(romana:Gallery {name:\"Antiquaria Romana\"}), \r\n(cleveland:Museum {name:\"Cleveland Museum of Art\"}), \r\n(cooney:Person {name:\"John Cooney\"}), \r\n(italy:Country {name:\"Italy\"}), \r\n(medici)-[:STARTED_DEALING_IN_ANTIQUITIES_IN]-&gt;(italy), \r\n(medici)-[:BECAME_SUPPLIER_TO]-&gt;(hecht), \r\n(medici)-[:OPENED]-&gt;(romana), \r\n(medici)-[:BOUGHT]-&gt;(krater), \r\n<\/pre>\n<p>&lt;snip&gt;<\/p>\n<div id=\"attachment_87\" class=\"wp-caption alignleft\" style=\"width: 240px\"><img decoding=\"async\" loading=\"lazy\" class=\"size-medium wp-image-87\" src=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/gpt3-to-cypher-240x97.png\" alt=\"\" width=\"240\" height=\"97\" srcset=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/gpt3-to-cypher-240x97.png 240w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/gpt3-to-cypher-400x161.png 400w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/gpt3-to-cypher-160x64.png 160w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/gpt3-to-cypher-768x309.png 768w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/gpt3-to-cypher-1536x618.png 1536w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/gpt3-to-cypher-2048x824.png 2048w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/gpt3-to-cypher-360x145.png 360w\" sizes=\"(max-width: 240px) 100vw, 240px\" \/><p class=\"wp-caption-text\">Code above dropped directly into Neo4j<\/p><\/div>\n<p><\/p>\n<p>This code works! 
The problem though is that if you run a second article through the OpenAI API that mentions some of the same individuals, objects, places, or organizations, there&#8217;s no <i>memory<\/i> of the previous time. So you&#8217;ll get different variable names being created for the same figure, and so multiple nodes in your graph that are actually the same person or thing. We think we can get around this problem though by carefully framing the prompt with a few limited examples (the longer the prompt text, the shorter the response, so that&#8217;s also a concern).<\/p>\n<p>This is what we&#8217;re working with at present, which seems to handle things with only a few errors that can be cleared up manually afterwards:<\/p>\n<pre>Write a correctly formatted Cypher create query \r\nto represent first key individuals, organizations, and objects, \r\nthen the appropriate relationships between them. \r\nCheck to see if a node exists before creating it; \r\nALWAYS derive the variable name from the first initial plus the name. \r\nHere is an example of the desired output: \r\nMERGE (gfb:Individual {name: 'Gianfranco Becchina'})\r\nMERGE (gkk:Object {name: 'Getty Kouros'})\r\nCREATE (gfb)-[:ACQUIRED]-&gt;(gkk)\r\n<\/pre>\n<p>The screenshot below is the result after running four articles mentioning the figure of Giacomo Medici (from the <a href=\"https:\/\/traffickingculture.org\/encyclopedia\/\">Trafficking Culture Project Encyclopedia<\/a>) through our prompts; total elapsed time to get the Cypher statements: about 3 minutes.<\/p>\n<p>We&#8217;ve still got some kinks to figure out, but it&#8217;s exciting to see this (mostly) working. 
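The naming convention in that prompt can also be applied deterministically after the fact, as a sanity check on what the model emits. A quick sketch of the MERGE idea (our own illustration, not part of the pipeline; the naming rule shown, first initial plus surname, is just one possible reading of the convention):

```python
# Sketch: turn (subject, relation, object) triples into Cypher, deriving
# variable names deterministically so the same entity always gets the same
# variable across articles -- the "memory" the per-article API calls lack.
# MERGE creates a node only if it doesn't already exist; CREATE adds edges.

def var_name(name):
    """Hypothetical rule: first initial + last word of the name, lowercased."""
    parts = name.split()
    return (parts[0][0] + parts[-1]).lower()

def merge_statements(triples):
    lines, seen = [], set()
    # Nodes first (subjects labeled Individual, objects labeled Object,
    # following the shape of the example in the prompt above).
    for subj, rel, obj in triples:
        for label, name in (("Individual", subj), ("Object", obj)):
            v = var_name(name)
            if v not in seen:
                seen.add(v)
                lines.append(f"MERGE ({v}:{label} {{name: '{name}'}})")
    # Then the relationships between them.
    for subj, rel, obj in triples:
        lines.append(f"CREATE ({var_name(subj)})-[:{rel}]->({var_name(obj)})")
    return "\n".join(lines)

cypher = merge_statements([
    ("Giacomo Medici", "BOUGHT", "Euphronios Krater"),
    ("Giacomo Medici", "SOLD", "Euphronios Krater"),
])
print(cypher)
```

Two triples mentioning the same entities yield one MERGE per node and one CREATE per edge, which is exactly the deduplication we want when several articles mention the same figure.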
Once the graph is built, we&#8217;ll be able to query it too using natural language instead of Cypher &#8211; we&#8217;ll just ask GPT3 to translate our questions into suitable code.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignleft size-medium wp-image-88\" src=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/Screen-Shot-2023-02-02-at-2.48.32-PM-240x232.png\" alt=\"\" width=\"240\" height=\"232\" srcset=\"https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/Screen-Shot-2023-02-02-at-2.48.32-PM-240x232.png 240w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/Screen-Shot-2023-02-02-at-2.48.32-PM-400x387.png 400w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/Screen-Shot-2023-02-02-at-2.48.32-PM-160x155.png 160w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/Screen-Shot-2023-02-02-at-2.48.32-PM-768x743.png 768w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/Screen-Shot-2023-02-02-at-2.48.32-PM-1536x1486.png 1536w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/Screen-Shot-2023-02-02-at-2.48.32-PM-360x348.png 360w, https:\/\/carleton.ca\/xlab\/wp-content\/uploads\/Screen-Shot-2023-02-02-at-2.48.32-PM.png 1540w\" sizes=\"(max-width: 240px) 100vw, 240px\" \/><\/p>\n<p><strong>Things to Read<br \/>\n<\/strong><\/p>\n<p><a href=\"https:\/\/neo4j.com\/developer-blog\/explore-chatgpt-learning-code-data-nlp-graph\/\">https:\/\/neo4j.com\/developer-blog\/explore-chatgpt-learning-code-data-nlp-graph\/<\/a><\/p>\n<p><a href=\"https:\/\/towardsdatascience.com\/gpt-3-for-doctor-ai-1396d1cd6fa5\">https:\/\/towardsdatascience.com\/gpt-3-for-doctor-ai-1396d1cd6fa5<\/a><\/p>\n<p><\/p>\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>One of the things we&#8217;re working on in our group are ways to extract structured knowledge from lots of unstructured text. 
In an article coming out soon in Advances in Archaeological Practice, we demonstrate some of the things that we can accomplish when we have statements about a domain (in this case, the antiquities trade) [&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_relevanssi_hide_post":"","_relevanssi_hide_content":"","_relevanssi_pin_for_all":"","_relevanssi_pin_keywords":"","_relevanssi_unpin_keywords":"","_relevanssi_related_keywords":"","_relevanssi_related_include_ids":"","_relevanssi_related_exclude_ids":"","_relevanssi_related_no_append":"","_relevanssi_related_not_related":"","_relevanssi_related_posts":"","_relevanssi_noindex_reason":"","_mi_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":"","_links_to":"","_links_to_target":""},"categories":[1],"tags":[58,57,59,27],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v21.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Knowledge Graphs and GPT-3 - X-Lab<\/title>\n<meta name=\"description\" content=\"One of the things we&#039;re working on in our group are ways to extract structured knowledge from lots of unstructured text. In an article coming out soon in\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"shawngraham\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/\",\"url\":\"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/\",\"name\":\"Knowledge Graphs and GPT-3 - X-Lab\",\"isPartOf\":{\"@id\":\"https:\/\/carleton.ca\/xlab\/#website\"},\"datePublished\":\"2023-02-02T20:28:28+00:00\",\"dateModified\":\"2023-05-24T18:43:26+00:00\",\"author\":{\"@id\":\"https:\/\/carleton.ca\/xlab\/#\/schema\/person\/e8707158a71e77734ea13346b6e46feb\"},\"description\":\"One of the things we're working on in our group are ways to extract structured knowledge from lots of unstructured text. In an article coming out soon in\",\"breadcrumb\":{\"@id\":\"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/carleton.ca\/xlab\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"News\",\"item\":\"https:\/\/carleton.ca\/xlab\/category\/news\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Knowledge Graphs and GPT-3\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/carleton.ca\/xlab\/#website\",\"url\":\"https:\/\/carleton.ca\/xlab\/\",\"name\":\"X-Lab\",\"description\":\"Carleton University\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/carleton.ca\/xlab\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/carleton.ca\/xlab\/#\/schema\/person\/e8707158a71e77734ea13346b6e46feb\",\"name\":\"shawngraham\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/carleton.ca\/xlab\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/1b4be5c0f305aa12c7a3dd75ae5c731e?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/1b4be5c0f305aa12c7a3dd75ae5c731e?s=96&d=mm&r=g\",\"caption\":\"shawngraham\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Knowledge Graphs and GPT-3 - X-Lab","description":"One of the things we're working on in our group are ways to extract structured knowledge from lots of unstructured text. In an article coming out soon in","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/","twitter_misc":{"Written by":"shawngraham","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/","url":"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/","name":"Knowledge Graphs and GPT-3 - X-Lab","isPartOf":{"@id":"https:\/\/carleton.ca\/xlab\/#website"},"datePublished":"2023-02-02T20:28:28+00:00","dateModified":"2023-05-24T18:43:26+00:00","author":{"@id":"https:\/\/carleton.ca\/xlab\/#\/schema\/person\/e8707158a71e77734ea13346b6e46feb"},"description":"One of the things we're working on in our group are ways to extract structured knowledge from lots of unstructured text. 
In an article coming out soon in","breadcrumb":{"@id":"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/carleton.ca\/xlab\/2023\/knowledge-graphs-and-gpt-3\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/carleton.ca\/xlab\/"},{"@type":"ListItem","position":2,"name":"News","item":"https:\/\/carleton.ca\/xlab\/category\/news\/"},{"@type":"ListItem","position":3,"name":"Knowledge Graphs and GPT-3"}]},{"@type":"WebSite","@id":"https:\/\/carleton.ca\/xlab\/#website","url":"https:\/\/carleton.ca\/xlab\/","name":"X-Lab","description":"Carleton University","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/carleton.ca\/xlab\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/carleton.ca\/xlab\/#\/schema\/person\/e8707158a71e77734ea13346b6e46feb","name":"shawngraham","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/carleton.ca\/xlab\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/1b4be5c0f305aa12c7a3dd75ae5c731e?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/1b4be5c0f305aa12c7a3dd75ae5c731e?s=96&d=mm&r=g","caption":"shawngraham"}}]}},"acf":{"Post Thumbnail Icon":"","Post 
Thumbnail":false},"_links":{"self":[{"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/posts\/86"}],"collection":[{"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/comments?post=86"}],"version-history":[{"count":3,"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/posts\/86\/revisions"}],"predecessor-version":[{"id":111,"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/posts\/86\/revisions\/111"}],"wp:attachment":[{"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/media?parent=86"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/categories?post=86"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/carleton.ca\/xlab\/wp-json\/wp\/v2\/tags?post=86"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}