{"id":3305,"date":"2016-12-12T14:16:57","date_gmt":"2016-12-12T05:16:57","guid":{"rendered":"https:\/\/avinton.com\/?p=3305"},"modified":"2021-06-03T10:37:01","modified_gmt":"2021-06-03T01:37:01","slug":"machine-learning-ai-storage-infrastructure-considerations","status":"publish","type":"post","link":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/","title":{"rendered":"Machine Learning \/ AI Storage and Infrastructure Considerations"},"content":{"rendered":"<div class=\"wpb-content-wrapper\"><p>[vc_row][vc_column][vc_column_text]<\/p>\n<h2>Scope<\/h2>\n<p>With specialised IO heavy applications, certain considerations need to be taken into account to ensure that we make the most of the available processing hardware.<br \/>\nWe require a good design of the storage fabric and connectivity to and from the computation nodes to realise this.<\/p>\n<p>Here I share our experience during a recent solution design for a deep learning application using <a href=\"http:\/\/www.nvidia.com\/object\/deeplearningsystem.html\">NVIDIA&#8217;s DGX-1<\/a>.<\/p>\n<p>Our Client had the following requirements:<\/p>\n<ul>\n<li>Fast Read<\/li>\n<li>Fast Write<\/li>\n<li>Low Cost<\/li>\n<\/ul>\n<p>As in many of the emerging machine learning use cases this was an R&amp;D project and the client already had the full data set backed up on an isolated enterprise storage solution. 
As a result, fault tolerance was not a big concern, as the raw data could be reloaded from source at any time in case of a failure.<\/p>\n<p>This article goes through some of the considerations taken into account for the design of a commodity hardware based storage cluster.<\/p>\n<h3>What is the NVIDIA DGX-1?<\/h3>\n<div id=\"attachment_3316\" style=\"width: 1090px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" aria-describedby=\"caption-attachment-3316\" src=\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/NVIDIA_DGX-1.jpg\" alt=\"NVIDIA DGX-1\" width=\"1080\" height=\"608\" class=\"size-full wp-image-3316\" srcset=\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/NVIDIA_DGX-1.jpg 1080w, https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/NVIDIA_DGX-1-297x167.jpg 297w, https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/NVIDIA_DGX-1-300x169.jpg 300w, https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/NVIDIA_DGX-1-1024x576.jpg 1024w\" sizes=\"(max-width: 1080px) 100vw, 1080px\" \/><p id=\"caption-attachment-3316\" class=\"wp-caption-text\">NVIDIA DGX-1<\/p><\/div>\n<p>This is a purpose-built appliance for deep learning applications.<br \/>\nIt achieves supercomputer-like processing power by using multiple Tesla P100 GPUs interconnected by a proprietary <a href=\"http:\/\/www.nvidia.com\/object\/nvlink.html\">NVLink interface<\/a>.<\/p>\n<div id=\"attachment_3317\" style=\"width: 985px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" aria-describedby=\"caption-attachment-3317\" src=\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/nvlink.jpg\" alt=\"NVIDIA NVLink\" width=\"975\" height=\"663\" class=\"size-full wp-image-3317\" srcset=\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/nvlink.jpg 975w, https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/nvlink-246x167.jpg 246w, https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/nvlink-300x204.jpg 300w\" sizes=\"(max-width: 975px) 100vw, 975px\" 
\/><p id=\"caption-attachment-3317\" class=\"wp-caption-text\">NVIDIA NVLink<\/p><\/div>\n<p>It claims to replace ~250 conventional servers (we believe it!). Its hard to find any off the shelf hardware that can beat its performance in terms of Teraflops in just 3Us of rack space.<\/p>\n<div id=\"attachment_3321\" style=\"width: 581px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" aria-describedby=\"caption-attachment-3321\" src=\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/dgx-1_vs_Xeon1.jpg\" alt=\"DGX-1 vs Dual Xeon Server\" width=\"571\" height=\"286\" class=\"size-full wp-image-3321\" srcset=\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/dgx-1_vs_Xeon1.jpg 571w, https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/dgx-1_vs_Xeon1-333x167.jpg 333w, https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/dgx-1_vs_Xeon1-300x150.jpg 300w, https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/dgx-1_vs_Xeon1-768x384.jpg 768w\" sizes=\"(max-width: 571px) 100vw, 571px\" \/><p id=\"caption-attachment-3321\" class=\"wp-caption-text\">DGX-1 vs Dual Xeon Server<\/p><\/div>\n<h2>Storage Considerations<\/h2>\n<p>With 170 teraflops of computing power it is important that the storage solution does not leave those processing cores idle. We try to avoid the situation where the storage is the bottleneck.<br \/>\nThe end result is that if we can keep up with the data feed and data sink requirements we will be able to reach results sooner and we can get the most value of the machine learning system.<br \/>\nFaster machine learning results can allow us to run more tests and fine tune algorithms for greater applicability and accuracy.<\/p>\n<h2>Commodity Hardware based Storage<\/h2>\n<h3>What do we mean by Commodity Hardware based Storage?<\/h3>\n<p>Commodity Hardware based storage is a purpose built cluster of commodity servers with DAS arranged in RAID arrays for HOT \/ WARM \/ COLD storage areas. 
You can find out more about Hot \/ Cold \/ Warm storage <a href=\"https:\/\/avinton.com\/blog\/\" target=\"_blank\" rel=\"noopener noreferrer\">here (Avinton Storage Solutions)<\/a><br \/>\nWe typically use Dell or HP servers, but any recent DAS-supporting hardware can be used.<\/p>\n<p>The HOT storage can be, for example, an SSD array, whereas the WARM \/ COLD storage can be normal 15k rpm SAS drives.<\/p>\n<p>The machines run customised Linux distributions which provide enterprise-storage-like features at a lower cost.<\/p>\n<h4>Maintenance<\/h4>\n<p>On Commodity Hardware the maintenance overhead will be higher than with enterprise storage solutions, as a result of having more components in the solution.<br \/>\nThis can be significantly alleviated by having good monitoring in place and doing regular housekeeping.<\/p>\n<h4>Fault Tolerance<\/h4>\n<p>Fault Tolerance is typically not as good as an enterprise storage solution, which has multiple levels of inbuilt redundancy. This can be improved by having hot spare disks in the RAID arrays to minimise the impact of any disk failures, as well as dedicated backup storage or 1:1 cluster mirroring with software-based failover.<br \/>\n<em>(The servers will typically be using RAID-6, which can tolerate up to two faulty drives per array, but this does not cover memory \/ RAID controller \/ motherboard \/ CPU failures.)<\/em><\/p>\n<h4>Security<\/h4>\n<p>RAID-controller-based full-disk encryption can be used to ensure data security, albeit with increased latency.<\/p>\n<h4>Scalability<\/h4>\n<p>Scalability is achieved by adding more servers to the cluster and configuring them in the controlling software.<br \/>\nThese solutions scale surprisingly well, into the tens of petabytes.<\/p>\n<h2>Enterprise Storage Solution<\/h2>\n<p>If cost is not much of an issue and the use case requires high fault tolerance, security and speed, it is best to go with a dedicated enterprise storage solution from vendors like Dell\/EMC or Oracle, or a software-defined 
storage solution such as Nutanix.<\/p>\n<p>We will not go into the details of those solutions here, but some even use proprietary drives to achieve excellent IO speeds and native encryption, coupled with various interconnect options from GbE to multiple InfiniBand ports.<\/p>\n<h2>Storage Fabric Interconnect<\/h2>\n<p>The storage interconnect is just as important as the storage solution choice itself in the case of high-speed data processing. With the DGX-1 this is particularly important since the on-board GPUs are able to write data directly to the NIC buffers over PCIe.<br \/>\nThis makes the data sink speed (write) just as important as the data feed speed (read) for our selected storage design, as slowdowns on either can cause a bottleneck.<\/p>\n<h3>InfiniBand vs GbE<\/h3>\n<p>For a Commodity Hardware based Storage solution:<br \/>\nPretty much any 10\/40GbE NIC would suffice provided it has a supported driver; Intel and Broadcom are good candidates.<br \/>\nEach server would need two dual-port NICs to support redundancy and multi-path.<\/p>\n<p>InfiniBand is likely to push the price up to the point where it would be worth looking at an enterprise storage solution. 
This is because once you use InfiniBand you need to factor in InfiniBand NICs for every node in the cluster, plus the cabling and the switching gear.<br \/>\nThis pushes the price up significantly, and one can achieve comparable IO by adding nodes to the storage cluster connected by cheaper 10GbE or 40GbE, for a more cost-effective solution overall.<\/p>\n<h2>NVMe SSD<\/h2>\n<p>NVMe SSD drives are significantly faster than other SSDs on the market. However, at the time of writing, enterprise NVMe drives would make a DIY storage solution prohibitively expensive, to the point that it wouldn\u2019t be competitive anymore.<br \/>\nIt made more sense in our case to use many smaller, cheaper SAS\/SATA SSDs arranged in a hot storage pool or cache area, and some large 15k rpm spinning disks for warm and cold.<br \/>\nThe combination of the high number of small SSDs and multi-path makes the system ideal for MPP or GPU-type processing loads.<\/p>\n<h2>Software Defined Storage<\/h2>\n<p>Software Defined Storage (SDS) is seen as the future of the storage ecosystem.<br \/>\nSDS provides features like automatic data classification and prioritization based on usage or access patterns, automatic defragmentation, optimization, compression and replication.<\/p>\n<p>It is similar to our aforementioned solution in that it can run on Commodity Hardware, but with some key differences.<\/p>\n<p>SDS uses a software abstraction layer which abstracts the hardware from the target application.<\/p>\n<p>SDS supports standard and proprietary protocols with an extended feature set.<\/p>\n<p>Commodity Hardware based storage provides classic storage services over standard protocols like iSCSI, NFS, SMB, CIFS, etc. 
and more advanced features like HA, hot-warm-cold pooling or multi-path usually have to be set up or implemented manually using open-source components.<\/p>\n<p>SDS&#8217;s software layer is neither free nor open source.<\/p>\n<p>In SDS all advanced features are automated and manageable from a central interface.<\/p>\n<p>All storage actions can be accessed, controlled and automated programmatically via RESTful APIs.<\/p>\n<p>A typical example of SDS is Nutanix\u2019s <a href=\"https:\/\/www.nutanix.com\/products\/acropolis\/\" target=\"_blank\" rel=\"noopener noreferrer\">Acropolis Distributed Storage Fabric<\/a>.<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row]<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>[vc_row][vc_column][vc_column_text] Scope With specialised IO heavy applications, certain considerations need to be taken into account to ensure that we make the most of the available processing hardware. We require a good design of the storage fabric and connectivity to and from the computation nodes to realise this. 
Here I share our experience during a recent<br \/><a href=\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/\" class=\"more\">Read more<\/a><\/p>\n","protected":false},"author":2,"featured_media":3364,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8,1580],"tags":[480,763,764,770],"class_list":["post-3305","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-articles","category-mainpage","tag-storage","tag-ai","tag-machine-learning","tag-commodity-hardware"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Machine Learning \/ AI Storage and Infrastructure Considerations<\/title>\n<meta name=\"description\" content=\"Storage and Infrastructure considerations for an AI \/ Machine Learning system. Use case uses NVIDIA DGX-1 with a commodity server based storage cluster.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Machine Learning \/ AI Storage and Infrastructure Considerations\" \/>\n<meta property=\"og:description\" content=\"Storage and Infrastructure considerations for an AI \/ Machine Learning system. 
Use case uses NVIDIA DGX-1 with a commodity server based storage cluster.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/\" \/>\n<meta property=\"og:site_name\" content=\"Avinton Japan\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/Avintons\/\" \/>\n<meta property=\"article:published_time\" content=\"2016-12-12T05:16:57+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-06-03T01:37:01+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/ml.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"850\" \/>\n\t<meta property=\"og:image:height\" content=\"400\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"James Cauchi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@AvintonJapan\" \/>\n<meta name=\"twitter:site\" content=\"@AvintonJapan\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"James Cauchi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/\",\"url\":\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/\",\"name\":\"Machine Learning \/ AI Storage and Infrastructure Considerations\",\"isPartOf\":{\"@id\":\"https:\/\/avinton.com\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/ml.jpg\",\"datePublished\":\"2016-12-12T05:16:57+00:00\",\"dateModified\":\"2021-06-03T01:37:01+00:00\",\"author\":{\"@id\":\"https:\/\/avinton.com\/en\/#\/schema\/person\/aa5bcc7a7c363ca85c0eeb6a7c2c594b\"},\"description\":\"Storage and Infrastructure considerations for an AI \/ Machine Learning system. 
Use case uses NVIDIA DGX-1 with a commodity server based storage cluster.\",\"breadcrumb\":{\"@id\":\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#primaryimage\",\"url\":\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/ml.jpg\",\"contentUrl\":\"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/ml.jpg\",\"width\":850,\"height\":400,\"caption\":\"Avinton Machine Learning - Infrastructure Considerations\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/avinton.com\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Machine Learning \/ AI Storage and Infrastructure Considerations\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/avinton.com\/en\/#website\",\"url\":\"https:\/\/avinton.com\/en\/\",\"name\":\"Avinton Japan\",\"description\":\"Tailored Solutions and Consulting in AI and Big Data\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/avinton.com\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/avinton.com\/en\/#\/schema\/person\/aa5bcc7a7c363ca85c0eeb6a7c2c594b\",\"name\":\"James 
Cauchi\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/avinton.com\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/24fff15ecfe40a23480c47de1acb5c69cc3aa019d6f6cd36353cee85ac20a9e7?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/24fff15ecfe40a23480c47de1acb5c69cc3aa019d6f6cd36353cee85ac20a9e7?s=96&d=mm&r=g\",\"caption\":\"James Cauchi\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Machine Learning \/ AI Storage and Infrastructure Considerations","description":"Storage and Infrastructure considerations for an AI \/ Machine Learning system. Use case uses NVIDIA DGX-1 with a commodity server based storage cluster.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/","og_locale":"en_US","og_type":"article","og_title":"Machine Learning \/ AI Storage and Infrastructure Considerations","og_description":"Storage and Infrastructure considerations for an AI \/ Machine Learning system. Use case uses NVIDIA DGX-1 with a commodity server based storage cluster.","og_url":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/","og_site_name":"Avinton Japan","article_publisher":"https:\/\/www.facebook.com\/Avintons\/","article_published_time":"2016-12-12T05:16:57+00:00","article_modified_time":"2021-06-03T01:37:01+00:00","og_image":[{"width":850,"height":400,"url":"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/ml.jpg","type":"image\/jpeg"}],"author":"James Cauchi","twitter_card":"summary_large_image","twitter_creator":"@AvintonJapan","twitter_site":"@AvintonJapan","twitter_misc":{"Written by":"James Cauchi","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/","url":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/","name":"Machine Learning \/ AI Storage and Infrastructure Considerations","isPartOf":{"@id":"https:\/\/avinton.com\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#primaryimage"},"image":{"@id":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#primaryimage"},"thumbnailUrl":"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/ml.jpg","datePublished":"2016-12-12T05:16:57+00:00","dateModified":"2021-06-03T01:37:01+00:00","author":{"@id":"https:\/\/avinton.com\/en\/#\/schema\/person\/aa5bcc7a7c363ca85c0eeb6a7c2c594b"},"description":"Storage and Infrastructure considerations for an AI \/ Machine Learning system. 
Use case uses NVIDIA DGX-1 with a commodity server based storage cluster.","breadcrumb":{"@id":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#primaryimage","url":"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/ml.jpg","contentUrl":"https:\/\/avinton.com\/wp-content\/uploads\/2016\/12\/ml.jpg","width":850,"height":400,"caption":"Avinton Machine Learning - Infrastructure Considerations"},{"@type":"BreadcrumbList","@id":"https:\/\/avinton.com\/en\/blog\/2016\/12\/machine-learning-ai-storage-infrastructure-considerations\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/avinton.com\/en\/"},{"@type":"ListItem","position":2,"name":"Machine Learning \/ AI Storage and Infrastructure Considerations"}]},{"@type":"WebSite","@id":"https:\/\/avinton.com\/en\/#website","url":"https:\/\/avinton.com\/en\/","name":"Avinton Japan","description":"Tailored Solutions and Consulting in AI and Big Data","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/avinton.com\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/avinton.com\/en\/#\/schema\/person\/aa5bcc7a7c363ca85c0eeb6a7c2c594b","name":"James 
Cauchi","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/avinton.com\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/24fff15ecfe40a23480c47de1acb5c69cc3aa019d6f6cd36353cee85ac20a9e7?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/24fff15ecfe40a23480c47de1acb5c69cc3aa019d6f6cd36353cee85ac20a9e7?s=96&d=mm&r=g","caption":"James Cauchi"}}]}},"_links":{"self":[{"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/posts\/3305","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/comments?post=3305"}],"version-history":[{"count":18,"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/posts\/3305\/revisions"}],"predecessor-version":[{"id":57113,"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/posts\/3305\/revisions\/57113"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/media\/3364"}],"wp:attachment":[{"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/media?parent=3305"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/categories?post=3305"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/avinton.com\/en\/wp-json\/wp\/v2\/tags?post=3305"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}