\n\n\nHeld during the darkest time of the year in Helsinki, Finland, Slush has always been characterized by a unique energy and enthusiasm. The very core of Slush is to facilitate founder and investor meetings and to build a worldwide startup community.\n\nSlush is a student-driven, non-profit movement originally founded to change attitudes toward entrepreneurship.\n\nSlush Asia 2015, held in Tokyo, Japan, was the first Slush event in Asia. As part of the core organizing team, I was responsible for overseeing all production and logistics for the Slush Asia conference.\n\nWith over 3,000 attendees, 250 startups, 30 speakers, and 100 investors, executing a smooth event required meticulous planning and coordination across vendors, partners, and staff.\n\nBy bringing together change makers, key industry players, and community builders, we create a thriving startup ecosystem under one roof.\n\n\n\n"],"filePath":[0,"src/content/projects/20160513-slush-asia.mdx"],"digest":[0,"d543c6a4703bd1e7"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20160513-slush-asia"],"render":[0,null]}]]],"2018":[1,[[0,{"id":[0,"20180102-aubik.mdx"],"data":[0,{"title":[0,"Aubik"],"date":[0,"2018.01.02"],"tags":[1,[[0,"engineering"],[0,"fashion"]]],"client":[0,""],"youtube":[0,""],"role":[0,"Researcher / Engineer"],"publications":[1,[]],"media":[1,[[0,{"media_name":[0,"Amana"],"reference":[0,"https://insights.amana.jp/article/23836/"],"medium":[0,"Online"],"year":[0,2020]}],[0,{"media_name":[0,"Fashionsnap"],"reference":[0,"https://www.fashionsnap.com/article/ai-synflux-hatra/"],"medium":[0,"Online"],"year":[0,2020]}],[0,{"media_name":[0,"Wired"],"reference":[0,"https://wired.me/culture/design/your-ai-generated-clothes-are-trending/"],"medium":[0,"Online"],"year":[0,2022]}],[0,{"media_name":[0,"WWD 
Japan"],"reference":[0,"https://www.wwdjapan.com/articles/999674"],"medium":[0,"Online"],"year":[0,2020]}],[0,{"media_name":[0,"Fashionsnap"],"reference":[0,"https://www.fashionsnap.com/article/ai-synflux-hatra/"],"medium":[0,"Online"],"year":[0,2020]}]]],"awards":[1,[[0,{"award_name":[0,"Dezeen Awards Longlist 2020"],"year":[0,2020]}]]],"exhibitions":[1,[[0,{"exhibition_name":[0,"Making Fashion Sense"],"date":[0,"2020.01.16 - 2020.03.08"],"city":[0,"Basel, Switzerland"],"place":[0,"HeK"]}]]],"credits":[1,[[0,"Kye Shimizu (Synflux LLC)"],[0,"Kotaro Sano (Synflux LLC)"],[0,"Kazuya Kawasaki (Synflux LLC)"],[0,"Keisuke Nagami (HATRA)"]]],"thanks":[1,[[0,"Fukule inc."]]],"grants":[1,[]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\nimport img1 from './images/aubik/01.webp'\nimport img2 from './images/aubik/02.webp'\nimport img3 from './images/aubik/03.webp'\nimport img4 from './images/aubik/04.webp'\nimport img5 from './images/aubik/05.webp'\nimport SingleColumnPicture from '../../components/layout/SingleColumnPicture'\nimport DoubleColumnPicture from '../../components/layout/DoubleColumnPicture'\n\n\n\nAUBIK, an AI-generated hoodie, is a collaborative project with the Tokyo-based fashion brand HATRA. The garment pattern was generated by \"Algorithmic Couture\", based on the three-dimensional data of the garment created on the CAD software CLO3D. 
The algorithm-generated patterns are composed of geometric shapes inspired by traditional Japanese straight-cutting, so they can be nested on the fabric like Tetris pieces, reducing waste.\nAiming to fuse the generativity of artificial intelligence with the creativity of fashion design, these works explore the concept of a \"Cyborg-like Body\" in which bits and atoms, nature and artificiality, are mixed together as a \"Xenomorphic Algorithmic Chimera\". The accelerated evolution of information technology and bioengineering is becoming awe-inspiring; it points to the possibility of the \"Artificial Sublime\" in the Anthropocene.\nThese projects were exhibited at Making FASHION Sense at HeK in Basel.\n\nhttps://hatroid.com/en/pages/aubik-project\n\n
\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n \n"],"filePath":[0,"src/content/projects/20180102-aubik.mdx"],"digest":[0,"59f94a1cc27c36dc"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20180102-aubik"],"render":[0,null]}]]],"2019":[1,[[0,{"id":[0,"20190102-algorithmic-couture.mdx"],"data":[0,{"title":[0,"Algorithmic Couture"],"date":[0,"2019.01.02"],"tags":[1,[[0,"installation"],[0,"research"],[0,"fashion"]]],"role":[0,"Researcher / Engineer"],"publications":[1,[]],"media":[1,[[0,{"media_name":[0,"Senken Newspaper"],"reference":[0,"https://senken.co.jp/posts/toyoshima-200108"],"medium":[0,"Online"],"year":[0,2019]}],[0,{"media_name":[0,"H&M"],"reference":[0,"https://prtimes.jp/main/html/rd/p/000000283.000011958.html"],"medium":[0,"Online"],"year":[0,2019]}],[0,{"media_name":[0,"AXIS"],"reference":[0,"https://www.axismag.jp/posts/2019/04/124419.html"],"medium":[0,"Online"],"year":[0,2019]}],[0,{"media_name":[0,"H&M"],"reference":[0,"https://about.hm.com/ja_jp/news/general-news-2019/_.html"],"medium":[0,"Online"],"year":[0,2019]}],[0,{"media_name":[0,"Design Indaba"],"reference":[0,"https://www.designindaba.com/articles/conference-talks/kye-shimizu-coding-fix-wasteful-fashion-industry"],"medium":[0,"Online"],"year":[0,2019]}],[0,{"media_name":[0,"NHK"],"reference":[0,"https://www3.nhk.or.jp/nhkworld/en/news/ataglance/707/"],"medium":[0,"TV"],"year":[0,2020]}]]],"awards":[1,[[0,{"award_name":[0,"Dezeen Awards Longlist 2019"],"year":[0,2019]}],[0,{"award_name":[0,"Wired Japan Creative Hack Award 2018, Judge Award"],"year":[0,2018]}],[0,{"award_name":[0,"Global Change Award 2018, H&M Early Bird"],"year":[0,2018]}]]],"exhibitions":[1,[[0,{"exhibition_name":[0,"AnyTokyo 2019"],"date":[0,"2019.11.15 - 2019.11.24"],"city":[0,"Tokyo, Japan"],"place":[0,"Kudanhouse"]}],[0,{"exhibition_name":[0,"Keio SFC Open Research Forum 2019"],"date":[0,"2018.11.23 - 2018.11.24"],"city":[0,"Tokyo, Japan"],"place":[0,"Roppongi Midtown"]}],[0,{"exhibition_name":[0,"Keio SFC Open 
Research Forum 2018"],"date":[0,"2018.11.22 - 2018.11.23"],"city":[0,"Tokyo, Japan"],"place":[0,"Roppongi Midtown"]}]]],"credits":[1,[[0,"Software Engineering: Kye Shimizu (Synflux LLC), Yusuke Fujihira(archiroid LLC)"],[0,"Design: Kotaro Sano (Synflux LLC), Kazuya Kawasaki (Synflux LLC)"]]],"thanks":[1,[[0,"Model: Tamami Ohbuchi"],[0,"Videography: Tomoki Yoneyama, Tomoyuki Hayama"],[0,"Music: Kenta Tanaka"]]],"grants":[1,[[0,{"grant_name":[0,"COI STREAM"],"year":[0,2019],"reference":[0,"https://www.mext.go.jp/a_menu/kagaku/coi/"]}]]],"talks":[1,[[0,{"talk_name":[0,"Algorithmic Couture"],"reference":[0,"https://coi-conference.github.io/2019/index.html"],"place":[0,"The National Museum of Emerging Science and Innovation (Miraikan)"],"city":[0,"Tokyo, Japan"],"year":[0,2019],"event_name":[0,"COI Conference 2019"]}],[0,{"talk_name":[0,"Algorithmic Couture"],"reference":[0,"https://www.sfc.keio.ac.jp/orf2019/"],"place":[0,"Centre for Sustainable Fashion, UAL"],"city":[0,"London, UK"],"year":[0,2018],"event_name":[0,"Global Fashion Conference 2018"]}],[0,{"talk_name":[0,"Kye Shimizu is coding to fix the wasteful fashion industry "],"reference":[0,"https://www.designindaba.com/articles/conference-talks/kye-shimizu-coding-fix-wasteful-fashion-industry"],"place":[0,"Artscape"],"city":[0,"Cape Town, South Africa"],"year":[0,2018],"event_name":[0,"Wired Creative Hack Award 2018"]}]]],"protected":[0,false]}],"body":[0,"import TextSection from '../../components/blog/TextSection'\nimport VideoPlayer from '../../components/VideoPlayer'\nimport { Spacer } from '../../components/layout/Spacer'\n\n\n \n Algorithmic Couture is a project aiming to automate the creation of zero waste fashion patterns\n and digitize traditional haute couture techniques to create customized ethical fashion garments.\n Through the collaboration of fashion designers, machine learning engineers, and digital\n fabrication specialists, our goal is to revitalize the production system of fashion design for a\n 
more sustainable future. At its core, the project consists of three main concepts. First,\n sustainability: transforming a polluting industry through zero-waste fashion design.\n Second, digitization: translating traditional haute couture techniques into parameters\n for an automated digital fabrication production system. Third, customization: the mass\n customization of bespoke garments, opening a space for co-creation in which customers\n shape, through parameters, designs generated from their own body data.\n
\n\n\n\n\nhttps://www3.nhk.or.jp/nhkworld/en/news/ataglance/707/"],"filePath":[0,"src/content/projects/20190102-algorithmic-couture.mdx"],"digest":[0,"b5f1a9c5ded4e8d2"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20190102-algorithmic-couture"],"render":[0,null]}],[0,{"id":[0,"20190102-harmonize.mdx"],"data":[0,{"title":[0,"Harmonize"],"date":[0,"2019.01.02"],"tags":[1,[[0,"engineering"],[0,"installation"],[0,"research"],[0,"fashion"]]],"youtube":[0,"https://www.youtube.com/watch?v=IAPWKrpDj5w"],"role":[0,"Researcher / Engineer"],"hoverImage":[0,{"src":[0,"/_astro/Yuima-Nakazato-SS18-Couture-04.B33oFuCU.webp"],"width":[0,620],"height":[0,930],"format":[0,"webp"]}],"publications":[1,[]],"media":[1,[[0,{"media_name":[0,"Toray"],"reference":[0,"https://www.toray.com/global/news/details/20180201000451.html"],"medium":[0,"Online"],"year":[0,2018]}],[0,{"media_name":[0,"Elle"],"reference":[0,"https://www.elle.com/jp/culture/a230964/cne-yuima-nakazato-21-21-design-sight-exhibition-180216-hns/"],"medium":[0,"Online"],"year":[0,2018]}],[0,{"media_name":[0,"Wired"],"reference":[0,"https://wired.jp/special/2018/yuima-nakazato/"],"medium":[0,"Online"],"year":[0,2018]}]]],"awards":[1,[]],"exhibitions":[1,[[0,{"exhibition_name":[0,"Rules?"],"date":[0,"2021.07.02 - 2021.11.28"],"city":[0,"Tokyo, Japan"],"place":[0,"2121_Design Sight"]}],[0,{"exhibition_name":[0,"Making Fashion Sense"],"date":[0,"2020.01.16 - 2020.03.08"],"city":[0,"Basel, Switzerland"],"place":[0,"HeK"]}],[0,{"exhibition_name":[0,"Harmonize"],"date":[0,"2018.02.21 - 2021.02.25"],"city":[0,"Tokyo, Japan"],"place":[0,"ELEPHANT PANAME"]}]]],"credits":[1,[[0,"Digital Fabrication Engineering & Lighting Coordination : Kye Shimizu"],[0,"Software Engineering : NOIZ"],[0,"Design Research: Kotaro Sano, Kazuya Kawasaki"],[0,"Supervision : Daijiro Mizuno"],[0,"Art direction & Production : 
YUIMA NAKAZATO staff"]]],"thanks":[1,[[0,"Trotec Laser Japan inc."],[0,"Toray Inc."],[0,"SHOJI FUJII"]]],"grants":[1,[]],"talks":[1,[[0,{"talk_name":[0,"Speculating the future Haute Couture: Designing mass-customisable garments for the sustainable future"],"reference":[0,"https://www.sfc.keio.ac.jp/orf2019/"],"place":[0,"Centre for Sustainable Fashion, UAL"],"city":[0,"London, UK"],"year":[0,2018],"event_name":[0,"Global Fashion Conference 2018"]}]]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\nimport VideoPlayer from '../../components/VideoPlayer'\nimport TripleColumnPicture from '../../components/layout/TripleColumnPicture'\n\nimport harmonize_1 from './images/harmonize/Yuima-Nakazato-SS18-Couture-01.webp'\nimport harmonize_2 from './images/harmonize/Yuima-Nakazato-SS18-Couture-02.webp'\nimport harmonize_3 from './images/harmonize/Yuima-Nakazato-SS18-Couture-03.webp'\nimport harmonize_4 from './images/harmonize/Yuima-Nakazato-SS18-Couture-04.webp'\nimport harmonize_5 from './images/harmonize/Yuima-Nakazato-SS18-Couture-05.webp'\nimport harmonize_6 from './images/harmonize/Yuima-Nakazato-SS18-Couture-06.webp'\nimport harmonize_7 from './images/harmonize/Yuima-Nakazato-SS18-Couture-07.webp'\nimport harmonize_8 from './images/harmonize/Yuima-Nakazato-SS18-Couture-08.webp'\nimport harmonize_9 from './images/harmonize/Yuima-Nakazato-SS18-Couture-09.webp'\nimport harmonize_10 from './images/harmonize/Yuima-Nakazato-SS18-Couture-10.webp'\nimport harmonize_11 from './images/harmonize/Yuima-Nakazato-SS18-Couture-11.webp'\n\n“Eventually, each and every garment will be unique and different.” This is what Yuima Nakazato envisions as the future of mankind. Based on this vision, Nakazato has developed an innovative method to manufacture clothes without sewing. 
Assembling clothes with originally developed parts called “Units” makes it possible to freely combine and rearrange their design, size, and materials, leading to garments that harmonize with everyone who wears them.\n\nThis exhibition features Nakazato’s latest collection, introduced at Paris Haute Couture Fashion Week in January 2018, and its proprietary manufacturing system of innovative clothes along with its new products using Ultrasuede®PX, an environmentally friendly suede texture fabric invented by Toray.\nExperience the future apparel envisioned by Nakazato, who wishes to deliver ultimate one-of-a-kind garments to each and every person in the world.\n\n\n\n\n\n \n \n \n \n \n \n \n\n\n\n\n\n\n\n\n\n\n"],"filePath":[0,"src/content/projects/20190102-harmonize.mdx"],"assetImports":[1,[[0,"./images/harmonize/Yuima-Nakazato-SS18-Couture-04.webp"]]],"digest":[0,"f6f5af6b08949865"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20190102-harmonize"],"render":[0,null]}],[0,{"id":[0,"20190323-algorithmic-urban-composition.mdx"],"data":[0,{"title":[0,"Algorithmic Urban Composition"],"subtitle":[0,"Algorithmic sonification of urban landscapes"],"date":[0,"2019.03.23"],"tags":[1,[[0,"engineering"],[0,"installation"]]],"client":[0,""],"youtube":[0,"https://www.youtube.com/watch?v=4diG4lfNYX4"],"role":[0,"Researcher / Engineer"],"publications":[1,[]],"media":[1,[]],"awards":[1,[]],"exhibitions":[1,[[0,{"exhibition_name":[0,"Algorithmic Urban Composition"],"date":[0,"2019.03.23 - 2019.03.26"],"city":[0,"Palo Alto, California"],"place":[0,"Stanford CCRMA"]}]]],"credits":[1,[[0,"Software Engineering: Kye Shimizu, Ryo Yumoto"],[0,"Music Composition: Kenta Tanaka"],[0,"Videography: Yuki Aizawa"]]],"thanks":[1,[[0,"Supervision: Shinya Fujii"]]],"grants":[1,[]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport TextSection from '../../components/blog/TextSection'\nimport { Spacer } 
from '../../components/layout/Spacer'\nimport img1 from './images/algorithmic-urban-composition/img1.webp'\nimport img2 from './images/algorithmic-urban-composition/img2.webp'\nimport img3 from './images/algorithmic-urban-composition/img3.webp'\nimport img4 from './images/algorithmic-urban-composition/img4.webp'\nimport img5 from './images/algorithmic-urban-composition/img5.webp'\n\n\n This project explores the possibilities of “urban compositions” by mashing up\n urban landscapes through an algorithmic sonification process built on object detection software.\n The modern city contains many elements (e.g. nature, buildings, and human activities); by generating\n sounds through the eyes of machines, we aim to aurally re-present alternative cityscapes. Our\n question is what unique sounds and spaces emerge when urban complexity is brought into the\n field of audiovisual composition.\n\n\n
\n\n\n"],"filePath":[0,"src/content/projects/20190323-algorithmic-urban-composition.mdx"],"digest":[0,"9cf5796facddaebb"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20190323-algorithmic-urban-composition"],"render":[0,null]}],[0,{"id":[0,"20190323-arashi.mdx"],"data":[0,{"title":[0,"Arashi 5x20 Dome Tour"],"subtitle":[0,"360 Visual Performance for Arashi"],"date":[0,"2019.03.23"],"tags":[1,[[0,"engineering"],[0,"installation"]]],"client":[0,""],"youtube":[0,"https://www.youtube.com/watch?v=4diG4lfNYX4"],"role":[0,"Engineer"],"publications":[1,[]],"media":[1,[]],"awards":[1,[]],"exhibitions":[1,[[0,{"exhibition_name":[0,"Algorithmic Urban Composition"],"date":[0,"2019.03.23 - 2019.03.26"],"city":[0,"Palo Alto, California"],"place":[0,"Stanford CCRMA"]}]]],"credits":[1,[[0,"Kye Shimizu"],[0,"Kenta Tanaka"],[0,"Ryo Yumoto"],[0,"Yuki Aizawa"]]],"thanks":[1,[[0,"Shinya Fujii"]]],"grants":[1,[]],"talks":[1,[]],"protected":[0,false]}],"filePath":[0,"src/content/projects/20190323-arashi.mdx"],"digest":[0,"b5482d0a5aeebf9d"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20190323-arashi"],"render":[0,null]}]]],"2020":[1,[[0,{"id":[0,"20200102-fencing-visualized.mdx"],"data":[0,{"title":[0,"Fencing Visualized"],"subtitle":[0,"Enable detection for real-time AR synthesis in fencing"],"date":[0,"2020.01.02"],"tags":[1,[[0,"engineering"],[0,"research"]]],"client":[0,""],"youtube":[0,"https://youtu.be/D6DTYeLW3C0"],"role":[0,"Researcher / Engineer"],"hoverImage":[0,{"src":[0,"/_astro/fencing_01.B8jHcCfY.webp"],"width":[0,1920],"height":[0,1080],"format":[0,"webp"]}],"publications":[1,[[0,{"conference_name":[0,"ACM SIGGRAPH ASIA"],"year":[0,2021],"reference":[0,"Technical Contribution / Yuya Hanai, Kyle McDonald, Satoshi Horii, Futa Kera, Kisaku Tanaka, Motoi Ishibashi, and Daito Manabe. 2021. Fencing tracking and visualization system. In SIGGRAPH Asia 2021 Real-Time Live! (SA '21). 
Association for Computing Machinery, New York, NY, USA, Article 2, 1. https://doi.org/10.1145/3478511.3491310"],"url":[0,"https://doi.org/10.1145/3478511.3491310"]}]]],"media":[1,[]],"awards":[1,[]],"exhibitions":[1,[[0,{"exhibition_name":[0,"H.I.H. Prince Takamado Trophy JAL Presents Fencing World Cup 2019 (2019)"],"date":[0,"2019.03.23 - 2019.03.26"],"city":[0,"Tokyo, Japan"],"place":[0,"Stanford "]}],[0,{"exhibition_name":[0,"エイブルPresents第72回全日本フェンシング選手権大会 (2019)"],"date":[0,"2019.03.23 - 2019.03.26"],"city":[0,"Palo Alto, California"],"place":[0,"Stanford CCRMA"]}]]],"credits":[1,[[0,"(Rhizomatiks) Planning, Creative Direction : Daito Manabe "],[0,"(Rhizomatiks) Planning, Technical Direction, Hardware Engineering : Motoi Ishibashi"],[0,"Software Engineering: Kye Shimizu, anno lab (Kisaku Tanaka, Sadam Fujioka, Kyle Mc-Donald (IYOIYO)"],[0,"Dataset System Engineering: Tatsuya Ishii (Rhizomatiks), ZIKU Technologies, Inc. (Yoshihisa Hashimoto, Hideyuki Kasuga, Seiji Nanase, Daisetsu Ido)"],[0," Dataset System Engineering : Ignis Imageworks Corp. 
(Tetsuya Kobayashi, Katsunori Kiuchi, Kanako Saito, Hayato Abe, Ryosuke Akazawa, Yuya Nagura, Shigeru Ohata, Ayano Takimoto, Kanami Kawamura, Yoko Konno)"],[0,"Visual Programming : Satoshi Horii, Futa Kera (Rhizomatiks) "],[0,"Videographer : Muryo Homma (Rhizomatiks)"],[0,"Hardware Engineering & Videographer Support : Toshitaka Mochizuki (Rhizomatiks)"],[0,"Hardware Engineering : Yuta Asai, Kyohei Mouri, Saki Ishikawa"],[0,"Technical Support : Shintaro Kamijyo (Rhizomatiks)"],[0,"Project Management : Kahori Takemura (Rhizomatiks)"],[0,"Project Management, Produce : Takao Inoue (Rhizomatiks)"],[0,"This work was conducted with assistance from Dentsu Lab Tokyo"]]],"thanks":[1,[]],"grants":[1,[]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\nimport VideoPlayer from '../../components/VideoPlayer'\nimport img2 from './images/fencing-visualized/fencing_02.webp'\n\n\n \n Rhizomatiks' Fencing Tracking and Visualization system uses AR technology to visualize the tips\n of swords in motion. Building on various development processes since 2012, the system has been\n updated to utilize deep learning to visualize sword tips without markers. The system enables\n detection of sword tips, which human eyes cannot follow, and real-time AR synthesis to instantly\n visualize the trajectory.\n
\n \n We developed the \"Fencing tracking and visualization system.\" It detects the tips of sabers\n (fencing swords) and visualizes their trajectories in real time, requiring no markers and\n working solely on camera images. This is the only fencing visualization technology that has\n been used in actual international fencing matches, such as the H.I.H. Prince Takamado Trophy\n JAL Presents Fencing World Cup 2019.\n
\n\n\n\n\n\n \n\n\n \n A fencing sabre, especially its tip, moves extremely fast, and the blade's flexibility\n produces large distortions in its shape. Additionally, the tip spans only a few pixels even\n when captured by a 4K camera, making it too small for conventional image recognition\n techniques. We developed a multi-stage deep learning network for general object detection\n based on YOLO v3 [Redmon and Farhadi 2017, 2018], starting from the hardware selection of a\n camera for analysis. Since a single camera can only cover about 8 meters, we eventually\n installed 24 4K cameras on both sides of the piste to cover the entire match area and improve\n the robustness of the sabre tip detection. We also developed a system to estimate the 3D\n position of the tips from the detection results of multiple cameras.\n
\n\n\n\n"],"filePath":[0,"src/content/projects/20200102-fencing-visualized.mdx"],"assetImports":[1,[[0,"./images/fencing-visualized/fencing_01.webp"]]],"digest":[0,"21978cddafdc4650"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20200102-fencing-visualized"],"render":[0,null]}],[0,{"id":[0,"20200608-xenon.mdx"],"data":[0,{"title":[0,"Xenon"],"date":[0,"2020.06.08"],"tags":[1,[[0,"engineering"],[0,"installation"],[0,"fashion"]]],"client":[0,""],"youtube":[0,""],"role":[0,"Researcher / Engineer"],"hoverImage":[0,{"src":[0,"/_astro/main.ji9xlWt8.webp"],"width":[0,1000],"height":[0,551],"format":[0,"webp"]}],"publications":[1,[]],"media":[1,[]],"awards":[1,[[0,{"award_name":[0,"Dezeen Awards Longlist 2020"],"year":[0,2020]}]]],"exhibitions":[1,[[0,{"exhibition_name":[0,"Histopolis - Extinction and Regeneration"],"date":[0,"2020.06.08 - 2020.09.27"],"city":[0,"Tokyo, Japan"],"place":[0,"Gyre Gallery"]}]]],"credits":[1,[[0,"Kye Shimizu (Synflux LLC)"],[0,"Kotaro Sano (Synflux LLC)"],[0,"Kazuya Kawasaki (Synflux LLC)"],[0,"Keisuke Nagami (HATRA)"]]],"thanks":[1,[[0,"Fukule inc."],[0,"Okido Weaver"],[0,"Yosuke Takahashi"],[0,"Takayo Iida"]]],"grants":[1,[]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\nimport img2 from './images/xenon/2.webp'\nimport img3 from './images/xenon/3.webp'\nimport img4 from './images/xenon/4.webp'\nimport img5 from './images/xenon/5.webp'\nimport img6 from './images/xenon/6.webp'\nimport img7 from './images/xenon/7.webp'\nimport img8 from './images/xenon/8.webp'\nimport img9 from './images/xenon/9.webp'\n\n\n \n XENON, an algorithm-made jacquard denim setup was also created through a fusion of artificial\n intelligence, fashion and craft. 
A Generative Adversarial Network (GAN) was trained on the countless\n images of animals floating in cyberspace, and the generated \"imaginary animal patterns\"\n were woven on a jacquard loom, which is said to be the origin of the computer. What\n alternative representations become possible when artificial intelligence\n and the jacquard loom are read together in the history of computing? Aiming to fuse the generativity of artificial\n intelligence with the creativity of fashion design, these works explore the concept of a \"Cyborg-like\n Body\" in which bits and atoms, nature and artificiality, are mixed together as a \"Xenomorphic\n Algorithmic Chimera\". The accelerated evolution of information technology and bioengineering is\n becoming awe-inspiring; it points to the possibility of the \"Artificial Sublime\" in the\n Anthropocene. This project was exhibited at Histopolis: Extinction and Regeneration at the GYRE\n Gallery in Tokyo.\n
\n\n\n\n\n \n \n \n \n \n\n{' '}\n\n\n\n{' '}\n\n\n\n \n
"],"filePath":[0,"src/content/projects/20200608-xenon.mdx"],"assetImports":[1,[[0,"./images/xenon/main.webp"]]],"digest":[0,"de3808736aa65980"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20200608-xenon"],"render":[0,null]}],[0,{"id":[0,"20200711-double-horizon.mdx"],"data":[0,{"title":[0,"Double Horizon"],"date":[0,"2020.07.11"],"tags":[1,[[0,"engineering"],[0,"architecture"],[0,"installation"]]],"client":[0,"Abacus, inc."],"youtube":[0,""],"role":[0,"Researcher / Engineer"],"hoverImage":[0,{"src":[0,"/_astro/2020_alternativekyoto.Bs23xSq0.webp"],"width":[0,1031],"height":[0,693],"format":[0,"webp"]}],"publications":[1,[]],"media":[1,[]],"awards":[1,[]],"exhibitions":[1,[[0,{"exhibition_name":[0,"Paper and Me Festival"],"date":[0,"2021.10.01 - 2021.10.17"],"city":[0,"Kochi, Japan"],"place":[0,"Ino Town Paper Museum"]}],[0,{"exhibition_name":[0,"Alternative Kyoto - Artspace of the Light"],"date":[0,"2020.10.16 - 2020.11.23"],"city":[0,"Kyoto, Japan"],"place":[0,"Amanohashidate"]}]]],"credits":[1,[[0,"Production Coordination & Engineering : Kye Shimizu (N sketch inc.)"],[0,"Direction & Visuals : Saito Tatsuya (Abacus)"],[0,"Production Assistance : Yuki Anezaki (N sketch inc.)"],[0,"Music : Shinichi Suda (Twoth)"],[0,"Visual Assistance: Marie Nagasue"]]],"thanks":[1,[[0,"Moto Ise Shrine STAFF"],[0,"Alternative Kyoto STAFF"]]],"grants":[1,[]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\nimport img1 from './images/double-horizon/exhibition_main.webp'\n\n\n \n The interactive art director and media artist, Tatsuya Saito presents artwork Double Horizon\n which intends to describe the original meaning of Amanohashidate. In Japanese language, both the\n sea and the sky are called “Ama.” Why are these two different elements expressed with the same\n sound? 
Saito’s installation work suggests a space, derived from the origin of\n Amanohashidate, that looks beyond the horizon. The work was exhibited on the approach to Motoise Kono\n Shrine, a sacred place that inherently occupies a liminal margin between two worlds. Saito used\n looming fog and projections to create the illusion of a three-dimensional sea surface,\n or of flowing clouds.\n
\n\n\n\n \n
\n\nこの作品《Double Horizon》で、インタラクティブアートディレクターの齋藤達也は、天橋立という場所が本来持つ意味を表現することを試みた。\n空のことを「天 (アマ)」といい、またその下に広がる海のことも「アマ」と呼ぶ。なぜ空と海が共通の音で言い表されるのか。水平線のかなたに見える、その境界を望む天橋立の成り立ちにも由来した、二つの世界に挟まれる境界を表現するようなこのインスタレーション作品を、齋藤は天橋立の聖地である元伊勢籠神社参道に展示した。海の水面あるいは、流れていく雲のようにも見える、プロジェクションと霧の装置によって立体的に表現された空間を、鑑賞者が身体的に体験できる空間を創り上げた。"],"filePath":[0,"src/content/projects/20200711-double-horizon.mdx"],"assetImports":[1,[[0,"./images/double-horizon/2020_alternativekyoto.webp"]]],"digest":[0,"b7f3029599f05310"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20200711-double-horizon"],"render":[0,null]}],[0,{"id":[0,"20200818-synthetic-feather.mdx"],"data":[0,{"title":[0,"Synthetic Feather"],"date":[0,"2020.08.18"],"tags":[1,[[0,"engineering"],[0,"installation"]]],"client":[0,""],"role":[0,"Researcher / Engineer"],"publications":[1,[]],"media":[1,[]],"awards":[1,[]],"exhibitions":[1,[[0,{"exhibition_name":[0,"Art & New Ecology"],"date":[0,"2022.05.28 - 2022.06.26"],"city":[0,"Tokyo, Japan"],"place":[0,"The University Art Museum"]}],[0,{"exhibition_name":[0,"Orinothopter"],"date":[0,"2020.10.06 - 2020.10.11"],"city":[0,"Tokyo, Japan"],"place":[0,"HATRA 2021SS Exhibition"]}],[0,{"exhibition_name":[0,"Study Skins"],"date":[0,"2020.03.20 - 2022.03.30"],"city":[0,"Tokyo, Japan"],"place":[0,"HATRA 2020AW Exhibition"]}]]],"credits":[1,[[0,"Kye Shimizu (Synflux LLC)"],[0,"Kotaro Sano (Synflux LLC)"],[0,"Kazuya Kawasaki (Synflux LLC)"],[0,"Keisuke Nagami (HATRA)"]]],"thanks":[1,[[0,"Photographer: TOKI"],[0,"Hair: Eiji Sato"],[0,"Make-up: Yuka Hirac"],[0,"Model: Adelina、Arthur, Aya Watanabe、Daichi Yamada"]]],"grants":[1,[]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\nimport TripleColumnPicture from '../../components/layout/TripleColumnPicture'\n\nimport hatra2020_1 from 
'./images/synthetic-feather/HATRA-2020-21aw-034.webp'\nimport hatra2020_2 from './images/synthetic-feather/HATRA-2020-21aw-007.webp'\nimport hatra2020_3 from './images/synthetic-feather/HATRA-2020-21aw-002.webp'\nimport hatra1 from './images/synthetic-feather/HATRA-2021ss-re-re-001.webp'\nimport hatra2 from './images/synthetic-feather/HATRA-2021ss-re-re-002.webp'\nimport hatra3 from './images/synthetic-feather/HATRA-2021ss-re-re-006.webp'\nimport hatra4 from './images/synthetic-feather/HATRA-2021ss-re-re-007.webp'\nimport hatra5 from './images/synthetic-feather/HATRA-2021ss-re-re-008.webp'\nimport hatra6 from './images/synthetic-feather/HATRA-2021ss-re-re-011.webp'\nimport hatra7 from './images/synthetic-feather/HATRA-2021ss-re-re-012.webp'\nimport hatra8 from './images/synthetic-feather/HATRA-2021ss-re-re-013.webp'\nimport hatra9 from './images/synthetic-feather/HATRA-2021ss-re-re-018.webp'\nimport hatra10 from './images/synthetic-feather/HATRA-2021ss-re-re-025.webp'\n\n\n \n \"Synthetic Feather\" is an algorithmically made knitwear piece created to accompany the HATRA\n 2020 Autumn Winter Collection \"STUDY SKINS\". From the myriad of images that exist in cyberspace,\n the GAN (Generative Adversarial Network) algorithm generates imaginary birds. The generated\n images are converted, in collaboration with a textile craftsman, into data that a knitting\n machine can output, and the patterns symbolizing this season's theme, \"Study Skins\", are\n preserved as sweaters. Following on from AUBIK, this work was the result of continued\n experimentation with new digital expressions that deviate from optimization, and was presented\n as a collaborative work by Synflux and HATRA.\n
\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n\n"],"filePath":[0,"src/content/projects/20200818-synthetic-feather.mdx"],"digest":[0,"8e5ac483683062b3"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20200818-synthetic-feather"],"render":[0,null]}],[0,{"id":[0,"20200818-urban-rhythmability.mdx"],"data":[0,{"title":[0,"Urban Rhythmability"],"date":[0,"2020.08.18"],"tags":[1,[[0,"engineering"],[0,"installation"]]],"client":[0,""],"youtube":[0,"https://www.youtube.com/watch?v=4diG4lfNYX4"],"role":[0,"Researcher / Engineer"],"publications":[1,[]],"media":[1,[]],"awards":[1,[]],"exhibitions":[1,[[0,{"exhibition_name":[0,"TOKYO culture research"],"date":[0,"2020.08.18 - 2020.08.31"],"city":[0,"Tokyo, Japan"],"place":[0,"Roppongi Mori Tower"]}]]],"credits":[1,[[0,"Kye Shimizu"],[0,"Kenta Tanaka"],[0,"Ryo Yumoto"],[0,"Kazuki Takahashi"]]],"thanks":[1,[]],"grants":[1,[]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\n\n\n \n \"Urban Rhythmability\" is a work that explores the structural rhythmicity of the city and plays\n music with the rhythm of the city. The project begins by filming every aspect of the Roppongi\n landscape from the TOKYO CITY VIEW on the 52nd floor of Roppongi Hills Mori Tower. Using a\n machine learning program, it detects the movement of objects in Roppongi (people, cars, trains,\n traffic lights, etc.) and generates rhythms from these objects, which are then used as notes on\n a musical score. The rhythms, the environmental sounds collected by field recording in Roppongi,\n and the musical sounds are then used as equal parts to compose a piece of music that can be\n played by the city of Roppongi.\n
\n"],"filePath":[0,"src/content/projects/20200818-urban-rhythmability.mdx"],"digest":[0,"bbcec7ad9c7b1772"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20200818-urban-rhythmability"],"render":[0,null]}]]],"2021":[1,[[0,{"id":[0,"20210512-morphing-identity.mdx"],"data":[0,{"title":[0,"Morphing Identity"],"subtitle":[0,"Exploring the facial boundaries between self and other"],"date":[0,"2021.05.12"],"tags":[1,[[0,"engineering"],[0,"research"],[0,"installation"],[0,"academic"]]],"client":[0,""],"youtube":[0,"https://www.youtube.com/watch?v=ahKxwaJ3k_U"],"role":[0,"Researcher / Engineer"],"hoverImage":[0,{"src":[0,"/_astro/main.C-sPglVa.webp"],"width":[0,1024],"height":[0,576],"format":[0,"webp"]}],"publications":[1,[[0,{"conference_name":[0,"ACM CHI"],"year":[0,2023],"reference":[0,"Kye Shimizu, Santa Naruse, Jun Nishida, and Shunichi Kasahara. 2023. Morphing Identity: Exploring Self-Other Identity Continuum through Interpersonal Facial Morphing Experience. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), April 23– 28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3544548.3580853"],"url":[0,"https://doi.org/10.1145/3544548.3580853"],"pdf_path":[0,"https://files.kyeshimizu.com/chi2023_morphingidentity.pdf"]}],[0,{"conference_name":[0,"ACM SIGGRAPH ETECH"],"year":[0,2023],"reference":[0,"A Demonstration of Morphing Identity: Exploring Self-Other Identity Continuum through Interpersonal Facial Morphing Kye Shimizu, Santa Naruse, Jun Nishida, and Shunichi Kasahara. 2023. ACM SIGGRAPH 2023 Emerging Technologies. 
https://dl.acm.org/doi/10.1145/3588037.3595394"],"url":[0,"https://dl.acm.org/doi/10.1145/3588037.3595394"],"pdf_path":[0,"https://files.kyeshimizu.com/siggraphetech2023-morphing-identity.pdf"]}]]],"media":[1,[[0,{"media_name":[0,"Media Ambition Tokyo"],"reference":[0,"https://mediaambitiontokyo.jp/movie/268ghkfu_8r/"],"medium":[0,"Online"],"year":[0,2021]}],[0,{"media_name":[0,"ACM Siggraph Blog"],"reference":[0,"https://blog.siggraph.org/2024/01/unveiling-the-illusion-self-other-identity-with-facial-morphing.html/"],"medium":[0,"Online"],"year":[0,2023]}]]],"awards":[1,[[0,{"award_name":[0,"Innovative Technologies 2021 Prize"],"year":[0,2021]}]]],"exhibitions":[1,[[0,{"exhibition_name":[0,"ACM SIGGRAPH Emerging Technologies"],"date":[0,"2023.08.06 - 08.10"],"city":[0,"Los Angeles, California"],"place":[0,"Los Angeles Convention Center"]}],[0,{"exhibition_name":[0,"ACM CHI Interactivity Session"],"date":[0,"2023.04.24 - 04.25"],"city":[0,"Hamburg, Germany"],"place":[0,"Congress Center Hamburg"]}],[0,{"exhibition_name":[0,"YCAM InterLab Camp Vol.4 Digital Embodied Co-Creations"],"date":[0,"2022.12.16 - 12.18"],"city":[0,"Yamaguchi, Japan"],"place":[0,"YCAM"]}],[0,{"exhibition_name":[0,"You and Robots, What is it to be Human?"],"date":[0,"2022.3.18 - 8.31"],"city":[0,"Tokyo, Japan"],"place":[0,"Miraikan"]}],[0,{"exhibition_name":[0,"Digital Contents EXPO 2021”"],"date":[0,"2021.11.17 - 2021.11.19"],"city":[0,"Tokyo, Japan"],"place":[0,"Makuhari Messe"]}],[0,{"exhibition_name":[0,"Research for the Future of Humanity"],"date":[0,"2021.07 - 2021.08"],"city":[0,"Tokyo, Japan"],"place":[0,"Sony Park Exhibition"]}],[0,{"exhibition_name":[0,"Media Ambition Tokyo 2021"],"date":[0,"2021.05.12 - 2021.06.08"],"city":[0,"Tokyo, Japan"],"place":[0,"Mori Art Building"]}]]],"credits":[1,[[0,"Kye Shimizu"],[0,"Santa Naruse (Sony CSL)"],[0,"Jun Nishida (Sony CSL / University of Maryland)"],[0,"Shunichi Kasahara (Sony CSL)"]]],"thanks":[1,[[0,"Naoto Ienaga"],[0,"Maki 
Sugimoto"],[0,"Taku Tanichi (Sony CSL)"],[0,"Kazuma Takada (Sony CSL / OIST)"]]],"grants":[1,[[0,{"grant_name":[0,"JST Moonshot R&D Program JPMJMS2013"],"year":[0,2021],"reference":[0,"https://www.jst.go.jp/moonshot/en/program/goal1/index.html"]}]]],"talks":[1,[[0,{"talk_name":[0,"CHI 2023 Interactivity"],"reference":[0,"https://mediaambitiontokyo.jp/en/movie/st8vfuf8o/"],"place":[0,"Congress Center Hamburg"],"city":[0,"Hamburg, Germany"],"year":[0,2022],"event_name":[0,"ACM CHI 2023"]}],[0,{"talk_name":[0,"Behind the Scenes with Touchdesigner"],"reference":[0,"https://mediaambitiontokyo.jp/en/movie/st8vfuf8o/"],"place":[0,"Mori Art Building"],"city":[0,"Tokyo, Japan"],"year":[0,2021],"event_name":[0,"Media Ambition Tokyo 2019"]}]]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport VideoPlayer from '../../components/VideoPlayer'\nimport TextSection from '../../components/blog/TextSection'\nimport DoubleColumnPicture from '../../components/layout/DoubleColumnPicture'\nimport { Spacer } from '../../components/layout/Spacer'\nimport ycam1 from './images/morphing-identity/2022-ycam1.webp'\nimport ycam2 from './images/morphing-identity/2022-ycam2.webp'\nimport chi2023demo from './images/morphing-identity/2023chi-interactivity.webp'\nimport chi2023 from './images/morphing-identity/2023chi-presentation.webp'\nimport ginza1 from './images/morphing-identity/SonyPark_2021_0723-1.webp'\nimport ginza2 from './images/morphing-identity/SonyPark_2021_0723-2.webp'\nimport ginza3 from './images/morphing-identity/SonyPark_2021_0723-3.webp'\nimport ginza4 from './images/morphing-identity/SonyPark_2021_0723-4.webp'\n\n\n \n We explored continuous changes in self-other identity by designing an interpersonal facial\n morphing experience where the facial images of two users are blended and then swapped over time.\n Both users’ facial images are displayed side by side, with each user controlling their own\n morphing facial 
images, allowing us to create and investigate a multifaceted interpersonal\n experience. To explore this with diverse social relationships, we conducted qualitative and\n quantitative investigations through public exhibitions. We found that there is a window of\n self-identification as well as a variety of interpersonal experiences in the facial morphing\n process. From these insights, we synthesized a Self-Other Continuum represented by a sense of\n agency and facial identity. This continuum has implications in terms of the social and\n subjective aspects of interpersonal communication, which enables further scenario design and\n could complement findings from research on interactive devices for remote communication.\n
\n\n\n\n\n \n Where is the boundary of self? How do we feel and interact with each other? We explored\n continuous changes in self-other identity by designing an interpersonal facial morphing\n experience where the facial images of two users are blended and then swapped over time.\n
\n\n\n\n\n\n\n\n\n
\n\n\n \n With the Morphing Identity system, two users' facial images are displayed side by side, with each\n user controlling their own morphing facial images, allowing us to create and investigate a\n multifaceted interpersonal experience.\n\n To explore this with diverse social relationships, we conducted qualitative and quantitative investigations through public exhibitions, beyond the lab environment.\n\n
\n\n\n\n\n\n \n Our findings suggest a window of self-identification and a variety of interpersonal experiences in the facial morphing process.\n \n We synthesized a Self-Other Continuum, with implications for interpersonal communication and interactive devices for remote communication.\n\n
\n\n\n\n\n\n\n\n\n"],"filePath":[0,"src/content/projects/20210512-morphing-identity.mdx"],"assetImports":[1,[[0,"./images/morphing-identity/main.webp"]]],"digest":[0,"213c1ead92d04880"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20210512-morphing-identity"],"render":[0,null]}],[0,{"id":[0,"20210702-who-else-if-not-you.mdx"],"data":[0,{"title":[0,"Who else if not you"],"date":[0,"2021.07.02"],"tags":[1,[[0,"engineering"],[0,"installation"]]],"client":[0,"Miyuki Tanaka"],"youtube":[0,""],"role":[0,"Researcher / Engineer"],"publications":[1,[]],"media":[1,[[0,{"media_name":[0,"Pen Online"],"reference":[0,"https://www.pen-online.jp/article/008993.html"],"medium":[0,"Online"],"year":[0,2021]}],[0,{"media_name":[0,"Tokyo Art Beat"],"reference":[0,"https://www.tokyoartbeat.com/articles/-/21_21_rules"],"medium":[0,"Online"],"year":[0,2021]}],[0,{"media_name":[0,"Numero"],"reference":[0,"https://numero.jp/news-20210708-rules/"],"medium":[0,"Online"],"year":[0,2021]}],[0,{"media_name":[0,"WWD Japan"],"reference":[0,"https://www.wwdjapan.com/articles/999674"],"medium":[0,"Online"],"year":[0,2020]}],[0,{"media_name":[0,"Fashionsnap"],"reference":[0,"https://www.fashionsnap.com/article/ai-synflux-hatra/"],"medium":[0,"Online"],"year":[0,2020]}]]],"awards":[1,[[0,{"award_name":[0,"Japan Media Arts Festival 2022 25th Art Division Jury Selection"],"year":[0,2022]}]]],"exhibitions":[1,[[0,{"exhibition_name":[0,"Rules?"],"date":[0,"2021.07.02 - 2021.11.28"],"city":[0,"Tokyo, Japan"],"place":[0,"2121_Design Sight"]}]]],"credits":[1,[[0,"Technical Artist: Kye Shimizu (N sketch inc.)"],[0,"System development: Hidemaro Fujinami (N sketch inc.)"],[0,"Software Engineer : Ryo Yumoto (N sketch Inc.)"],[0,"Motion Graphics: Ken Muroi"],[0,"Interaction Design: Masato Sekine (N sketch inc.)"],[0,"Text / Sound: Daniel Wetzel (Rimini Protokoll) "],[0,"Space Design : Kengo Kobayashi (NoRA), Haruka Uemura"],[0,"Technical Director: Shunya Hagiwara"],[0,"Producer / Text: Miyuki 
Tanaka"],[0,"Statistical Adviser: Kazuyuki Nakamura (Meiji University)"],[0,"Graphic Design: UMA / design farm"],[0,"Translation: Kyle Yamada"],[0,"Production Manager: Fumiko Toda"]]],"thanks":[1,[[0,"Technical Adviser / Equipment Cooperation: Luftzug"],[0,"2121 Design Sight Staff"],[0,"Sponsor: precognition co.,LTD. / THEATRE for ALL"],[0,"K2LAB"],[0,"Higure inc."],[0,"Photo: Masaya Yoshimura"]]],"grants":[1,[]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\nimport DoubleColumnPicture from '../../components/layout/DoubleColumnPicture'\nimport OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport VideoPlayer from '../../components/VideoPlayer'\nimport rule1 from './images/who-else-if-not-you/rule_exhibition-1.webp'\nimport rule2 from './images/who-else-if-not-you/rule_exhibition-2.webp'\nimport rule3 from './images/who-else-if-not-you/rule_exhibition-3.webp'\nimport rule4 from './images/who-else-if-not-you/rule_exhibition-4.webp'\n\n\n \nThe origin of this work, \"100% Tokyo\" (2013), was a live survey theater performed by 100 citizens selected to reflect the demographics of Tokyo at the time.\n\nInspired by this, this work is an experiential exhibition in which the viewer becomes a sample to construct his or her own statistics. Behind the questions that divide the majority and minority groups, a pseudo-society is revealed.\n\nWhen answers outside the majority appear before your eyes, embodied by your own presence, you become aware that a 100% yes or no is almost impossible to achieve. In the present moment, when the social environment has been transformed by the novel coronavirus and various forms of structural discrimination have begun to surface, we look down upon the boundaries that separate us from others, and upon our own position as a relative existence.\n\n
\n\n\n\n\n\n\n \n Inspired by this, this work is an experiential exhibition in which the viewer becomes a sample to construct his or her own statistics. Behind the questions that divide the majority and minority groups, a pseudo-society is revealed.\n
\n \nWhen answers outside the majority appear before your eyes, embodied by your own presence, you become aware that a 100% yes or no is almost impossible to achieve. In the present moment, when the social environment has been transformed by the novel coronavirus and various forms of structural discrimination have begun to surface, we look down upon the boundaries that separate us from others, and upon our own position as a relative existence.\n\n
\n\n\n\n\n\n\n\n \n \n \n \n"],"filePath":[0,"src/content/projects/20210702-who-else-if-not-you.mdx"],"digest":[0,"71c02f30a5000384"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20210702-who-else-if-not-you"],"render":[0,null]}]]],"2022":[1,[[0,{"id":[0,"20220922-human-latent-metrics.mdx"],"data":[0,{"title":[0,"Human Latent Metrics"],"subtitle":[0,"Perceptual and Cognitive Response Correlates to Distance in GAN Latent Space for Facial Images"],"date":[0,"2022.09.22"],"tags":[1,[[0,"research"],[0,"academic"]]],"client":[0,""],"youtube":[0,""],"role":[0,"Researcher / Engineer"],"hoverImage":[0,{"src":[0,"/_astro/human-main.6Nt4G1Dw.webp"],"width":[0,2798],"height":[0,1147],"format":[0,"webp"]}],"publications":[1,[[0,{"conference_name":[0,"ACM SAP"],"year":[0,2022],"reference":[0,"Shimizu, Kye, Naoto Ienaga, Kazuma Takada, Maki Sugimoto, and Shunichi Kasahara. 2022. “Human Latent Metrics: Perceptual and Cognitive Response Correlates to Distance in GAN Latent Space for Facial Images.” In ACM Symposium on Applied Perception 2022, 1-10. SAP ’22 3. New York, NY, USA: Association for Computing Machinery. 
https://dl.acm.org/doi/10.1145/3548814.3551460"],"url":[0,"https://dl.acm.org/doi/10.1145/3548814.3551460"],"pdf_path":[0,"https://files.kyeshimizu.com/sap2022_humanlatentmetric.pdf"]}]]],"media":[1,[]],"awards":[1,[]],"exhibitions":[1,[]],"credits":[1,[[0,"Kye Shimizu (Sony CSL)"],[0,"Kazuma Takada (Sony CSL / OIST)"],[0,"Shunichi Kasahara (Sony CSL / OIST)"],[0,"Naoto Ienaga (Keio, Sugimoto Lab)"],[0,"Maki Sugimoto (Keio, Sugimoto Lab)"]]],"thanks":[1,[]],"grants":[1,[[0,{"grant_name":[0,"JST Moonshot R&D Program JPMJMS2013"],"year":[0,2021],"reference":[0,"https://www.jst.go.jp/moonshot/en/program/goal1/index.html"]}]]],"talks":[1,[[0,{"talk_name":[0,"Human Latent Metrics: Perceptual and Cognitive Response Correlates to Distance in GAN Latent Space for Facial Images."],"reference":[0,"https://sap.acm.org/2022/index.html"],"place":[0,"Online"],"city":[0,"Tokyo, Japan"],"year":[0,2022],"event_name":[0,"ACM SAP 2022"]}]]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\n\n\n\nGenerative adversarial networks (GANs) generate high-dimensional vector spaces (latent spaces) that can interchangeably represent vectors as images. Advancements have extended their ability to computationally generate images indistinguishable from real images such as faces, and more importantly, to manipulate images using their inherent vector values in the latent space. \n
\n\nThis interchangeability of latent vectors has the potential to calculate not only the distance in the latent space, but also the human perceptual and cognitive distance toward images, that is, how humans perceive and recognize images. However, it is still unclear how the distance in the latent space correlates with human perception and cognition. Our studies investigated the relationship between latent vectors and human perception or cognition through psycho-visual experiments that manipulate the latent vectors of face images. \n
\n\n\nIn the perception study, a change perception task was used to examine whether participants could perceive visual changes in face images before and after moving an arbitrary distance in the latent space. \n
\n\nIn the cognition study, a face recognition task was utilized to examine whether participants could recognize a face as the same, even after moving an arbitrary distance in the latent space.\n
\n\n Our experiments show that the distance between face images in the latent space correlates with human perception and cognition for visual changes in face imagery, which can be modeled with a logistic function. By utilizing our methodology, it will be possible to interchangeably convert between the distance in the latent space and the metric of human perception and cognition, potentially leading to image processing that better reflects human perception and cognition.\n
\n"],"filePath":[0,"src/content/projects/20220922-human-latent-metrics.mdx"],"assetImports":[1,[[0,"./images/human-latent-metrics/human-main.webp"]]],"digest":[0,"b6db361c1537ed50"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20220922-human-latent-metrics"],"render":[0,null]}]]],"2023":[1,[[0,{"id":[0,"20230102-deviation-game.mdx"],"data":[0,{"title":[0,"Deviation Game"],"subtitle":[0,"A game to explore the co-evolution of AI and humans. "],"date":[0,"2023.01.02"],"tags":[1,[[0,"research"]]],"client":[0,""],"youtube":[0,"https://vimeo.com/808615172"],"role":[0,"Researcher / Software Engineering"],"hoverImage":[0,{"src":[0,"/_astro/deviation_game_mainpic.BNzxlLUT.webp"],"width":[0,3505],"height":[0,2341],"format":[0,"webp"]}],"publications":[1,[]],"media":[1,[[0,{"title":[0,"絵を描いてAIを騙すゲーム、ぜんぜん勝てませんでした"],"media_name":[0,"Gizmodo"],"reference":[0,"https://www.gizmodo.jp/2023/04/deviation-game.html"],"medium":[0,"Online"],"year":[0,2023]}],[0,{"title":[0,"オープンに先駆けて、CCBTで活動する公募型アーティスト・フェロー3組が決定!"],"media_name":[0,"PR Times"],"reference":[0,"https://prtimes.jp/main/html/rd/p/000000391.000038211.html"],"medium":[0,"Online"],"year":[0,2022]}],[0,{"title":[0," 「シビック・クリエイティブ・ベース東京[CCBT]」渋谷にオープン あらゆる人々が創造性を発揮するための活動拠点を目指す "],"media_name":[0,"Excite"],"reference":[0,"https://woman.excite.co.jp/article/lifestyle/rid_LP_P_PIA_ecb445cc_73f4_48c4_9e02_ef9431ad4144/pid_2.html"],"medium":[0,"Online"],"year":[0,2023]}],[0,{"title":[0,"渋谷にあるよ、アート×テックの最前線。CCBT「アート・インキュベーション・プログラム」をレポ"],"media_name":[0,"Cinra.net"],"reference":[0,"https://www.cinra.net/article/202304-CCBT_skksk"],"medium":[0,"Online"],"year":[0,2023]}],[0,{"title":[0,"AIは人間を凌駕するという意見に対抗できる可能性を体験できる「デヴィエーションゲーム展ver 1.0 ー模倣から逸脱へ」ワークショップ体験レポート"],"media_name":[0,"ASCII.jp"],"reference":[0,"https://ascii.jp/elem/000/004/129/4129041/"],"medium":[0,"Online"],"year":[0,2023]}]]],"awards":[1,[]],"exhibitions":[1,[[0,{"exhibition_name":[0,"PARCO POND 2023"],"date":[0,"2023.11.17 - 12.4"],"city":[0,"Tokyo, 
Japan"],"place":[0,"Parco Shibuya"]}],[0,{"exhibition_name":[0,"Now Play This 2023"],"date":[0,"2023.04.01 - 04.09"],"city":[0,"London, UK"],"place":[0,"V&A Museum"]}],[0,{"exhibition_name":[0,"Cinekid Festival 2023"],"date":[0,"2023.10.14 - 10.29"],"city":[0,"Amsterdam, Netherlands"],"place":[0,"Cinekid Festival"]}],[0,{"exhibition_name":[0,"Ars Electronica Festival 2023"],"date":[0,"2023.09.06 - 09.10"],"city":[0,"Linz, Austria"],"place":[0,"Post City"]}],[0,{"exhibition_name":[0,"CCBT Fellowship 2023"],"date":[0,"2023.03.04 - 03.26"],"city":[0,"Tokyo, Japan"],"place":[0,"CCBT"]}]]],"credits":[1,[[0,"Software Engineering: Kye Shimizu (N sketch inc.), Daiki Hashimoto (N sketch inc.), Hidemaro Fujinami (N sketch inc.)"],[0,"Concept: Tomo Kihara"],[0,"Design: Studio Playfool"],[0,"Sound: Matteo Bandi"],[0,"Production Support: Masato Sekine, Studio Onder de Linde"],[0,"Graphic Design: Chika Yamaguchi, Taeko Isu"],[0,"Mentors: Takayuki Ito (Yamaguchi Center for Arts and Media), Asami Hosokawa (Sapporo International Art Festival)"],[0,"Funding & Support: Civic Creative Base Tokyo (CCBT), Creative Industries Fund NL"]]],"thanks":[1,[]],"grants":[1,[[0,{"grant_name":[0,"Civic Creative Base Tokyo (CCBT) Artist Incubation"],"year":[0,2023],"reference":[0,"https://ccbt.rekibun.or.jp/en/events/deviationgame_ver1"]}],[0,{"grant_name":[0,"UK Games Fund"],"year":[0,2021],"reference":[0,"https://ukgamesfund.com/funded-project/deviation-game/"]}]]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport { Spacer } from '../../components/layout/Spacer'\nimport TextSection from '../../components/blog/TextSection'\nimport img1 from './images/deviation-game/exhibition_main.webp'\nimport VideoPlayer from '../../components/VideoPlayer'\n\n\n\n\nThe rapid advancement in the field of artificial intelligence (AI), has resulted in AI being able to imitate many of the intellectual tasks that were once exclusive to 
humans, such as generating images and writing. This development has sparked a great deal of excitement but also considerable concern among artists who fear that their jobs may become obsolete.\n\n
\n\n\n\n\nHowever, just as the advent of photography in the early 19th century freed painters from realism and paved the way for the Impressionist movement represented by Monet and Van Gogh, history has shown us that whenever a new technology emerges that can replace human acts of expression, people have found ways to deviate and create unique forms of expression that cannot be replicated by that technology. This pattern of imitation and deviation has been a driving force in the evolution of both technology and expression.\n\n
\n\n\n{/* */}\n\n\n\nBuilding on the Imitation Game (1950) by Alan Turing, often considered the father of modern computer science, we propose a new type of game, titled the Deviation Game. Through this project, our aim is to utilise AI not to imitate past expressions, but to identify what has already been expressed, allowing one to deviate from it.\n
\n\n\n"],"filePath":[0,"src/content/projects/20230102-deviation-game.mdx"],"assetImports":[1,[[0,"./images/deviation-game/deviation_game_mainpic.webp"]]],"digest":[0,"3056be0f6aa90cf9"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20230102-deviation-game"],"render":[0,null]}],[0,{"id":[0,"20230102-nanco.mdx"],"data":[0,{"title":[0,"Nanco"],"subtitle":[0,"Simple Inventory Management App"],"date":[0,"2023.01.02"],"tags":[1,[[0,"engineering"],[0,"installation"],[0,"app"]]],"client":[0,""],"youtube":[0,""],"role":[0,"Researcher / Engineer"],"hoverImage":[0,{"src":[0,"/_astro/main.BPw4Id05.webp"],"width":[0,1920],"height":[0,1080],"format":[0,"webp"]}],"publications":[1,[]],"media":[1,[]],"awards":[1,[[0,{"award_name":[0,"Good Design Award"],"year":[0,2023]}]]],"exhibitions":[1,[[0,{"exhibition_name":[0,"Good Design Award 2023 Exhibition"],"date":[0,"2023.10.25 - 2023.10.29"],"city":[0,"Tokyo, Japan"],"place":[0,"Tokyo Midtown"]}],[0,{"exhibition_name":[0,"96th Gift Show FW 2023"],"date":[0,"2023.09.06 - 2023.09.08"],"city":[0,"Tokyo, Japan"],"place":[0,"Tokyo Big Sight"]}],[0,{"exhibition_name":[0,"Japan IT Week 2023"],"date":[0,"2023.04.05 - 2023.04.07"],"city":[0,"Tokyo, Japan"],"place":[0,"Tokyo Big Sight"]}],[0,{"exhibition_name":[0,"95th Gift Show SS 2023"],"date":[0,"2023.02.15 - 2023.02.17"],"city":[0,"Tokyo, Japan"],"place":[0,"Tokyo Big Sight"]}]]],"credits":[1,[[0,"Kye Shimizu (N sketch inc.)"],[0,"Yuki Anezaki (N sketch inc.)"],[0,"Hidemaro Fujinami (N sketch inc.)"],[0,"Daiki Hashimoto (N sketch inc.)"],[0,"Masato Sekine (N sketch inc.)"],[0,"Fukaishi Keisuke (N sketch inc.)"],[0,"Yukie Yoshizumi (N sketch inc.)"]]],"thanks":[1,[]],"grants":[1,[[0,{"grant_name":[0,"Tokyo Metropolitan Industrial Technology Research Institute"],"year":[0,2022],"reference":[0,"https://iot.iri-tokyo.jp/result/nsketch.html"]}]]],"talks":[1,[]],"protected":[0,false]}],"body":[0,"import OptimizedPicture from '../../components/OptimizedPicture.astro'\nimport 
TextSection from '../../components/blog/TextSection'\nimport { Spacer } from '../../components/layout/Spacer'\nimport VideoPlayer from '../../components/VideoPlayer'\n\n\n \n nanco is a simple and intuitive inventory management application aimed at small businesses. You\n can freely customize the registration and editing of the necessary product data, and the latest\n information and history are saved to the cloud, accessible anytime, anywhere. By reading QR codes and\n barcodes, products can be registered quickly.\n
\n \n \n\n\n\n\n \nYour smartphone is a\nbarcode reader.\n\nNo special equipment is needed. Your smartphone turns into a high-speed reader. You can proceed with your daily operations and inventory work much more efficiently.\n\n
\n\n\n\n\n\n\n \nIntuitive UX design for handling product data in folder format.\n\nSupport your business. Group items in folders. Daily work becomes a little more enjoyable.\n\n
\n\n\n\n\n\n \nReady to use\nInventory Management App\n\nEasily register items you want to manage from your smartphone or PC!\nYou can easily make changes to your inventory and see the latest inventory count and history at a glance.\nReal-time synchronization to the cloud makes it ideal for team management.\n\n
\n\n\n"],"filePath":[0,"src/content/projects/20230102-nanco.mdx"],"assetImports":[1,[[0,"./images/nanco/main.webp"]]],"digest":[0,"9811d0b65efbc352"],"deferredRender":[0,true],"collection":[0,"projects"],"slug":[0,"20230102-nanco"],"render":[0,null]}]]]}],"tags":[0]}" renderer-url="/_astro/client.BL5RHyhw.js" ssr="" uid="1DmJ5V">