PeopleSpace AI Lab (PSAIL)

I am developing an Artificial Intelligence method for human-electronics interaction and measuring its effect on people, planet, and profit.  

Human-electronics interaction is a phenomenon that describes how human beings interact with electronic devices to meet their needs, and how humans depend on and adapt to this interaction.  We collect data from sensors and make sense of them.  Data Science techniques are used in collaboration with Group IoT (Robotics, Sensors and Advanced Manufacturing).  The interaction generates data: click, no-click, view, no-view, active, passive, start, stop, balk, take photo, talk, listen...  These data can be analyzed in three broad categories:  

Data in everyone's hands can mean a digitally connected, democratic world.  Data in the right hands can mean innovation.  Data can increase revenue, reduce waste, and correct fraud.  

The success of location-mobility companies is well known.  Google's search engine, Apple's iPhone, Airbnb, Skype, Uber, Facebook, WeWork, etc. address the needs of nomads who want to move around the world in the most efficient manner.  

Time-mobility companies are up and coming too.  Data can assist us to move around in time.  The financial industry has been using data with great success since its inception.  Options and futures were created to hedge against future uncertainty, and accurate forecasting using statistical analysis of past data is a fundamental norm; knowing the future by seeing the past yields unimaginable profit.  

Data used in personal life is just as useful.  Age-friendly technologies can preserve our past and brighten up our future.  Not exactly a time machine yet, but it will be close.  Time-mobility companies will be the next unicorns.

Memory enhancement exercises (games) and an Alzheimer's disease patient chatbot (NLP) can preserve patients' dignity, enhance caregivers' quality of life, and reduce society's burden.  A judicious mix of biotech, healthcare, and life science technology can give us a glimpse into the future so we can make healthier choices.  

The quality of life will be greatly enhanced when we can say, "I can remember my past clearly, and see my future confidently."  The problems deriving from future uncertainty, such as fear, anxiety, social inequality, financial troubles, and healthcare costs, will be eliminated. 

I kissed a malware 

I rescued the firewall log and DNSBL activity log from an old Linux firewall box that died after over 1,000 days of faithful service at PeopleSpace. I was curious to know how much malware and ransomware it caught (and how often it saved my skin) before it died.  It was sad to see the old Linux box go, but frankly, I was surprised it lasted as long as it had.  I did not revive it.  I purchased a proper Protectli firewall micro appliance with 4x Gigabit Intel LAN ports, 4GB RAM, and an 8GB mSATA drive, and slammed in pfSense.  I am happy with this decision.

DNS is the most underappreciated human-electronics interface in the internet age. 

The DNS log of a compromised machine is a treasure trove of data when it comes to identifying which other devices may also be infected. The log contains Domain Name System (DNS) queries and data on the clients that requested them.  The log can show the connections a suspected device has made in the past and may attempt to make in the future.  It may yield a maverick method of catching and containing the spread of malware.

This research started with repurposing an old Pentium II with m0n0wall.  m0n0wall is a project aimed at creating a complete, embedded firewall software package that, when used together with an embedded PC, provides all the important features of commercial firewall boxes (including ease of use) at a fraction of the price (free software).  m0n0wall is based on a bare-bones version of FreeBSD, along with a web server, PHP, and a few other utilities. The entire system configuration is stored in one single XML text file to keep things transparent.  

Since then I have upgraded from m0n0wall to pfSense.  pfSense is commonly deployed as a perimeter firewall, router, wireless access point, DHCP server, DNS server, and VPN endpoint. pfSense supports the installation of third-party packages like Snort or Squid through its Package Manager.  The pfSense project started in 2004 as a fork of the m0n0wall project by Chris Buechler and Scott Ullrich, and the first release was in 2006. The name was derived from the fact that the software uses the packet-filtering tool, PF. 

How the log is created

I wanted to take advantage of DNSBL feeds to filter out adware, malware, and ransomware domains.  I also wanted to use OpenDNS; I felt like I was getting double protection. I ran namebench, which provides personalized DNS server recommendations based on browsing history, to make sure I was reasonably close to OpenDNS servers.  

The problem with running a DNSBL feed and OpenDNS together is:

A DNSBL feed works only with the DNS resolver

OpenDNS works with the DNS forwarder

You cannot run both the DNS resolver and the DNS forwarder

The following steps show how to run a DNSBL feed and OpenDNS together by turning the DNS forwarder off and the DNS resolver on, with query forwarding enabled.

Services > DNS forwarder

Services > DNS resolver

Enable DNS Resolver: on

DNSSEC: off

DNS Query Forwarding: on


The custom options should be auto-populated when you set up DNSBL 

Services > Dynamic DNS > Dynamic DNS Clients > Edit

Get an account/password at

Firewall > pfBlockerNG > General

System > General Setup

The client's LAN interface uses pfSense as the DNS resolver. When the client tries to visit a website, the DNS request will be intercepted by the pfSense firewall.  Its package, pfBlockerNG, will compare the request against lists of bad domains. If a bad domain is called up, the client will get a 1 x 1 pixel gif from an internal webserver instead.  If all is well, the client will be connected to the real site. The blocked site will look like this.  Just black. 
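The decision made for each query can be sketched like this (the sinkhole IP, blocklist entries, and the upstream stand-in are illustrative assumptions, not pfBlockerNG's actual code):

```python
# Minimal sketch of the DNSBL decision described above: blocked domains
# resolve to an internal "sinkhole" webserver that serves a 1x1 black gif;
# everything else resolves normally.

SINKHOLE_IP = ""           # hypothetical internal webserver
BLOCKLIST = {"ads.example", "malware.example"}

def resolve(domain, upstream):
    if domain in BLOCKLIST:
        return SINKHOLE_IP               # client gets the 1x1 gif
    return upstream(domain)              # normal resolution

def fake_upstream(domain):
    return ""               # stand-in for a real resolver

print(resolve("malware.example", fake_upstream))  # sinkhole answer
print(resolve("news.example", fake_upstream))     # real answer
```

From the client's point of view nothing fails loudly; the bad site simply renders as a black pixel.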

Enable unbound DNS resolver.  Services > DNS resolver

If it does not work, turn off IPv6.

Stop malware, adware, malvertisements and ransomware

pfBlockerNG is a pfSense package maintained by @BBcan177.  pfBlockerNG adds all kinds of security, such as blocking known bad IP addresses with blocklists.  I use pfBlockerNG every day. 

Firewall > pfBlockerNG > DNSBL

If you want to whitelist a wildcard, do it like this:   Once whitelisted, you must UPDATE to reassemble the feed list.  This step is where I got tripped up for an hour.  "I configured everything, but why isn't it running?"

Test.  Try to reach a blacklisted domain's ransomware website.

It goes to an internal webserver showing a 1x1 black gif.

Shinhan Bank installed AhnLab Safe Transaction on my laptop, and it caught the malware site too. 

American Statistical Association DataFest 

I am helping ASA DataFest as a mentor.  The American Statistical Association (ASA) DataFest is a celebration of data in which teams of undergraduates work around the clock to find and share meaning in a large, rich, and complex data set.

A key feature of ASA DataFest is that it brings together the data science community. Undergraduate students do the work, but they are assisted by roving consultants who are graduate students, faculty, and industry professionals. Many professionals find ASA DataFest to be a great recruiting opportunity: they get to watch talented undergraduate students work under pressure in a team and examine their thinking processes.

DataFest was founded at UCLA in 2011, when 30 students gathered for 48 intense hours to analyze five years of arrest records provided by Lt. Thomas Zak of the Los Angeles Police Department. ASA DataFest is now sponsored by the American Statistical Association and hosted by several of the most prestigious colleges and universities in the country. More than 2000 students take part from schools such as UCLA, Pomona College, Cal Poly San Luis Obispo, UC Riverside, University of Southern California, Purdue University, Duke, the University of North Carolina, North Carolina State, Emory, Princeton, Dartmouth, Smith, Hampshire, Amherst, Mt. Holyoke, and the University of Massachusetts.

Data for the kickoff, DataFest 2011, was provided by the Los Angeles Police Department (LAPD) and included data records for every arrest in Los Angeles from 2005–2010. That’s almost 10 million police reports. Reports filed by the arresting officer include details about the suspect and the nature of the alleged crime. The LAPD had geo-tagged the reports wherever possible to indicate the location of the arrest. To bring realism, Lieutenant Thomas Zak, officer-in-charge of the LAPD Strategic Crime Analysis Section, presented the data to students and challenged teams to suggest policy changes that could improve public safety.

Past events have featured data from companies such as:





After two days of intense data wrangling, analysis, and presentation design, each team is allowed a few minutes and no more than two slides to impress a panel of judges. Prizes are given for Best in Show, Best Visualization, and Best Use of External Data.

How to Tunnel VNC through SSH with PuTTY


     WAN > virtual server/port forwarding > 

    Service name: ssh 

    Source IP: blank

    Port range:  922  (not opening 22 on purpose)

    Local IP: server IP


SERVER SETUP (Linux Mint or Ubuntu 18.04 tested)

Install x11vnc and openssh-server

    $ sudo apt-get install x11vnc openssh-server


   $ sudo nano

        x11vnc -safer -localhost -nopw -once -display :0

        Change its permission to executable (chmod +x)


Run PuTTY  

    Connection > SSH > remote command


    Connection > SSH > Tunnels.

5902 for Source port

localhost:5900 for Destination



PuTTY Session 

Host Name: MYIP OR URL

Port 922 (22 is default; do not use 22) 

        Save profile name  'abeto'  


Once the SSH tunnel is open, leave PuTTY running

In case PuTTY hangs:  putty -cleanup

run TightVNC Viewer

localhost::5902 or localhost:2 works too

or you can run it from DOS or a batch file

        putty.exe -load abeto -ssh username@serveripaddress -pw password

        make sure the username exists on the server

openKimchi is a new Open Source management tool for Kernel-based Virtual Machines

Kimchi is a spicy Korean side dish. It is also the code name for a new open source virtualization management project that offers sweet familiarity.

But unlike the spicy side dish, the open source project Kimchi offers a taste of something sweet: a familiar user interface for virtualization management. Put simply, that is what Kimchi is all about: removing barriers to using KVM for a set of potential users. 

openKimchi is a new open source project aimed at providing an easy on-ramp for people who would like to start using KVM (Kernel-based Virtual Machine) but believe it will be too difficult. Kimchi is targeted at users who may have avoided the open source hypervisor because they don’t have experience with Linux or don’t have the ability to install a management server, or simply don’t have time to invest in Linux administration.

Open Source Tool Designed to Appeal to VMware and Windows administrators 

There are certainly people in the enterprise who are Linux administrators and are perfectly comfortable with the way KVM is today. They regularly work with Linux admin tools and KVM fits right in to their day-to-day practice.

But there are also VMware administrators and Windows administrators who are not familiar with Linux admin practices and are not comfortable with the KVM tools. These people in particular will benefit from Kimchi, since the user interface is similar to that of VMware and Windows tools, thus helping to ease the transition to KVM.

Kimchi’s Role in the KVM Ecosystem

If you have one Linux server, then installing Kimchi on that server is quick and easy. Kimchi puts a thin layer over what is already there with KVM and Linux. You don't need to install a separate management server. All you have to do is point your browser at the KVM host, and with just a couple of clicks you can install your first guest and start running it.

While it does not come as part of KVM yet, it is hoped that Kimchi will be mature enough to be packaged with some of the community Linux distributions in 2014, and then be included in some enterprise Linux distributions after that. The beauty of the Kimchi interface is that it boils management features down to their essence, simplifying everything, without requiring users to have any Linux skills. And it is rendered using HTML5, so it is totally independent of both device and operating system, meaning that you can use Kimchi from a Windows or Linux workstation, a tablet, or a phone.

Kimchi Reaches a Functional Milestone

Because it is a simple point-to-point management tool, it is not able to provide clustering or resource pooling. Users are limited to managing a few hundred virtual machines at a time, one host at a time.

Kimchi reached a functional milestone in October 2013 with the release of Version 1. Although it is still early in the development process for the project, it is now at the point where we think it has enough functionality for people to try it. The clear advantage is that users don’t need to maintain any management infrastructure - and they can get started using KVM right away.

IBM’s Commitment to Kimchi

IBM supports Kimchi because it represents another way to promote KVM adoption and remove barriers to open source virtualization, which IBM believes is a smart choice. Kimchi is a sound, multi-platform management tool. We, at IBM, are also using it to manage KVM on Power. It will come bundled with KVM on Power, available later in 2014. 

Future Development Plans for Kimchi

At this point, the focus for Kimchi going forward is on community building and additional feature development. The input from the community will determine the future direction for Kimchi, which is an Apache-licensed project hosted on GitHub, and incubated by

I know nothing about dating, but...

The online dating industry spends enormous budgets on coming up with a better matching algorithm. Companies collect hundreds of variables and attempt to predict the match success rate.  They have millions and millions of user-candidate pairs who feed them data and membership dues. It's a highly lucrative, data- and revenue-rich business. The algorithm is measured by the success of the suggested match, and success is defined as whether the matched pairs contacted each other.  
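A minimal sketch of the kind of scoring such an algorithm might do, with invented features and weights (this is an illustration, not any real service's model):

```python
import math

# Toy logistic score over a few features of a user-candidate pair.
# Feature names, weights, and bias are made up for illustration.

WEIGHTS = {"shared_interests": 0.8, "age_gap": -0.15, "distance_km": -0.01}
BIAS = -0.5

def match_probability(pair):
    """Logistic score: probability in (0, 1) that the pair contact each other."""
    z = BIAS + sum(WEIGHTS[k] * pair[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

pair = {"shared_interests": 4, "age_gap": 3, "distance_km": 20}
print(round(match_probability(pair), 3))
```

A real system fits those weights from millions of labeled pairs; the shape of the prediction, though, is this simple.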

One approach to understanding dating relationships is to use network topology: the arrangement of a network, including its nodes and connecting lines. There are two ways of defining network geometry: the physical topology and the logical (or signal) topology.  This is one way of visualizing how we might stack our friends.  And friend-candidates. 

We can see the node connections, but they do not tell us the flow direction and strength of each connection.  

The flow direction can be 

And the connection can be powerful or weak in a

A server is powerful both ways: outbound and inbound.  Clients are not.  Most clients and their internet connections need powerful download speed, but a weak upload speed would be fine.  The server pushes out information, and the clients receive. 

Peer to Peer (P2P) is unique in that the client acts as both server and client.  

Suppose node A wants to talk to node B but cannot connect directly. A will ask a root server C, which is also a root server to B, to make a handshake. C will relay A's message to B. If all goes well, C will step back, and A and B will start talking to each other. This is how p2p works.  If A and B cannot connect p2p, C will continue to relay for A and B while periodically pinging A and B to connect directly, because C would really rather make the first introduction and step back. The first widely popular p2p application was Napster, around 1999.  In my experience, early p2p applications were bad news for the network.  A network router serving hundreds of people nicely would be brought down by a single user on Napster or Limewire.  The stronger DNS filtering, port blocking, and firewalls got, the cleverer DNS proxies and UDP hole punching got.  Traffic shaping by MAC address?  Hello, MAC spoofing.  So most older computer workers have bad memories of p2p: that p2p is a resource hog and a network buster.  p2p works OK now, but only because we have fat broadband.  p2p's true problem hasn't been fixed.  
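The introduction dance above can be sketched as a toy simulation (the node names and the can_direct_connect flag are illustrative stand-ins, not a real NAT-traversal implementation):

```python
# Toy simulation of the rendezvous step: A and B both know root server C;
# C relays an introduction, then steps back if A and B can connect directly.

class Node:
    def __init__(self, name):
        self.name = name
        self.peers = set()   # names of nodes we talk to directly

def rendezvous(a, b, can_direct_connect):
    """Root server C introduces a and b; returns how traffic will flow."""
    if can_direct_connect:           # e.g., hole punching succeeded
        a.peers.add(b.name)
        b.peers.add(a.name)
        return "direct"
    return "relay via C"             # C keeps forwarding, retrying periodically

a, b = Node("A"), Node("B")
print(rendezvous(a, b, can_direct_connect=True))    # prints "direct"
print(rendezvous(Node("A"), Node("B"), can_direct_connect=False))
```

The "true problem" the text mentions lives in that else branch: when direct connection fails, someone has to keep paying for the relay.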

It may be true in human relationships too.  The 'connectors' in human society fulfill the role of root server C.  The connectors make it their business to know everyone and keep constant tabs on what they are doing.  It is easier to meet people through an introduction than on your own.  Only the brave or naive would attempt to introduce themselves. 

A connection relayed through a group is usually organized and configuration-free.  It is easy to meet people by joining a group.  The group, usually for a small fee, takes care of the awkwardness in communication and logistics.  Then a p2p, one-on-one relationship can be cultivated, if so desired. 

As a simple rule, we can define low, medium, and high powerful friends.  Anyone sending out high-quality messages others want to hear is powerful.  Anyone who is a consumer of those messages is medium powerful.  Anyone who is not receptive to such messages is low powerful.  This is the world we live in today. That is why celebrities and even web bloggers are celebrated.  They are powerful.  A Twitter account holder at 1600 Pennsylvania Ave, Washington D.C. understands this. This is why a nerd playing an online game in mom's basement does not have the reach.  And this is why your parents told you to go and make friends. 

What about the connectors?  They live on the borrowed space between the servers and clients with no real content of their own.  Their role is more important than you think.

Birds of the same feather flock together

Don't look at components. Don't look at the whole. Look at simple rules that govern them.  

If we try to analyze birds in flight as connected interacting parts, it would be impossible to understand their intent and motivation.  They are a complex system.  The same applies to humans.  Human interaction mapping would be possible but needlessly complex.  It would be a map of people just living about.  Probably a pretty colorful chart, but mostly just that. Instead, look at emergence, where a whole is more than just the sum of its parts, and assume that there must be a simple rule that governs that complex behavior.  Understanding the simple rule of division in 1/3 is much easier than trying to count out how many 3's there are in 0.333333333...[...]  

Stop yelling at your kids, mom and dad.  It's your fault.

It is easy to visualize that friends belong to communities based on demographic characteristics such as education level and ethnicity. In this example, one can generalize that powerful kids will tend to have powerful friends. This is an oversimplified topology, of course.  I am sure powerful kids have friends in all places; their powerful parents would have made sure of that.  But I am not sure unpowerful kids would have friends in all places.  I doubt their unpowerful parents would have the means and resources to make the same effort.  Therefore, it would seem logical that if you want your kids to be powerful, you had better become a powerful parent yourself first.   

We can find ways to quantify the friendship distribution and control by calculating the degree of influence, and arrive at potential friendship, the probability of imposing one's will. All this is academic when your friends are in alignment with you, say in the same fraternity, in love, or in the same family.   
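As a minimal sketch, one crude "degree of influence" is degree centrality: the share of all other people a person is directly connected to. The friendship graph below is invented for illustration:

```python
# Degree centrality over a toy friendship graph: for each person,
# (number of direct friends) / (number of other people in the graph).

friends = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice"},
    "carol": {"alice", "dave"},
    "dave": {"alice", "carol"},
}

def degree_centrality(graph):
    n = len(graph)
    return {person: len(links) / (n - 1) for person, links in graph.items()}

print(degree_centrality(friends))
```

Here alice scores 1.0 (connected to everyone), making her the connector of this tiny network; richer measures like betweenness would capture the relay role even more directly.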

But none of it is elementary if you are going through life alone as an offspring of an unpowerful parent.

Dating, or more precisely online dating, was the theme of DataFest 2013, with data provided by eHarmony. Vaclav Petricek, the senior data scientist at eHarmony, presented the data set, which consisted of roughly one million "user-candidate" pairs.  The team measured the success of users in the eHarmony space with two metrics that they called active and passive conversion. They defined passive conversion as the willingness to respond to people reaching out to the user, and active conversion as the success rate of one's own desired interactions. They found that as income increases, active conversion increases, meaning that the higher the income, the more successful users are when contacting others. On the other hand, as income increases, passive conversion decreases, meaning that the user will be less likely to reply to those who contact them. 
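As a toy sketch of how those two metrics could be computed from pair records (the schema and numbers here are invented, not eHarmony's):

```python
# For each user: active conversion = fraction of the user's outgoing contacts
# that got a reply; passive conversion = fraction of incoming contacts the
# user replied to. In each record, `replied` means the receiver replied.

# (sender, receiver, replied)
pairs = [
    ("u1", "u2", True),
    ("u1", "u3", False),
    ("u2", "u1", True),
    ("u3", "u1", False),
]

def conversions(pairs, user):
    out = [r for s, _, r in pairs if s == user]   # contacts user sent
    inc = [r for _, t, r in pairs if t == user]   # contacts user received
    active = sum(out) / len(out) if out else None
    passive = sum(inc) / len(inc) if inc else None
    return active, passive

print(conversions(pairs, "u1"))
```

Grouping these per-user rates by income bracket would reproduce the comparison the team made.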

Example of an ownership flow-of-control formula by:

Stefania Vitali, James B. Glattfelder, Stefano Battiston

Chair of Systems Design, ETH Zurich, Zurich, Switzerland

"The Network of Global Corporate Control"

Show me the data



DataFest 2012 took students global and into the world of microfinance., a nonprofit organization that brokers micro-loans internationally, provided the data. Any visitors to the website can invest small amounts of money to entrepreneurs in developing countries. Lenders can re-invest the money when the loan is repaid. Records are kept on every transaction, and makes these available via an API. These data were organized for students in various files: one consisting of almost 100,000 loans, another containing information on 15,000 lenders who made these loans, and yet another on Kiva’s field partners (microfinance organizations who are responsible for administering Kiva’s loans). The data and the challenge were presented by Kiva engineer Noah Balmer. The challenge was very broad: Kiva wished to know what outsiders would find interesting and useful, and so invited the teams to discover any insight or association they felt meaningful.
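A minimal sketch of joining the loan and lender files described above (the field names are assumptions for illustration, not Kiva's actual schema):

```python
# Join toy "loans" records to "lenders" records and aggregate lending
# by lender country -- the kind of first pass a DataFest team might do.

lenders = {"L1": {"country": "US"}, "L2": {"country": "DE"}}
loans = [
    {"loan_id": 1, "lender": "L1", "amount": 25},
    {"loan_id": 2, "lender": "L2", "amount": 50},
    {"loan_id": 3, "lender": "L1", "amount": 100},
]

def total_lent_by_country(loans, lenders):
    totals = {}
    for loan in loans:
        country = lenders[loan["lender"]]["country"]
        totals[country] = totals.get(country, 0) + loan["amount"]
    return totals

print(total_lent_by_country(loans, lenders))
```

The same join pattern extends to the field-partner file: key each table by its id and aggregate along whatever dimension looks interesting.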

Robotics, Sensors and Advanced Manufacturing

The lab is populated with 3D printers able to print in food and plastic, collecting data on the interaction between users and electronics. We use data science to analyze and understand how humans and electronics communicate with each other. Electronics-mediated communication is done via a laptop to digital manufacturing devices such as 3D printers and laser cutters. Open source Arduino / MIT Processing and Raspberry Pi with a multitude of sensors are used to collect data.

The lab specializes in low-cost small plastic parts in short runs without tooling costs. It's a fancy way of saying we can make small plastic parts for cheap. From discontinued replacement parts to whatever your prototyping needs may be, OpenKimchi has the 3D printing capabilities and expertise to get you what you need for a low price. 

Printing Capability: We can print objects up to 12 inch x 12 inch x 12 inch (300 x 300 x 300 mm) (L x W x H)

Printing Material: Hard plastic (ABS) or eco-friendly plastic (PLA) or soft plastic (Polyurethane)

McMasterCarr 3D Printed PLA Bolt Collection

McMaster-Carr is Awesome: While you are on the McMaster-Carr webpage, feel free to order from their abundant hardware selection and enjoy speedy delivery. If we order something by 11 AM, the engineers usually have it on our desk by 2 PM that same day. Truly AMAZING. We have no affiliation with the company, other than being loyal customers. 

Design tip 

If you are designing a part, let's say a block with a functional threaded hole in the side, don't waste your time cutting the helical thread into the block. Download the correct size bolt from McMaster-Carr's online catalog and subtract it from the block. BOOM: functional threaded hole.  To compensate for 3D printing tolerances, I scale the bolt up by 2%-5% before subtracting.

McMaster-Carr's online catalog has an abundance of CAD designs available and ready for printing.  If you have the desire to print standard hardware, but lack the motivation to reverse engineer the parts yourself, I have some good news for you.  McMasterCarr’s online catalog is well organized, has an overwhelming selection, and as it turns out, is a fantastic resource for standard hardware CAD Designs. A good portion of the standard hardware have downloadable CAD files within the product detail page.

Head over to the McMaster-Carr catalog and search for a part that you are interested in 3D printing; be sure it has a downloadable CAD file on the product detail page.  Look for this symbol:

Product Details CAD for Product

Download your preferred CAD file type. If you are a SolidWorks user, download the 3D SolidWorks file directly. If you do not have a preferred file type, that's okay; we will get you through it. Keep reading.

Converting CAD files to STL format:

There are a few options, but here is the one we have found to be the most robust.

Download the 3D STEP CAD file type from the Product Detail page.

Start/Install the program FreeCAD (available here:

Open the 3D STEP file.

Select the part (make sure it turns green), then go to the File menu and choose Export. Choose 'Mesh Formats' from the dropdown menu and save the file to a local drive; it will be saved as "FileName.STL".

Select STEP 3D File Type

Save the STEP file to your local drive

Export from FreeCAD as 'Mesh Formats' file type

3D Printing recommendations:

When printing close tolerance parts, scale up (for male fittings) or down (for female fittings) by 2%-5% to ensure fit

Thread printing quality is limited by thread angle - it can be good to double the thread pitch by scaling your part 2X along the axis (non-uniform scaling).

Experiment with layer heights and infills to get the print that meets your needs.  ABS and PLA are great to start with. Try nylon and a few of the advanced materials for stronger, more robust products. 
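The two scaling tricks above can be sketched as a simple vertex transform (the "mesh" here is just two illustrative corner points, not a real STL):

```python
# Sketch of the scaling recommendations: a uniform 2%-5% scale for tolerance
# compensation, and a 2x non-uniform scale along one axis to double a
# thread's effective pitch. Vertex values are made up for illustration.

def scale(vertices, sx=1.0, sy=1.0, sz=1.0):
    """Scale (x, y, z) vertices by per-axis factors."""
    return [(x * sx, y * sy, z * sz) for x, y, z in vertices]

bolt = [(0.0, 0.0, 0.0), (10.0, 10.0, 20.0)]

tolerance_fit = scale(bolt, 1.03, 1.03, 1.03)  # +3% uniform, for a male fitting
double_pitch = scale(bolt, sz=2.0)             # 2x along the thread axis (z)
print(tolerance_fit)
print(double_pitch)
```

Most slicers and CAD tools expose exactly these per-axis scale factors, so in practice you apply the numbers there rather than editing vertices by hand.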

Bolt to be 3D Printed 

Big Data hackathon (Seongnam City)

We won! We won the Seongnam City Big Data hackathon.  We analyzed the tourist data and identified areas of potential growth, and hence investment opportunities.    

Seongnam City Government award in project "Big data analysis for entrepreneurship ideas"

using Seongnam City statistics from

City data:

City administrative districts

City Government

Civil employees

Living Environments


Hospitals & Social welfare facilities

Culture & Sports facilities


Cultural Properties

Judocare (Vision System)

Look! We’re all over the world





Hello, I am Daniel Lee. As a lifetime Judoka, I donate my free time to judo.

I created judocare to bring innovation to Judo. JudoCARE is a video arbitration tool that helps make sure the right player gets the win. Through this internal improvement, we aim to attract and retain more athletes, coaches, spectators, and sponsors. JudoCARE is among the world's largest providers of video analytics systems used in referee arbitration in the sport of Judo. Customers include major NGBs and elite national and international point tournaments across Asia (including India), Europe, North America, South America, Africa, and Australia.


The new CARE will provide innovative referee training opportunities, resulting in improved referee performance during tournaments. Judicious use of CARE will lead to a better image of referees among players, coaches, and spectators. We may someday achieve referee professionalization. Through this project, I aim to establish a revolutionary CARE-based training center. It will use state-of-the-art digital technologies for reviewing actual judo situations to support the education, further training, and development of referees. The suggested methods will enable us to achieve higher accuracy of decision-making and improve the fairness of matches. CARE-based training will prevent speculation and guesswork concerning the appreciation of a technique. It will be useful to coaches, referees, and most importantly the players. All in all, judo wins.


Judo combat: time-motion analysis and physiology

Emerson Franchini 1, Guilherme Giannini Artioli 1,2 and Ciro José Brito 1,3.

1 Martial Arts and Combat Sports Research Group, School of Physical Education and Sport, University of São Paulo, São Paulo, Brazil 

2 Laboratory of Applied Nutrition and Metabolism, School of Physical Education and Sport, University of São Paulo, São Paulo, Brazil.

3 Center for Research in Sport Performance and Health (NEDES), Federal University of Sergipe, Sergipe, Brazil.


The understanding of time-motion and physiological responses to judo combat is important to training organization. This review was based on search results using the following terms: “judo and competition”, “judo and physiology”, “judo and randori”, “judo and time-motion analysis”, “judo and combat”, “judo and match” and “judo and biochemistry”. The effort-pause ratio during judo combats is between 2:1 and 3:1, with 20s to 30s effort periods and 10s pauses. Thus, judo combats rely on all three metabolisms, with the anaerobic alactic system being responsible for the short-duration powerful actions during technique applications, the anaerobic lactic system being responsible for the maintenance of high-intensity actions during longer periods (e.g., grip dispute), while the aerobic system is responsible for the recovery processes between high-intensity actions and matches. Training prescription must consider these demands, and a muscle-specific action analysis may help to direct the proper approach to improve judo athletes’ performance. In general, the lower body is involved in short-term high-intensity actions during technique executions, while upper-body muscle groups are involved in both strength-endurance and power actions. As many muscle groups perform different actions during the match, a high cardiovascular demand is also observed in judo.

Judo Match Analysis a powerful coaching tool : basic and advanced tools in a fighting style evolution

Attilio Sacripanti°*^ 

°ENEA (National Agency for Environment Technological Innovation and Energy) 

*University of Rome II “Tor Vergata” Italy 

^ FIJLKAM Italian Judo Wrestling and Karate Federation


In this second paper on match analysis, we analyze in depth the competition steps, showing the evolution of this tool at the National Federation level. On the basis of our first classification, match analysis is a valuable source of four levels of information: 1st, the athlete’s physiological data; 2nd, the athlete’s technical data; 3rd, the athlete’s strategic data; 4th, adversary scouting. Furthermore, it is the most important source of technical assessment. Studying competition with this tool is essential for coaches because they can obtain useful information for their coaching. Match analysis is today the master key in situation sports like judo, helping in a useful way the difficult task of the coach, or better, of national or Olympic coaching teams. In this paper a deeper study of judo competitions at a high level is presented, from both the male and female points of view, explaining in the light of biomechanics not only the evolution of throws over time (the introduction of innovative and chaotic techniques) but also the evolution of fighting style in these high-level competitions, both connected with the growth of this Olympic sport in the world arena (today 199 countries are members of the IJF). It is shown how new and interesting ways are opened by this powerful coaching tool, very useful for national team technical management. In the last part of this paper we analyze advanced mathematical tools describing the motion of a couple of athletes as fractal Poisson point processes based on fractional Brownian motion, to show how strategic evaluation, probability, and short-term forecasting can be applied to judo competition.