Tuesday, January 13, 2015

B1 Post - Week 2 Group E

Database
     Artificial intelligence has taken on unprecedented importance in contemporary science. Many forms of mimicked intelligence, and other lesser forms, have proven invaluable in areas such as manufacturing and industry. With the economic advantages becoming more apparent, large companies such as Google have begun to invest heavily ($600 million, per Tom Simonite, "2014 in Computing: Breakthroughs in Artificial Intelligence") in projects that aim to create higher levels of artificial intelligence. This evolution requires advances in our current hardware interfaces as well as new sequences of logic for "coding". The latest advances have come from modeling these components on the human brain ("neuromorphic" chips modeled loosely on ideas from neuroscience; Simonite, 2014).
     This concept of modeling computing systems after biological systems has seemingly been the key to unlocking the next degree of artificial intelligence. In terms of database management, this increased machine intelligence allows much larger data sets to be evaluated automatically and at a far more complex level. These advances allow processes that normally require human intelligence and judgment -- something open to error and fatigue when applied to large data sets -- to assess patterns within the data accurately.
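As a toy illustration of this kind of automated pattern assessment (not from the article), a short Python sketch: given a stream of hypothetical sensor readings, flag values that deviate sharply from the overall pattern -- the tireless version of a task that would fatigue a human analyst. The readings and the threshold are invented for illustration.

```python
import statistics

# Hypothetical sensor readings; values are made up for illustration.
readings = [21.0, 21.4, 20.9, 21.2, 35.7, 21.1, 20.8]

def flag_anomalies(values, z=2.0):
    """Flag readings more than `z` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > z * stdev]

print(flag_anomalies(readings))  # the one reading that breaks the pattern
```

A real system would use far richer models, but the principle is the same: the machine applies the same criterion to every record, no matter how large the data set grows.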
     Another idea showing great promise is the ability of computing systems to learn over time through iteration, by recording trial-and-error scenarios. This type of long-term learning has allowed contemporary software to emulate the learning process. It can be seen in things like the Google search predictor, which offers suggestions based on the topics most likely related to the search text typed so far. A more complex, and far more useful, application is the use of IBM's "Jeopardy!"-winning system Watson to build personalized treatment plans for specific patients based on their genome sequencing.
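The search-predictor idea can be sketched in a few lines. This is not Google's actual algorithm -- only a minimal frequency-based illustration of prediction learned from recorded past behavior; the query log is invented.

```python
from collections import Counter

# Hypothetical log of past searches recorded by the system.
query_log = [
    "artificial intelligence", "artificial intelligence jobs",
    "artificial flowers", "artificial intelligence",
    "artificial intelligence news", "artificial turf",
]

def suggest(prefix, log, k=3):
    """Rank past queries starting with `prefix` by how often they occurred."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(suggest("artificial", query_log))
```

Every new search appended to the log refines the next round of suggestions, which is the trial-and-error accumulation described above in miniature.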
     It is obvious that this type of high-level intelligence will be reflected in future computing systems, especially those that evaluate large volumes of data -- like the data created by the myriad of environmental sensors within modern buildings. It could also be applied to automated construction processes, so that an overall system could detect building conflicts from conception as well as issues that arise during construction. In fact, the advancement of automated construction practices and construction management depends greatly on the development of artificial intelligence within the devices that implement them.

Network
     This article discussed the latest and most promising project, by a man named Sebastian Seung, to map the human connectome -- the entire 3D network of neural pathways, or synapses. The previous embodiment of this experiment was done on a tiny worm and took roughly twenty painstaking years of research to fully map the 302 neurons present in its nervous system ("Only once have scientists ever managed to map the complete wiring diagram of an animal — a transparent worm called C. elegans, one millimeter long with just 302 neurons — and the work required a stunning display of resolve."; Gareth Cook, "Sebastian Seung's Quest to Map the Human Brain"). This means that each neuron's specific task -- where it was wired to send messages -- was determined: the neurons responsible for each muscle's control, as well as those required for simple thought and for evaluating obstacles to movement. To make clear the magnitude of Seung's undertaking: there are on the order of 100 trillion neural connections in the human brain, a number nearly impossible for modern computers to handle without enormous computational power. For one to attempt to create a physical 3D model of this network seems as likely as numbering the planets and stars in our galaxy and creating a 3D map reflecting their relative positions.
     This has been likened to a short story by Jorge Luis Borges, in which the cartographers of a fictional empire were tasked with creating the best possible map of its lands. Larger and larger maps were built until an exact, life-sized replica of the empire, complete in every detail, was made. The point is that a useful model must scale the complexity of reality down while still preserving the essential information; a one-to-one copy is no model at all. This highlights how seemingly insurmountable mapping such a network would be with our current technology.
     The main idea of the article, and the solution to this modeling enigma posited by Seung, is to tap into the vast quantity of human thinking power already available. Seung has suggested using app-based games that apply the intelligence of the people playing them to the problem of creating this map. He points out that if he could tap into even a fraction of the human computing power spent on a game like Angry Birds, he could dramatically increase the rate at which the human brain can be mapped. His answer is the app EyeWire: computers set up the parameters of the game, and players remotely trace the 2D neural routes captured by complex brain-scanning machines. Humans trace the pathways within a small cube built by compiling many scans of that portion of the brain. Essentially, the computers set up as much of the problem as possible and leave to humans the work outside the computers' scope of processing ("Computers do what they can and then leave the rest to what remains the most potent pattern-recognition technology ever discovered: the human brain."; Gareth Cook, "Sebastian Seung's Quest to Map the Human Brain").
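One way such crowdsourcing can combine many players' answers -- purely a guess-level sketch, not EyeWire's actual pipeline, with invented voxel IDs -- is a majority vote: a voxel counts as part of the neuron only if most players who traced that cube marked it.

```python
from collections import Counter

# Hypothetical traces: each player marks the set of voxel IDs they
# believe belong to the neuron inside one small cube of scan data.
player_traces = [
    {"v1", "v2", "v3"},        # player A
    {"v1", "v2", "v4"},        # player B
    {"v1", "v2", "v3", "v5"},  # player C
]

def consensus(traces, threshold=0.5):
    """Keep voxels marked by more than `threshold` of the players."""
    votes = Counter(v for trace in traces for v in trace)
    needed = len(traces) * threshold
    return {v for v, n in votes.items() if n > needed}

print(sorted(consensus(player_traces)))  # voxels a majority agreed on
```

The appeal of the scheme is that no single player needs to be reliable; agreement across many independent tracings filters out individual mistakes, while the computer handles the bookkeeping.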
     Tapping into the computing power of humanity as a whole seems to be the most feasible way to approach a full 3D model within an acceptable amount of time, given the technology and hardware that currently exist. Almost everyone has a smartphone, and the brain of anyone capable of playing these simple app-based games is still decades ahead of current computer-based evaluation software, so it is likely this method of harnessing human computational power will drive the advancement of current technologies. This is very promising for a wide variety of applications, including construction and building resource management (mentioned in the conclusion of the Database summary).

Sociology
     Artificial intelligence will become increasingly adept, and this has major implications for society. Many of the consequences will be beneficial, like the automation of manufacturing and other basic processes that simply cannot be done as efficiently without an automated system. But this proliferation of artificial intelligence is also viewed as having potential downsides: displacement of human workers, roboticized warfare, and even making Orwellian surveillance techniques easier to develop (John Markoff, "Study to Examine Effects of Artificial Intelligence").
     A study has been proposed by Dr. Eric Horvitz, the managing director of the Redmond, Wash., campus of Microsoft Research, that will track the societal effects of artificial intelligence over the next century. It is an attempt to track changes in artificial intelligence as well as the advantages and disadvantages of its different applications. The main reason is that the increasing abilities of artificial intelligence create grey areas that do not fit into our current societal, economic, or legislative categories. For instance, computer recognition of images raises new ethical questions about how it can affect individuals, professionally or even legally. One of the biggest benefits of the study is that it will track how our definition of privacy changes, so that privacy is not lost entirely and forgotten.

Future
     The ability of computers to learn has been developing for a few decades now. It seems to be the one conventional way classic computational systems have been able to emulate intelligence (even if only a basic intelligence built on trial-and-error iteration). Basically, by analyzing the outcomes in large data sets, the computer builds patterns it can then use to predict likely outcomes. However, the computer must go through each specific scenario to understand it, whereas humans display reasoning: plausible deductions can be made about new scenarios without any prior trial-and-error learning. The graphics card company Nvidia has created a car computer called the Drive PX that takes advantage of this learning process with software capable of "deep learning". The effort is intended to further the automated car project and drive advances in the field. The Drive PX has the computational power to evaluate more camera inputs -- twelve, to be exact -- allowing real-time assessment of a 360° field of view. The software has also demonstrated the ability to detect many kinds of objects within the images it records, most importantly pedestrians and bicycles, even when they are partially hidden by obstacles!
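The trial-and-error learning loop described above can be sketched with a perceptron, one of the oldest learning algorithms: it adjusts its weights only when a prediction turns out wrong. This is nothing like Nvidia's actual deep-learning software, which uses deep neural networks; it is a minimal sketch of the principle, and the feature vectors and labels below are invented.

```python
# Hypothetical training data: (feature vector, label), where 1 means
# "obstacle" and 0 means "clear"; the numbers are made up for illustration.
data = [
    ((1.0, 0.2), 1), ((0.9, 0.4), 1),
    ((0.1, 0.8), 0), ((0.2, 0.9), 0),
]

def train(data, epochs=20, lr=0.1):
    """Perceptron: nudge the weights whenever a prediction is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # trial and error: learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w, b = train(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, the model classifies the scenarios it has seen, but, as noted above, it has no human-style reasoning: whatever generalization it shows comes entirely from the patterns accumulated over its iterations.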

Comments
Alex Nunes describes some of the biological issues that arise when interfacing humans with computers -- namely virtual reality. This harkens back to Dr. Horvitz's study tracking the societal changes artificial intelligence will bring; it is just one of the areas his study will address as computational systems evolve further. Basically, human interfacing techniques, as well as methods of preserving the importance of human intelligence and input, must evolve alongside the development of artificial intelligence to secure its advantages and help prevent its drawbacks.

Angela Castro brings up the near impossibility of ensuring data safety and preventing malicious software from getting into computer systems. This reflects the advancements within artificial intelligence and the unprecedented control and understanding they will give computers. It raises many questions about the definition of privacy: once computers have these extra reasoning abilities, it becomes that much more difficult to keep confidential data confidential. The very places data is stored have eyes and an understanding of the content, leaving data with increasingly few safe storage options. This is another aspect of artificial intelligence's impact on society that would interest Dr. Horvitz.

Justin Hileman brought up an excellent point about the proliferation of artificial intelligence and how it will eventually affect our society. Sociology will find relevance in topics relating to technology, as the two begin to shape one another. This is imperative because it highlights the necessity of closely observing ourselves as well as developments in artificial intelligence technology. We want to keep human intelligence valuable and place enough parameters on our technology that it does not become overly pervasive (or turn into a Matrix or Terminator-type situation where computers take control of the world).

References
Simonite, Tom. "2014 in Computing: Breakthroughs in Artificial Intelligence." MIT Technology Review. MIT Technology Review, 29 Dec. 2014. Web. 13 Jan. 2015. <https://www.evernote.com/pub/view/aengineer/ae-510/1eee24ba-a290-4b6d-989a-195b70c6b6b6?locale=en#st=p&n=1eee24ba-a290-4b6d-989a-195b70c6b6b6>.

Cook, Gareth. "Sebastian Seung's Quest to Map the Human Brain." The New York Times. The New York Times, 10 Jan. 2015. Web. 13 Jan. 2015. <http://www.nytimes.com/2015/01/11/magazine/sebastian-seungs-quest-to-map-the-human-brain.html?partner=rss&emc=rss>.

Markoff, John. "Study to Examine Effects of Artificial Intelligence." The New York Times. The New York Times, 15 Dec. 2014. Web. 13 Jan. 2015. <http://www.nytimes.com/2014/12/16/science/century-long-study-will-examine-effects-of-artificial-intelligence.html?partner=rss&emc=rss&_r=0>.

Talbot, David. "CES 2015: Nvidia Demos a Car Computer Trained with 'Deep Learning.'" MIT Technology Review. MIT Technology Review, 6 Jan. 2015. Web. 13 Jan. 2015. <http://www.technologyreview.com/news/533936/nvidia-demos-a-car-computer-trained-with-deep-learning/>.

1 comment:

  1. Your discussion about Network is really interesting. I agree that it could be useful in construction and building resource management.
