Saturday, December 25, 2010

PRESENCE! The New Job Search Paradigm

Commentary: In the old days, twenty years ago or more in the pre-information age, there was a lot of latency in the job search. Employers spent days preparing a position description. The description usually gave only general detail about the job and was posted in newspapers, magazines, and other places. Professionals would then prepare Resu-Letters, a cross between a resume and a letter. These were mailed to the prospective employer and followed up with a phone call. The process took about two weeks before an interview was generated. Success rates were respectable: roughly 30% of the letters sent resulted in a meeting, and about 80% of hiring managers took time to talk to the candidate. Today the situation is quite different. Resu-Letters are rarely sent unless a prospective employer requests one. Most of the early engagement is conducted over the World Wide Web. It is not uncommon for an employer to post a position and then update the requirements or qualifications as they discover their real needs during the vetting process. This frustrates applicants, because the posting changes after they have submitted their package. Employers, in turn, are becoming frustrated as they are inundated with irrelevant applications. The emerging situation is that proactive hiring managers are using social media and other methods to source and hire candidates. The new paradigm is all about your Presence.

Presence! The New Job Search Paradigm

Our world is transitioning without a doubt. Large territories are regionalizing, and the economy is undergoing its 80-year cycle marked by a boom, a bust, and a war. We had the dot-com boom, then entered an economic bust, and most likely there is a major war on the horizon. Whether the groundwork for such a war is actually being laid is another discussion. Nonetheless, it is an unfortunate lesson of history that where goods and services do not cross international borders, troops do. We are also seeing paradigms upset: 'necessity is the mother of invention' has been turned around, and instead invention is the mother of necessity. No one in the past said, 'We need a telephone, an LED, a TV, and a lightbulb; let us invent them.' Yet these things were created, and through awareness enormous markets emerged. Likewise, the old adage 'it's not what you know but who you know' has been upset too. The new paradigm is 'it's not who you know but who knows about you.' This is the challenge of the job searcher.

The phone rings, or you get an email. It is an unsolicited ping from a manager at some company you never approached. He has reviewed your background and seems to know more about you than you know about him. He tells you about his company and an opportunity he would like you to consider. After a brief conversation, you wonder what just happened. As it turns out, your presence in the marketplace spoke for you. Half the job search effort is already over: the employer found you! You no longer have to sell yourself; you just have to strengthen and build the relationship.

Proactive employers are seeking attentive, well-organized, and responsive employees. The employer researches you on the internet, in professional groups, and through your social media, blogs, and other activities, then decides to make contact. Early contact may be an innocuous email remarking that you have applied, even though you did not, and that they need more information. If you catch this subtle email and note that you never applied to this company, then you may be part of a professionally designed interview that begins with the first contact. Your marketplace presence attracted the employer, and the conversation has been ongoing, perhaps without you knowing it.

There has been more than enough research indicating that markets are actually conversations. The studies show these conversations go on with or without your participation. It is to your benefit to participate and manage the conversation; otherwise, in your absence, the conversation will not run in your favor. With regard to your job search, your objective is to build a presence in the marketplace and get noticed. The character of that presence depends on a few things, such as the audience, the conversation topics, and your circumstances.

Audiences

The audience will always be your prospective boss and those senior to you, although you will also engage your peers in conversation, as they will be your advocates and sponsors within organizations. You should engage your audience in a variety of ways, such as blogging, involvement in professional organizations, speaking engagements, and other activities, in order to build your presence. The intensity and focus of each type of activity depend on whether or not you are employed. I will discuss these differences later in this posting.

Conversations

The conversation topics should center on career and industry subjects. Mix up the topics to show breadth of knowledge, but show a little depth too. Stick to knowledge that is commonplace and/or self-evident. Speeches and blogs should appeal to commonly accepted principles but avoid making points, especially controversial ones. Sensitive topics or subject matter should be handled delicately and in the third person. Showing how you handle these sensitive topics demonstrates your breadth of knowledge, your understanding, and your ability to discuss them without invoking hostile emotions. Employers who are seeking top candidates will recognize this ability; avoiding tough topics simply does not showcase your full abilities.

When discussing your topics, focus on your objective. Do not allow side topics to pull you away from achieving it. Also anticipate reader reactions and questions: either answer them or close out any secondary concerns, then move on. If you don't, they will linger and distract the audience.

Circumstances

Your circumstances have several components, such as history, education, and other life conditions. Unless you have a criminal history, a poor credit rating, or other legal conditions making you unbondable, almost everything else can be overcome, even getting fired. Getting fired, unless for cause, is mostly personality based. Some of the world's greatest leaders had terrible histories. Winston Churchill endured repeated failures and setbacks before rising up against Nazi Germany. While Churchill is a dramatic example, you too can rise up and achieve great things despite a challenged past in your career. It is those hard knocks that prepare you for greatness. Dale Carnegie offers a program on greatness that is about leadership in your personal life and workplace.

Networking

Nearly every expert agrees that you need to network. Networking can help you conduct a better job search and find a better job. However, there is little agreement on the most effective methods and everyone needs to find their comfort zone. Orville Pierson is a professional career counselor who helps people find work. My postings on the Orville Pierson method follow:
  1. Everybody Knows You Need to Network!
  2. Systematic Job-Search Networking
  3. Networking Myths, Misunderstandings, and Dumb Ideas
  4. Real Networking and How it Works
  5. Your Total Network is Bigger than You Think
  6. Plan Your Job Search and Your Networking
  7. Personal Networks and How To Use Them
  8. Build Your Professional Network
  9. Networking Tools and Advanced Strategies
  10. Moving From Networking to Interviews and Job Offers
Commentary: Some time ago I wrote a post called 'The Art of the Follow Up,' Part One and Part Two. I encourage you to read those posts in addition to this one.

Conducting a Search While Employed

The problem with searching for a job while employed is, of course, the current employer. Many employers would treat an employee searching for other work as a loyalty issue, which often prompts immediate release out of fear of increased risk. The best approach is to leverage professional organizations, symposiums, blogging, and speaking engagements to network regularly. You will work at first to build a presence; then, at some point, you will transition into a heightened search using the Orville Pierson method.

While building your presence, avoid direct networking and/or job search methods that broadcast your intentions. Obviously, never respond to blind ads or job postings that sound too much like your dream position. Answer queries about your availability as the reluctant job seeker: you are happy where you are, but you are always open to new challenges and willing to listen. Some job seekers prepare for being discovered by seeking more responsibility or other professional challenges with their current employer. This at least demonstrates to an employer that you desire to grow with him, even if he has no opportunities. Upon discovery, your response could center on your reluctance to change employers, your desire to grow into more responsible positions, and your openness to new challenges with him. If you really dislike your employer, that is a more stressful situation requiring special handling and is not a topic of this posting.

Conclusion:

It is your presence in the marketplace that draws attention to you, either when a manager is proactively seeking someone or after you have established yourself in a target company. Your presence in the marketplace speaks about you to those researching and participating in the conversation. The easier it is to find you and learn about you, the greater your chances are of landing a terrific job.


Virtualizing Computational Workloads


Commentary: This is a general discussion into which I have wrapped a unique use of virtualization. In the short term, companies can benefit from offloading heightened computational demands. They may prefer to purchase computational power for a limited time rather than make the capital expenditure of purchasing and expanding systems. The virtualized environment can also solve issues relating to geographically dispersed personnel. Overall, we are a long way from meaningfully and effectively using the excess computational power residing on the web or across an organization. This discussion, I hope, gives some insight into how to use that excess power.

Virtualized computing can occur over any internetwork system, including the World Wide Web. The concept centers on distributing the use of excess system resources, such as computational power, memory, and storage space, in a service-oriented or utilitarian architecture; in simple terms, internet-based resource provisioning. Multiple virtualized environments can exist on the same network, although a given piece of physical hardware can be assigned to and managed by only a single virtualization engine. Each virtualized environment, a cloud, encapsulates a unique group of participating hardware resources that is managed through virtualization (Figure 1). Demands for services are then sent out into the cloud to be processed, and the results are returned to the virtual machine.

Figure 1:  The Virtualized Concept


The virtual machine can be as simple as a browser, or it can be the complete set of applications, including the operating system, running on a terminal through thin clients such as Citrix. The cloud service can be as simple as a search service such as Google and/or database storage of information. Simple cloud examples include SkyDrive, MobileMe, and now iCloud. iCloud offers backup, storage, and platform synchronization services to its users over the World Wide Web.

Virtualization

Virtualization is a concept in which operating systems, servers, applications, management, networks, hardware, storage, and other services are emulated in software, so that to the end user the service is completely independent of the hardware or the unique technological nuances of system configurations. Examples include software such as VMware Fusion, in which Microsoft's operating system and software run on an Apple MacBook. Another example is the honeypot used in computer network defense: software runs on a desktop computer and gives a hacker attempting to penetrate the system the appearance of a real network inside the DMZ. The idea is to decoy the hacker away from the real systems using a fake one emulated in software. An example of hardware virtualization is the soft modem; PC manufacturers found that it is cheaper to emulate some peripheral hardware in software. The drawback is diminished system performance, because the processor is loaded with the emulation. The Java virtual machine is yet another example: a platform-independent engine that lets Java developers write the same code for every supported platform and run it as mobile code without accounting for each platform.

Provisioning In Virtualization

Once hardware resources are inventoried and made available for loading, provisioning in a virtualized environment occurs in several ways. First, physical resources are provisioned by management rules in the virtualization software, usually at the load management tier (Figure 1). Second, users of a virtual machine can schedule a number of processors, the amount of RAM required, the amount of disk space, and even the degree of precision required for their computational needs. This occurs at the administration tier of the virtualized environment (Figure 1). Thus, idle or excess resources can, in effect, be economically rationed to an end user who is willing to pay for the level of service desired. In this way the end user enters into an operating lease for the computational resources for a period of time; he no longer needs to make a capital purchase of his computational resources.
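
As a rough illustration of this kind of self-service provisioning, the sketch below models a resource request and prices it as an operating lease; the class, fields, and hourly rates are illustrative assumptions rather than any vendor's actual API.

    from dataclasses import dataclass

    # Illustrative hourly rates; a real provider publishes its own pricing.
    RATE_PER_CPU_HOUR = 0.05
    RATE_PER_GB_RAM_HOUR = 0.01
    RATE_PER_GB_DISK_HOUR = 0.0002

    @dataclass
    class ProvisioningRequest:
        """A user's resource schedule submitted to the administration tier."""
        cpus: int        # number of processors requested
        ram_gb: int      # amount of RAM required
        disk_gb: int     # amount of disk space required
        hours: float     # length of the operating lease

        def lease_cost(self) -> float:
            """Estimate the cost of leasing rather than buying the hardware."""
            hourly = (self.cpus * RATE_PER_CPU_HOUR
                      + self.ram_gb * RATE_PER_GB_RAM_HOUR
                      + self.disk_gb * RATE_PER_GB_DISK_HOUR)
            return round(hourly * self.hours, 2)

    # Example: lease 16 processors, 64 GB RAM, and 500 GB disk for two weeks.
    request = ProvisioningRequest(cpus=16, ram_gb=64, disk_gb=500, hours=14 * 24)
    print(request.lease_cost())   # an operating expense instead of a capital purchase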

Computational Power Challenges

I have built machines with multiple processors and arrayed full machines to handle complex computing requirements. Multi-processor machines were used to solve processor-intensive problem sets such as Computer-Aided Design (CAD) workloads or high-transaction SQL servers. Not only were multiple processors necessary, but so were multiple buses and drive stacks in order to minimize contention issues. The operating system typically ran on one bus while the application ran over several other buses accessing independent drive stacks. Vendor solutions have since progressed, with newer approaches to storage systems and servers that better support high availability and demand. In another application, arrayed machines were used to handle intensive animated graphics compilations involving solid modeling, ray tracing, and shadowing on animations running at 32 frames per second. That meant a 3-minute animation had 5,760 frames, each of which needed to be crunched 3 different times. In solving this problem, the load was broken into sets. Parallel machines crunched through the solid-model sets, handing off to ray-tracing machines and then to shadowing machines. In the end, the parallel tracks converged on a single machine where the sets were reassembled into the finished product. System failures limited work stoppages to a small group of frames that could be 're-crunched' and then injected back into the production flow.
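
To make the arithmetic and the pipeline concrete, here is a small sketch of how that frame workload breaks down and flows through the three stages; the set size and stage functions are stand-ins, not the actual production tooling.

    # A 3-minute animation at 32 frames per second.
    FRAMES = 3 * 60 * 32                  # 5,760 frames
    STAGES = ("solid_model", "ray_trace", "shadow")
    SET_SIZE = 120                        # frames handed to one machine at a time

    def split_into_sets(total_frames, set_size):
        """Break the frame range into sets so a failure only stalls one small set."""
        return [list(range(start, min(start + set_size, total_frames)))
                for start in range(0, total_frames, set_size)]

    def crunch(stage, frame_numbers):
        """Placeholder for the real solid-modeling, ray-tracing, or shadowing pass."""
        return frame_numbers

    work_sets = split_into_sets(FRAMES, SET_SIZE)
    print(len(work_sets), "sets of", SET_SIZE, "frames;",
          FRAMES * len(STAGES), "frame-jobs in total")   # 48 sets; 17,280 jobs

    # Each set flows through the stages in order; different sets can be crunched
    # on different machines in parallel and reassembled at the end.
    finished = []
    for frame_set in work_sets:
        results = frame_set
        for stage in STAGES:
            results = crunch(stage, results)
        finished.extend(results)

    assert len(finished) == FRAMES        # every frame survived all three passes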

These kinds of problem sets are becoming more common as computational demands become more pervasive in society. Unfortunately, software and hardware configurations remain largely unchanged and in many cases unable to handle the stresses of complex or high-demand computations. Many software packages cannot recognize more than one processor, or, if they do handle multiple processors, the loading is batched and prioritized using a convention like first-in, first-out (FIFO) or stacked job processing. This is fine for production uses of computational power such as the examples given earlier. However, what if the computational demand is not production oriented but instead resembles sentient processing or manufactures knowledge? I would like to explore an interesting concept in which computational power in the cloud is arrayed as a virtualized neural net.

Arraying for Computational Power in New Ways

Figure 2: Computational Node


One solution is to leverage arcane architectures in a new way. I begin with the creation of a virtual computational node in software (Figure 2) to handle an assigned information process, then organize hundreds or even tens of thousands of computational nodes on a virtualized backplane (Figure 3). The nodes communicate over the virtual backplane, listening for information being passed, processing it, and publishing the new information back to the backplane. A virtualized general manager administers the backplane and is capable of arraying the nodes dynamically, in series or in parallel, to solve computational tasks. The node arrays should be designed using object-oriented concepts: encapsulated in each node are memory, processor power, and its own virtual operating system and applications; the nodes are arrayed polymorphically; and each node inherits public information. In this way, software developers can design workflow management methods, like manufacturing flow, that array nodes and use queues to reduce crunch time, avoid bottlenecks, and distribute the workload. Mind you, this is not physical but virtual. The work packages are handed off to the load manager, which tasks the physical hardware in the cloud (Figure 3).
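
The following is a bare-bones sketch of the node-and-backplane idea in Python; the class names, topics, and toy transforms are my own illustration of the publish-and-listen behavior described above, not an existing framework.

    from collections import defaultdict

    class Backplane:
        """Virtual backplane: nodes publish information and listen by topic."""
        def __init__(self):
            self.listeners = defaultdict(list)

        def subscribe(self, topic, node):
            self.listeners[topic].append(node)

        def publish(self, topic, payload):
            for node in self.listeners[topic]:
                node.receive(topic, payload)

    class ComputationalNode:
        """Encapsulates its assigned process: listens, processes, republishes."""
        def __init__(self, name, backplane, in_topic, out_topic, transform):
            self.name = name
            self.backplane = backplane
            self.out_topic = out_topic
            self.transform = transform      # the information process assigned to this node
            backplane.subscribe(in_topic, self)

        def receive(self, topic, payload):
            result = self.transform(payload)
            if self.out_topic is not None:
                self.backplane.publish(self.out_topic, result)
            else:
                print(self.name, "final result:", result)

    # Array three nodes in series on the backplane; they could equally be fanned
    # out in parallel by subscribing several nodes to the same topic.
    bp = Backplane()
    ComputationalNode("scale",  bp, "raw",     "scaled",  lambda x: x * 10)
    ComputationalNode("offset", bp, "scaled",  "shifted", lambda x: x + 3)
    ComputationalNode("report", bp, "shifted", None,      lambda x: x)

    bp.publish("raw", 4)    # -> report final result: 43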

Figure 3:  Complex Computational Architecture


This concept is not new. The telecommunications industry uses a variation of it for specialized switching applications rather than general-use computing. There are also array processors used for parallel processing. Even the fictional story Digital Fortress, by Dan Brown, centered on a million-processor system. Unfortunately, none of these designs was intended for general-use computing. If arrayed computational architectures were designed to solve complex and difficult information sets, the possibilities would be enormous; for example, arraying nodes to monitor for complex conditions, decide on courses of action, and enact the solution.

The challenges of symbolic logic processing can be overcome by using arrayed processing to virtualize neural nets. A combination of sensory arrays for inputs, node-to-node (neural-to-neural) processing, and valid pathways or lines of logic would provide the means to complete complex processing and output results that are otherwise difficult to achieve. If enough physical hardware participates across the World Wide Web, then the web could become an enormous neural processor solving some of the most incredibly complex computational problem sets.

The World Wide Web and Computational Limitations

This architecture within a cloud is limited to developing knowledge or lines of logic. Gaps or breaks in a line of logic may be inferred from history, which is also known as a quantum leap in knowledge or wisdom. Wisdom systems are different from knowledge systems. Knowledge is highly structured, and its formation can be automated more easily, whereas wisdom is less structured, having gaps in knowledge and information. Wisdom relies on inference and intuition in order to select valid information out of absence, innuendo, ambiguity, or other noise. Wisdom is more of an art, whereas knowledge is more of a science.

Nonetheless, all the participating computers on the World Wide Web could enable a giant simulated brain. Of course, movies such as The Lawnmower Man, Demon Seed, Colossus: The Forbin Project, and WarGames go the extra mile, making the leap to self-aware machines that conquer the world. For now, though, let's just use them to solve work-related problems.

References:

Brown, D. (May 2000). Digital Fortress. St. Martin's Press. ISBN 9780312263126.

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An information Technology Approach. (3rd ed.). New York: John Wiley & Sons Inc.

Saturday, December 18, 2010

Using a MacBook PRO For Your Job Search

Commentary: In August 2009, Microsoft's Genuine Advantage shut down my professional work notebook computer. The banner read that I was a victim of piracy. I was in the middle of a major project using Microsoft Project. This event forced me to seek solutions other than Microsoft's.

The Fall of the Microsoft Empire

My machine was shut down by Genuine Advantage, and the banner read that I was a victim of piracy. Nothing on my machine was pirated; I owned everything. I provided Microsoft valid keys and receipts, but they were unable to turn my machine back on. I swiftly moved all the data and critical applications to another Microsoft notebook to sustain my operational tempo. Then, over the next several months, an advanced senior technical engineer at Microsoft who specialized in issues within the Microsoft universe worked with me. The instructions she sent me were rarely accurate. However, having been Microsoft certified years prior, I was able to correct her instructions for things like loading registry updates, but her chosen solutions were never fruitful. Eventually, after months without a solution, Microsoft stopped responding to emails, and I converted the machine to another operating system for training purposes. Simultaneously, and early on, I began migrating software and processes to the MacBook Pro and purchased an iPad. My computational life instantly changed!

The Rise of the Apple Empire

Migration at first was mainly my music, audiobooks, and video products. After nearly a year of increasing skill with Apple products, I am now at the point of working almost wholly from Apple solutions. A few activities, like project management, still have to be conducted using Microsoft solutions; I now run those few processes in a virtualized workspace. I will be able to fully drop Microsoft solutions should an Apple project management solution become available that can work with others who use Microsoft Project. I found the Apple solutions to be better organized, more reliable, and easier to use after adjusting to the graphical user interface differences and the human interface device changes, such as using my fingers on the iPad. I want to share with you how to use the MacBook Pro and iPad in your job search.

I assume you already own a MacBook notebook and the iPad. My MacBook has the maximum drive and RAM possible, giving me plenty of speed and room to work. The software that best supports a job search includes:

Pages: This is the word processing capability. It costs about $11.00 for the iPad and $79.00 for the MacBook. The MacBook product also includes Numbers, the spreadsheet, and Keynote, the presentation package. I converted my resume to Pages with virtually no effort other than some minor formatting issues. The document is readily converted to a Word 2003 file to be emailed out and can also be worked on in Pages as an MS Word 2003 file. Pages also supports the universal file format, PDF. There are no major technological barriers to using Pages; the only real barrier is the user's mindset.

Bento: This is a database package and foundational to your search. It costs about $5.00 for the iPad and $49.00 for the MacBook. The package has dozens of templates available and can be readily adjusted to support numerous needs. I chose the Contact Management template and modified it with additional fields, a process that was ridiculously easy. I added fields to track the files sent to a contact, the follow-up, and the type of contact I was engaging, and I used the note field as a journal to document contact discussions. I also synced this with Bento on my iPad. Unfortunately, iPad Bento is not as robust, and not all the fields move over; the iPad presentation is also vertical and linear, so the same look and feel as MacBook Bento is not offered in the iPad version. (A rough sketch of this contact record structure appears after the software list below.)

Mail: This is the Mac email package. It is unlike MS Outlook and takes some practice to configure to your preferences. It is bundled with Snow Leopard, the Mac operating system. Mail offers rules and smart mailboxes to sort email and can be a bit confusing at first. Nonetheless, it works and serves the communication purpose. You can email directly from the apps on the iPad or on the MacBook in several formats.

MobileMe: This is a virtual environment, your personal data cloud, and costs $99/year. It is an efficient means of keeping your data the same on all your devices. However, I have found that it does not work effectively with iGoogle calendars, and I have observed problems syncing with the Microsoft environment. These issues pushed me further toward Apple, since reliability and availability of information were critical to me.

LinkedIn: This is essential to your networking effort; however, it does not replace getting out and talking to people. LinkedIn offers tools that integrate with your mobile devices and browsers. There is an application for the iPad but no plugin for the Safari browser; LinkedIn does offer a plugin for the Firefox browser. Unfortunately, Firefox does not sync with MobileMe. Therefore, you may want to use Safari for everything except the LinkedIn plugin. I use both browsers on my MacBook Pro and will post an update on the browsers after I have more experience.
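
Bento is a point-and-click database, but the contact record described above is simple enough to sketch in code. For readers who would rather roll their own tracker, here is a rough Python equivalent of the fields I added; the names are my own labels, not Bento's.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class JobSearchContact:
        """One record in a contact-management tracker for a job search."""
        name: str
        company: str
        contact_type: str                                # e.g. recruiter, hiring manager, peer
        files_sent: List[str] = field(default_factory=list)   # resumes, cover letters, samples
        follow_up: Optional[date] = None                 # next planned touch point
        journal: List[str] = field(default_factory=list) # running notes on each discussion

        def log(self, note: str) -> None:
            self.journal.append(f"{date.today().isoformat()}: {note}")

    # Example usage
    contact = JobSearchContact("Pat Smith", "Acme Corp", "hiring manager",
                               files_sent=["resume_v3.pdf"], follow_up=date(2010, 12, 30))
    contact.log("Discussed the PM opening; send references next week.")
    print(contact.follow_up, contact.journal[-1])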


My Computational Life Changed

Computationally, I have had something of a mid-life crisis, having worked almost exclusively with Microsoft since 1982, when I purchased my first computer. I jaunted into UNIX for about two years, dabbled with Linux off and on, and tried other operating systems as a fling from time to time. However, this shift to Snow Leopard is long term. The changes are profound. I now carry my iPad almost everywhere and rarely tote a notebook. I perform work as the task demands, wherever I am, shifting back and forth between projects. Granted, I need my Microsoft notebook for most of my project work, but my iPad carries the task lists, contact data, and other critical information while on the go. It takes some effort to sync the two worlds.

In terms of a job search, Bento is the underpinning of the effort, maintaining the contact list, journals, and files. LinkedIn helps you build a network. Pages provides the word processing capabilities to maintain a resume and develop cover letters. Mail helps you communicate as sites like CareerBuilder, Monster, and corporate sites pour out emails based on the searches you have set. On the go, MobileMe and the iPad keep you synced and able to respond quickly from almost anywhere. Virgin Mobile's MiFi ensures you can stay connected in places that do not offer WiFi service; just throw the unit in your briefcase or backpack and you will have up to four hours of connectivity on your iPad, significantly cheaper than the 3G or 4G AT&T service embedded in the iPad units.

In the end, your computational life will become more mobile, more capable, and more reliable after switching to the Apple universe. It also gives you a transportable skill: proficiency with Apple products.


Monday, December 13, 2010

Why is IPv6 Taking So Long?


The advent of IPv6 will create a culture in which nearly every device can be placed on the World Wide Web for various exciting purposes and possibilities. This is in response to the question of why IPv6 is taking so long. Addressing technologies are coupled with topologies and have undergone numerous design iterations as engineers sought ways for packets to locate their destinations. Token-based schemes are a highly ordered approach but slowed networks as they grew in size. Ethernet approaches flooded networks with packets that had to be assigned a time-to-live in order to reduce packet traffic lingering on the networks. Thirty-two-bit Internet Protocol (IPv4) addressing has emerged as the dominant scheme used on most networking topologies. The drawback is the limited number of addresses available as the Internet and networks grow in size and use. IPv6 is based on 128-bit addressing and is said to provide more than enough IP addresses for future demands. Many vendors are reluctant to implement the newer addressing technology; however, implementation may occur without most people being aware of the transition.
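
To make the difference in address space concrete, here is a small sketch using Python's standard ipaddress module; the example addresses are the documentation ranges and carry no special meaning.

    import ipaddress

    # IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
    print(f"IPv4 space: {2 ** 32:,} addresses")            # about 4.3 billion
    print(f"IPv6 space: {float(2 ** 128):.3e} addresses")  # about 3.4 x 10^38

    v4 = ipaddress.ip_address("192.0.2.1")     # IPv4 documentation address
    v6 = ipaddress.ip_address("2001:db8::1")   # IPv6 documentation address
    print(v4.version, v4.max_prefixlen)         # 4 32
    print(v6.version, v6.max_prefixlen)         # 6 128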

Microsoft's current official company position, although published some time ago, remains in effect today. Microsoft has been including IPv6 in all current operating systems and nearly all the older XP platforms. IPv6 is also in the server product lines as newer generations are released. The purpose of implementing it alongside the older IPv4 is to debug addressing problems and build a sufficient base of machines that can operate through a seamless transition from IPv4 to IPv6. This yields broader scalability and more devices on those networks as the transition becomes more mainstream. Microsoft remarks that the transition must be conducted in a responsible manner to prevent costly, unproductive missteps (Microsoft, 2008).
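
One way this side-by-side operation of IPv4 and IPv6 shows up in practice is a single dual-stack listening socket; the sketch below is a minimal illustration (the port number is arbitrary, and whether IPV6_V6ONLY can be cleared varies by platform).

    import socket

    # Open an IPv6 socket and, where the platform allows, clear IPV6_V6ONLY so the
    # same socket also accepts IPv4 clients through IPv4-mapped IPv6 addresses.
    server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    if hasattr(socket, "IPV6_V6ONLY"):
        server.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    server.bind(("::", 8080))   # "::" listens on all interfaces, IPv6 and mapped IPv4
    server.listen(5)
    print("Dual-stack listener ready on port 8080")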

The real challenge is not converting the personal computer. Instead, the challenge is implementing IPv6 on the vast array of other computational devices such as cell phones, PDAs, iPods, internet-enabled TVs, and other web devices. In many cases, the older device must simply become obsolete and move out of service over time. There is no deadline for IPv6 implementation, and these older devices may persist for years to come. To the common user, the transition will simply have happened.

Reference:

Microsoft (12 Feb 2008). Microsoft's Objectives for IP Version 6. Copyright 2009. Retrieved on 01 April 2009 from http://technet.microsoft.com/en-us/library/bb726949.aspx

Stallings, William. (2009) Business Data Communications, (6th Edition) New Jersey: Pearson Prentice Hall

Saturday, November 27, 2010

Business Intelligence


Commentary: Many vendors offer business intelligence solutions. Project managers need to understand not only the vendor application but, more importantly, the business's strategic objectives.

The article “Best Practices for Great BI Performance with IBM DB2 for i” discusses another vendor solution to the general information science problem of data storage. Business Intelligence (BI) practices relate more to the contextual presentation of data patterns; how data is stored, retrieved, and then processed on demand is essential to good BI. BI is a subset of the broader discipline of Decision Support Systems (DSS), which sit at the top of the information food chain. Information sub-systems collect, sort, validate, and store data before passing the information to the next level. As the data moves up through the levels, its quality improves.

Acquiring quality data for DSS systems and applications begins at a much lower level. Operational sub-systems collect data, and business rules validate and then store the data, usually in several different relational databases. Typical operational sub-system data includes customer data entry, employee time-clock data, repair data, bookkeeping, and logistical information collected by scanners. Information across these databases is then gathered and organized in support of operational-level processes, which add value to the data. Typical operational processes involve activities such as purchase orders, payroll, travel, customer service, and financial statements. Operational-level data across numerous systems and databases is then rolled up into a DSS system. DSS-level processes are dramatically different from operational processes: they look at the character of the data sets, for example trends, patterns, and behaviors, in order to form strategic decisions. Because of the large data sets involved in DSS, storage, processing, and reporting of the data are critical in order to meet on-demand review requirements in a timely manner.

The common approach currently in use is the data mart, a working subset of a larger primary database that presents a unique view. When data marts are organized to permit multi-dimensional modeling of the operations, they are called data cubes. Methodologies such as online transaction processing (OLTP) and online analytic processing (OLAP) can continuously usher data into the data cubes in support of on-demand reviews. Numerous vendors are beginning to offer services in this arena, although the methodologies and markets are still being shaped. “BI will surge in the mid market” (Cuzzillo, 2008). “In the last two years or so we have seen some important new technologies emerge and begin to influence BI, and I believe they’ll have an even more significant effect in the coming year. Some examples include SOA/Web services (and overall componentized design), in-memory analytics, integrated search, and the use of rich media services to provide more compelling (Web-based) user experiences.” (Briggs, 2008)
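
As a toy illustration of this rollup, the snippet below aggregates a few invented operational rows into a small region-by-month cube; the field names and figures are fabricated purely for illustration.

    from collections import defaultdict

    # Operational-level rows as they might arrive from a transaction system.
    transactions = [
        {"region": "East", "month": "2010-10", "amount": 1200.0},
        {"region": "East", "month": "2010-11", "amount": 950.0},
        {"region": "West", "month": "2010-10", "amount": 700.0},
        {"region": "West", "month": "2010-11", "amount": 1100.0},
    ]

    # Roll the detail up into a two-dimensional "cube": region x month -> total sales.
    cube = defaultdict(float)
    for row in transactions:
        cube[(row["region"], row["month"])] += row["amount"]

    # A DSS-level query then reads the summarized cells rather than the raw detail.
    for (region, month), total in sorted(cube.items()):
        print(f"{region:5} {month}: {total:10,.2f}")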

Commentary: Decision support systems are a growing business interest as markets become increasingly volatile. A fundamental understanding of these systems and their value to the business is pivotal to architecting effective systems. Many businesses continue to struggle with the best way to employ information technologies and serve business intelligence needs. Project managers implementing these kinds of projects need to be involved right from the inception in order to maintain a focus on the strategic objectives. It is the project manager who corrals and focuses these projects for the senior leaders, achieving their visions.

Reference:

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An information Technology Approach. (3rd ed.). New York: John Wiley & Sons Inc.

Cain, M. (2008). Best Practices for Great BI Performance with IBM DB2 for i. Penton Media, Inc. Retrieved from http://www-03.ibm.com/systems/resources/Great_BIPerformance.pdf

Cuzzillo, Ted, Dec 2008. Analysis: BI Transformation in 2009, TDWI, Retrieved from http://www.tdwi.org/News/display.aspx?ID=9262

Briggs, Linda, Dec 2008. Q&A: Market Forces That will mold BI in 2009, TDWI Retrieved from http://www.tdwi.org/News/display.aspx?ID=9263

Healthcare Information Virtual Environment System (HIVES)

Healthcare information systems span a vast array of equipment, clinics, labs, governmental agencies, manufacturers, doctors' offices, and innumerable other organizations providing, collecting, and processing information. Classic issues of stovepiping, or 'silos,' have emerged, causing inefficiencies in the industry such as multiple lab tests and/or diagnostics being prescribed. The advent of a nationalized health records system increases the complexity of these networks as well. In order to gain management and control over these information systems, the American National Standards Institute (ANSI) hosts the Healthcare Information Technology Standards Panel (HITSP). This is one of several cooperative efforts between industry and government to create standards. However, all too often the standards result in a highly complex architecture and system design, because early standards and architectures tend to focus on resolving major issues with little forethought about the broader architecture. Many argue that too little is known or that the project is far too complex. Years later, this results in an effort to simplify and streamline the system again.

Allowing a Frankenstein architecture to emerge would be a travesty when our initial objectives are to streamline healthcare processes, removing redundancies and latencies from the current system. The planners should design the system for streamlined performance early. Large-scale projects like these are not new, and history tells us many good things. The evolution of complex systems such as the computer, the car, and the internet emerged out of a democratization of design: literally tens of thousands of people have contributed to these systems, and those models are one approach to resolving the large-scale, complex information systems involved in healthcare. What has emerged from the democratization of design is a standardization of interfaces in a virtualized environment. For example, headlamps are nearly identical for every car, with standard connectors and mounts, even though the headlight assemblies are artfully different on each car. The computer has standard hardware and software interfaces even though the cards and software perform different functions. The virtual computer is independent of vendor product specifications; instead, the vendor performs to a virtual computer standard in order for its products and services to function properly.

Let us take a moment to explain that virtualization is the creation of a concept, thing, or object as an intangible structure for the purpose of study, order, and/or management. The practice is used across a breadth of disciplines, including particle physics and information science. Within the information realm, there are several virtualization domains, including software, hardware, training, and management virtualization. My interest is not in the use of any specific virtualized technology but in exploring healthcare virtualization management.

I propose a Healthcare Information Virtual Environment System (HIVES), Figure 1, which is essential to reducing complexity and establishing a standard for all those participating in the healthcare industry. The virtual environment is not a technological system; instead, it is a management system, or space, in which medical information is exchanged by participating objects within the virtual environment. Real things like clinics, offices, data centers, and equipment sit on the virtualized backplane or space. HIVES would have a set of standards for participating equipment, clinics, hospitals, insurance agencies, data centers, and so on, connecting to the environment in order to exchange information. Many may remark that these standards already exist. I can locate dozens of vendor products and services supporting hardware, software, and even service virtualization, but none of them is a standard for virtualized management of the overarching healthcare environment, which is what the nationalized healthcare system is attempting to manage. I have reviewed HITSP and noted there is no clear delineation of a virtualized managed environment.

Figure 1: HIVES


In such an environment, I envision that data placed into the environment would have addressing and security headers attached. In this way, data is limited to those listening who have authorization to gather, store, and review the specific information. For example, a doctor prescribes a diagnostic test. An announcement of the doctor's request, addressed to testing centers, is made in the environment. Scheduling software at a participating testing facility picks up the request and schedules the appointment. It announces the appointment in the virtualized environment, where the doctor's office software is listening to receive the appointment data. Once the patient arrives, the machines perform the diagnostics and place the patient's data back into the environment. An analyst picks up the record, reviews it, and posts the assessment in the environment. In the meantime, a participating data center that holds the patient's record is listening; it collects all new information posted in the environment regarding the patient and then serves those records to authenticated requests. The patient returns to the doctor's office, which requests the patient's record from the data center through the environment.
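
To make the idea concrete, here is a minimal sketch of such an envelope and listener filtering; the header fields, roles, and clearance levels are my own invention for illustration and are not drawn from HITSP or any vendor standard.

    from dataclasses import dataclass

    @dataclass
    class HivesMessage:
        """Data placed into the environment carries addressing and security headers."""
        addressed_to: str      # e.g. "testing_centers", "data_centers"
        patient_id: str        # identifier (a real system would protect or pseudonymize this)
        min_clearance: int     # minimum authorization level required to read the payload
        payload: dict

    class Participant:
        """A clinic, lab, or data center listening on the environment."""
        def __init__(self, name, role, clearance):
            self.name, self.role, self.clearance = name, role, clearance

        def hears(self, msg: HivesMessage) -> bool:
            # Only addressed, sufficiently authorized participants may gather the data.
            return msg.addressed_to == self.role and self.clearance >= msg.min_clearance

    order = HivesMessage(addressed_to="testing_centers", patient_id="P-1001",
                         min_clearance=2, payload={"test": "MRI", "requested_by": "Dr. Adams"})

    lab = Participant("Regional Imaging", role="testing_centers", clearance=3)
    billing = Participant("Billing Office", role="billing", clearance=1)
    print(lab.hears(order), billing.hears(order))   # True False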

The advantages of having such an environment, whether called HIVES or something else, are enormous. The patient's records are available to everyone participating in the environment; security levels and access can be administered efficiently to ensure HIPAA and other security compliance standards; bio-surveillance data is more readily available, with higher accuracy, in the data centers; the environment can be an industry-driven standard managed through a consortium; and the government could be an equal participant in the environment.

Moreover, to be a participant, a manufacturer, clinic, lab, hospital, doctor's office, data center, or any other entity has to meet the clearly defined standards and join the consortium at some level. Thus, the complexity of the architecture and of systems interfacing can be tremendously reduced, achieving the stated objectives of healthcare reform and streamlining.

Commentary:  Please feel free to comment and dialogue on this concept. I would especially enjoy commentary regarding the standards and any efforts at virtualized management of health care information. 

Sunday, November 21, 2010

Impacts of Complexity on Project Success

Commentary: These are the relevant portions of an extensive paper written for my Master of Information Technology coursework. The paper highlights a common concern among project managers: the lack of quality information early in a project, especially in complex projects. The overall paper proposed research into project complexity and early planning efforts.

Introduction

Project management practice and principles have been maturing and continue to mature. The general paradigm of planning well applies to early project planning and has a significant influence on the success or failure of a project. This research supports identifying the key relationships between influential factors affecting scope and risk in complex projects during early project planning. Attention to complexity is important since information technology (IT) projects are complex by nature, and complexity tends to increase risk. "Project abandonment will continue to occur -- the risks of technology implementation and the imperfect nature of our IT development practices make it inevitable" (Iacovou and Dexter, 2005, p. 84). Therefore, this study focuses on early information technology project planning practices, when the project is vague and the outcomes are unknown and unforeseen. The purpose is to better manage scope gap early.

Problem Statement. Poor scope formulation and risk identification in complex projects during early planning have led to lower project performance and weakened viability. Project managers are therefore challenged to manage these issues early in order to increase the project's viability and success.

Argument. Project complexity influences performance, just as taking shortcuts in a rush for results produces an outcome with complexity-like characteristics. Lower performance outcomes may result when project-essential factors relating to scope and risk objectives are overlooked or not properly managed, resulting in increased cost, delays, and/or quality issues that jeopardize the project's viability and success.

Body of Works Review

This effort explores the significant body of works that has emerged to date. Literature research was conducted across a diversity of project types in support of the problem statement that poor scope formulation and risk identification in a complex project during early planning affect project performance and viability in relation to the complexity of the project. This is by no means the first time research of this nature has been explored in these three areas: scope definition, risk identification, and project complexity.

The common threads in the body of works span decades and include project management as a whole, risk and scope factors that affect project success, information and communications challenges, and the impact of complexity on scope and risk. Works from other disciplines provide many transferable lessons learned. For example, construction and engineering projects share with information technology projects complexity issues as well as information reporting and sharing concerns. Other works from supporting disciplines contribute factors on education, intellect, and learning in support of competency influences on risk. A 2001 trade publication article indicated that the causes of failed projects tend to be universal; its author, John Murray, concludes that information technology projects fail from a small set of problems rather than exotic causes (Murray, 2001, pp. 26-29).

In a 2008 construction project study, the researchers discussed the construction industry's front-end planning, which they explain is equivalent to the project charter process. The work details a study of fourteen companies and their project planning processes, then presents a model process. The study results are summarized into critical criteria for success. In conclusion, fifty percent of the projects did not have the information required for front-end planning activities. Problem areas identified in a follow-on study included weak scope and risk identification as well as other basic issues (Bell and Back, 2008).

The research on scope definition in the body of works indicates that cooperative planning and information sharing have been key factors in developing scope. A 2007 study on concurrent design addressed the complexities and risk of concurrent design projects. The researchers posed a model of interdependent project variables whose linkages illustrate the direction of communications or information sharing between the variables. In their analysis, they conclude that cooperative planning, in the form of coupling and cross-functional involvement, significantly reduces rework risk, and that early uncertainty resolution depends on cross-functional participation (Mitchell and Nault, 2007).

The Technology Analysis and Strategic Management journal published an article in 2003 discussing outsourcing as a means of risk mitigation. The outcome of the case under review was project failure due to a lack of clear requirements and poor project management, attributed to conflict and a loss of mutual trust between the outsourced vendor and the information technology client. The result was the vendor cutting its losses due to weak commitment when compared to in-house project support. The researcher suggested that shared risk may be effective in a partnership such as outsourcing but requires strong communication and some level of ownership (Natovich, 2003, p. 416). This case study illustrates that cooperation is critical in information technology projects. A 1997 study discussed mobilizing the partnering process in complex multinational engineering and construction projects. The researchers argued that developing project charters fostered stronger partnerships and reduced risk. In general, the article promotes a shared purpose supported by a method based on vision, key thrusts, actions, and communication, and it offers management practices and predictors for conflict resolution and successful projects. Among the best predictors of success in high-performance project managers are the ability to reconcile views rather than differentiate, to influence through knowledge, and to consider views over logic or preferences (Brooke and Litwin, 1997).

The literature also indicates that the competencies of project members and conflict resolution have been key factors of interest. A Northeastern University study explored strengthening information technology project competencies by surveying 190 employers, finding that employers considered hands-on experience, communications ability, and the behavioral mannerisms of the student, among other attributes. The researcher calls for a mixture of improvements to the student curriculum involving project management skills, both visionary and hands-on, as well as group interaction (Kesner, 2008). Efforts to strengthen competencies have been made not only in traditional educational institutions but also in professional organizations such as the American Institute of Certified Public Accountants (AICPA). A 2008 article discussed the accounting industry's approach to identifying and correctly placing information technology staff based on assessed competency levels; the AICPA is using a competency set that spans industries and skill levels ("IT Competency", 2008). Some dated literature also indicates that solving vague problem sets within complex projects centers on a willingness and ability to engage vague circumstances and to think abstractly. A 1999 psychology publication discussed typical intellectual engagement as involving a desire to engage and understand the world, interest in a wide variety of things, a preference for complete understanding of a complex problem, and a general need to know. The study associated intellect with typical intellectual engagement: such individuals engage their environment in an intellectual manner, problem solve, and believe they possess a greater locus of control over the events in their lives (Ferguson, 1999, pp. 557-558). Additional research is necessary in this area given how dated this work is.

In a 2006 article, researchers sought to understand methodologies for reporting on software development projects to senior managers. The work discussed reporting and governance in an organization, broke the subject into four functional areas, and further refined the best practices into a common view. The researchers noted that little attention has been given to how senior managers and the board can be informed about project progress, and they offered several methods of informing them. They reported that senior managers need information grouped into three classes: decision support, project management, and benefits realization assessments. They then discuss a variety of reports and their attributes, concluding that senior managers and board members need effective reporting if they are to offer oversight to a software development project (Oliver and Walker, 2006). The same 2006 study also indicated that continuous reporting, or information sharing, builds the case for compelling board member involvement based on four factors: cost overrun history, material expenditures, [software] complexity, and any adverse effects on the company (Oliver and Walker, 2006, p. 58).

Efforts to manage project complexity have treated information technology governance as a key factor in project success. Information technology governance has been sought as a framework to align organizational goals with project goals. In a 2009 qualitative study, researchers treated information technology governance, change management, and project management as closely related, then stated the premise that information technology governance must itself be governed to ensure that problems due to weak governance are corrected. They pose the question of how much information technology governance is required. They then organize information technology governance into three broad groups, corporate governance, scope economies, and absorptive capacity, and explore these groupings. Finally, the researchers relate information technology governance to the enterprise at all levels, discussing the results of a survey given to numerous actors in the organizations' CRM [Customer Relationship Management] projects. They also found that most companies surveyed had risk and problem management programs that were mature rather than given lip service. The problem areas that stood out were communicating with senior management as well as with consultants and vendors. In conclusion, the researchers remark that information technology governance depends on senior management involvement and sound project management ability (Sharma, Stone, and Ekinci, 2009).

Given scope, risk, and project complexity, information technology governance offers a framework for unifying organizational objectives. Research completed in 2009 showed that information technology governance covers all the assets that may be involved in information technology, whether human, financial, physical, data, or intellectual property (Sharma, Stone, and Ekinci, 2009, p. 30). The same research has also shown that information technology governance requires top-down involvement, stating that successful implementation of information technology governance depends on senior management involvement, constancy, and positive project management abilities (Sharma, Stone, and Ekinci, 2009, p. 43). Senior management requires information to be shared, and a 2006 project journal publication supports this, remarking that continuous reporting builds the case for compelling board member involvement based on four factors: cost overrun history, material expenditures, [software] complexity, and any adverse effects on the company (Oliver and Walker, 2006, pp. 50-58).

The body of works, while much broader than sampled here, demonstrates support and strength in a number of areas of the problem statement. The literature selected ranges in date from 1997 to 2010, with the greater portion of the works being more recent, 2007 or later. Some areas of the work are dated or sparse, which indicates a need for additional research, such as in the area of problem-solving abilities in vague or unclear circumstances. While much of the research spans several industries, drawn principally from industry and trade journals in information technology, general construction, or engineering, the project management principles and findings are transferable between project types. The works also included several academic studies and only two open-source articles. Most of the works were authoritative and peer reviewed. The dated works were cited more frequently than the more current works, as is to be expected.

The compelling thread in the body of works is that scope and risk concerns are influenced by project complexity, with cooperation, information sharing, conflict resolution, and competencies as significant factors in project success.

Discussion

Technology projects are challenged by a variety of factors that contribute to the performance of the project. The body of works indicates that risk and scope, complicated by project complexity, directly influence project success from the outset; thus, early project planning is crucial to success. The body of works relating to the elemental aspects of competencies, information, cooperation, and conflict management offers historical support for risk and scope formulation. The one point that stands out is information sharing and flow at all levels. Additional research is necessary into what lies behind successful project managers and the relationship between project-related competencies and the ability to reason through complex and obscure project problem sets. Dated literature indicates a relationship between a positive locus of control and a willingness to engage abstract problems.

Commentary: I suggest that compartmentalizing a complex project into smaller projects should strengthen the locus of control and improve problem solving. In short, a smaller problem set is more easily grasped than an overwhelmingly large set of problems, thus reducing risk and strengthening scope definition. In breaking a complex project into smaller achievable projects, the organization will gain greater control over the entire process and gain incremental successes toward the ultimate goal. Continuous improvement would characterize such an evolution. The master project manager must assess the order in which the smaller projects are completed; some may be completed simultaneously, while others must be completed sequentially.

A risk of scope creep may be introduced as an outcome of mitigating scope gap. To remain focused, all the projects must align with the organizational strategic objectives as they take strategy to task. New ideas need to be vetted in ways meaningful to the organization and aligned with the overall objectives in a comprehensive change management plan.


Communication is also essential in managing complex projects. The use of a wiki as a repository for foundational policies and information is often a best practice.

Large-scale, sudden disruptions of an organization are required under certain circumstances. In most circumstances, however, complex projects need to be properly broken into smaller, manageable efforts that then become part of a continuous improvement effort within the organization.

References

(2004). Skills shortage behind project failures. Manager: British Journal of Administrative Management, (39), 7. Retrieved from Business Source Complete database.

(2008). AICPA's IT competency tool takes you down the path to success!. CPA Technology Advisor, 18(6), 60. Retrieved from Business Source Complete database.

Bell, G. R., & Back, E. W. (2008). Critical activities in the front-end planning process. Journal of Management in Engineering, 24(2), 66-74. doi:10.1061/(ASCE)0742-597X(2008)24:2(66).

Brooke, K., & Litwin, G. (1997). Mobilizing the partnering process. Journal of Management in Engineering, 13(4), 42. Retrieved from Business Source Complete database.

Chua, A. (2009). Exhuming IT projects from their graves: An analysis of eight failure cases and their risk factors. Journal of Computer Information Systems, 49(3), 31-39. Retrieved from Business Source Complete database.

Ferguson, E. (1999). A facet and factor analysis of typical intellectual engagement (tie): associations with locus of control and the five factor model of personality. Social Behavior & Personality: An International Journal, 27(6), 545. Retrieved from SocINDEX with Full Text database.

Iacovoc, C., & Dexter, A. (2005). Surviving IT project cancellations. Communications of the ACM, 48(4), 83-86. Retrieved from Business Source Complete database.

Kesner, R. (2008). Business school undergraduate information management competencies: a study of employer expectations and associated curricular recommendations. Communications of AIS, 2008(23), 633-654. Retrieved from Business Source Complete database.

Kutsch, E., & Hall, M. (2009). The rational choice of not applying project risk management in information technology projects. Project Management Journal, 40(3), 72-81. doi:10.1002/pmj.20112.

Mitchell, V., & Nault, B. (2007). Cooperative planning, uncertainty, and managerial control in concurrent design. Management Science, 53(3), 375-389. Retrieved from Business Source Complete database.

Murray, J. (2001). Recognizing the responsibility of a failed information technology project as a shared failure. Information Systems Management, 18(2), 25. Retrieved from Business Source Complete database.

Natovich, J. (2003). Vendor related risks in IT development: A chronology of an outsourced project failure. Technology Analysis & Strategic Management, 15(4), 409-419. Retrieved from Business Source Complete database.

Oliver, G., & Walker, R. (2006). Reporting on software development projects to senior managers and the board. Abacus, 42(1), 43-65. doi:10.1111/j.1467-6281.2006.00188.x.

Seyedhoseini, S., Noori, S., & Hatefi, M. (2009). An integrated methodology for assessment and selection of the project risk response actions. Risk Analysis: An International Journal, 29(5), 752-763. doi:10.1111/j.1539-6924.2008.01187.x.

Sharma, D., Stone, M., & Ekinci, Y. (2009). IT governance and project management: A qualitative study. Journal of Database Marketing and Customer Strategy Management, 16(1), 29-50. doi:10.1057/dbm.2009.6.

Skilton, P., & Dooley, K. (2010). The effects of repeat collaboration on creative abrasion. Academy of Management Review, 35(1), 118-134. Retrieved from Business Source Complete database.

Sutcliffe, N., Chan, S., & Nakayama, M. (2005). A competency based MSIS curriculum. Journal of Information Systems Education, 16(3), 301-310. Retrieved from Business Source Complete database.

Vermeulen, F., & Barkema, H. (2002). Pace, rhythm, and scope: process dependence in building a profitable multinational corporation. Strategic Management Journal, 23(7), 637. doi:10.1002/smj.243.

Saturday, November 20, 2010

Innovation Shatters Paradigms

Several years ago, ATT sought to leverage technology in its global networks, heralding the move as the most advanced network in the world. Its references to nodes, topologies, and ATT’s most technologically advanced network were reminiscent of traditional networking approaches, not advanced technologies or methodologies. While clearly a marketing campaign, the approach is myopic, serving only established markets with known demand. ATT’s efforts are nothing more than a tactical grab for global market share stemming from a strategic plan to position for a perceived future marketplace. A true pioneer would define the marketplace instead of positioning itself as a jackal ready for prey. As a jackal, all you get is whatever happens to come along, and then you battle for morsels. A slight paradigm shift in thinking could propel information processing into realms far beyond any of the current thinking.

One paradigm that needs to be shattered is the idea of information sharing. Now you may be thinking, what is this guy talking about? The X-Files, a popular television show, has a tagline, "The truth is out there," that applies well to this discussion. The current information sharing assumption is that someone out there is processing needed information and all we need to do is find it. Unfortunately for both parties, neither knows of the other, let alone what information is needed and how best to exchange it once the need is discovered. We think we can just email it around as a CSV file or a word processing document, but these formats require a level of technical skill and time to digest the data. In general, computers, networks, and software are treated as the mystic medium that makes the connection to the other side. However, the connection is not so mystic. Instead, the connection should be managed right down to the desk or computational device!

Complex Adaptive Systems may offer a solution. Using the idea of the node, the complexity of network connections, and Just In Time (JIT) manufacturing concepts, information can be processed, advertised, and disseminated globally through dynamic networks. Nodes, instead of being peripherals, could become JIT U-shaped information processing cells where inputs and outputs are managed. These inputs and outputs are globally accessible using existing telecommunication networks. The producer node advertises its products and services, then dynamically connects to an information consumer who has a need, as in the rough sketch below.
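
As a thought experiment, here is a minimal sketch in Python of the producer/consumer node idea: producer nodes advertise their information products in a registry, and a consumer node with a need discovers and connects to a matching producer. The node names, products, and prices are hypothetical.

    # Minimal sketch of JIT-style information cells: producers advertise,
    # consumers discover and connect. Names, products, and prices are hypothetical.
    class Registry:
        def __init__(self):
            self.offers = {}                      # product name -> producer node

        def advertise(self, product, producer):
            self.offers[product] = producer

        def find(self, product):
            return self.offers.get(product)

    class ProducerNode:
        def __init__(self, name, product, price):
            self.name, self.product, self.price = name, product, price

        def deliver(self):
            return f"{self.product} from {self.name} at ${self.price}"

    registry = Registry()
    registry.advertise("regional sales feed",
                       ProducerNode("node-A", "regional sales feed", 20))

    # A consumer node with a need dynamically locates the producer and connects.
    producer = registry.find("regional sales feed")
    if producer:
        print(producer.deliver())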

The beauty in such a system is the economy that emerges. A consumer node purchases raw data and processes it into finished products, becoming a producer node whose outputs are advertised over the networks at a price. This kind of thinking could drive numerous new markets, including truly virtual companies. If a true virtual company emerges, the digital profit model may gain independence from the brick-and-mortar anchor to which it is tethered. There are so many possibilities, but industry needs independent creative thinkers who can shape the market rather than lie in wait in the weed patches of wrecked economies, hoping and waiting for opportunity.

Monday, November 15, 2010

Spiritual Machines Create Challenges for Project Managers

What I am going to talk about originated as a discussion in my Master's in Information Technology program. This may seem far-fetched to many people, but it is an upcoming debate in the not-so-distant future. Holographic technologies have the potential to cause moral dilemmas for project managers who must implement these systems when they arrive. The early technology will be inanimate and mechanical in nature. As time passes, this technology will combine with neural nets and biological computing to create life-like machines that could potentially develop self-awareness. It is never too early to debate the questions and challenges these systems pose.

Holography was commercially exploited as early as the 1960s with the GAF viewfinder. As a young boy, I recall placing reels with images into a stereographic viewfinder and looking at the comic book world of Snoopy and other stories of dinosaurs. Later, I explored holography more deeply in technical books, learning how data is encoded in the interference patterns between reference and data beams. Science philosophy books explored the holographic universe and how the human eye-brain organ is a holographic system that interprets our world.

Scientists have struggled with the eye-brain-to-mind dilemma in humans. The brain is the mechanical operation while the mind is spiritual in character. Holographic systems store information in terms of ghostly images, unlike conventional storage systems that store information in terms of attributes. According to Michael Talbot’s book “The Holographic Universe,” holography’s ethereal images reflect the way the human mind processes reality. The human brain can suffer trauma, losing large areas of tissue, yet somehow retains unfettered memories and even character. Likewise, a curious quality of holography is that all the information is stored ubiquitously throughout the storage medium, defeating divisibility short of catastrophic loss; any divisible piece contains the complete information set (Talbot, 1991). Thus, holography has the appearance of retaining the character or essence of the information stored despite failures and imperfections in where the data is embodied.

Current robotic research is developing systems that mimic human sensory and motor capabilities. Software and processing hardware emulate human neural circuitry to produce human-like actions, including emotional responses, or to make human-like decisions. Both kinds of actions are mechanical in character, operating on the basis of local action: for example, tracking and catching a baseball in flight or, if the baseball hits the robot instead, performing a specific emotional response. The elements of surprise and creativity are more or less spiritual in character and have not yet been mastered by science, since they are not the local actions that science deals with. For example, reflecting on the flight of the baseball and describing it as screaming through the air is creative and not a local action. In fact, self-awareness may be a requirement for surprise and creativity.

Holography creates theological concerns since its resilient retention of information is not mechanical. Instead, holographic data storage is based on waveforms, or electromagnetic energy patterns, also known as light waves, which are often equated with spirituality. There are theological implications; for example, the Judeo-Christian Bible draws parallels between light, and the absence of light, and spiritual existence. In Genesis 1:4, "God saw that the light was good, and he separated the light from the darkness.” Holographic ghostly images in storage and computational processing could depart silicon wafers and mechanical storage systems for the amino acids and proteins found in biological processing. Human tinkering could result in challenges by truly spiritual machines. If not careful, these biological machines could develop a conscience and become annoyed with natural biological computers, also known as humans. In the end, mankind’s technological conduct could potentially manufacture a nemesis. If for all the good in the world there is evil, then the human responsibility is to dispense the good and forsake the evil. Holographic storage is the beginning of a computational era that has the potential to elevate or degrade mankind.

"The development of every new technology raises questions that we, as individuals and as a society, need to address. Will this technology help me become the person that I want to be? Or that I should be? How will it affect the principle of treating everyone in our society fairly? Does the technology help our community advance our shared values?" (Shanks, 2005).

The possibility of computational systems based not on silicon but on amino acids and proteins, the building blocks of life, is clearly on the horizon and presents some puzzling questions. As these systems advance, project managers implementing them could be faced with significant ethical and moral decisions. Literally, actions such as killing the 'power' on a living machine raise questions about life and the right to exist. Will man-made biological computers, perhaps through genetic engineering, develop self-awareness, spirituality, and a moral code of their own? How far will this go? What other moral and ethical issues could arise from the advent of this technology?

Please feel free to comment. I would enjoy hearing from you.

References:

Lewis, C. S. (2002). The four loves. Houghton Mifflin Harcourt.

Englander, I. (2003). The architecture of computer hardware and systems software: An information technology approach (3rd ed.). New York: John Wiley & Sons.

Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. Penguin Books.

Shanks, T. (2005). Machines and man: Ethics and robotics in the 21st century. The Tech Museum of Innovation. Retrieved February 21, 2009, from http://www.thetech.org/exhibits/online/robotics/ethics/index.html

Talbot, M. (1991). The holographic universe. HarperCollins Publishers.

Sunday, November 14, 2010

Social Media APIs: Architecting and Building Applications

This is a soft technical discussion of coding for social media. Some technical knowledge is helpful.

Social media has taken off and is here to stay. Preparing contextual material for the social media channels has been discussed in tremendous detail, with innumerable books highlighting this aspect. However, creating new capabilities and leveraging social media in newer ways often requires creative use of the technology, which almost always translates to creating a new application. Most companies react negatively to the thought of creating a new application, having had poor experiences in the past. With a solid plan, incremental or evolutionary development, and patience, the process can be much more palatable. It begins with an understanding of the technology.

Twitter, Google, LinkedIn, and Facebook, as well as other social media instruments, typically have a way to connect to their service. This connection is known as an Application Programming Interface, or API. Through this interface, application-specific data is passed bi-directionally, in most cases through a standard set of variables. Most instruments prefer that the application not be embedded in their site; they want the application to run on a separate server somewhere else on the planet. Site information is passed to the remote server via the API regardless of geographical location. What this means for a business seeking to leverage social media is that it can keep complete control of its application and connect it to multiple social media instruments, as long as it meets the connection agreements.
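
As a rough sketch of that arrangement, here is a minimal Python example using the Flask web framework: the application lives on its own server and simply accepts and returns data through its API, wherever the social media instrument happens to be. The route name and payload fields are hypothetical, and error handling is omitted.

    # Minimal sketch: a remote application exposing an API endpoint of its own.
    # The route and payload fields are hypothetical examples.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/api/v1/updates", methods=["POST"])
    def receive_update():
        data = request.get_json(force=True)       # data passed in over the API
        # ... application-specific processing would happen here ...
        return jsonify({"status": "accepted",
                        "items": len(data.get("items", []))})

    if __name__ == "__main__":
        app.run(port=8080)                         # runs on your own server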

Both the social media instrument and the remote application have APIs that must match up. With multiple instruments in use, the connection is not always clean. Therefore, the remote application should make use of a Gateway for each instrument connected. The Gateway maps the instrument-specific API to your application-specific API, as in the sketch below.
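
Here is a minimal sketch of the Gateway idea in Python. The payload field names are hypothetical stand-ins for what an instrument might send; the point is that each gateway translates its instrument's format into the one format the application understands.

    # Minimal sketch: one Gateway per instrument maps its payload into the
    # application's own API format. Payload field names are hypothetical.
    class Gateway:
        def to_application(self, payload):
            raise NotImplementedError

    class TwitterLikeGateway(Gateway):
        def to_application(self, payload):
            return {"author": payload["user"]["screen_name"],
                    "text": payload["text"]}

    class FacebookLikeGateway(Gateway):
        def to_application(self, payload):
            return {"author": payload["from"]["name"],
                    "text": payload["message"]}

    # The application sees one shape regardless of which instrument sent the data.
    gateways = {"twitter": TwitterLikeGateway(), "facebook": FacebookLikeGateway()}
    sample = {"user": {"screen_name": "acme_corp"}, "text": "New product launched"}
    print(gateways["twitter"].to_application(sample))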

Most APIs require authentication. Some social media instruments allow sending a simple username and password with each request, but more advanced methods are now in use by most. Twitter uses OAuth, which is similar to a valet key that limits access. Facebook uses a handshake approach with two elements to authentication: a connection and the state of being logged in. The Facebook authentication is the most complex aspect of interfacing. The differences in authentication between social media instruments are another reason to use a Gateway specific to each instrument.
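
Authentication can live inside each gateway as well. The following is a hedged sketch only: the OAuth case uses the third-party requests_oauthlib library, the keys, tokens, and URLs are placeholders, and the handshake case is reduced to a stub rather than a working Facebook login flow.

    # Sketch: per-instrument authentication handled inside its gateway.
    # Keys, tokens, and URLs below are placeholders, not real credentials.
    from requests_oauthlib import OAuth1Session

    class OAuthGateway:
        """OAuth works like a valet key: limited, revocable access."""
        def __init__(self, client_key, client_secret, token, token_secret):
            self.session = OAuth1Session(client_key, client_secret,
                                         resource_owner_key=token,
                                         resource_owner_secret=token_secret)

        def fetch(self, url):
            return self.session.get(url).json()   # signed request to the instrument

    class HandshakeGateway:
        """Stub for a handshake-style instrument: a connection plus a logged-in state."""
        def __init__(self, app_id, app_secret):
            self.app_id, self.app_secret = app_id, app_secret
            self.connected = False
            self.logged_in = False

        def connect(self):
            # Placeholder for the two-part handshake described above.
            self.connected = True
            self.logged_in = True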

By properly architecting social media interfaces, companies can combine social media in unique ways to create niche markets, skirt fierce competition, or otherwise reach their audience in meaningful ways. Project managers running these projects should look to combine Software Development Life Cycle (SDLC) practices with spiral and waterfall models to achieve incremental progress. Obviously, the project manager will want to prioritize connections and seek the greatest returns early.

Friday, November 12, 2010

You Gotta Talk! And Talk A Lot...

I want to use this post to discuss an emerging situation I am observing. But first, your social media experience may follow the pattern of the Social Media Maturity Life Cycle Model. Mine certainly did. When these instruments became known to me, I was still in the mindset of the good ole job search method of mailing resu-letters, as Jeffery Fox teaches in his book How to Land a Dream Job (circa 2001), with the slight twist of emailing them. When I finally began to build my social network, I relied on it too much and became discouraged, but I kept building and networking. Slowly, mostly through some hard knocks, I became enlightened, and my social media networks are now becoming more active.

Let me digress a little to discuss some background. Over the past decades we have observed the advent of video games and other technology-based activities that essentially require solitary and/or faceless, nameless interactions. People could say anything and discuss anything behind an avatar and a screen name. Social skills were tossed out the window as condemnations, insults, and putdowns dominated a majority of the dialogues online. However, there is a resurgence of civility, especially where professionals are concerned.

A growing body of works illustrates a movement away from technology toward increased social interaction. The authors of the book Brains On Fire (2010) remark that technology is a trap, a crutch; they argue that it is a detriment. Other works from Orville Pierson (2005), Dale Carnegie (1938), Stephen Covey (1989), and others stress that you gotta get out and talk to people. As the Internet grew in popularity, The Cluetrain Manifesto (1999) declared to the people of Earth that markets are conversations in which you get to inspect the goods and ask questions. In other words, you gotta talk to people.

While LinkedIn, Facebook, and other social media instruments are useful in one's job search, nothing is a substitute for good ole getting out there, talking to people, and having people talk about you. Your social media networks are not truly useful as long as they remain in cyberspace. Plenty of movies, like 'The Lawnmower Man' and 'Tron', look to put one's essence into cyberspace, but you need to go the other direction and get out of cyberspace. Only one movie I can think of, 'Weird Science', actually has cyberspace entering real space: a bunch of young boys feed data into their computer, hold a seance, and through a freak of nature a cyberspace being appears, stunning the boys. Making the leap from building a successful social media network in cyberspace to talking to people is a challenge for many. For some it can be a weird science. You have been putting connection data into the system; now is the time to bring it to life. How do you start the conversation, sustain the conversation, or even get difficult people involved?

This could be an expansive discussion that delves deep into one's psyche and requires considerable study of charismatic methods and psychology. However, there is a simple approach. You do not need a seance or a weird science! You need the phone, your conversational skills, and some guts. CALL! If you are uneasy about calling, practice in front of a mirror and smile. Script lines if it helps, but do not read them on the phone; people will know you are simply reciting lines. Nonetheless, CALL!

There are some simple rules for calling.
  1. Take notes.
  2. Call at a convenient time. When they answer, ask if it is a good time to talk. If they are busy, schedule a new time that is convenient for them.
  3. Disarm them. Tell them you are not calling for a job. Say that you are just touching base.
  4. Make small talk. Discuss common points relating to your relationship with them. If you lack knowledge on something they are passionate about read an article about it before calling.
  5. Keep it brief.
  6. Put em on a tickler to call every three to four weeks.
  7. In three to four weeks, CALL them again. 
Of course, you will need to assess each connection you contact and treat them according to the strength of your relationship. If you've had an interview, follow the interview follow-up process I discussed in my earlier blog postings.
The bottom line is you gotta get off your duff and get out there. You need to meet and talk to people. The more you do this, the better you get at it, and the greater your opportunities!

Commentary: I do not want to confuse people with my "Become a Good Conversationalist" post. In that post, you gotta shut up and practice listening, especially during an interview. In this post, you need to get your message out and talk to many people.