15 Business Tech Tools for 2012

December 27, 2011

In 2011, Computer Economics, Inc. published a report on technology trends that profiled organizational IT solutions as investment strategies. The following is a statistical review of the major characteristics, grouped into three categories: A) those experiencing the most investment activity, B) those with the most interesting results, and C) those that are almost compulsory for doing business.

A) Group that is experiencing the most investment activity:

#1: ERP: The rate of investment in Enterprise Resource Planning pushes it to the top of this list of 15 technologies that businesses invest in, even though it has the poorest risk-to-reward ratio. ERP strategies reach positive ROI and break even (BE) for about half of the companies that adopt them, but total cost of ownership (TCO) frequently exceeds original budget estimates. This is a very mature business technology that remains a mandatory tool for large enterprises, but it is difficult to forecast as an expense. In comparison to other strategies, it must be considered high risk, and its rewards in terms of ROI and BE are only classified as moderate.
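To make the ROI, break-even (BE), and TCO terminology used throughout this list concrete, here is a minimal sketch with hypothetical figures; the costs and benefits below are illustrative assumptions, not numbers from the Computer Economics report.

```python
# Hypothetical ERP investment figures (illustrative only, not from the report).
upfront_cost = 500_000          # initial license + implementation
annual_running_cost = 100_000   # support, hosting, administration
annual_benefit = 220_000        # savings + new revenue attributed to the system

def roi(years):
    """Simple ROI over a horizon: (total benefit - total cost) / total cost."""
    total_cost = upfront_cost + annual_running_cost * years
    total_benefit = annual_benefit * years
    return (total_benefit - total_cost) / total_cost

def break_even_year(max_years=15):
    """First year in which cumulative benefit covers cumulative cost (the BE point)."""
    for year in range(1, max_years + 1):
        if annual_benefit * year >= upfront_cost + annual_running_cost * year:
            return year
    return None

if __name__ == "__main__":
    print(f"Break-even year: {break_even_year()}")   # year 5 with these numbers
    print(f"ROI after 5 years: {roi(5):.0%}")        # roughly 10%
```

A technology whose actual costs routinely exceed the budgeted `upfront_cost` and `annual_running_cost` pushes the break-even year out and drags ROI down, which is exactly the TCO risk described above.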

#2: CRM: Customer Relationship Management strategies are currently experiencing high rates of investment. CRM has ROI and BE numbers similar to ERP, but CRM hits better TCO points because the actual costs of adoption meet original budget estimates for approximately 70% of the companies that invest. CRM can be classified as having moderate risk with moderate rewards.

#3: BI: Business Intelligence systems are experiencing very high rates of investment. BI systems have several capabilities but commonly use analysis tools to query internal databases and develop predictions that inform competitive decisions. BI has TCO numbers equivalent to CRM, with slightly better BE points than most other technologies. BI can be classified as having only moderate risk with high rewards.

#4: Enterprise Collaboration: Identifying financial rewards in collaboration systems is a difficult proposition, but this has not slowed the rate of investment in these technologies. Enterprise Collaboration systems meet what could be referred to as the TCO standard for business technology, where actual costs are consistent with original budget estimates in 70% of cases. BE points are good, but ROI for this technology is lower, with only a third of businesses getting the expected returns. Enterprise Collaboration systems should be classified as moderate in risk and moderate in financial reward.

#5: Mobile Applications: Less than half of businesses have adopted Mobile Apps, but Mobile Apps are one of only two technologies whose pace of investment percentage is higher than their adoption percentage, signifying a very fast growth rate. Mobile Apps are positioned right in the center of the graph for risk and reward; a true bull’s-eye of moderation on both axes. This is not to be confused with average; the average position on the scatter chart for the whole group of technologies is closer to where the border between low and moderate risk intersects the border between moderate and high reward, so Mobile Apps are not quite as safe as the average strategy on this list.

B) Group with most interesting results:

#6: Unified Communications: UC can deliver any type of communication via real-time methods (e.g. chat, whiteboarding, voice, forwarding, video) by combining a whole set of technologies into a consistent interface. UC’s value comes from its ability to integrate real-time communications with delayed-delivery communications, but it reaches the enterprise bottom line by integrating communications into the business process cluster. UC has great ROI numbers, with two thirds of adopting companies experiencing positive returns, putting it well into the high-reward classification. Meanwhile, risk is a little better than average, at low to moderate. Ultimately, and perhaps obviously, a typical UC solution from a provider such as Sprint is more expensive than a traditional on-premise PBX system and voice package. But when the cost is weighed against significant gains in productivity, the scale tips toward adoption of these steadily improving technologies, which explains why the market is expanding and predicted (ABI Research) to “reach 2.3 billion by 2016.”

#7: Desktop Virtualization: Not to be confused with server virtualization (v12n), Desktop Virtualization is the arrangement where your hardware is on your desk, but most of “your” software is accessed over a network, or online. Desktop Virtualization systems are almost guaranteed to come in under budget, thus providing a great reward ratio. In most cases this type of operation also improves security, which reduces risk, giving an indirect bonus to the reward ratio as well.

#8: SaaS: Software as a Service has the best financial profile for low risk and high reward of all the technologies available. Its costs are very predictable, with 80% of businesses reporting that TCO met original budget estimates, and nine out of ten businesses hitting BE or seeing positive ROI within two years.

#9: PaaS: Platform as a Service is a less mature technology than IaaS, or any other technology on this list. Almost no companies have implemented this true cloud environment, and few are considering doing so. However, risk of exceeding TCO estimates with PaaS is only moderate while rewards so far have been high.

#10: IaaS: Infrastructure as a Service is the purest form of the Cloud trinity. Companies are experiencing lower-than-average BE times, but TCO is easy to predict, and since ROI is acceptable, IaaS can be classified as low risk with high reward.

#11: SCM: The percentage of companies adopting Supply Chain Management is lower than I had expected. I forget that many industries have no use for either the planning or execution systems available within SCM. However, for the companies that have adopted it, which is about a third of the business economy, it has been a highly rewarding strategy because ROI for SCM is excellent, while risk is moderate. In the future, Cloud technologies should help with some of the challenges businesses currently face with implementing SCM strategies.

#12: Tablets: As a technology, tablets are economical but as a business strategy they are the second most expensive in terms of exceeding estimates for cost of ownership. Meanwhile reward has been measured as a flat line, not even getting off the floor.

#13: Legacy System Renewal: Not all companies have legacy systems, but as time marches on, the legacy renewal decision catches up to everyone. The question of whether to fix up existing equipment or to buy new can be a tough one to answer accurately. Upgrading to new equipment may seem like a no-brainer, but it can be a gamble; take Microsoft Vista, for example. Legacy system renewal is ultimately moderately risky, and moderately rewarding.

C) Group that is almost compulsory for doing business.

#14: Windows 7: Over three fourths of companies have already adopted or plan to adopt Windows 7. It is one of those technologies that are almost a mandatory cost of doing business. Whether for a large enterprise or a mom-and-pop shop, Windows 7 gives at least moderate rewards with low risk.

#15: HRMS: Human Resource Management Systems are a very mature technology. HRMS has been around for a while, and three quarters of businesses use some form of software to control employee information; obviously, for large labor forces it is basically a necessity. Risk is in the low end of the moderate range and reward is in the high corner of the moderate range.

Predictive Analysis: Large Penetration Vendors

March 24, 2014


Business Marketing

December 1, 2012

Strategic Alliances In Publishing:
Case Study of Readymade Magazine
September 23, 2010

Reinsch, Russell C.

 

     ReadyMade maintains many strategic alliances that are beneficial partnerships for the magazine. The bi-monthly magazine has a small circulation and a target audience of Generation Y readers who are interested in do-it-yourself (DIY) activities. ReadyMade, an independently owned periodical founded in 2001, was sold in 2006 to Meredith Corporation, publisher of many periodicals including Better Homes and Gardens and Parents magazine. Unlike most print publications, which have suffered from the double whammy of declining ad revenues and subscriptions in the social media/online publishing boom, ReadyMade's ad revenues grew from 2007 through 2009 and circulation rates held steady. This is a markedly better scenario than other, better known, publications are facing.
     As with all periodicals sold at bookstores, newsstands, chains, and other retail venues, ReadyMade uses a large distributor to handle all the relationships with the outlets that sell their print publication. ReadyMade has one large standing order with the distributor, and the distributor is then responsible for marketing the shelf space for ReadyMade. This simplifies and streamlines the supply chain, distribution channels, and other management issues between ReadyMade and the outlets that display and sell their magazine. The distributor, in return for an agreed-upon commission (a percentage of the newsstand sale price), is responsible for placing the magazine in the different venues and negotiates with the retailers the number of issues purchased. These negotiations cover the specific display and position of the publication, special retail promotions related to the publication, the number of copies ordered (referred to as the draw), and so on.
     There are many elements involved in selling print magazines, and the distributor for ReadyMade handles the retail outlets only. Another method for selling magazines is by subscription, where a person pays up front to receive the publication either through the mail or in an electronic edition. Many daily newspapers and magazines have set up digital/online editions and subscriptions; examples include the Wall Street Journal and Smithsonian magazine. Library subscriptions can be handled either by the retail distributor or by developing another strategic alliance with a distributor who specializes in the library market.
     ReadyMade has developed a market strategy that appeals to their target audience. Among a plethora of social media marketing options, there is a blog, Facebook and Twitter accounts, RSS feeds, and a free ReadyMade account that guarantees delivery of two different newsletters and special offers straight to the account holder’s email. As with most publications, the majority of the revenue is garnered through advertising, and, following industry standards, advertising rates are dependent upon circulation numbers. Circulation numbers include all sales from retail outlets, both internet and brick-and-mortar based.
     Strategic alliances are an important part of keeping ReadyMade financially viable. With a distributor handling their distribution channels, ReadyMade can focus on the editorial side of their business while the distributor focuses on the marketing aspects of selling the print magazines.

A comprehensive infographic guide to UX careers

November 10, 2012


Crowdfunding

August 3, 2012

Crowdfunding (CF) activities can be divided into three basic types: (1) equity, where the company attracts investors via the sale of shares; (2) donation and reward, where backers are basically contributing for goodwill; and (3) peer-to-peer lending (p2p), where the company goes into debt to its investors.

Some CF platforms specialize in certain types of companies, projects, or fundraising activities, while others have no niche and allow CF for anything. Fifteen of the top 40 sites require fundraisers to meet a pre-stated funding goal before the raised funds are released, an ‘all or nothing’ distribution scheme. Four sites distribute whatever funds are raised regardless of whether the project meets its goal, so the borrower keeps any money raised; and five of the CF platforms use some form of hybrid structure where funds distribution can go either way. Sites that allow users to keep any funds raised are especially popular.
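As a minimal sketch of how the three distribution schemes differ, the hypothetical function below decides how much pledged money is released to a project owner; the "hybrid" rule is just one plausible variant (modeled loosely on the Peerbackers twist described later), not the policy of any particular platform.

```python
def funds_released(scheme, pledged, goal, can_deliver_rewards=False):
    """Amount released to the project owner under three CF distribution schemes.

    scheme: 'all_or_nothing', 'keep_it_all', or 'hybrid' (an illustrative variant
    where a shortfall is still paid out if the owner can deliver the rewards).
    """
    if scheme == "all_or_nothing":
        return pledged if pledged >= goal else 0.0
    if scheme == "keep_it_all":
        return pledged
    if scheme == "hybrid":
        return pledged if (pledged >= goal or can_deliver_rewards) else 0.0
    raise ValueError(f"unknown scheme: {scheme}")

# Example: a $10,000 goal with $7,500 pledged.
print(funds_released("all_or_nothing", 7_500, 10_000))                      # 0.0
print(funds_released("keep_it_all", 7_500, 10_000))                         # 7500.0
print(funds_released("hybrid", 7_500, 10_000, can_deliver_rewards=True))    # 7500.0
```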

Equity sites in the UK have been active for years. While the law allowing equity CF in the US was passed this April, the SEC has 270 days from the date of passage to write the regulations, and unaccredited investors cannot participate in equity CF on US sites prior to those regulations. Here is a summary of the more important sites, alphabetically from K to Q.

Kickstarter. CF Type: Donation. Niche: loosely defined. $-Distribution: All or nothing. Summary: recognized as the #1 player in their segment. Kickstarter does not provide ACH management.

Kiva. CF Type: p2p. Niche: projects in developing countries. $-Distribution: borrower keeps any money raised. Summary: Launched in 2005, claims to be first mover in microlending for entrepreneurial projects. Funding Circle also claims this. Either way, Kiva is an important competitor in p2p.

Launchpad.com. CF Type: Equity. Niche: early stage angel capital for innovative technologies and life sciences. Summary: SEO is terrible.

Lendingclub. CF Type: p2p. Niche: personal and business loans. Summary: first to register with SEC and offer a secondary market for p2p loans, important competitor in the p2p space. Borrowers need $70K salary, 660 FICO, and clean record for 12 months prior. Lendingclub actually carries the notes. Nice stats page. There is no transparency between lenders and borrowers.

Medstarter. CF Type: Donation. Niche: Healthcare. $ Distribution: All or nothing. Summary: their niche is one specifically excluded on Kickstarter. In beta. Boring UX.

Microventures. CF Type: Equity & p2p. Niche: connecting angels with tech startups. Distribution: All or nothing. Summary: Registered broker-dealer, required to perform due diligence and handle investor relations. Average raise is $150K. 4K investors, $4M in funded transactions (not necessarily a lot in comparison to some other sites).

OnSetStart. CF Type: Donation but positioned to do Equity. Niche: None. Distribution: All or nothing.   Summary: Project creators keep 100% ownership over their work. Tools for funding projects. Member of NCFA. Traffic is supposedly growing quickly but the site appears to have almost no activity. Rated online as very easy to use but site navigation is actually quite awkward.

Peerbackers. CF Type: Donations. Niche: None. $ Distribution: all or nothing with a twist; if project has not met its funding goal but the project owner can still deliver the promised rewards, then the amount raised will be released to them. Summary: interesting menu options for finding projects on the site. They have received good media attention.

Peoples VC. CF Type: Equity. Niche: Hard to tell. Summary: marketplace functionality, well designed calculator, strong integration tools, “Crowdvestor” education course. Peoples VC has some very sharp people on their team. One of the top US equity sites, also receiving good media attention.

Petridish.org. CF Type: Donations. Niche: scientific research. $-Distribution: All or nothing. Summary: small projects, median size in the $10-15K range. Average donation is $70 (comparable with Kickstarter). Killer graphics on website. In beta. Good media coverage.

Prosper. CF type: p2p. Niche: personal needs. Summary: investor oriented; the site has a schedule with about 38 rates from AA to HR, a Quick Invest feature, developer tools, and data mining resources. Info on the procedures is broken down into categories and well presented. With $370 million in personal loans funded (3X more than Kickstarter), they are one of the top CF sites.

Quirky. CF type: co-creation; kind of a fourth category of CF. Niche: inventors and nerds. Summary: the site evaluates product ideas, picks winners, manufactures those products, and sells them on and off the site, taking about a 2/3 cut of the revenue. Members earn money right through the site. Members vote on the inventions and ideas with the most potential, while playing a legit pricing game fashioned along the lines of The Price Is Right. Link layout and functional naming on the site are quirky, but graphics are good. A unique player in the co-creation segment, Quirky receives major media coverage and maintains partnerships with over a dozen household-name retailers.

Commercial functionality within LinkedIn

June 6, 2012

Executive Summary

LinkedIn is well balanced in financial terms, generating revenue through three categories of monetized solutions: recruiting, advertising, and subscriptions. Currently, each of the three monetized solution categories contributes fairly equally to total revenue. Accounts in the USA make up two thirds of total revenue, with international accounts making up the other third. LinkedIn also offers four categories of free products to its users, defined as: profiles; networking; information exchanges; and widgets for integration/APIs/mobile applications.

Three solution categories generate commerce

Recruiting (“Hiring Solutions”) grossed approximately $100 million in 2010. This category consists of job boards, talent locators, referral engines, a matching tool, plus a few other products. LinkedIn competes in this market with Monster, CareerBuilder, Indeed, and other businesses providing job search services. Posting a job opening on LinkedIn cost approximately $200 a month in 2009 [Walker 2009].

Advertising grossed approximately $80 million in 2010. Advertising options include pay per click (PPC) ads, targeted marketing windows, a recommendation function… LinkedIn is in direct competition with the broader marketing industry.

Subscriptions grossed $70 million in 2010. Subscriptions are primarily software products, including advanced intranet search filtering capability, an intranet search agent, statistical reporting on profile activities, and a handful of other business- and executive-oriented features.
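Taken together, the rounded 2010 figures above bear out the "fairly equal" split described in the executive summary; a quick back-of-the-envelope check:

```python
# Approximate 2010 revenue by category (from the figures above, in millions USD).
revenue = {"Hiring Solutions": 100, "Advertising": 80, "Subscriptions": 70}
total = sum(revenue.values())  # ~250

for category, amount in revenue.items():
    print(f"{category}: ${amount}M ({amount / total:.0%} of total)")
# Hiring Solutions: $100M (40% of total)
# Advertising: $80M (32% of total)
# Subscriptions: $70M (28% of total)
```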

Role of the LinkedIn site, within its industry

LinkedIn is compared to other prominent social networks in the USA in media articles, analyst reports, etc., although it is often excluded from such comparisons as well. LinkedIn, however, owns the professional demographic, acting more like a corporate blog that crosses national and business network platforms. LinkedIn’s users go to the true social networks for their social activity, differentiating LinkedIn as their professional network, where they maintain higher levels of discretion with their connections and hold higher expectations of trust and security over their profile. While different from Facebook, Twitter, and YouTube in terms of social activity, it does compete with them in the ad revenue markets [Miller 2011, Vahl 2012].

LinkedIn is also a software company, albeit a non-traditional one in terms of vendor “lock-in.” LinkedIn could be considered slightly competitive to email providers with search capability such as Google or Yahoo, if it decided to expand its search capability outside of its own intranet.

What LinkedIn does

LinkedIn has a number of interesting products and features that allow users to network. Among them are the Behance Portfolio Application, CardMunch for iPhone, SlideShare, dashboard analytics, and Groups. Groups are highly popular self-organized communities.

Points regarding functionality of the site

The majority of commercial activity on LinkedIn is generated by a minority of its users. Background research for this report indicates most people that use LinkedIn are not aware of many capabilities and products that are stated in the company’s 10-K. This lack of awareness can be attributed to two factors. First, many LinkedIn products are neither promoted nor available for purchase through the website, only through “field sales organizations,” which have three regional headquarters in the USA, located in Chicago, New York, and San Francisco. The field sales organizations perform offline sales operations by calling directly on their customers.

Imagine a scenario where a new LinkedIn user builds a personal profile and is considering what else the site has to offer for his or her business. They may start looking around the website, checking out available products, and considering upgrading to a paying customer. At what point are they able to see the existing inventory or selection of products that are not visible online? Never via LinkedIn; perhaps on YouTube! It would be the same as a car dealer that kept half of their inventory on a separate lot and allowed customers to walk around the first lot with no idea there were other, possibly more attractive vehicles for sale on an exclusive lot somewhere else. We would think that the car lot was seriously “missing the boat” in terms of marketing their inventory.

The lack of awareness about LinkedIn products can also be attributed to the relative difficulty in locating tutorial information on what products are provided. LinkedIn user guides are only available in two places, on the Learning Center page of the LinkedIn website, and on YouTube. There are hundreds or maybe thousands more tutorials placed on YouTube by unaffiliated individuals that describe how to do something on LinkedIn than there are on LinkedIn itself.

Videos and information about LinkedIn’s products that are available on its own website are too hard to find. This explains why there are hundreds of questions from users asking how to do something in LinkedIn on the “Using LinkedIn Q&A” section. Users are asking for help on performing the simplest of tasks on the website. Answers are fielded by other LinkedIn users, with random latency. In other words, their question may be answered in less than five minutes, or possibly never; they cannot know. Where is the company itself in all this? These questions should be getting swiftly fielded by someone from the organization; this is a prime opportunity for a company representative to step in and be of assistance, possibly opening the door to selling advanced services through a live chat window, or any other means of interaction with users. At a minimum, LinkedIn should take care to have a phone number prominently located on key pages, to facilitate users who wish to contact the website itself with service questions, as The Ladders website does. Please see the accompanying use-analysis of LinkedIn’s site functionality in Excel.

Part of the attractiveness of LinkedIn is its simplicity. The website has a classic web 2.0 appearance, but there are areas where too much simplicity or minimalism equates to a restriction on commerce. The “Upgrade to Job Seeker Premium” promotion, “Unlock Salary Estimates for Jobs on LinkedIn,” has a small generic bar graph with three unmarked bars that sits statically in an ad box on the sidebar. If this space were used to show a demonstration of the product, with active screens inside the box instead of just a random graphic that has no informational value, LinkedIn users would be able to easily gain information about available products. Presentation slides could be positioned like any other advertisement in the sidebar, or run quietly on some portion of various pages, with an option for the user to turn the audio on when the demonstration catches their eye.

There are other similar situations throughout a user’s experience on the LinkedIn site that have a negative effect on its commercial functionality. For example, when a user clicks on something that is not a free product, they are immediately taken straight to a sales check-out page (a cash register) with a short list of about six features for sale, but the interested party has not been given an opportunity to see any information about these features. LinkedIn is egregiously missing the opportunity to build value in their products. Other companies’ websites take advantage of these situations very effectively through chat windows that open up, or interactions with some type of avatar, offering to demonstrate products or answer questions for the buyer. LinkedIn could provide an option to see a pertinent video about the product, automatically loaded with a big click-to-play triangle in plain view, so the user can get a demonstration of the product they are interested in. Were this to happen at some point before the customer is forced to decide between buying the product without knowing its benefits, going elsewhere to look for information about those benefits, or just rejecting the product offering altogether, the sales conversion rate for that product would be higher.

Looking forward: improvements that directly or indirectly increase commercial activity

Upgrade email.

Email is an important communication medium for LinkedIn’s demographic. LinkedIn may not endeavor to compete with Google or other email providers, but their users should not have to deal with overly clunky methods for manipulating mail while they are in the LinkedIn email system.

For example, after reading a message, a user currently has to go back to the inbox just to select the next message for viewing. This is an archaic extra step that the more efficient email providers eliminated years ago. Basic capabilities for formatting text would be nice as well.

The future of search:

LinkedIn is clearly aware of the growing importance of mobile solutions. The increasing demand for mobile capabilities will have a direct effect on the type of search functions that result in commercial transactions. As a reduction in traditional indexed search usage shrinks market share for LinkedIn’s competitors that currently survive off of those products, the opportunities for LinkedIn to step in and find new markets must be appealing. Development of intellectual property should be directed toward the pin-point focused search paradigm, the way information is actually consumed on mobile. Pin-point search services for mobile applications return actual, direct answers to questions computationally, in the way Apple’s Siri does; not a search result list from which the user has to drill down further by choosing from a selection ranked by an algorithm. An example of a response to the question “What is the per capita income for the US, China, and Germany” is here: http://bit.ly/Kw7t1E. Interfaces should also be constructed on a control panel model where users stay on the same page, versus the elevator hierarchy structure.

Give premium users more information.

Subscribers would pay for information on how an employer subsequently ranks submitted applications for employment. LinkedIn already performs a pre-ranking service for employers, scanning the candidate’s resume or profile for keywords, etc., before sending the employer a ranking based on the results of the character recognition process. The software heavy lifting has already been done, and the cost has already been paid for as part of the job posting service the business subscribes to. Giving candidates the option to purchase the reused information would result in additional revenue for LinkedIn.
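As a minimal sketch of the kind of keyword-based pre-ranking described above; the scoring scheme, field names, and sample keywords are illustrative assumptions, not LinkedIn's actual process.

```python
import re
from collections import Counter

def keyword_score(profile_text, job_keywords):
    """Count how many times the job's keywords appear in a candidate profile."""
    words = Counter(re.findall(r"[a-z']+", profile_text.lower()))
    return sum(words[kw.lower()] for kw in job_keywords)

def rank_candidates(profiles, job_keywords):
    """Return (candidate, score) pairs sorted by descending keyword score."""
    scored = [(name, keyword_score(text, job_keywords)) for name, text in profiles.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical example.
profiles = {
    "Candidate A": "Marketing manager with CRM and analytics experience",
    "Candidate B": "Software engineer, Python and analytics background",
}
print(rank_candidates(profiles, ["analytics", "CRM"]))
# [('Candidate A', 2), ('Candidate B', 1)]
```

Selling a candidate a read-only view of where their score lands would reuse output the ranking pipeline already produces for the employer.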

References

Miller, M. 2011. http://blog.hubspot.com/blog/tabid/6307/bid/10437/Study-LinkedIn-Is-More-Effective-for-B2B-Companies.aspx

Vahl, Andrea. 2012? https://andreavahl.com/facebook/linkedin-ads-vs-facebook-ads-a-case-study.php

Walker, M. 2009. http://www.inc.com/maisha-walker/2009/08/linkedin_the_11_most_useful_fe.html

Socializing Cloud Computing: Explaining the Best Parts as they apply to Business

April 24, 2012

It is fair to say that the top IT priority for most businesses is managing the migration of existing applications into virtualized environments (Kepes, 2011). True cloud computing is different than virtualization, and those differences are clarified later in this article, but both virtualization and cloud computing do share this key point of interest: small businesses, large enterprises, and government agencies are all moving their activities in the virtual/cloud direction, at a very fast pace. An IBM study declares that ninety percent of businesses are already using cloud or plan to do so by 2015.

Cloud computing is generally understood as unlimited, on-demand access to shared computing resources, requiring minimal effort on the part of the end user. It is within the public form of Infrastructure as a Service (IaaS) that we really find new innovation. Two points: 1) It is often reported that the innovation stems from the way public cloud infrastructure shifts the costs of IT from a capital expenditure to an operating expenditure, but it is more accurate to say that it is the flexibility made available by IaaS for the usage model to go either way, from CapEx to OpEx or vice versa, that has enabled modification to the older market structure. 2) A “private cloud” is nothing more than a buzzword inaccurately describing the virtualization of your own internal server infrastructure. When an organization pays for all its software development, server configuration, and hosting, and it procures additional hardware to set up within the boundary of its own walls, it is not reducing the load on the IT staff, introducing flexibility into the budget, or getting the advantages of unlimited resources; it is simply changing an old legacy deployment. The characteristic of on-demand-driven commerce is crucial to the definition of a cloud. When you demand something from yourself, there is no commercial exchange of goods or services.

Currently, most cloud spending goes toward Software as a Service (SaaS), and SaaS continues to be the most appropriate cloud service for small businesses, as it allows users to access and run vendor-supplied (off-the-shelf) applications that live on the internet. This is all that most small companies demand. With Platform as a Service (PaaS), users have access to an environment in which they can develop their own software applications on vendor-supplied tools that live on the internet. Note that an outsourced, third-party data center is supplying the service to the business in these situations. The main complaint with PaaS and SaaS is that there is commonly an issue when you want to take all of your data with you when you leave your current cloud service provider; this is referred to as “vendor lock-in.” Lock-in is a very pervasive concern for organizations (Glaros, 2011).

IaaS environments let their users go even further, choosing both the hardware and software combinations they want to run, thus giving the user the most control over configuration; again, the point is that as a service, a third-party data center (usually remotely located) is supplying the shared infrastructure for the business as it is needed. One advantage of the additional control available in IaaS is that an enterprise’s existing off-premise data can be more easily migrated to or from different locations. Thus Infrastructure as a Service is the biggest departure from what has been available in the past. Its elasticity and levels of user control over configuration make it a significant evolutionary step in IT. It is probably not appropriate for most small businesses. IaaS diverges into different types: Private, Hybrid, Dedicated Host, Community, and Public, but as previously noted, “private cloud” is a misleading term.

Public clouds are enabling modification to the existing market structure. Unlike “private clouds,” where the equipment and software require a large up-front investment, public clouds are usually less costly because the computing power can be purchased for as little as one hour, quickly brought online, and then quickly terminated when no longer required. By allowing for rapid scale-up and scale-down, the category of the IT cost is shifted from its traditional account, Capital Expenditure, to an Operating Expenditure.
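As a rough, hypothetical illustration of the CapEx-versus-OpEx trade-off, the sketch below compares owning servers sized for peak demand with renting capacity by the hour; all prices and utilization figures are made up for the example, not quotes from any provider.

```python
# Hypothetical figures comparing owned servers (CapEx) with hourly cloud rental (OpEx).
owned_server_cost = 6_000        # purchase price per server, amortized over 3 years
owned_servers_needed = 10        # sized for peak demand, idle much of the time
hourly_cloud_rate = 0.50         # per server-hour, on demand

capex_3yr = owned_server_cost * owned_servers_needed

# In the cloud, capacity scales with actual demand instead of peak demand.
hours_per_year = 24 * 365
avg_servers_in_use = 3           # average utilization over the same period
opex_3yr = hourly_cloud_rate * avg_servers_in_use * hours_per_year * 3

print(f"Own hardware (3 yrs): ${capex_3yr:,.0f}")     # $60,000 paid up front
print(f"On-demand cloud (3 yrs): ${opex_3yr:,.0f}")   # ~$39,420 paid as it is used
```

The numbers can easily tip the other way for a steady, fully utilized base load, which is exactly the calculation behind the Zynga example discussed next.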

The flexibility of the OpEx IT structure provides freedom. Departmental-level managers in medium-sized businesses can save 30% over internal IT expenses, and small business owners can save about 15% over managed services. Indirect savings can also be realized through reduced electricity use, reduced real estate space requirements, and the business’s ability to apply more focus toward its core specialty. Keep in mind that the flexibility to choose between options is the real value concept, not necessarily the conversion of IT from CapEx to OpEx. For example, game provider Zynga did the opposite and chose to go from OpEx to CapEx. At one point Zynga’s entire infrastructure was in the public cloud with Amazon Web Services (AWS). This makes sense when you hear that the game FarmVille went from zero to one million users in five days when it launched in 2009. Zynga eventually realized that they were better off owning and operating their own private infrastructure for the base of their data needs, because the AWS system could not be tailored well enough to optimize the unique performance requirements of various (mature) individual games. But they still needed Amazon’s massive capacity to scale up quickly for “bursting” and spikes in demand, so Zynga settled on a hybrid cloud structure where they own part of the infrastructure they need and rent some from AWS.

Security

Personal computers and mobile devices, including smartphones and tablets, act as access gateways to the cloud. As we shift more toward mobile technology for moving data, using tablets on WiFi and smartphones for computing, we need the computational power of the cloud to run our processes and applications, because the small devices do not have the needed power. Mobile devices present significant security issues to networks, however, and require additional measures for protection of data, above and beyond older technologies.

No matter the hardware, cloud services create concerns for systems that are fielded by federal government agencies and large enterprises, as these networks must comply with various information security requirements and regulations including FIPS, HIPAA, ECPA, Gramm-Leach-Bliley, HITECH, or the E-Government Act of 2002.

Cloud also creates concerns for business, and the adoption of public IaaS is hindered by fears over loss of privacy. This is a natural human sensitivity, but it is exacerbated by the fact that digital hacking is in open season on the internet. Ironically, the most crucial threat to protecting data that goes into the cloud has been found on multiple occasions to be a human factor: the careless behavior of users. The breach into Amazon Web Services may be the most publicized example of public IaaS vulnerability. After the big headlines, Amazon received PCI Level 1 certification, a move that has the fragrance of an ad hoc response intended to maintain its reputation, but as it turned out, the main reasons for the security failures were technically not Amazon’s fault. CASED scientists studied AWS and found users ignored or underestimated stated recommendations from Amazon. CASED has “developed a vulnerability scanner for virtual machines that customers create to run on Amazon’s infrastructure. It can be freely downloaded” at http://trust.cased.de/AMID (ScienceDaily, 2011).

It is important to understand where on the stack the service provider’s accountability for security ends. Generally speaking, with IaaS, responsibility for implementing security on the higher layers of the stack usually falls on the consumer. With SaaS, the details are ironed out in the SLA and the hand-off remains close to the application or uppermost layers, and with PaaS, the hand-off is somewhere in the middle.

When a firm does a risk assessment (RA) and balances the risk it wants to mitigate against its available resources, it can come up with a statement of applicability (SOA) specifying which security controls to implement based on costs and benefits. The firm can then control the issue of assessing a provider’s claim of compliance with government regulations, compliance with the customer contract, and other cloud operations. The current reality is that risks to compliance are managed, while managing disclosure of information still lacks consistent methodologies. This is directly tied to calls for transparency from service providers (Ward, 2011).

Is lowering your price the only effective strategy in a recession or bad economy? (No)

March 1, 2012

The first consideration for any price setting is to define the goal or objective of the company. Companies measure product success in a variety of different ways, and not all pricing techniques are appropriate for every objective the firm is undertaking. The marketing manager needs to know which activity is going to be measured: ROI, as is most common, or market share. The answer to this question makes a big difference in what price to set for a product, even in a recession.

Other important questions also need to be answered. For example, how long has the company been in business and/or what is the age of the industry? What is the relationship of this company to the competition? Is it a service oriented business? Organizations usually gain more control over their marketing and sales scenarios or more power over their suppliers the longer they are in business. In most situations service organizations should avoid lowering prices. However, almost everything in business is relative to what is happening with serious competitors, including unforeseen competitors that might enter the market because barriers like capital expenditure are low. In certain cases, lowering prices may be the best option in order to retain market share or to counteract potential loss of existing business.

In most cases differentiation is a better strategy than discounting. Cutting price before building value is the most unsophisticated approach to selling, and just pushing prices down will not guarantee success. For one thing, competitors can match prices. Secondly, customer loyalty can only be achieved by differentiation, not by price. A manager needs to be able to protect the reputation of the brand while also maintaining the gross margin on the product; irresponsibly cutting the price can affect both brand recognition and gross margin very negatively.
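A quick, hypothetical calculation makes the gross-margin point concrete: with a 30% gross margin, even a modest discount demands a large jump in unit volume just to hold gross profit steady (the numbers below are illustrative).

```python
def volume_increase_needed(margin, discount):
    """Fractional increase in unit sales required to keep gross profit flat
    after cutting price by `discount`, starting from gross margin `margin`
    (both expressed as fractions of the original price)."""
    new_margin = margin - discount   # the discount comes straight out of the margin
    return margin / new_margin - 1.0

# Example: a product with a 30% gross margin, discounted by 10%.
print(f"{volume_increase_needed(0.30, 0.10):.0%}")  # 50% more units just to break even
```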

A successful manager needs to continually perform research to gather and update data on what really determines value in the relevant industry. The manager can then leverage that knowledge to better understand the kind of strategies that are employed by different competitors within the industry. Value-based pricing is a technique used in tough economies or in times of recession. Value bases can focus on design, superior technology, unique features, or even custom distribution. “To understand the customer’s perception of the value of your product or service, look at more subjective criteria such as customer preferences, product benefits, convenience…” (Small Business Notes).

A small business can alleviate some of the burden of marketing by using a seasoned sales representative who works on commission. A seasoned sales rep can be the secret to getting penetration without setting too low a price, because he or she already has established relationships in the industry, knowledge of the territory, and knowledge of the competition.

Small Business Notes: Pricing – Value-Based Pricing. Retrieved November 17, 2010. http://www.smallbusinessnotes.com/operating/marketing/pricing/valuebased.html

Kodak: switched from b2c to b2b.

January 17, 2012

Kodak made the front page of the Wall Street Journal last Thursday, with the Journal reporting that this American titan of the 20th century is now hoping to avoid bankruptcy by selling off their patent portfolio. For the past five years they have been able to hold back the bleeding by filing patent infringement lawsuits (Spector, 2012), but that strategy is running dry. Kodak’s largest problem stems from their failure to capitalize on their own innovation way back in 1975 [the digital camera], because they were reluctant to disrupt their existing revenue stream that came from film. Let’s take a look at how they attempted to deal with issues of market share and service customization over the last few years.

Until the world of photography was revolutionized with the advent of digital imaging, Kodak’s primary focus was on the consumer marketplace with a secondary focus on the business marketplace. As they lost market share in the consumer side of the business, Kodak focused on building its business products and services and successfully transformed itself into a business-to-business (b2b) powerhouse. In an On the Record podcast interview, communications consultant Eric Schwartzman declares “Over the last five years, Kodak’s revenue from consumer film has dropped from $15 billion to $200 million, but the company still has sales of $8 billion annually through a portfolio of new products, most of which are less than two years old and 80 percent of that revenue comes from business customers.” (Hayzlett, 2010)

Mark Weber was the Vice President of Worldwide Sales Partnership in Kodak’s Graphics Communications Group, and led the sales efforts for Kodak’s Digital Printing Solutions Group, a strategic business unit. In a 2008 video, Weber stated this business group was the fastest growing business in Kodak’s portfolio (Cengage, 2008). Weber delineated their specific marketplace for their products and services as the commercial printing industry, in addition to government and corporate businesses. The four main segments of products in their business portfolio were digital printing, consumables, workflow, and services.

In the video, Weber describes the different approaches Kodak adopted as they transitioned into b2b. For example, Kodak had to change their sales model to a direct and indirect sales force model and adjust their customer touch points. Weber notes that services and solutions are the most difficult items to sell and that it is imperative to show potential clients not only the features but also the benefits of their product and services offerings.

In a press release dated October 23, 2008, Weber is quoted as saying “Marketers and others who communicate with print continually strive to distinguish their materials and set themselves apart from their competition. With solutions that include using variable data printing for creating customized documents as part of an integrated, personalized campaign or producing a raised print that looks and feels like the item in the image, digital printing provides many opportunities to maximize communications effectiveness.” (Kodak Press Release, 2008)

Charles Lamb writes in his book MKTG2 that “An important issue in developing the service offering is whether to customize or standardize it… Instead of choosing to either standardize or customize a service, a firm may incorporate elements of both by adopting an emerging strategy called mass customization.” (Lamb, Hair & McDaniel, 2008) Weber states that Kodak utilizes this mass customization strategy.

Kodak offers to help their customers grow their businesses with a full complement of services, whether customized or standardized. Weber explains that some of the products and services are standard, out-of-the-box offerings. However, their workflow product offering provides customized solutions and services which tie all of their capabilities together for their commercial printers, whether they are traditional or digital printing customers. In addition, Kodak’s web-to-print service offers some customization related to the regional and seasonal aspects of their customers’ printing business needs.

Customized services are also available as part of their packaging and transactional printing services. Weber describes the coupon printing capabilities Kodak provides to Papa John’s pizza, where each coupon is essentially customized to the specific consumer recipient. Finally, Weber discusses Kodak’s outreach to their customer base through surveys and user group associations.

Kodak has been a household name for over a century with their cameras and film. Some of the mistakes they made over the years are now classic corporate-giant errors. As “Kodak teeters on the brink” of bankruptcy (Spector, 2012), the American icon is paying close attention to their customers so they can provide the best possible solutions and services, and escape a tragic end. Meanwhile, business consultants everywhere are paying close attention to correlations between what Kodak does and whether they survive.

 

Cengage. 2011, March 19. Kodak – Services and Nonprofit Organization Marketing [Video file]. Retrieved from http://www.swlearning.com/marketing/now/lamb_marketing9e/eoc_video/ch1100.html

Kodak Press Release. 2008. Kodak Experts Discuss Emerging Trends and Opportunities in Free Graph Expo Seminars. Retrieved March 21, 2011 from http://www.kodak.com/eknec/PageQuerier.jhtml?pq-path=2709&pq-locale=en_US&gpcid=0900688a809d0756

Lamb, C., Hair, J. F. Jr., McDaniel, C. 2008. MKTG2. Mason, Ohio: Cengage Learning.

Hayzlett, Jeffrey. 2010. Consumer Film is Dead. But Kodak is Alive. Jeffrey Hayzlett Explains. Retrieved March 20, 2011 from http://ontherecordpodcast.com/pr/otro/death-of-film-kodak-jeffrey-hayzlett.aspx

Spector, Mike. 2012, January 5. Kodak Teeters on the Brink. Wall Street Journal, p. A1.

Challenges to Science Philosophy and Theory

January 13, 2012

Table of Contents

Section 1 –
Introduction
Definition of terms
Background

Section 2 –
Philosophical problems for science in the 20th century
Demarcation: the line between what is science and what is not
Falsification and Induction

Section 3 –
Theoretic problems for science in the 20th century
Constructivism

Section 4 –
Solutions in Philosophy and Theory

Section 5 –
Conclusion

 Section 1 – Introduction

     Science and its methods suffered from a full spectrum of extremism in the 20th century. Scientists in the 1900s operated with an overly austere view of what defined their discipline. The prevailing philosophy of the time, now regarded as the ‘empiricist’ philosophy, was principally represented by a group called the Vienna Circle. In the decades following the turn of the century, science was forced to deal with attacks directed toward the scientific method and doubts about justifications for theories, which presented challenges to both the philosophy of science and the social interpretations of the discipline.

The rigid and restrictive grasp of the empiricists was gradually loosened by powerful theories put forth by philosophers who challenged conventional thinking about science, namely the theories championed by Karl Popper, W. V. Quine, and Thomas Kuhn. As recognition of the qualities in these theories gained adherents throughout the scientific fields, the pendulum of sentiment swung away from the strict views held by the Vienna Circle, toward a more moderate position, and in some ways closer to the metaphysical principles of the older centuries, like those from Francis Bacon and Rene Descartes. (Descartes felt that even if everyone were to agree on something, like the Ptolemaic theory of the universe, it may still be a deception.)

Eventually some of the looser practitioners focused so intently on the shortcomings of the scientific method, and on whether we should believe science provides true accounts of our world, that they pushed the pendulum past the point of common sense, swinging beyond the center-point of balance and overcorrecting into the other extreme, a range where relativism, realism, and constructivism postulate much different assertions about science and theory.

The thesis of this essay maintains that humans can understand reality and judge whether theories are adequate by using the best parts of science, which are sufficiently evidentiary. It allows for the belief that science is and can be empirically successful, without automatically warranting the belief that the truths of theories always have to be perfect.

Definition of Terms

Ampliative rules: Likely able to go beyond the given information; providing justification for the inferred conclusion.

Constructivism: The constructivist concept of rationality involves conscious analysis and deliberate design of models or rules. The models classify individual behaviors in order to explain general behavior. It is neo-classical, but not inherently inconsistent with or in opposition to Vernon Smith’s ‘ecological’ form of rationality. The two are different ways of understanding behavior that work together.

Empiricism: A benchmark era for science, the years around 1900, when hypotheses would be accepted only under austere circumstances: cold hard facts, confirmed and verified through deductive testing, were thought to constitute objective observation and to involve universal laws of nature.

Falsification: Karl Popper suggested the demarcation line for science could be found through falsifying theories instead of trying to verify them, so scientific theories needed to contain something that you can actually dispute. The position “Cherry pie is good” is not falsifiable.

Induction: Considered the biggest problem for finding scientific criteria for theory choice. The problem of induction pre-dates the 1800s; it is deeply philosophical and tricky to comprehend. Technically, it is a cognitive process that involves statistical laws, or conditional probability. An interesting place to start when setting out to understand induction is the “Monty Hall” problem, where pigeons in laboratory tests learn from experience to switch doors, but humans do not (see the simulation sketch after these definitions).

Realism: An overly loose interpretation of intangible, unobservable things, to the extent that they are considered objective items of evidence in every case. Even if they are independent of accepted concepts, they still make for empirical theory, and belief in them is still required for coherent science. In one version of realism, the success of science is put forth as the proof of its objectivity. Science has not historically been so successful, however; in fact, it has often been the opposite.

Underdetermination: The Duhem-Quine (D-Q) theorem. D-Q has two components: 1) there are too many unknowns for evidence to be sufficient for us to identify what belief we should hold or what decision we should make between rival choices, so theories must remain unsupported; 2) a small theory can never be isolated and tested by itself; if a small theory appears to fail a test, the entire corporate body, or the test, or the scientist must be called into question, not the small theory.
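As a minimal illustration of learning a conditional-probability rule from repeated experience (the inductive lesson behind the Monty Hall problem mentioned under Induction), here is a quick simulation sketch:

```python
import random

def play(switch):
    """One round of Monty Hall: returns True if the player wins the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay_wins:.2f}, switch: {switch_wins:.2f}")  # ~0.33 vs ~0.67
```

Repeated trials, not a deductive proof, are what make the switching strategy convincing; that is induction at work.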

Background of Philosophy

     As described in the introduction, science held to an extremely narrow concept and rigid interpretation of scientific procedure at the beginning of the 1900s. The indisputability of facts was a paramount virtue of clear-cut reasoning and exacting rationality. Only unmistakable evidence could be used in investigations to discover rules and laws. Laws for prediction and truth are what distinguished science, and the activities of science were above this line of demarcation. This overly strict philosophy hampered practitioners’ efforts to understand the world around them. Skeptics and critics of empiricism claimed that the true nature of testing is limited, as theories do not ever find perfect “truths,” and that empiricism failed to detect this very deviation between itself and reality.

Background of Theory

     After the Renaissance, human knowledge developed to the point where it established itself as a full or authentic partner to reality. Humankind came to trust that any subject could be credibly understood if the activities of science and technology followed systematic discovery of evidence. Intellectual communities received increasing support, gradually replacing the old-world way of using the senses as inputs and then haphazardly constructing a belief from there. In this way science and technology eventually became institutionalized in the twentieth century. At the apex of science’s heyday, the Vienna Circle permitted only the narrowest of definitions of what constituted a valuable hypothesis. Scientists or the layperson could accept them or not; there was no middle ground. Nor was there any need to postulate about hidden entities; the Circle did not want the rules of the universe to have to continue into an infinite string of explanations.

Popper advocated an innovative way to identify the products of science, and argued that scientific inferences do not use induction. His theory loosened up the structure of what constituted the infamous demarcation point.

Kuhn wrote that everything is relative to the culture or time period in which the circumstance exists, and that the one thing that we do know for sure is that science will be rewritten in the future. Kuhn proposed that the context of time breaks the line-of-descent model of old science as the foundation for newer science, that two different periods of science are not comparable, and he acknowledged the existence of subjective elements within science.

From there we viewed science’s dependency on theory: that science can never escape its relationship with theory, because even the laws of science will change over time, or at least be conceived differently from one society to another. From this outlook, science is dependent on theory as a setup or precursor for the scientific method. In light of this dependency, social scientists highlighted various troublesome issues in scientific elements, such as conflicting evidence, partial evidence, and weird evidence, and used these issues to critique the scientific method.

Larry Laudan proposed splitting the action of problem solving from the concept of the solution. In this perspective effective problem solving remains a rational activity, while what counts as a solution is allowed to be relative, and in this way Laudan found an answer to a major problem for determining acceptability of a theory.

Section 2 – Philosophical Problems for Science.

     The how-to component of justifying a belief is the most important epistemic problem for scientific investigation. It also happens to be equally problematic for induction. Science entered the 1900s with a pre-existing problem of induction, stuck like a thorn in its side that it had carried around for hundreds of years. David Hume explicated the problem in his mid-eighteenth-century works, and it has been seen as the major obstacle for science ever since.

A second serious challenge for science surfaced as more attention turned to the fact that every theory or at least some parts of theories are eventually found to be inadequate or wrong.

In a third challenge, we came to face the fact that scientific methodology, like all contexts that involve humans as the practitioners, is an activity that works in ways that we do not exactly understand. Although the empiricists in the Vienna Circle attempted to deny it, science in practice involves social aspects that are subjective, and a general method for obtaining ‘correct’ conclusions through objective investigation will not always follow some universal recipe for getting to an explanation of the world. Every person has a unique set of principles; we can each look at the same data and come to different conclusions, and science has proven unable to escape this ‘problem.’

Demarcation: In order to establish a solid baseline for the reputation of scientific methods, the demarcation line stood, for Rudolf Carnap, Carl G. Hempel, and the Vienna Circle, as the separation between science’s concrete evidence and everything else below it. They were very committed to observation and measurements that could be used to formulate laws with predictive power, and it was these bullet-proof rules that were the backbone of their model of science. Empiricists were especially enamored with the predictive power of a rule or law.

Falsification and Induction: Popper’s solution for demarcation suggested we not worry about confirmation, and instead focus on falsifying a theory. Popper argued that since we are limited by finite sets of observations, anything can technically be confirmed using induction, though he did not feel induction was used in true scientific critique, only deduction. Unfortunately, we cannot simply deny that we use induction. Wesley Salmon writes that with Popper’s falsification, we would be stuck in a situation with infinite conjectures; and, according to Salmon, Popper’s ideas when closely examined contain circular runarounds. Summarized by Scott Scheall at Arizona State University: “we cannot use a conjecture’s degree of corroboration as a measure of its reasonableness as a basis for prediction.  To do so would be to let induction in through the back door and we would again be saddled with the problem of induction.  In other words, a conjecture’s degree of corroboration tells us how well it has performed with respect to past predictive tests, but it tells us nothing (logically) about how it will perform in future tests.”

Thus Popper’s falsification, with its attendant notions of conjecture and corroboration, fails to provide a demarcation for science or to formalize the scientific method much, if at all, better than past attempts. Laudan brings final clarification to the discussion, however, noting that we never need an assertion to be true in a perfect sense in order to accept it; a justification of induction is simply not required.

Section 3 – Theoretical Problems for Science

     As described above, Popper proposed falsification as the solution to the problem of induction. The Duhem-Quine (D-Q) thesis of underdetermination shows that falsification is no workaround for that problem either: first, the procedures one would use to falsify a theory are ambiguous, and second, we can only falsify an entire corporate body of theory, not a single theory in isolation. Later theorists expanded on the weaknesses identified by D-Q, interpreting it as showing that rules or “as-if” rationalities are impossible.
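As an illustrative sketch of the holism point (my own schematic, not taken from Duhem or Quine): T stands for the theory under test, A_1 through A_n for auxiliary assumptions, and O for the predicted observation.

```latex
% Illustrative sketch (not from the sources): Duhem-Quine holism.

% A theory T yields a testable prediction O only together with auxiliary
% assumptions A_1, ..., A_n (instruments, background theory, and so on):
\[
(T \wedge A_1 \wedge \dots \wedge A_n) \rightarrow O
\]

% A failed prediction refutes only the conjunction as a whole:
\[
\neg O \;\Rightarrow\; \neg (T \wedge A_1 \wedge \dots \wedge A_n)
\]

% Logic alone does not single out T or any particular A_i for rejection,
% which is why falsification strikes the "corporate body" rather than a
% single theory in isolation.
```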

In his attempt to loosen the overly strict grip of empiricist philosophy on science and to provide guidance in deciding what theory to follow, Kuhn championed the idea that demarcation is only relevant within normal science, and that what makes a theory scientific is the absence of debate over theories; only when critics are silent are we experiencing science. Kuhn saw two distinct periods of scientific activity: the period he called normal science makes up the vast majority of the time, and only during the rare revolutionary periods would Popper’s falsification be useful for demarcation. He also saw any challenge to a theory as necessarily directed at the scientist, not at the paradigm itself. Kuhn agreed with D-Q in this respect, but whereas D-Q underdetermination treats paradigms as more or less static and permanent, for Kuhn neither the standards of evaluation nor the conditions in the field are permanent; they are always changing.

Changing scientific evidence causes problems for anyone who wants to adhere to a particular theory. Imagine a person who, based on existing knowledge and theories in food science, decides to eat fish for the omega-3 fatty acids that are good for the heart, or to exclude fish from the diet because of its mercury content. To then hear of a new study claiming that those same omega-3 acids are apparently bad for the prostate, while the trans fats once thought to be bad for the heart are good for the prostate, calls the whole paradigm of food science into question.

People operate with some kind of personal philosophy, whether it is to believe in no theory at all or in some theory in particular, and they might at this point find themselves with a freezer full of fish they no longer wish to eat, because science has decided that “healthy eating may be a much more complicated matter than nutritionists previously realized” (The Week, 2011).

The D-Q principles (on which theories must remain underdetermined by the evidence), Bacon’s observation that almost nothing is a full treatment of a subject for everyone (and that there is no single question on which all people can agree on the answer), and various misinterpretations of the critiques of empiricism and of Popper combined to encourage unwarranted promotions of constructivism, realism, or relativism by Bruno Latour, Paul Feyerabend, and several others.

Laudan corrects the D-Q/Kuhn picture of the paradigm as an inseparable pyramid by replacing it with a web structure, and he weakens D-Q to the point of rendering it moot. Laudan also liberalized the standard view of paradigms as static systems; he explained that they are always comparative, subject to change, and dependent on context. Determining whether certain criteria are more important than others is not a straightforward process, but we have no reason to entertain unbalanced positions like relativism while we still have common sense at our disposal. Laudan also clarifies that induction is really not such a big problem once ampliative rules of evidence are incorporated.

Constructivism

     Constructivism runs into problems in social studies because social theories are composites: they assemble constructed parts into wholes and into schemes of relationships that are interpretations, but they cannot do more than that. The constructed models leave out some of the parts. They are schemes that connect distinct, single things through relationships we understand; they create wholes, but this does not make them factual. We report on them using terms like ‘New York City’ that have no sharp, precise definitions, because they may have a variety of properties. For example, a problem for the prominent social science of economics is that it cannot explain how people go from a starting point, through practice in self-regulated systems, to equilibrium in personal exchange without the use of consciously constructed models. The constructed model does not predict the higher level of cooperation and reciprocity that actually takes place in the market. Studying behavior, we see that people draw on unconsciously learned experience when they need to make spontaneous moves; they dynamically figure out what car insurance to buy or how to evaluate university ranking matrices either without, or together with, the instructions in the constructed schemes, so the schemes often serve little legitimate purpose or are redundant.

Section 4 – Solutions in Philosophy and Theory

     Laudan fixed the problems introduced by loose interpretations of D-Q by clarifying that science is neither as static nor as inseparable as D-Q posits, and he split the overused concept of “theory” into big and little theories, where the big ones function as tools and the little constituents do the actual solving of problems. Thanks to Laudan’s perspective we have a clearer picture of the structure of the scientific method and of how we can choose between theories.

People need to be able to understand reality and to judge whether theories are true and whether evidence is real. This is more difficult when the subject of discussion or observation involves something as invisible as the chains of bondage in Stockholm syndrome. At the opposing poles of an ongoing argument over whether to believe in invisible entities before they have been technically verified, realists and empiricists hold firm views on when an unobservable can be considered real. Bas van Fraassen takes an agnostic line on particles too small to see, noting that the best available explanation is often good enough as a representation of the truth; most importantly, he recommends taking unobservables on a case-by-case basis. Decisions about invisible particles and unobservables matter when we consider forensic-science testimony, DNA, and other evidence that jurors may not fully understand but that has the power to put people in prison. Jurors often expect science to be responsible for solving the case, when in fact forensic evidence is occasionally found to be invalid (Begley, 2010).

In a 1998 paper published in the esteemed medical journal The Lancet, Andrew Wakefield linked the MMR childhood vaccine to an increased risk of autism in children. Thirteen years later, after much debate, scientific reexamination, and a plethora of class-action lawsuits, the link has been discredited and the author vilified both for “bad science” and for perpetrating a fraud. But the damage caused by the claim is hard to undo. Despite scientific evidence to the contrary, many people still believe that childhood vaccination is a confirmed major cause of autism. While it is acknowledged that vaccines can, on rare occasions, cause severe side effects, the U.S. Institute of Medicine rejects the link between vaccination and autism.

Common sense dictates that we not get hung up on the distinction between truth and what is useful; we can commit to a level just short of literal truth and accept approximation as a weak but necessary value for scientific claims. The way forward for science is simply to be the best at solving problems. Adequacy is fine for this; it is reliable and economical, like the neighborhood play at second base. From this position scientists can referee cognitive practices and judge when invisible entities are acceptable, because they can observe when those entities are used in, or for, good theories.

Section 5 – Conclusion

     Science is simply a belief, like religion. No one-size-fits-all regulation or broad view works for the man on the street; life is not a carrot-or-stick situation. Science remains the best alternative we have for knowledge and description of the world, and the social aspects of scientific practice and concrete evidence are both factors in determining preferences. If we do not try to take either one too far, technology will continue to pull science into balance, and we might find we have both the carrot and the stick.

Tension remains between followers of Darwinian doctrine and followers of religious doctrines because of differences on conceptual grounds. A young person may have to decide between Darwin and St. Peter, or between industrial progress and environmental protection. Are they to throw up their hands? No; they can understand reality and judge whether theories are true and whether evidence is real, with help from empirically successful science and technology.

Bibliography

Begley, S. (2010). But it works on TV! Forensic ‘science’ often isn’t. Newsweek: Science, p. 26.

Curd, M., & Cover, J. A. (1998). Philosophy of Science: The Central Issues. New York: Norton & Company.

The Week. (2011). Health scare of the week. News: Health & Science. The Week: The Best of the U.S. and International Media, p. 21.