Friday, October 31, 2008

Intel-backed start-up tries to connect enterprise IT to the "cloud"

A start-up called Enomaly has developed virtualization management software that it claims will integrate enterprise data centers with commercial cloud computing offerings to form a single "virtual private cloud" that manages and governs both internal and external resources from a single console.

Founded in 2004 as a consulting company, Enomaly dropped its consulting business in early October to focus solely on its software efforts, which began in 2005 with an open source management tool that runs on top of the Xen hypervisor.

The vendor's primary offering now is the Enomaly Elastic Computing Platform (ECP), which co-founder and chief technologist Reuven Cohen says can manage multiple hypervisors and provide better integration with Internet-based services such as Amazon's EC2, which offers on-demand computing capacity. Enomaly's software also makes it easier to move virtual machine workloads from one data center to another, even across wide distances, Cohen says.

"The economic collapse is leading companies to look at alternatives to buying large amounts of infrastructure," Cohen says.

Intel helped bankroll the company's product development and is jointly building a next-generation content distribution engine with Enomaly, a custom system that Intel will market to its own customers, says Jake Smith, a technologist with Intel's server product group.

With ECP, Enomaly says, enterprises manage their own virtual servers and remotely accessed computing capacity with "an intuitive, browser-based dashboard [that] makes it easy for IT personnel to efficiently plan deployments, automate [virtual machine] scaling and load-balancing; and, analyze, configure and optimize cloud capacity."

Enomaly currently supports the Xen hypervisor and will add support for VMware within a few weeks and for Microsoft's Hyper-V in 2009, Cohen says. Hypervisors lack the migration capabilities needed to move applications easily to services like Amazon EC2, so augmenting them with Enomaly's software makes them more flexible, Cohen argues.
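
To make the multi-hypervisor claim concrete, here is a minimal sketch of what a vendor-agnostic management layer of this kind might look like. The driver interface, class names and placeholder calls are hypothetical illustrations, not Enomaly's actual API:

```python
# Hypothetical sketch of a hypervisor-agnostic management layer,
# in the spirit of what ECP describes. All names are illustrative.
from abc import ABC, abstractmethod


class HypervisorDriver(ABC):
    """Common interface each hypervisor or cloud backend must implement."""

    @abstractmethod
    def start_vm(self, image: str) -> str:
        """Boot a VM from an image; return an opaque VM id."""

    @abstractmethod
    def migrate_vm(self, vm_id: str, target: "HypervisorDriver") -> str:
        """Move a running workload to another backend."""


class XenDriver(HypervisorDriver):
    def start_vm(self, image: str) -> str:
        return f"xen-{image}"  # placeholder for a real Xen domain launch

    def migrate_vm(self, vm_id: str, target: HypervisorDriver) -> str:
        # In practice: export the disk image, re-provision on the target.
        return target.start_vm(vm_id.split("-", 1)[1])


class EC2Driver(HypervisorDriver):
    def start_vm(self, image: str) -> str:
        return f"ec2-{image}"  # placeholder for a RunInstances API call

    def migrate_vm(self, vm_id: str, target: HypervisorDriver) -> str:
        return target.start_vm(vm_id.split("-", 1)[1])


# One console, many backends: the controller does not care which
# hypervisor or cloud actually runs the workload.
local, remote = XenDriver(), EC2Driver()
vm = local.start_vm("webapp-image")
vm = local.migrate_vm(vm, remote)  # burst the workload out to EC2
print(vm)
```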

"They don't look at networking beyond their own infrastructure," Cohen says of the industry's major hypervisor vendors. "They assume you're going to stick within the context of their particular platform. In reality, there is a heterogeneous environment."

Because Enomaly is vendor-agnostic, the software provides the ability to bring into the cloud whatever virtual machine is best suited to run a particular application, an attribute Intel needs for its content distribution engine, Smith of Intel says.

Smith views Enomaly as a "cloud compute infrastructure built for cloud operators or those who want to operate their environment in the cloud from day one."

But he says VMware is better positioned than Enomaly to help enterprises bridge the gap between data centers and externally accessed cloud services.

"Just because you can do it technically doesn't mean you have production customers who have done that with you to date," Smith says. "Technically, Intel can build an 81-core chip but it doesn't mean we have it commercially available in production."

Enomaly says ECP provides the following benefits:

• Combine many servers into a "single, seamless, sharable cloud."
• Scale automatically during times of high demand by accessing both "local and remote clouds" (a policy sketched in the code below).
• Partition public computing utilities such as Amazon EC2 into a quarantined private cloud.
• Make data center resources rapidly available to any application, and ensure instant recovery and live maintenance of applications.
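
The scaling bullet above amounts to a cloudbursting policy: fill local capacity first, spill the overflow to a rented cloud such as EC2. A toy sketch with assumed numbers, not Enomaly's actual logic:

```python
# Minimal cloudbursting policy sketch: prefer local capacity, spill
# excess demand to a remote cloud such as EC2. Numbers are made up.

LOCAL_CAPACITY = 10  # VMs the local data center can host


def place_workloads(demand: int) -> dict:
    """Split a VM demand figure between local and remote clouds."""
    local = min(demand, LOCAL_CAPACITY)
    remote = max(0, demand - LOCAL_CAPACITY)
    return {"local": local, "remote": remote}


# During a traffic spike, the overflow lands in the remote cloud.
print(place_workloads(4))   # {'local': 4, 'remote': 0}
print(place_workloads(14))  # {'local': 10, 'remote': 4}
```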

The open source download is available at Enomaly's Web site, and the company sells support and add-ons. About a half-dozen paying customers are using prototype installations of the technology, Cohen says, while many more use the open source software for free.

There are about 1,000 users in a beta program, including Microsoft, Oracle, GE, VeriSign and the U.S. Department of Energy, according to a 451 Group analyst report on Enomaly. Prototype projects include Intel's content delivery network and projects at France Telecom and Rackspace.

Cohen got his start in 1998 when he founded video streaming company Graphic Substance, and says he helped create the Napster interface. Most of his video streaming customers were in the World Trade Center, and thus his business ended after Sept. 11. Cohen then became involved in open source and content management, spending a few years as a freelance consultant before co-founding Enomaly.

Monday, October 20, 2008

Intel prepares for "stage 4" internet

Plenty of well-choreographed "gee-whiz" factoids and hoopla marked the opening of the Intel Developer Forum in Taipei yesterday, with special emphasis given to what the chip maker calls the fourth stage of the internet - the pervasive or embedded web.

In his keynote, Anand Chandrasekher, senior vice president and general manager of Intel's Ultra Mobility Group, primed the more than 3,000 conference delegates by telling them that, 40 years after Intel was born, Asia now accounts for over 25 per cent of its revenues and that the region is also the fastest growing online.

The internet has changed everything, he said, before boldly adding: "The internet runs on PCs and PCs run on Intel architecture."

He said the Intel 4004 chip of 1971 had 2,250 transistors; the Core 2 Duo of 2008 has 820 million. "To make the Core 2 Duo using 1971 technology would produce a chip 8ft by 6ft in size and would require power used by 200 homes to run it."
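
Those two figures imply a doubling cadence remarkably close to Moore's law; a quick back-of-envelope check:

```python
import math

# Back-of-envelope check of the transistor counts quoted above.
t_4004, year_4004 = 2_250, 1971
t_c2d, year_c2d = 820_000_000, 2008

growth = t_c2d / t_4004                  # ~364,000x
doublings = math.log2(growth)            # ~18.5 doublings
years_per_doubling = (year_c2d - year_4004) / doublings

print(f"{growth:,.0f}x growth, one doubling every "
      f"{years_per_doubling:.1f} years")  # ~2.0 years: Moore's law
```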

He said that, "As the next billion people connect to and experience the internet, significant opportunities lie in the power of technology and the development of purpose-built devices that deliver more targeted computing needs and experiences."

He then cited the Atom and upcoming Nehalem processors, as well as the Moorestown platform scheduled for release in 2009-2010, as prime examples of innovation and technology leadership.

But it was progress in the Mobile Internet Device (MID) segment that dominated day one of the forum, especially the first working demonstration of the Moorestown platform.

Moorestown integrates the 45nm processor, graphics, memory controller and video encode/decode onto a single chip, paired with an I/O hub codenamed Langwell that supports a range of I/O ports for connecting wireless, storage and display components.

Chandrasekher said Intel will reduce Moorestown platform idle power by more than 10 times, compared to the first-generation MIDs based on the Intel Atom processor.

He said that Moorestown platforms will support a range of wireless technologies including 3G, WiMAX, WiFi, GPS, Bluetooth and mobile TV.

Intel is also collaborating with Ericsson for HSPA data modules optimised for Moorestown.

Kirk Skaugen, general manager of Intel's Server Platforms Group, then expanded on the embedded web theme, explaining that in the beginning the internet connected mainframes and was the preserve of a "privileged few". The second stage in the mid-'90s saw PCs and servers connect "the many". We are now experiencing the "ubiquitous" web, he said, connected via cell phones. The fourth stage will be the "embedded web", in which 15 billion devices talk to each other.

However, he warned that this pervasive, embedded web will dramatically increase the amount of data being created, putting a strain on storage capabilities, as well as the world's infrastructure backbone required to transmit the data.

Thursday, October 9, 2008

Intel event lobbies for unified 60-GHz spec

Intel Corp. hosted a gathering of about 120 researchers this week as part of an effort to drive toward a standard for 60-GHz wireless networks that could serve a broad range of computer and consumer systems. Currently two separate efforts at the IEEE are working on 60-GHz standards, targeting different uses.

The IEEE 802.15.3c group on wireless personal area networks is in an early draft stage for a standard that would enable multiple Gbits/second of throughput aimed at links between devices such as flat-panel TVs and set-top boxes. A separate IEEE 802.11 study group on very high throughput (VHT) wants to use 60 GHz to create a version of Wi-Fi with data rates up to a Gbit/second. The two groups have been debating possible overlap in their efforts since June.

"We want an interoperable solution that goes across multiple use cases and products and avoids a fragmented ecosystem," said Alan Crouch, general manager of Intel's Communications Technology Lab in Hillsboro, Oregon. "We need to not optimize for one particular use case or product," said Crouch whose lab hosted the two-day workshop on 60 GHz this week.

Startup SiBeam has announced silicon that delivers multiple Gbits/s of throughput at 60 GHz for consumer systems linked to flat-panel TVs. The silicon is based on the draft 15.3c specification as well as a spec completed in January by the ad hoc WirelessHD consortium of consumer electronics companies that SiBeam helped organize.

To date, some 15.3c members have suggested the VHT effort is not different enough to warrant launching a new standards effort. It's not clear whether SiBeam or any 15.3c members attended the Intel event. For its part, Intel is a member of the WirelessHD group and chairs the VHT effort.

Crouch said a wide variety of PCs, peripherals and consumer and mobile devices want to use 60-GHz networks for high throughput at distances of one to ten meters. Senior researchers from Broadcom and Panasonic at the event talked about use of 60 GHz in handhelds, he said.

"It's important that we get all the industry players interested in 60 GHz engaged with these [IEEE] groups," Crouch said. He expressed confidence the event would influence engineers who will in turn influence the ongoing standards efforts.

"We need to let the IEEE process work," Crouch said. "There are a number of proposals on the table and in the coming months we hope to get clarity from the IEEE."

At the Hillsboro meeting, researchers discussed what Crouch called "some of the remaining technical difficulties with 60 GHz."

The issues included propagation losses of up to 20 dB at 60 GHz. The group also heard about ways to use directional antennas to handle penetration loss when a person walks between two 60-GHz devices.
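
One plausible reading of that 20 dB figure is free-space path loss relative to 5-GHz Wi-Fi: loss grows as 20*log10(f), so the jump from 5 GHz to 60 GHz costs about 21.6 dB at any given distance. A back-of-envelope check (our illustration, not a calculation from the workshop):

```python
import math


def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)


# Extra loss of 60 GHz relative to 5 GHz Wi-Fi at the same distance.
delta = fspl_db(3, 60e9) - fspl_db(3, 5e9)
print(f"{delta:.1f} dB")  # ~21.6 dB, in line with the figure above
```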

Intel shakes AMD's chip-fabbing baby

AMD's plans to spin its debt-dependent chip manufacturing biz into a separate entity may free the chipmaker from a considerable financial burden, but old rivalries with Intel will assure things don't go down without a hitch.

AMD intends to own 44.4 per cent of a new chip fabbing company, tentatively called The Foundry Company, while Abu Dhabi's Advanced Technology Investment Company (ATIC) will own the rest. Both AMD and ATIC will have equal voting rights.

Shortly after AMD announced the deal, Intel shot back voicing "serious questions" about how it will affect an existing cross-licensing agreement between the two companies.

Penned in 2001, the agreement lets AMD use various Intel licenses and patents. (Only a heavily redacted copy of the agreement is available to the public so its exact nature is unknown). The pact also restricts AMD from transferring any of Intel's technologies to a third-party.

Intel believes AMD's joint-venture may violate the agreement.

"Intel has serious questions about this transaction as it relates to the license and will vigorously protect Intel's intellectual property rights," Intel spokesman Chuck Mulloy told Reuters.

AMD meanwhile claims its lawyers have already pored over the transaction and say it won't violate any agreements with Intel.

Intel's lawyers launch probe into AMD's spin-off plans

Intel Corp.'s lawyers are evaluating whether a new manufacturing business spun out of Advanced Micro Devices Inc. could end a long-standing cross-licensing agreement between the firms.

On Tuesday, AMD announced plans to spin off its manufacturing operations into a separate company tentatively called The Foundry. The restructuring would let struggling AMD rid itself of the financial burden of running fabrication plants and provide a hefty influx of cash from its partner in the deal, Advanced Technology Investment Co. (ATIC).

Now, rival Intel is throwing a flag on the play.

"We certainly have to evaluate it," said Intel spokesman Chuck Mulloy. "It certainly could be a change in the competitive landscape."

Mulloy explained that Intel and AMD have licensed each other's patents since 1976. Among other things, the latest pact signed in 2001 calls for AMD to pay royalties to Intel for the use of its x86 architecture.

"Intel has serious questions about the AMD move as it relates to that licensing agreement," said Mulloy, who would not divulge how much AMD pays in royalties for the X86 architecture. "We don't have enough information. We will be evaluating it. Intel has an obligation to shareholders to protect its intellectual property."

Drew Prairie, a spokesman for AMD, told Computerworld that executives paid close attention to the restrictions in the company's various licensing agreements when making plans for the spin-off.

"We looked at this," he said. "We structured this in a way that this takes into account all our licensing agreements to ensure The Foundry will be able to manufacture all of AMD's products."

Mulloy said AMD did not contact Intel about the licensing agreements during the planning stage for the spin-off. He added that Intel has not yet reached out to AMD about it, either.

The new company will be co-owned by AMD and ATIC, which is owned by the government of Abu Dhabi in the United Arab Emirates. ATIC will shell out $2.1 billion -- $1.4 billion going to the new company and the rest going straight to AMD, according to AMD.

The Foundry will assume about $1.2 billion of AMD's debt.

Industry analysts noted after yesterday's announcement that by splitting off its manufacturing operations into a separate company, AMD could be on track to become the nimble, innovative company that once had Intel on the run.

"It's like the old AMD after a spa and rehab vacation," said Dan Olds, an analyst at Gabriel Consulting Group Inc. "They've come back stronger financially and in better shape overall. They're still the same company, and they still [partially] own their fab operations. It's like they got a rich uncle to help them out."

Word of the spin-off was welcome news to Wall Street, which responded by lifting AMD's stock 18% Tuesday morning during the same period that the Dow dropped 200 points, noted John Lau, a senior semiconductor analyst and managing director at Jefferies & Co., who had predicted the spin-off early last month.

Lau said the spin-off of the chip-fabrication operation is a necessary move for AMD. "This fab spin-out changes the equation on how to remain competitive," he said. "Now it's a design race."

Monday, October 6, 2008

BlackArrow Hits $20 Million Bull's-Eye

BlackArrow, the independent provider of multiplatform ad-management systems for viewer-controlled video, today announced it has secured $20 million to further product development, expand distribution platform support and increase worldwide sales and marketing efforts. Participating in the round are BlackArrow’s existing investors: Cisco Systems, Inc., Comcast Interactive Capital, Intel Capital, Mayfield Fund and Polaris Venture Partners. To date, BlackArrow has raised a total of $38 million in private financing.

Purpose-built for video content, the BlackArrow ad-management system enables content providers and distributors to create new advertising revenue opportunities, and to reach audiences that increasingly view video programming via broadband, live streaming, video on demand and other emerging platforms outside of traditional, linear television airtimes.

“BlackArrow has achieved key milestones with our ad-management technology, and we’re well positioned for growth as television and other professionally produced video content extends its reach over various viewer-controlled platforms,” said Dean Denhart, president and CEO of BlackArrow. “This additional funding validates BlackArrow’s performance to date, and our strategy for delivering advanced, multiplatform video advertising systems that help customers maximize revenues. As audiences continue to embrace viewer-controlled video, BlackArrow is increasingly the partner of choice for reaching viewers wherever and whenever they are watching television-quality content.”

The BlackArrow system is designed to dynamically manage, decide and report on targeted advertising inserted against on-demand programming across multiple playout platforms. Versatile and adaptable, the BlackArrow system works across any combination of ad types, ad-sales models, distribution or syndication strategies and media playout environments to keep pace with today’s evolving video ad-sales and distribution opportunities, thereby improving the critical relationships between advertisers, content providers and distributors.

Dell Teaming with Intel, Motion Computing to Help Provide Anytime, Anywhere Wireless for Health Care IT

ROUND ROCK, Texas, Oct 01, 2008 (BUSINESS WIRE) -- Seamless, reliable access to patient information is critical to safe, quality care in an increasingly complex and mobile health care environment. Dell, Intel and Motion Computing have launched a new service to assess, design and validate the quality and coverage of wireless networks soon to become the backbone of health care information flow.
The new Mobile Point of Care (MPOC) Wireless Assessment service enables health care customers to assess whether their wireless network is reliable and can provide 100 percent coverage and 24/7 access to patient information. The service provides a comprehensive wired and wireless network analysis, design and validation to help ensure customers have a robust wireless network.
The ability to assess and treat patients using mobile technology is a growing trend across the health care industry. By 2010, 80 percent of hospitals are expected to have a wireless network, investing close to $10 billion in the next five years.(1) With that significant investment and patient care on the line, it's critical that hospitals have highly reliable wireless networks.
The MPOC Wireless Assessment service is a customizable network design and implementation that can include:
-- Site Survey and RF Analysis: Radio frequency spectrum analysis and detection tools discover interference on existing networks, while a physical survey of the facility identifies issues that could adversely affect network performance.
-- Wireless Network Design: Network modeling and tools are used to develop network design architecture, enabling the proper construction and placement of wireless network components to support a seamless end-user experience.
-- Network Validation: Once the network is deployed, a thorough analysis is performed and a detailed report is provided to guide future growth and maintenance.
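Stripped to its essence, network validation of this kind means checking measured signal strength at every survey point against a usability floor. A toy illustration (the threshold and readings are ours, not part of the Dell/Intel/Motion service):

```python
# Toy coverage validation: flag survey points whose measured RSSI
# falls below a minimum usable signal level. Values are illustrative.

MIN_RSSI_DBM = -67  # a common rule of thumb for reliable voice/data

survey = {
    "ER bay 3": -58,
    "ICU corridor": -63,
    "Radiology": -72,   # below threshold: a coverage hole
}

holes = {spot: rssi for spot, rssi in survey.items()
         if rssi < MIN_RSSI_DBM}

print("coverage holes:", holes or "none")
```
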
"In the world of health care, having secure and constant wireless connectivity is critical to patient safety," said James Coffin, vice president and general manager, Dell Health Care and Life Sciences. "Today's hospitals are complex technology environments with many users on a variety of mobile devices that are continually moving from room to room. Delivering a service like this that helps ensure seamless connectivity that supports interoperability is key to caregivers' ability to efficiently deliver high-quality care."

The top five reasons why Windows Vista failed

Microsoft gave computer makers a six-month extension for offering Windows XP on newly-shipped PCs. While this doesn’t impact enterprise IT — because volume licensing agreements will allow IT to keep installing Windows XP for many years to come — the move is another symbolic nail in Vista’s coffin.
The public reputation of Windows Vista is in shambles, as Microsoft itself tacitly acknowledged in its Mojave ad campaign.
IT departments are largely ignoring Vista. In June (18 months after Vista’s launch), Forrester Research reported that just 8.8% of enterprise PCs worldwide were running Vista. Meanwhile, Microsoft appears to have put Windows 7 on an accelerated schedule that could see it released in 2010. That will provide IT departments with all the justification they need to simply skip Vista and wait to eventually standardize on Windows 7 as the next OS for business.
So how did Vista get left holding the bag? Let’s look at the five most important reasons why Vista failed.

5. Apple successfully demonized Vista

Apple’s clever I’m a Mac ads have successfully driven home the perception that Windows Vista is buggy, boring, and difficult to use. After taking two years of merciless pummeling from Apple, Microsoft recently responded with it’s I’m a PC campaign in order to defend the honor of Windows. This will likely restore some mojo to the PC and Windows brands overall, but it’s too late to save Vista’s perception as a dud.

4. Windows XP is too entrenched

In 2001, when Windows XP was released, there were about 600 million computers in use worldwide. Over 80% of them were running Windows, but the installed base was split between two code bases: Windows 95/98 (65%) and Windows NT/2000 (26%), according to IDC. One of the big goals of Windows XP was to unite the Windows 9x and Windows NT code bases, and it eventually accomplished that.
In 2008, there are now over 1.1 billion PCs in use worldwide and over 70% of them are running Windows XP. That means almost 800 million computers are running XP, which makes it the most widely installed operating system of all time. That’s a lot of inertia to overcome, especially for IT departments that have consolidated their deployments and applications around Windows XP.
And, believe it or not, Windows XP could actually increase its market share over the next couple years. How? Low-cost netbooks and nettops are going to be flooding the market. While these inexpensive machines are powerful enough to provide a solid Internet experience for most users, they don’t have enough resources to run Windows Vista, so they all run either Windows XP or Linux. Intel expects this market to explode in the years ahead. (For more on netbooks and nettops, see this fact sheet and this presentation — both are PDFs from Intel.)

3. Vista is too slow

For years Microsoft has been criticized by developers and IT professionals for “software bloat” — adding so many changes and features to its programs that the code gets huge and unwieldy. However, this never seemed to have enough of an effect to impact software sales. With Windows Vista, software bloat appears to have finally caught up with Microsoft.
Vista has over 50 million lines of code. XP had 35 million when it was released, and since then it has grown to about 40 million. This software bloat has had the effect of slowing down Windows Vista, especially when it's running on anything but the latest and fastest hardware. Even then, the latest version of Windows XP soundly outperforms the latest version of Windows Vista. No one wants to use a new computer that is slower than their old one.

2. There wasn’t supposed to be a Vista

It’s easy to forget that when Microsoft launched Windows XP it was actually trying to change its OS business model to move away from shrink-wrapped software and convert customers to software subscribers. That’s why it abandoned the naming convention of Windows 95, Windows 98, and Windows 2000, and instead chose Windows XP.
The XP stood for “experience” and was part of Microsoft’s .NET Web services strategy at the time. The master plan was to get users and businesses to pay a yearly subscription fee for the Windows experience — XP would essentially be the on-going product name but would include all software upgrades and updates, as long as you paid for your subscription. Of course, it would disable Windows on your PC if you didn’t pay. That’s why product activation was coupled with Windows XP.
Microsoft released Windows XP and Office XP simultaneously in 2001 and both included product activation and the plan to eventually migrate to subscription products. However, by the end of 2001 Microsoft had already abandoned the subscription concept with Office, and quickly returned to the shrink-wrapped business model and the old product development model with both products.
The idea of doing incremental releases and upgrades of its software — rather than a major shrink-wrapped release every 3-5 years — was a good concept. Microsoft just couldn't make the business model work, and rather than figuring out how to get it right, it took the easy route and went back to an old model that was simply not well suited to the economic and technical realities of today's IT world.

1. It broke too much stuff

One of the big reasons that Windows XP caught on was because it had the hardware, software, and driver compatibility of the Windows 9x line plus the stability and industrial strength of the Windows NT line. The compatibility issue was huge. Having a single, highly-compatible Windows platform simplified the computing experience for users, IT departments, and software and hardware vendors.
Microsoft either forgot or disregarded that fact when it released Windows Vista, because, despite a long beta period, a lot of existing software and hardware were not compatible with Vista when it was released in January 2007. Since many important programs and peripherals were unusable in Vista, that made it impossible for a lot of IT departments to adopt it. Many of the incompatibilities were the result of tighter security.
After Windows was targeted by a nasty string of viruses, worms, and malware in the early 2000s, Microsoft embarked on the Trustworthy Computing initiative to make its products more secure. One of the results was Windows XP Service Pack 2 (SP2), which won over IT and paved the way for XP to become the world's most widely deployed OS.
The other big piece of Trustworthy Computing was the even-further-locked-down version of Windows that Microsoft released in Vista. This was definitely the most secure OS that Microsoft had ever released, but the price was user-hostile features such as UAC, a far more complicated set of security prompts that accompanied many basic tasks, and a host of software incompatibility issues. In other words, Vista broke a lot of the things that users were used to doing in XP.

Bottom line

There are some who argue that Vista is actually more widely adopted than XP was at this stage after its release, and that it’s highly likely that Vista will eventually replace XP in the enterprise. I don’t agree. With XP, there were clear motivations to migrate: bring Windows 9x machines to a more stable and secure OS and bring Windows NT/2000 machines to an OS with much better hardware and software compatibility. And, you also had the advantage of consolidating all of those machines on a single OS in order to simplify support.
With Vista, there are simply no major incentives for IT to use it over XP. Security isn’t even that big of an issue because XP SP2 (and above) are solid and most IT departments have it locked down quite well. As I wrote in the article Prediction: Microsoft will leapfrog Vista, release Windows 7 early, and change its OS business, Microsoft needs to abandon the strategy of releasing a new OS every 3-5 years and simply stick with a single version of Windows and release updates, patches, and new features on a regular basis. Most IT departments are essentially already on a subscription model with Microsoft so the business strategy is already in place there.
As far as the subscription model goes for small businesses and consumers, instead of disabling Windows on a user's PC if they don't renew their subscription, just don't allow that machine to get any more updates if they don't renew. Microsoft could also work with OEMs to sell something like a three-year subscription to Windows with every new PC. Then users would have the choice of renewing on their own after that.
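In code terms, the proposal simply moves the entitlement check from the OS to the update pipeline; a hypothetical sketch:

```python
from datetime import date

# Hypothetical sketch of the gentler enforcement proposed above:
# a lapsed subscription blocks new updates, never the OS itself.


def updates_allowed(subscription_expires: date, today: date) -> bool:
    """Serve updates only while the subscription is current."""
    return today <= subscription_expires


expires = date(2011, 10, 1)  # e.g., a three-year term sold with a new PC
print(updates_allowed(expires, date(2010, 6, 1)))   # True: keep patching
print(updates_allowed(expires, date(2012, 1, 15)))  # False: OS still runs
```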

Intel and Yahoo! to Launch “Widget Channel”

Intel and Yahoo! last month announced plans to launch the "Widget Channel," an application framework for TV and related consumer electronics devices that are based on the Intel Architecture. According to the companies, the Widget Channel will allow viewers to access Internet applications that have been designed for TV while watching programming. It will be powered by the Yahoo! Widget Engine, a fifth-generation applications platform that will support a line-up of "TV Widgets" - small Internet applications that can be accessed by the remote control and that are intended to complement and enhance the TV viewing experience by providing informational and entertainment content, and community features. The companies say that the Widget Channel will also allow developers to use JavaScript, XML, HTML and Adobe Flash technology to create their own TV applications, thus "extending the power and compatibility of PC application developer programs" to TV and related CE devices. In addition to supporting the Yahoo! Widget Engine, Yahoo! says it will provide consumers with Yahoo!-branded TV Widgets that will be based on the various services it offers on the Internet.
Intel and Yahoo! say that the Widget Channel’s mini-apps will, among other things, enable consumers to access Internet videos, track stocks or sports teams, interact with their friends, access news reports, find out additional information about the programs they are watching, and share content with friends and family. The companies say that they will be easily personalizable, because they will be based on Internet services such as Yahoo! Finance, Yahoo! Sports, Blockbuster and eBay that are designed to be customized by end-users. “TV will fundamentally change how we talk about, imagine and experience the Internet,” Eric Kim, general manager of Intel’s Digital Home Group, said in a prepared statement. “No longer just a passive experience unless the viewer wants it that way, Intel and Yahoo! are proposing a way where the TV and Internet are as interactive, and seamless, as possible. Our close work has produced an exciting application framework upon which the industry can collaborate, innovate and differentiate. This effort is one of what we believe will be many exciting new ways to bring the Internet to the TV, and it really shows the potential of what consumers can look forward to.” Added Marco Boerries, EVP of Yahoo!’s Connected Life arm: “On the PC and mobile devices, Yahoo! is a leading starting point for millions of consumers around the world. Yahoo! aims to extend this leadership to the emerging world of Internet-connected TV, which we call the ‘Cinematic Internet.’ By partnering with leaders like Intel, we plan to combine the Internet benefits of open user choice, community, and personalization with the performance and scale embodied in the Intel Architecture to transform traditional TV into something bigger, better and more exciting than ever before. By using the popular Yahoo! Widget Engine to power the Widget Channel, we intend to provide an opportunity for all developers and publishers to create new experiences that can reach millions of TV viewers globally. Yahoo! plans to enable the Cinematic Internet ecosystem, which will benefit consumers, device makers, advertisers and publishers.”
According to Intel and Yahoo!, the Widget Channel will be powered by a set of platform technologies that include, in addition to the Yahoo! Widget Engine, core libraries designed to exploit the capabilities of the Intel Architecture. The Widget Channel framework will use a number of established Internet technologies to significantly lower the barrier-to-entry for developing applications optimized for the TV, the companies say. In order to spur development of widgets for the Widget Channel, Intel and Yahoo! plan to make a development kit available to manufacturers of TV and other CE devices, as well as to advertisers and content providers. The Widget Channel will also include a Widget Gallery, the companies say, to which developers will be able to publish their TV Widgets across multiple TV and related CE devices, and through which consumers will be able to browse and choose the widgets they would like to use.
The companies say that a number of partners are planning to develop and deploy TV Widgets, including Blockbuster, CBS Interactive, CinemaNow, Cinequest, Comcast (see below), Disney-ABC Television Group, eBay, GE, Group M, Joost, MTV, Samsung Electronics, Schematic, Showtime, Toshiba and Twitter. In addition, the companies say they are working with “industry members” to promote the development of open standards that would help grow the TV Widget ecosystem: as part of these efforts, they are sharing an early version of a development kit for the Widget Channel with a “selected” group of developers.
The Widget Channel software framework is designed to run on Internet-connected TVs, cable set-top boxes, optical media players and other consumer electronics devices that are powered by a newly launched family of system-on-a-chip (SoC) media processors based on the Intel Architecture. The first of these Intel Architecture-based SoCs is the Intel Media Processor CE 3100, which is billed by the company as a highly integrated chip that includes a high-performance Intel Architecture core and other functional I/O blocks to enable HD video decode and viewing, home-theater-quality audio, 3D graphics and the fusion of the Internet and TV experiences. Intel says it plans to release an Intel Media Processor CE 3100-based hardware development system, called the "Innovation Platform," which will provide the initial development and validation environment for developers of widgets for the Widget Channel.
Comcast, the US’s largest cable MSO, announced last month that it is working with Intel to bring IP-based applications to TV, using the new Widget Channel application framework. The companies say that they expect to begin integration testing of the Widget Channel framework in the first half of 2009 on Comcast’s EPG, using tru2way/OCAP technology. “The Widget Channel enables interactive applications and tru2way technology has opened the door for these types of innovations to work in the cable industry,” Comcast Cable CTO, Tony Werner, said in a prepared statement. “We’re looking forward to working with Intel as we continue to bring our customers new features and services that further enhance their viewing experiences.” Comcast is billing its plans to implement the Widget Channel on tru2way as an “important milestone in the evolution of the tru2way ecosystem,” and says that the Widget Channel framework will complement tru2way technology and broaden the interactive TV developer community. (Note: for more on the Widget Channel, see article in this issue, and listen to [itvt]’s radio interview with Patrick Barry, VP of connected TV at Yahoo!’s Connected Life division, in Issue 7.99.)

High-speed RAM can damage Nehalem i7 processors

According to the Inquirer, Intel is advising motherboard and RAM vendors that the new X58+Core i7 combo must abide by a strict memory voltage limit of 1.65 volts. If the advice is not followed, Intel warns, the CPU can get fried.
This first came to light when admins on the XFastest forums ( http://www.xfastest.com/redirect.php?tid=14549&goto=newpost ) posted several photos of the ASUS P6T Deluxe motherboard with a big sticker over the DIMM slots warning that anything more than 1.65V will destroy the CPU. ASUS has admitted that the story is true.
This shouldn't really be a problem, since the DDR3 JEDEC spec states that memory should operate at 1.5 volts. But problems may arise for overclockers running performance memory, since many vendors offer faster RAM kits that operate at higher voltages. For example, OCZ's Reaper PC3-14400 operates at 1.9V, Mushkin's XP Series uses 1.9-1.95V, while Corsair's high-end Dominator takes you all the way up to 2.1V.
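The compatibility check buyers must now do by hand is trivial to express: compare a kit's rated voltage against Intel's 1.65V ceiling. Using the figures quoted above:

```python
# Check quoted DDR3 kit voltages against Intel's 1.65 V ceiling
# for Core i7/X58. Kit figures are as reported above.

VDDQ_LIMIT = 1.65  # volts, per Intel's guidance

kits = {
    "JEDEC baseline DDR3": 1.50,
    "OCZ Reaper PC3-14400": 1.90,
    "Mushkin XP Series": 1.95,
    "Corsair Dominator": 2.10,
}

for name, volts in kits.items():
    verdict = "OK" if volts <= VDDQ_LIMIT else "UNSAFE for Core i7"
    print(f"{name}: {volts:.2f} V -> {verdict}")
```
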
Mushkin will redesign its kit specifically to suit the X58/Core i7 combo, with the new version due sometime next month; the other memory vendors have not commented beyond stating that their kits are pending certification.
Intel has yet to explain why the memory voltage would damage the CPU, though we speculate that it could have something to do with Nehalem's on-die (integrated) memory controller. Though the Core i7 has yet to hit the Indian market, people importing it should make sure that they get a compatible RAM kit, or they will have to underclock it.

Inside Intel / The missile defense conundrum

The State Comptroller recently completed its latest report on how the defense establishment handles the development of weapon systems intended for active defense against rockets and short-range missiles. The report was presented to the Knesset's State Control Committee, the prime minister, the defense minister and to senior defense officials. It seems the report uses harsh language, uncovering several shortcomings in the Defense Ministry's process of deciding on which system to develop and why, to which military industry to grant the rights of development, and why one company should be preferred over another. The report's subjects now have four months to respond to the criticism leveled against them.
The report allows for a rare look at the close connections, sometimes reminiscent of a revolving-door policy, between Defense Ministry officials and those Israel Defense Forces officers involved in the processes of weapons development, production and acquisition. And so it can happen that one day you are an IDF officer coordinating with the military industries, and the following day you get a job in that very same industry or in the Defense Ministry itself.
One of the report's sections deals with the ministry's controversial decision to allocate to Rafael Advanced Defense Systems a budget of almost one billion shekels to develop the "Iron Dome" system, which is supposed to protect against short-range rockets like Qassams and Katyushas. It is already clear that Iron Dome will not be ready by the date promised by then-defense minister Amir Peretz and his successor, Ehud Barak. In addition, and contrary to what was promised, the system will not be able to contend with mortar shells. In any case, even if Iron Dome is operational in two years' time - which is extremely doubtful - Sderot and the communities bordering the Gaza Strip will remain exposed to Qassams and to mortar shells in the interim.
Solutions kept in storage
The comptroller's report also touches on the Defense Ministry's motives in not deploying a defense system in Sderot and the communities bordering Gaza, leaving the residents without active defense until Iron Dome becomes operational. This, although there are in fact systems that have proven themselves, including the Nautilus laser cannon and Vulcan cannons, as well as the Phalanx Close-In Weapons System, which could have provided a temporary, albeit partial, solution to the problem until Iron Dome's completion.
It was possible to get a taste of this faulty policy, adopted by the ministry and the IDF, at a seminar held two weeks ago in Herzliya, whose participants included four former commanders of the anti-aircraft alignment attached to the Israel Air Force. They pointed out that it would have been possible several years ago to defend Sderot and the surrounding communities in the Negev. One option, they said, would have been to deploy several batteries of Vulcan cannons with radar capable of identifying targets at a short range of several kilometers, calculating the expected trajectory of a shell or Qassam, and launching 20mm shells against it at a rapid rate of fire. The Vulcan was originally developed as a defense system for naval vessels, but for the past three years, Raytheon has been producing a land version as well.
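The trajectory step the ex-commanders describe is, at its simplest, ballistic extrapolation from a radar track. A deliberately idealized sketch (drag-free, flat ground, hypothetical numbers; real counter-rocket fire control does far more):

```python
import math

# Idealized ballistic impact predictor: given a radar-estimated launch
# speed and angle, extrapolate where the projectile lands. Real fire
# control also models drag, wind and terrain; this sketch does not.

G = 9.81  # gravitational acceleration, m/s^2


def impact_range_m(speed: float, angle_deg: float) -> float:
    """Drag-free range of a projectile launched from flat ground."""
    a = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * a) / G


# A slow rocket at ~45 degrees travels a few kilometers.
print(f"{impact_range_m(220.0, 45.0) / 1000:.1f} km")  # ~4.9 km
```
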
Vulcan cannons are currently deployed in Iraq, where they defend the Green Zone, Baghdad's urban center, which is considered the "beating heart" of the American command and the Iraqi administration - an area that is immeasurably larger than Sderot.
According to some of the speakers at the gathering in Herzliya, one only has to bring a number of such cannons to Israel and deploy them in the Negev - at the relatively high cost of some $15 million for every pair of cannons. According to Brig. Gen. Yair Dori, who commanded the IDF's anti-aircraft division in the years 2003-2004, even that is not necessary. There are currently 48 such cannons in IDF storage facilities, together with spare parts and ammunition; they have been taken out of operative use. "All that is necessary is to make them operable and deploy them," he says. In a conversation with Haaretz, Dori said that back in 2001, when the Qassams were first fired, the army's anti-aircraft alignment deployed a number of these cannons on the fence with the Gaza Strip, next to the Erez border crossing. "After half a year, for some reason, it was decided that the deployment should be canceled," Dori related.
Both Dori and Brig. Gen. Eitan Yariv, who headed the anti-aircraft division in the 1980s, believe that three or four Vulcan batteries would provide reasonable protection for Sderot. But Defense Ministry officials refuse to even hear about that. Dr. Avi Weinrib, who was the Defense Ministry coordinator responsible for the development of missiles and rockets, and who also spoke at the seminar, insisted that some 25 batteries would be necessary for defense. But the truth is that this is not the problem of the officials at the Defense Ministry - rather it is the responsibility of Barak, of Chief of Staff Gabi Ashkenazi, and of air force Commander Ido Nehushtan.
In order to deploy batteries for Sderot's defense, the IAF top brass would have to define this as an "operative demand." That has not happened, a failure Dori blames on the lack of attention paid by the IAF commander to anyone who is not a pilot and anything that is not an aircraft. In other words, the anti-aircraft batteries and the defense of Sderot are not exactly at the top of the air force commander's list of priorities. The one individual who should be able to see the entire picture and promote a change is Defense Minister Barak.
Two months ago there was a sense that Barak had finally seen the light and understood that there was a ready solution to the problem of defending Sderot and the Negev - that he had realized that he had been taken on a wild goose chase. All he needs to do is instruct the chief of staff and the air force commander to define the area's defense as "an operative need." Back then, Barak promised close aides that during his visit to Washington, he would discuss the possibility of bringing at least one Vulcan system to Israel, to examine its capabilities.
But it turns out that talk is one thing, and deeds, another. For now, the residents of Sderot and the Negev communities, currently enjoying the calm bestowed by the cease-fire with Hamas, will continue to be the victims of indecision, bureaucracy, and considerations that seem to be based on conflicts of interest.

Kontron grabs Intel's Communication Rackmount Server business

MUNICH, Germany — Embedded computer manufacturer Kontron AG (Eching, Germany) says it has signed an agreement with Intel to take over its communications rackmount server activities. The move aims at strengthening Kontron's position in telecommunications server markets. The agreement covers 1U and 2U carrier-grade rackmount IP security server products with expected 2009 sales of about $40 million. It includes R&D and support facilities in Columbia, South Carolina and manufacturing activities in Penang, Malaysia. The group currently has a total headcount of about 70 persons.
Kontron already offers carrier-grade telecommunications servers based on ATCA and MicroTCA technology. For IP-based security and multimedia streaming applications, including video-on-demand, the company is seeking to broaden its product spectrum, explained Norbert Hauser, vice president of marketing for Kontron. While ATCA servers are characterized by ultra-high availability, these application markets require somewhat lower availability; hot board-swap capabilities, for instance, are less necessary.
In terms of architecture, however, the communications rackmount servers are quite similar to ATCA servers, Hauser pointed out. Both types are typically equipped with the same chipsets supporting four-core processors, and in contrast to general-purpose industrial servers, neither type is equipped with graphics capabilities.
Like Intel, Kontron plans to manufacture the servers in Penang where it runs its own production site. The facility has enough capacity to absorb the additional workload, Hauser said. "This is why we have expanded our Penang capacity. It gives us additional flexibility and creates synergies," the Marketing VP explained.
The move is subject to regulatory review. For Kontron, Intel's Columbia operation will become a new subsidiary in the U.S., which will help improve Kontron's standing in North America. "It will be a seamless transition," Hauser promised. He declined to elaborate on the financial aspects of the transaction.

Kingston jumps on Intel SSD train

Intel has bulked up its solid state drive (SSD) channel and rounded up Kingston to resell its products to business notebook and server users. This will complement Intel's own SSD OEM channel.
Kingston supplies memory products to businesses that want to upgrade their PCs and servers, as well as USB flash drives and memory cards that store consumers' digital stuff - images, music, whatever. Adding SSDs is a logical fit.
Intel launched its SSDs earlier this year. There are mainstream multi-level cell SSDs in 1.8-inch (X18-M) and 2.5-inch (X25-M) formats, each with 80GB capacity and 250/70MB/sec read/write speeds, and a higher-performance single-level cell X25-E with 32GB capacity and 250/170MB/sec read/write speeds. Higher capacities are coming, with 160GB mainstream models due by the end of the year and 250GB flagged for 2009.
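Those sequential speeds translate into easy back-of-envelope transfer times:

```python
# Rough time to fill or read each Intel SSD end-to-end at its quoted
# sequential speeds (capacities and MB/s figures as listed above).

drives = {
    "X25-M 80GB (MLC)": (80, 250, 70),   # capacity GB, read, write MB/s
    "X25-E 32GB (SLC)": (32, 250, 170),
}

for name, (gb, rd, wr) in drives.items():
    print(f"{name}: fill in ~{gb * 1000 / wr:.0f} s, "
          f"read back in ~{gb * 1000 / rd:.0f} s")
```
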
There will be two Kingston SSDNow products: a 32GB SSD (probably Intel's X25-E) and an 80GB SSD (probably Intel's X25-M). These will ship in the USA this quarter, but neither the actual ship date nor the prices have been revealed yet. Nor has availability outside the USA.
Kingston and Intel will be competing with OCZ, SanDisk, Imation, Toshiba, SuperTalent and several others in what is rapidly coming to look like an over-supplied yet still emerging SSD after-market. Product prices and supplier profitability are bound to fall, particularly if the credit crunch triggers a recession. It doesn't look as if this product sector will achieve a solid state for some time yet. ®