Go through the Pass4sure L50-501 PDF before the test

Get our Killexams.com L50-501 study tools and boot camp with study guide, test prep and practice test, and achieve great success in the exam.

Pass4sure L50-501 dumps | Killexams.com L50-501 real questions | http://www.radionaves.com/

L50-501 LSI SVM5 Implementation Engineer

Study Guide Prepared by Killexams.com LSI Dumps Experts


Killexams.com L50-501 Dumps and Real Questions

100% Real Questions - Exam Pass Guarantee with High Marks - Just Memorize the Answers



L50-501 exam Dumps Source : LSI SVM5 Implementation Engineer

Test Code : L50-501
Test Name : LSI SVM5 Implementation Engineer
Vendor Name : LSI
: 119 real questions

Do you know the best and quickest way to clear the L50-501 exam? I have found it.
I efficiently understood the difficult topics, like Delivery Competence and Content Expertise, from killexams and scored 90% marks. All credit to killexams.com. I was looking for a reference guide that would help me prepare for the L50-501 exam. My busy calendar only permitted me two extra hours a day, one way or another. By booking and paying for the killexams.com questions/answers and exam simulator, I got it at my doorstep within one week and started preparing.


Don't forget to try these latest dump questions for the L50-501 exam.
When I was getting prepared for my L50-501, it was very stressful to choose the L50-501 study material. I found killexams.com while googling for quality certification resources. I subscribed, noticed the wealth of resources on it, and used it to prepare for my L50-501 test. I cleared it, and I'm so thankful to killexams.com.


Very easy to get certified in the L50-501 exam with this study guide.
I would clearly recommend killexams.com to everyone who is taking the L50-501 exam, as it not only helps to brush up the concepts in the workbook but also gives a great idea about the pattern of questions. Great help for the L50-501 exam. Thank you so much, killexams.com team!


Try out these actual L50-501 questions.
killexams.com materials are exactly as extraordinary as advertised, and the pack covers everything it should cover for extensive exam preparation; I solved 89/100 questions using them. I got every one of them by preparing for my exams with killexams.com and the Exam Simulator, so this one wasn't an exception. I can guarantee you that the L50-501 is a lot harder than past exams, so get ready to sweat and be anxious.


Just try these actual test questions and success is yours.
There were just 12 days left to prepare for the L50-501 exam, and I was loaded with a few other tasks. I was looking for a simple and effective guide urgently. Eventually, I got the Q&A of killexams. Its brief answers were not difficult to complete in 15 days. In the real L50-501 exam, I scored 88%, answering all of the questions in due time, and 90% of the questions were just like the sample papers that they supplied. Much obliged to killexams.


All is well that ends well; at last I passed L50-501 with .
I was in a hurry to pass the L50-501 exam because I had to submit my L50-501 certificate. I thought I should look for some online help regarding my L50-501 test, so I began searching. I found killexams.com and was so hooked that I forgot what I was doing. In the end it was not in vain, since killexams.com got me to pass my test.


Save your time and money: study these L50-501 and take the exam.
I recently passed the L50-501 exam with this bundle. This is a great solution if you need quick yet reliable preparation for the L50-501 exam. This is a professional level, so expect that you still need to spend time playing with - practical experience is key. Yet, as far as exam simulations go, killexams.com is the winner. Their testing engine really simulates the exam, including the particular question types. It does make things easier, and in my case, I believe it contributed to me getting a 100% score! I could not believe my eyes! I knew I did well, but this was a surprise!!


Worked hard on L50-501 books, but the entire thing was in this study guide.
Even though I have enough background and experience in IT, I expected the L50-501 exam to be easier. killexams.com has saved my time and money; without these Q&As I would have failed the L50-501 exam. I got confused on a few questions, so I almost had to guess, but that is my fault. I should have memorized well and paid more attention to the questions. It's good to know that I passed the L50-501 exam.


Found most L50-501 questions in the latest dumps that I prepared with.
I passed my L50-501 exam, and that was not an easy pass but a great one that I could tell everyone about with proud steam filled in my lungs, as I got 89% marks in my L50-501 exam from studying with killexams.com.


Just tried once and I am convinced.
While I was getting prepared for my L50-501, it was very worrying to pick the L50-501 study material. I discovered killexams.com while googling for the best certification resources. I subscribed, noticed the wealth of resources on it, and used it to prepare for my L50-501 test. I cleared it, and I'm so grateful to killexams.com.


LSI LSI SVM5 Implementation Engineer

LSI Industries: Planning For A Bright Future | killexams.com real questions and Pass4sure dumps

LSI Industries' share price has declined vastly in the ... Readers should note, furthermore, that the author of this article is himself a licensed professional architectural engineer who has specifi...

ANSYS Subsidiary Apache Design Launches RTL Power Model, Enabling Early Planning and Accelerating Ultra-Low-Power Design Creation | killexams.com real questions and Pass4sure dumps

Pittsburgh – November 8, 2011 – ANSYS (NASDAQ: ANSS) subsidiary Apache Design Inc. launched RTL Power Model (RPM™), a first-in-class innovative technology designed to optimize a wide range of power-sensitive applications, such as ultra-low-power electronics. RPM bridges the power gap from register-transfer-level (RTL) design to physical implementation. The new technology accurately predicts integrated circuit (IC) power behavior at the RTL level with consideration for how the design is physically implemented. As a result, the technology helps enable chip power delivery network (PDN) and IC package design decisions early in the design process, as well as ensure chip power integrity sign-off for sub-28nm ICs.

Because of extensive ultra-low-power requirements and shortened design cycles, it is critical to make power design trade-offs, such as dynamic voltage/frequency scaling, clock-gating/power-gating schemes, and package selection, early in the design cycle, when changes are easier to make and have less impact on schedule or cost.

“Apache’s innovative approach offers a complete front-to-back-end power analysis flow,” said Ruggero Castagnetti, distinguished engineer, LSI Corporation. “The ability to consider the impact of low-power architecture selection and chip operating modes on power grid and package design trade-offs early in the flow allows LSI to better predict package cost and increase productivity.”

Innovative Technology

As a new offering to Apache’s PowerArtist™-XP software, RPM’s core technologies include PowerArtist Calibrator and Estimator (PACE™) for accurate power estimation at the RTL level ahead of the availability of physical layout, as well as Rapid Frame-Selector for critical power-aware cycle selection.

PACE uses proprietary data-mining and pre-characterization techniques to create higher-quality power and capacitance models, as compared to typical wire load models tuned for timing closure. By considering characteristics for various circuit types, such as combinational logic and sequential elements, PACE delivers RTL power within 15 percent of gate-level power, resulting in more cost-effective and higher-quality results.

Rapid Frame-Selector technology performs power analysis on RTL simulation vectors and selects a set of the most power-critical cycles to use throughout the design flow, from early design planning to final chip sign-off. It can precisely identify a few cycles representing the transient and peak power characteristics from millions of vectors within hours, improving productivity and ensuring power sign-off integrity.
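The release does not say how Frame-Selector actually ranks cycles, but the underlying idea -- reducing millions of simulated cycles to a handful that capture the peak and transient power behavior -- can be sketched in a few lines. The following Python sketch is purely illustrative: the ranking heuristic and every name in it are assumptions, not Apache's algorithm.

# Illustrative sketch of power-aware cycle selection (not Apache's algorithm).
# Given per-cycle power estimates, keep the cycles with the highest peak power
# plus the cycles with the largest cycle-to-cycle swing (transients).

def select_power_critical_cycles(power_per_cycle, n_peak=3, n_transient=3):
    cycles = range(len(power_per_cycle))
    # Highest absolute power: candidates for peak-power sign-off.
    peaks = sorted(cycles, key=lambda c: power_per_cycle[c], reverse=True)[:n_peak]
    # Largest step relative to the previous cycle: transient candidates.
    swings = sorted(
        (c for c in cycles if c > 0),
        key=lambda c: abs(power_per_cycle[c] - power_per_cycle[c - 1]),
        reverse=True,
    )[:n_transient]
    return sorted(set(peaks) | set(swings))

# Toy activity trace in watts; a real flow would scan millions of cycles.
trace = [0.8, 0.9, 2.4, 1.0, 0.7, 0.7, 3.1, 0.6, 0.8, 1.9]
print(select_power_critical_cycles(trace))  # -> [2, 6, 7, 9]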

Advanced Methodology

RPM enables a comprehensive power methodology from early design to sign-off by providing physical-aware RTL power data. Apache’s RedHawk™ leverages RPM to perform PDN prototyping, then generates an early-stage Chip Power Model (CPM™) that is used by Sentinel™ software for IC package design planning, such as substrate layer selection and decap optimization. RedHawk also makes use of RPM to provide more realistic switching activities for accurate power sign-off.

“The introduction of RPM demonstrates Apache’s continued commitment to delivering innovative key technologies that address important low-power design challenges,” said Vic Kulkarni, senior vice president of RTL business at Apache Design. “Apache’s power budgeting flow allows customers to right-size their power delivery network, improving design efficiency and mitigating chip failure risks.”

Low-Power Application Optimization

Designing for low-power applications requires a strategy that addresses power budgeting and allows for timely cost-sensitive decisions related to power. PowerArtist-XP software with RPM technology is ideal for advanced-node designs of low-power applications including mobile, green computing, and consumer electronics devices. It helps bridge the gap from front-end RTL design to physical power sign-off, with more predictable accuracy, increased operating performance, and improved reliability for 28nm-and-below designs.


SAP and LSI Consulting Help Small and Midsize Healthcare Providers Offer Stronger Capabilities at Lower Cost | killexams.com real questions and Pass4sure dumps

Jointly Developed Software Solution for Healthcare Is Based on SAP(R) Best Practices Packages to Benefit Healthcare Providers With Proven Integration and Operations Techniques

NEWTOWN SQUARE, Pa.-- The need for efficient operations in the healthcare provider industry isn't isolated to large health systems and health center chains. Smaller and midsize healthcare provider organizations are also under pressure to constantly become more efficient while at the same time providing excellent care. SAP AG (NYSE: SAP), in conjunction with LSI Consulting, today announced the availability of a healthcare-based solution that leverages the power of the enterprise resource planning (ERP) application SAP® ERP and SAP® Best Practices packages for small to midsize hospitals in the U.S. market.

The industries team at SAP worked closely with LSI to identify the methods and procedures of previous successful SAP integrations at larger medical centers, including those related to higher-learning institutions running SAP technologies. The team then aggregated the best practices identified to build the solution, which can be offered using both the standard on-premise and convenient on-demand models. The software is designed for hospitals and medical facilities in the small to midsize segment, defined as offering 400 beds or fewer, which can use it to achieve similar business value in a shorter time period and at a lower cost point than their larger counterparts.

As a trusted healthcare consultancy and implementation firm, LSI recognized the need for sophisticated technology in these smaller, typically community-based facilities. However, with limited budgets and resources, small to midsize hospitals need assistance in securing an accelerated time to business value from their IT implementations, with flexible deployment options, without sacrificing quality or a client-focused approach. The new solution was therefore developed using the lessons learned from multiple on-time, on-budget implementations carried out by LSI. The result is a portfolio of pre-packaged templates and accelerators that enable repeatable success through proven solutions that engineer many of the time-intensive decisions out of the implementation process.

"because of their long inheritance of presenting SAP solutions to tremendous healthcare provider corporations, they are in a several state to identify and acquire the most useful practices for implementation throughout a firm's core company processes," preeminent Steve Roach, director of Healthcare options at LSI Consulting. "The respond they now absorb now packaged is designed to handle the foundation set of elementary functionalities needed by hospitals enabling them to continue to breathe attainable in the smaller neighborhood surroundings. That functionality contains financial accounting, provide chain management, including trade cart processing, and analytics."

Various Deployment Models Offer Flexibility in Delivery

Because of the growing popularity and relevance of various hosting methods, the new solution, which can also be purchased through LSI, will be offered via both an on-premise and an on-demand model. A hosted model is already available, allowing the solution itself to be managed, monitored and maintained by SAP experts in world-class hosted data centers. This option cuts down on the need for IT specialists to maintain the system on site, reducing the capital expense burden that can prevent smaller organizations from benefiting from industry-leading technology. Licensing for a subscription-based model will become available in coming months.

"SAP has an established popularity globally as a issuer of healthcare solutions to tremendous scientific centers, and has worked in recent years to strengthen equally a valid thought solutions to small and midsize hospitals worldwide," referred to John Papandrea, senior vice chairman, international fitness Sciences Sector, SAP. "The introduction of this jointly developed solution, primarily configured to fulfill the needs of the U.S. market, shows their commitment to this section as they exercise most desirable practices, articulated within a template, to present a comprehensive solution with flexible deployment alternate options that could mitigate risk and allow swift time to cost for smaller hospitals."

About LSI Consulting: Dedicated to the U.S. Healthcare, Higher Education and Research, and Public Sector markets, LSI empowers organizations to maximize service, revenue, ROI and supply chain visibility for their customers. With over a decade of experience as an SAP implementer, its mission is to help optimize its clients' business operations to drive fulfillment of strategic organizational and public goals. LSI's comprehensive offering of full-lifecycle services includes preconfigured solutions for accelerated SAP Healthcare, Higher Education, Research and local government implementations. To learn more, visit http://www.lsiconsulting.com/.

About SAP: SAP is the world's leading provider of business software(*), offering applications and services that enable companies of all sizes and in more than 25 industries to become best-run businesses. With more than 97,000 customers in over 120 countries, the company is listed on several exchanges, including the Frankfurt Stock Exchange and NYSE, under the symbol "SAP." For more information, visit www.sap.com.

(*) SAP defines business software as comprising enterprise resource planning, business intelligence, and related applications.

Follow SAP on Twitter at @sapnews.

For customers interested in learning more about SAP products: Global Customer Center: +49 180 534-34-24; United States only: 1 (800) 872-1SAP (1-800-872-1727)

For more information, press only: Dorit Shackleton, (604) 974-2444, dorit.shackleton@sap.com, PDT; SAP Press Office, +49 (6227) 7-46315, CET; +1 (610) 661-3200, EDT; press@sap.com; Becca Hatton, Burson-Marsteller, (202) 530-4568, becca.hatton@bm.com, EDT

Source: SAP AG

Web site: www.sap.com/


It is unquestionably a difficult task to pick reliable certification questions/answers resources with respect to review, reputation and validity, because people get scammed by picking the wrong service. Killexams.com makes sure to serve its customers best with respect to exam dumps update and validity. The vast majority of other services' false-report complainers come to us for the braindumps and pass their exams happily and easily. We never compromise on our review, reputation and quality, because killexams review, killexams reputation and killexams customer confidence are important to us. Uniquely, we take care of killexams.com review, killexams.com reputation, killexams.com false report complaints, killexams.com trust, killexams.com validity, killexams.com report and killexams.com scam. If you ever see any false report posted by our rivals with the name killexams false report complaint web, killexams.com false report, killexams.com scam, killexams.com complaint or anything like this, just remember that there are always bad people damaging the reputation of good services for their own advantage. There are thousands of satisfied clients who pass their exams using killexams.com brain dumps, killexams PDF questions, killexams practice questions and the killexams exam simulator. Visit Killexams.com, try our sample questions and test brain dumps and our exam simulator, and you will realize that killexams.com is the best brain dumps site.






Pass4sure L50-501 LSI SVM5 Implementation Engineer exam braindumps with real questions and practice tests.
killexams.com's L50-501 Exam PDF consists of a complete pool of questions and answers, with dumps checked and verified along with references and explanations (where applicable). Our objective in assembling the questions and answers is not only to help you pass the exam on the first attempt, but to really improve your knowledge of the L50-501 exam subjects.

The LSI L50-501 exam has given a new direction to the IT industry. It is now required to certify on the platform that leads to a brighter future. But you need to put extreme effort into the LSI LSI SVM5 Implementation Engineer exam, because there is no escape from reading. killexams.com has made your work easier; now your exam preparation for L50-501 LSI SVM5 Implementation Engineer is not tough anymore. Click http://killexams.com/pass4sure/exam-detail/L50-501. killexams.com is a reliable and trustworthy platform that provides L50-501 exam questions with a 100% success guarantee. You need to practice questions for at least one day to score well in the exam. Your real journey to success in the L50-501 exam actually starts with killexams.com exam practice questions, the excellent and verified source for your targeted position. killexams.com Huge Discount Coupons and Promo Codes are as under:
WC2017 : 60% Discount Coupon for all exams on website
PROF17 : 10% Discount Coupon for Orders greater than $69
DEAL17 : 15% Discount Coupon for Orders greater than $99
DECSPECIAL : 10% Special Discount Coupon for All Orders

killexams.com helps millions of candidates pass the exams and get their certifications. We have thousands of successful reviews. Our dumps are reliable, affordable, updated and of really best quality to overcome the difficulties of any IT certification. killexams.com exam dumps are updated in a highly outclass manner on a regular basis and material is released periodically. The latest killexams.com dumps are available in testing centers with whom we maintain our relationship to get the latest material.

killexams.com LSI certification study guides are set up by IT professionals. Lots of students have been complaining that there are too many questions in so many practice exams and study guides, and they are just too tired to afford any more. So killexams.com experts work out this comprehensive version while still guaranteeing that all the knowledge is covered after deep research and analysis. Everything is done to make it convenient for candidates on their road to certification.

We have tested and approved L50-501 exams. killexams.com provides the most accurate and latest IT exam materials, which cover almost all knowledge points. With the aid of our L50-501 study materials, you don't need to waste your time reading the bulk of reference books; you just need to spend 10-20 hours to master our L50-501 real questions and answers. And we provide you with PDF Version & Software Version exam questions and answers. The Software Version is offered to let candidates simulate the LSI L50-501 exam in a real environment.

We provide free updates. Within the validity period, if the L50-501 exam materials that you purchased are updated, we will inform you by email to download the latest version. If you don't pass your LSI LSI SVM5 Implementation Engineer exam, we will give you a full refund. You need to send the scanned copy of your L50-501 exam report card to us. After confirming, we will quickly give you a FULL REFUND.



If you prepare for the LSI L50-501 exam using our testing engine, it is easy to succeed for all certifications on the first attempt. You don't have to deal with all dumps or any free torrent / rapidshare stuff. We offer a free demo of each IT certification dump. You can check out the interface, question quality and usability of our practice exams before you decide to buy.





Exam Simulator : Pass4sure L50-501 Exam Simulator

View Complete list of Killexams.com Brain dumps




LSI SVM5 Implementation Engineer

Pass4sure L50-501 dumps | Killexams.com L50-501 real questions | http://www.radionaves.com/

Sony commercializes TransferJet compatible LSI | killexams.com real questions and Pass4sure dumps

Sony today announced the commercialization of the "CXD3271GW" LSI, for use in the close proximity wireless transfer technology TransferJet. This LSI realizes a 350Mbps transmission speed and the industry's highest receiving sensitivity, while also contributing to reduced power consumption, reduced parts count and smaller sizing of the sets, all enhancing its suitability for mobile devices such as smartphones. Sony recently presented this technological achievement related to its new LSI at the ISSCC (International Solid-State Circuits Conference: February 19-23, 2012, San Francisco, U.S.).

Since its commercialization of the world's first TransferJet LSI in 2009, Sony has promoted an intuitive approach to the high-speed transfer and sharing of high-resolution photos and video images, whereby such high speeds are realized by simply bringing different devices, such as digital cameras and PCs, into closer proximity with each other. Meanwhile, the recent proliferation of smartphones and tablet devices has led to a significant increase in the need for wireless communication LSIs with lower power consumption and smaller size, together with increased demands from the mobile computing environment for further enhancements in high-speed communications and reception performance.

Block diagram and system configuration reference for "CXD3271GW"

In order to accommodate these needs, Sony has developed the new TransferJet-compatible LSI "CXD3271GW", which achieves both a high transmission speed and the industry's highest receiving sensitivity, in addition to significantly reducing power consumption over previous models to better facilitate its implementation in mobile devices.

This LSI supports high-speed SDIO UHS-I as its host interface and introduces a new device driver design which provides a smooth and high-speed communications environment. These design enhancements have improved the new LSI's transmission speed to over 350Mbps, which is close to the theoretical maximum effective speed of 375Mbps based on the TransferJet standard. Furthermore, Sony's unique high-speed transmission technology and wideband RF-CMOS technology enable stable communications between devices, achieving the industry's highest receiving sensitivity of -82dBm (when receiving Rate65), which in fact exceeds the standard TransferJet value of -71dBm. Additionally, the RF balun, the RF switch for transmitter/receiver, and the LDO and OTP-ROM, which used to be externally embedded parts, are now all pre-embedded into the new LSI, and a dedicated external crystal-controlled oscillator will no longer be required thanks to support for a multi-reference clock. These factors contribute to a reduction in the number of parts implemented in the sets, a smaller footprint, and reductions in power consumption.
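To put those rates in perspective, a back-of-the-envelope calculation shows what a 350Mbps effective speed means for a typical transfer; the 1GB payload in this Python sketch is an arbitrary illustration, not a figure from Sony.

# Rough transfer times at TransferJet's effective rates.
# The 1 GB payload is an arbitrary example, not a figure from Sony.

def transfer_seconds(size_bytes, rate_mbps):
    return size_bytes * 8 / (rate_mbps * 1_000_000)

payload = 1_000_000_000  # e.g. a 1 GB folder of photos and video
for rate_mbps in (350, 375):  # measured effective rate vs. theoretical maximum
    print(f"{rate_mbps} Mbps -> {transfer_seconds(payload, rate_mbps):.1f} s")
# 350 Mbps -> 22.9 s
# 375 Mbps -> 21.3 s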

Additionally, Sony will provide a software development kit for Android to facilitate the LSI's implementation in Android devices, together with the software development kit for Linux, which has been provided with previous models, to developers.

Sony plans to continue to actively promote the adoption of its TransferJet LSIs by manufacturers from fast-growing markets such as smartphones.

Explore further: Sony Develops New Close Proximity Wireless Transfer Technology 'TransferJet'


Video Encoding: Go for the Specialist or the Jack-of-All-Trades? | killexams.com real questions and Pass4sure dumps

Video Encoding: Go for the Specialist or the Jack-of-All-Trades?

When it comes to video encoding, the altenative between hardware and software comes down to flexibility, latency, and cost.



One of the hardest choices encoding technicians have to make is deciding between hardware and software. Hardware-based encoders and transcoders have had a performance advantage over software since computers were invented. That's because dedicated, limited-purpose processors are designed to run a specific algorithm, while the general-purpose processor that runs encoding software is designed to handle several functions. It's the specialist versus the jack-of-all-trades.

In the past few years, processors and workflows have changed. The great disruptor has been time and the economics of Moore's Law, which famously says that the number of transistors incorporated in a chip will approximately double every 24 months. The logical outcome of Moore's Law is that CPUs get more powerful by a factor of two every few years, but more recently processing power seems to double every few months. Lately, Intel -- whose co-founder Gordon Moore coined Moore's Law -- has been adding specialty functions along with its math co-processors to equalize the differences between general-use processors and specialty processors.
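As a quick illustration of that compounding (a sketch only; the 100-million-transistor starting point is hypothetical), the 24-month doubling rule can be computed directly:

# Moore's Law as stated above: transistor counts double every 24 months.
# The starting count is a hypothetical illustration.

def transistors(initial, years, doubling_months=24):
    return initial * 2 ** (years * 12 / doubling_months)

start = 100_000_000  # hypothetical 100M-transistor chip
for years in (2, 4, 10):
    print(f"after {years:2d} years: {transistors(start, years):,.0f}")
# after  2 years: 200,000,000
# after  4 years: 400,000,000
# after 10 years: 3,200,000,000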

There are many layers and elements to both a general-purpose processor and a task-specific hardware processor. The general-purpose CPU is the most common -- there are literally billions of them in all manner of computing devices -- while the more purpose-oriented processors include digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and integrated circuits (ICs) that are available for various industrial appliances and widely used in cellphones. Many of the structures and elements are similar across all types, but there are considerable differences. If you are not familiar with the elements of the various types, here are the basic structures of both.

The General-Purpose CPU

The general-purpose CPU is laid out with flexible core elements such as the arithmetic logic unit (ALU), the control unit (CU), and accessory elements that offer extra features for performance. Basically these two cores talk to each other, bring in memory as needed, and send work to the other elements. Other elements include I/O processors, logic gates, integrated circuits, and -- on most newer processors and especially on the Intel Xeon processors -- a beefy math co-processor. The math co-processor assists the ALU and can handle the more extreme and complex mathematical computations. Essentially, it gives the processor the extra horsepower it might require.

The Dedicated Processor

Specific-purpose hardware encoders have been around longer than general-purpose processors, and the latter have been slower at mathematical equations and algorithm problems. History is vitally important to understanding the market and technology, not to mention getting a sense of what the future holds. The earliest example of encoding was in 1965, when the Intelsat 1 (Early Bird) became the first commercial deployment of a satellite to downlink video and audio. Since then, the world has been using specific processors to process video, and the technology has made leaps and bounds to offer higher density and quality.

Video ARM DSP

This is a common layout of a video ARM DSP. The ARM core runs the embedded operating system, working like a traffic cop to control input and output.

Dedicated processors -- such as digital signal processors (DSPs), graphics processing units (GPUs), and vector processors -- all have a very similar design structure. A basic and most common component is an I/O manager, which has a tiny onboard operating system along with memory. This is the traffic cop, controlling input and output. Then there are multiple specialized processing modules that execute the desired instructions very quickly and that DSPs and other dedicated processors support. Unlike a general-purpose processor, which has many possible common instructions that may not be most efficient for the task at hand, dedicated processors rely on accelerated, per-function instructions that are more job-specific.

Dedicated processors and encoders have a variety of applications and workflows. If you look at the major users of professional encoders, you will see that in many cases they rely on specialized encoders. The following well-known companies make DSP chips: LSI Corp.; Texas Instruments, Inc.; Analog Devices, Inc.; Sony; and Magnum Semiconductor. These DSPs are used in devices such as media gateways, telepresence devices, cellphones, and military and radar processing.

FPGAs are now very popular implementations of DSP functions because of their flexibility of setup and upgrade. Since they are field-upgradeable, their development costs for the user are significantly cheaper than DSPs and application-specific integrated circuits (ASICs; more on those in a moment) from the traditional DSP providers. You can see FPGAs from Altera Corp. and Xilinx, Inc. that have DSP functions built in. If you want a board or system that is easier to upgrade in the field, then this is probably the best way to go.

DSP Packets

While each manufacturer will tweak its design slightly, this is an overview of how communications and packets flow into the modules of DSPs and ASICs.

Another implementation of the dedicated processor is the ASIC. These factory-programmed DSPs are used everywhere cost is a crucial consideration, because they offer special functionality at optimal cost and performance. In general, they are more expensive to design but are cost-effective for any appliance or board manufacturer to implement in their systems. Many manufacturers of DSPs also manufacture ASICs; companies such as NXP Semiconductors, Broadcom Corp., and Freescale, Inc. also make custom ASIC DSPs.

If you ever open up a hardware encoding appliance -- from IP media gateways to broadcast encoders and decoders -- you will see several of the chips previously listed. You can find appliances for every industry. Today you will find a dedicated hardware-based encoding device from Harmonic, Inc.; Harris; Tandberg Data; or NTT Communications in every TV station or cable TV headend, and you'll find appliances or cards from ViewCast Corp. or Digital Rapids in many hybrid encoding farms, since they accelerate some of the functions in hardware. If you've watched a video on YouTube, then you have seen video encoded by RGB Networks with the RipCode equipment, which used massive numbers of Texas Instruments, Inc. DSPs.

Pros and Cons of Hardware Encoders

There are always some pros and cons when it comes to specific-design hardware encoders and dedicated appliances. The dedicated hardware approach with DSPs or chips is the perfect solution for media gateways and low-latency military applications. They are designed to run 24/7 with little or no human interaction. There are some processors that can encode an entire frame in the 1ms-10ms (millisecond) range and FPGAs that can encode in the 10ms-30ms range. These processes allow for the creation of appliances where the encoding latency is less than 100ms from encode to transmission to decode. Right now you can only get low latency using the right DSPs, ASICs, and FPGAs. The average lifespan of an appliance is 5-10 years depending on the configuration and manufacturer. Similar lifetimes are assumed for systems that rarely change, such as satellite uplink or cable system encoders.

The primary drawback of dedicated hardware-based encoding is that the codec on the processor is generally impossible to upgrade. Every DSP, ASIC, or FPGA is based on an algorithm that was finalized years ago. By the time the chip is ready to be sold, the codec is 6 months to a year old. Add more design cycles for appliance development and manufacturing, and the end result is a device based on a codec that's a year or more old. If an improvement to the codec comes out, the chip or device might never be able to integrate the new codec or technology, due to the manufacturer or the way the chip was designed. The dedicated DSP approach can save a lot of money, but at the expense of flexibility. Those chips do just what they were originally designed to do and nothing more.

Hardware Encoder

Video comes into a hardware encoder to a media gateway, which will make adjustments to the video stream to address network conditions and the end user's video decoding device. When done, it will send these modifications to the video decoder.

There's another issue with dedicated chip-based encoders: Who determines the quality of the codecs and streams? Is it a DSP engineer, a compressionist, or the producer and director? In a TV station, it's usually a combination of the chief engineer and executive producer who decide what station image goes over the air. If they use a hardware encoder, in many cases decisions about encoding parameters have been taken out of their hands. The broadcast engineer has to work within the parameters the chip manufacturer has allowed end users to change, meaning that while there is usually some control, there may not be as much as a producer or engineer would like. There are only so many operations and cycles you can put on a chip, so some functionality is uneconomical to implement.

The Pros and Cons of Software Encoders

General-purpose CPUs share some similarities and architecture with dedicated processors. They are designed to handle the everyday functions of your PC or server, and they are optimized to do mundane tasks such as word processing. This is why your machine's motherboard has a powerful graphics card; it's a specialty function that is best offloaded to a specifically designed processor. If you do any nonlinear video editing, you likely have a capture card with some specialty processors to give you real-time output or transitions.

In the encoding and streaming industry, we mostly use a capture board and one or more of many available software encoding packages. There are algorithms and formulas for every application, from live encoding to file-based transcoding to software-based decoding. These days most software-based encoders have hooks in the code to offload certain elements for acceleration or to allow multiple CPUs to run parallel functions to get the best performance and quality. More recently, Intel is offering some onboard GPUs that feature decoding with MPEG, analysis of a video stream's motion vectors, and other functions.

GPU

This overview shows how a GPU or video accelerator is laid out. Again, one device works as a traffic cop to send work to the appropriate processors, then takes the video streams back and reassembles them, allowing video to be encoded at a faster rate.

Software encoders have allowed users to be much more flexible in responding to the needs of specific customers or events, and they all use the same general-purpose processors and capture boards to support more video formats and standards. This has been an advantage for software encoders for a long time. They are easy to reconfigure and use.

The software encoding industry has recently seen battles between open source and closed source. There are some notable pioneering closed source companies that helped drive the evolution of software encoding and streaming: Microsoft; RealNetworks, Inc.; Sorenson Communications; and Adobe Systems, Inc. laid out the framework for modern streaming and web-based video. They have been around since the beginning, and in many cases they financed the codecs that became standards.

In addition to these pioneering companies, there has been a recent movement to open source. Some of the earlier versions, such as x264 and the open source library at the University of California-Berkeley, provide the foundation for most software encoders. Code is added every so often and allows others to program their custom apps. The better-known projects such as VideoLAN (VLC), FFmpeg, and WebM are creating new versions and are catching on in common use. Some are even getting funding from some of the larger public companies. The most notable example is WebM, which is being funded mostly by Google, which made the VP8 codec open source after it acquired On2 Technologies. All this competition and activity is creating better products for consumers. The big companies realize open source development and innovation is faster-moving than their own, allowing the market to grow more quickly than it otherwise might.
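To make that flexibility concrete, here is a minimal sketch of driving one of the open source encoders mentioned above (FFmpeg with the x264 library) from Python. The file names and encoder settings are illustrative assumptions, not recommendations from the article; a real workflow would tune them per target device.

# Minimal sketch: invoking FFmpeg's software H.264 encoder from Python.
# File names and settings are illustrative assumptions.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "input.mov",   # source file (assumed to exist)
    "-c:v", "libx264",   # open source software H.264 encoder
    "-preset", "medium", # speed/quality trade-off
    "-crf", "23",        # constant-quality target
    "-c:a", "aac",       # audio codec
    "output.mp4",
]
subprocess.run(cmd, check=True)

Retargeting the output -- a different preset, quality level, or codec -- is a one-line edit and a re-run, which is exactly the kind of reconfiguration a fixed-function chip cannot offer.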

But software-based encoding has some drawbacks. The most important parameters of encoding are quality, flexibility, price, latency, and support.

Software encoding's greatest advantages over pure hardware encoders are its flexibility and quality. Software has always been able to adapt and update incredibly fast. When new codec optimizations come out, encoding package updates follow very soon after.

Software encoding can enable the producer, engineer, or other user to get precisely the quality and image that they want, unlike the automated hardware solution, where the user has no say in what the overall image and outcome will be. Some larger encoding firms hire color consultants and compressionists, along with programmers and delivery experts, all of whom help the executive producers and directors determine what the overall outcome should look like. It's a broadcast approach to streaming.

Xeon Phi

Later this year or in early 2013, Intel will release its Xeon Phi series of massively parallel coprocessors, which will work with existing Xeon processors and workflows.

So if software encoding wins in flexibility and quality, what about speed or latency? While some highly tuned hardware encoders offer a latency down in the 30ms range, most software solutions run in the 300ms-500ms range, if not higher. Most people who use software encoding realize they are sacrificing some speed for quality. All that matters is whether or not they can get the resolution and framerate they want; if it's delayed some, the workflow can be designed to accommodate it. On the other hand, if you require the lowest latency and fastest delivery, you will have to give up some quality.
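One way to see what those numbers mean end to end is to add up a simple glass-to-glass budget. In this sketch the encoder figures come from the article, while the network and decode costs are assumed purely for illustration:

# Glass-to-glass latency budget using the article's encoder figures.
# Network and decode numbers are assumed for illustration only.
pipelines_ms = {
    "tuned hardware encoder": 30,     # low end cited in the article
    "typical software encoder": 400,  # middle of the 300ms-500ms range
}
network_ms, decode_ms = 50, 40        # assumed transport and playback costs
for name, encode_ms in pipelines_ms.items():
    total_ms = encode_ms + network_ms + decode_ms
    print(f"{name}: {total_ms} ms glass-to-glass")
# tuned hardware encoder: 120 ms
# typical software encoder: 490 ms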

Cost of support is of course an important issue. Will the proprietary company keep making the version you're using, or is there a chance it will be withdrawn from the market? How much will the updates and upgrades cost? It turns out that upgrades in the open source community are relatively frequent, whereas upgrades in proprietary software are less so.

While some people assume that open source products offer lower quality or reliability than proprietary software, that's not necessarily the case. FFmpeg, VLC, and WebM are all significantly upping their quality. On the other hand, proprietary software packages such as Sorenson and MainConcept have also stood the test of time and continue to find widespread use. Interestingly, MainConcept and Sorenson are two of the few companies whose solutions are used in both software and hardware encoding; both provide codecs for the PC environment as well as specifically designed chips.

Changes in Media Consumption, Changes in Media Encoding

General-purpose hardware-based decoders are now playing an important role in the overall media viewing world, especially as more and more viewers are quitting cable and going the IP route for all of their video consumption. Roku, Boxee, and other IP set-top boxes are DSP-based decoders. At the same time, more and more consumers are adopting Android or iOS devices and using them as personal media players, and each device brings with it its own set of ideal encoding profiles and parameters. You'll find you need to do custom scaling and probably want to offer the highest possible complexity. Then again, you need to spend more CPU cycles per frame, which will require more encoding time but create a better outcome.

Conclusion

There will always be a battle between hardware encoding and software encoding. Who will win in various market segments? Why a hardware encoder versus a software encoder? Even now we are starting to see more specialty functionality appended to general-purpose CPUs, due to the miniaturization and density of transistors and processors. For instance, Intel recently agreed to buy 190 patents and 170 patent applications from RealNetworks, and for years the company has been adding graphics processing and other accelerators or processing engines.

Dedicated hardware encoding wins in unique parallel processing situations where massive amounts of data need to be processed, as well as in low-latency communications such as real-time financial and some military applications. It also leads in situations where you want to just install the encoding tool and let it do its thing, such as with YouTube, which can rely on automated, predefined resolutions and bitrates for a massive number of viewers. But software encoders will be the tool of choice in most applications. It's faster and cheaper to encode with software than in hardware, and once you see how the market responds to your output, it's faster and cheaper to make modifications.

So, do you most value flexibility and lower costs? Then software is probably your best bet. Do you need low latency and stream density or automated auto-transcoding for the mobile market? Then a hardware solution is probably best for you.

This article appears in the October/November, 2012, issue of Streaming Media magazine under the title "The Specialist Vs. the Jack-of-All-Trades."


Related Articles

Will the major Hollywood studios warm up to cloud encoding? Encoding.com plans to disrupt the market.

Hardware? Software? A workflow system? What are the advantages of each? For those not sure where to start, look here first.

Premium sits between the existing Squeeze Desktop and Server products, and is meant for use by multiple video editors.


Western Digital Corp. (WDC) CEO Steve Milligan Hosts Investor Day Conference (Transcript) | killexams.com real questions and Pass4sure dumps

Western Digital Corp. (NYSE:WDC) Investor Day Call December 4, 2018 11:00 AM ET

Executives

Peter Andrew - Vice President, Investor Relations

Steve Milligan - Chief Executive Officer

Michael Cordano - President and Chief Operating Officer

Phil Bullinger - Senior Vice President and General Manager, Data Center Systems

Mark Grace - Senior Vice President, Devices

Jim Welsh - Senior Vice President and General Manager, Client Solutions

Dennis Brown - Senior Vice President, Worldwide Operations

Ganesh Guruswamy - Senior Vice President, Flash Product Group

Siva Sivaram - Executive Vice President, Silicon Technology and Manufacturing

Martin Fink - Chief Technology Officer and Executive Vice President

Mark Long - Chief Financial Officer and Chief Strategy Officer

Analysts

Mehdi Hosseini - Susquehanna

Amit Daryanani - RBC Capital Markets

Wamsi Mohan - Bank of America/Merrill Lynch

Karl Ackerman - Cowen

Steve Fox - Cross Research

Christian Schwab - Craig-Hallum

Peter Andrew

Good morning, everyone. My name is Peter Andrew, Vice President of Investor Relations here at Western Digital. I wanted to thank everyone for joining us today, here live or via the webcast.

Before I begin, I want to make sure everyone has seen our Safe Harbor statement. I will not read all of this, but the key takeaway here is that we will be making forward-looking statements that involve risks and uncertainties that could cause our actual performance to differ materially. We do not undertake any obligation to update or revise any forward-looking statement. For further information about the risk factors referenced here, please look at our Form 10-K and our Form 10-Q, available on our website. In addition, we will have a GAAP to non-GAAP reconciliation at the back of the slides, which will be available on our website at the conclusion of today's events.

So very quickly, before I turn the mic over to Steve, a couple of quick points here. First, we have the entire executive staff and many other members of the management team here in attendance. Please take advantage of this and reach out and talk to as many as you can. Secondly, in an effort to really address as many questions as possible, as well as to give more exposure to the WD management team, we're going to try something a little bit different today. We are going to have a fireside chat, as you can see, right after the break. And the questions that I'll be asking are the questions that you in the audience gave us as you registered for this event. So we're going to try something a little bit different here. Please give us feedback on how that goes. We will also have another Q&A to wrap up the day right before we go over to lunch.

Finally, your feedback is critical. Now, for those of you here in the room, as you registered, we gave you a quick feedback form. If you can please take a few minutes at the end of the day to fill out that form and return it to the registration desk, we will give you a little token of our appreciation for taking the time to fill it out.

So with that, let me turn the podium over to Western Digital’s Chief Executive Officer, Steve Milligan.

Steve Milligan

So, thank you, Peter. I have one opening comment: how dare Urban Meyer quit on the Western Digital Investor Day. For those that don't know me, I am a big Ohio State fan, and it's clear that my influence at the university was not as significant as I thought. So with that, again, thank you, Peter, and good morning, everyone. It's my pleasure to welcome you to Milpitas for our Investor Day. We are very excited to talk with you about our company and how we are executing on our long-term growth strategy. Also, toward the end of the day, we will give you a sense for what all of this means from a financial perspective. My talk today is going to focus on two principal areas: one, the dynamic role that data continues to play in all of our lives, and two, the ongoing evolution of Western Digital as we look to capitalize on those dynamics.

So let's talk about data. In a very short period of time, data has transformed from being merely a byproduct of digital life to becoming the very engine of the global economy. We live in a world where our relationships, our jobs, our health and our safety increasingly depend on data. We all know that data continues to grow at a dramatic rate, and new technologies and products are required to extract ever-increasing value from it. Storage is absolutely fundamental to creating any value from data. At Western Digital, we possess the capabilities necessary to build the data infrastructure that will enable people to capture, preserve, access and transform all of this data. Data is no longer a static, one-sided interaction. People expect their data to thrive, adapt and learn with or without human involvement. Today, all business is digital business, and it requires the most innovative, intuitive and predictive tools to unlock the most value. Those who don't adapt to the ever-accelerating pace of change will not only miss out, they will be left behind. More specifically, artificial intelligence, machine learning, autonomous vehicles, mobility and IoT are all examples of big data and fast data applications that are transforming industries and disrupting traditional business models.

At Western Digital, we have built the capabilities necessary to participate in a broad range of growth segments across the spectrum, from the core to the edge and the corresponding endpoints. We will talk more today about how we are strategically positioned in each of these areas. In short, we believe the future will be built by the most innovative companies providing the most advanced building blocks for the most intelligent data infrastructures. As a market leader and innovator in this space, we believe the future will be built on Western Digital.

Let's talk about the evolution of Western Digital. I originally joined the company back in 2002, when we were merely a 3.5-inch disk drive company with approximately $2 billion in annual revenue, operating at roughly breakeven from a profitability standpoint. Over the years, through both organic and inorganic means, we have built an incredible platform that will continue to drive long-term profitable growth and value creation. Our SanDisk acquisition in 2016 enabled us to bring together a world-class team, to scale from both a size and portfolio perspective and to become a more strategic and valuable partner to our customers. We are realizing the benefits of our strategy as we merge the technical capabilities of some of the brightest minds in storage components, data center systems and the open and composable infrastructures of the future. With our capabilities and ongoing commitment to innovating across the ecosystem, we are uniquely positioned to be a growth company that delivers strategic value to our customers and shareholders. The Western Digital platform is indeed unique, enabling us to perform well across multiple dimensions, including from a financial, operational and growth perspective.

What does all this mean for our investors? It means an ongoing evolution of the company from being merely a provider of storage components to being a seamless enabler of the data infrastructures of the future, so that our customers can drive the greatest value. It's about leveraging the fundamental technology building blocks we have developed to help our customers find answers to the world's biggest questions, and it's about developing and delivering products that meet our customers' needs with the right technology at the right time and at the right cost. So it is an exciting time to be at Western Digital. There is no industry in the world that is not being transformed by data, and we at Western Digital are committed to enabling that transformation.

Looking forward to the rest of the day, we will discuss: one, our efforts to leverage IP and technology to deliver solutions for a data-centric world; two, our advancements in component technology; and three, our progress in scaling the data center systems business from both a growth and profitability perspective. But before I finish, I would like to comment on two additional items. I would be remiss if I didn't comment on current market conditions. Mike will go into a bit more detail in his section. There is no question that the current market conditions are challenging, both in terms of the demand environment and from a flash supply/demand perspective. Make no mistake, we are keenly aware of the impact these challenges have on our company and our shareholders. I am fortunate to have a highly experienced management team that is well versed in dealing with challenging conditions.

From a market perspective, we will remain agile as those conditions fluctuate. Internally, we will intensely focus on those things that we can control: enhancing our technology and product capabilities, critically managing our costs and expenses, carefully adjusting our supply to current demand and continuing to allocate our capital in a balanced and intelligent fashion, all while, at the same time, continuing to deliver superior value to our customers. The near-term challenges will unfortunately persist for the next few quarters, but I can assure you that the management team at Western Digital is intensely focused on not only weathering these near-term challenges but emerging as a stronger and more capable company going forward. That then leads me to my last point, our stock price. To our shareholders, and as a significant shareholder myself, I want you to know that I am not at all satisfied with the near-term performance of our stock price. And as I just mentioned, the management team is intensely focused on improving both the near-term and long-term prospects of the company. That being said, I firmly believe that the current stock price is not reflective of the long-term value creation opportunities for the company. Today is an opportunity for us to give you a sense for those long-term value creation opportunities. So as you go through the day, I encourage all of you, as hard as it may be, to look through the current haze of the near-term environment and consider the long-term value creation opportunities for our company and our stakeholders.

So thank you. And with that, I'll turn it over to Mike, our President and Chief Operating Officer. Thank you.

Michael Cordano

Thank you, Steve. Okay. Before I get started, just to give you an idea of what I am going to cover today: I am going to talk a little bit about market context to frame our alignment with that context and our strategies against it. And then at the end, I'll talk in a little more detail about current market dynamics, as Steve suggested.

Okay, this explosive growth in data has been driven by a few things. When you look at the innovation that's occurred across the core technologies that underpin IT compute — processing, storage and networking — there are some simple quantitative facts. If you look at what's happening, if we scale forward from the last 250,000 years, we now have the capability in the ecosystems around the world to process that amount of data in a day. Another thing that is happening: we have a lot of data growth. You hear us talk a lot about data growth, but there's also a lot of data that exists out there that is historic in nature, right. One of the things that we are doing in this new environment, with new capabilities, is being able to reach back into what is commonly called dark data, light that up and use it in interesting ways to create value. A real simple example of this is healthcare, right. When you think about — to use a specific case study — our work with the University of California, San Francisco around cancer research and mammography, they are trying to accumulate 6 million images. At that level of scale, using machine learning and other advanced artificial intelligence techniques, they are able to diagnose more obscure forms of breast cancer.

Now in the current world, in the current construct where you have data siloed within an institution, it is very hard to make that happen. So what we are trying to do is find new and innovative ways to share data securely. That way, we can use all this historical data, bring it forward and combine it across multiple institutions to get the kind of volume of data needed to do this work, and the type of innovation that's possible is unprecedented.

The other trend that we are obviously seeing — we see it in the news every day — is this notion of data and data privacy. So, all of these things are increasingly relevant. They present both challenges and opportunities for us as we look at our products and as we look at partnering in the ecosystem in the future. These are things that will continue to evolve. We're in very early innings of this data-centric world, and there's tons of opportunity for us to innovate around this in a way that is differentiated for Western Digital. Underneath all of this data-centric world, you need infrastructure. Western Digital is an infrastructure provider. We have been innovating for 25-plus years. And in that innovation, we are able to provide new architectures and new capabilities that allow us and our partners to deal with the volume, velocity and variety of data and to create value in new and unique ways. So, that's a tremendous differentiation for our company. A specific example of this new emerging world is the notion of a data marketplace. This deals with what I talked about previously, the sharing of data. This is a slide that represents some joint work we did with Accenture, and it talks about the opportunity by the year 2030 for data marketplaces, what they look like and what potential they present to us.

But let's take another common problem. When we look at our agricultural industry and the movement of products through it to feed our country and our world, there is no real way to combine the data up and down that supply chain to understand when we have contamination in the supply chain or anything else. So, one of the opportunities for a data marketplace is to securely exchange that data in a way that gives us better visibility throughout that ecosystem. So there are health benefits for the country. There is, of course, value around being able to discretely identify problematic products and take them out in a more efficient way.

So what this study tells us is that about 80% of organizations by this time will be able to monetize their IoT data. That is the ability to share data within their institution in ways that create value directly. And obviously, the other thing that will occur is that about 70% of organizations will actually do it. You see the size of this market and the value created with it. About 80,000 organizations will be monetizing their data by that year. The data exchanged per day is about 12 exabytes, and the revenue from this exchange is estimated to be $14 billion. The total value of the data within that marketplace, as you can see on the slide, is astronomical at $3.6 trillion.

So the data landscape has been evolving. We have been talking about the cloud, the core cloud, and the evolution and trends that have happened there. We have talked about the growth of endpoints. But what has been sort of the new area for innovation is the edge. We are going to talk about that more today. The opportunity of the edge is really about the need to move intelligence and capability and performance closer to where decisions are going to be made. So, it's a key component enabling all these new endpoints that are evolving. I'll talk about that in more detail.

An interesting statistic, and why this is so attractive to our company as a point of focus: it's estimated that by 2025, 75% of enterprise workloads will be processed at the edge. So that's an area that needs capability and evolution, and it's an area of tremendous focus for us. There are a few reasons for this. There are some physical limitations. One is the speed-of-light problem, latency. If you need to make a decision in real time, you do not have the time to wait for that sensor data to move from the sensor all the way to the cloud and then back again with an inference and a decision. So you have to be able to do it much closer to where the decision needs to be made. There are obviously issues around compliance and privacy that would be a consideration, and then there is a cost dimension. It's not inexpensive to move data around the network, so you want to minimize the amount of movement.
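
[Illustrative aside: the speed-of-light point can be put in rough numbers. Below is a minimal Python sketch; the ~200,000 km/s figure for light in fiber is a standard approximation, and the distances are hypothetical, not from the presentation.]

    # Rough propagation-only latency comparison (ignores queuing and processing).
    KM_PER_S_IN_FIBER = 200_000.0  # approximate speed of light in optical fiber

    def round_trip_ms(distance_km: float) -> float:
        """Round-trip propagation delay in milliseconds."""
        return 2 * distance_km / KM_PER_S_IN_FIBER * 1000

    print(f"cloud region ~1,500 km away: {round_trip_ms(1500):.1f} ms")  # ~15 ms
    print(f"edge site       ~15 km away: {round_trip_ms(15):.2f} ms")    # ~0.15 ms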

Alright. I will talk in a little more detail about each of these areas. The core cloud: the cloud really began by centralizing compute and storage and being able to allocate them in more efficient ways. It was disaggregation of resources. That was the first step. The next step, through technology advancements like virtualization, is the ability to now provision compute and storage as a service, all of these things leading to more efficient and cost-effective deployment of IT resources.

Underpinning those capabilities is evolution in hardware. You see some examples of this now: the rapid advancement in processing capability over the years from 1990 to the present. You can see that number. Storage volume is increasing at very fast rates. And then with the advent of machine learning and other artificial intelligence, you see the booming market for specialized processors like GPUs. All of that, in combination with software evolution and architectural change, has really led us to what is the current version of cloud infrastructure.

There is a lot of debate around cloud. Is it on-prem or is it public? What has now become more clear is that the world is going to evolve into a hybrid cloud model. Depending on the workload, depending on the situation, for some of the same reasons I have talked about around the edge — because of latency, because of control and privacy, and because of cost — the model that's going to prevail is a hybrid cloud model, and ultimately, that's going to be the winning model. That was endorsed this week when Amazon talked about their Outposts offering, which is an acknowledgment of the need to move cloud-class capability, server and storage, closer to those endpoints.

Okay. Moving to the endpoints. Obviously, in history up to this point, most of the data created has been from mobile and PC. That's been the primary focus of this ecosystem. What we see in the future is the advent of new endpoint technologies. And you can see here some of the capacity and some of the volume of data that's created by each of those endpoints. Obviously, mobile and PC continue to be big drivers of content creation, but many of the other ones are more around machine-generated data. A few of them to talk about: automotive — an autonomous car will generate 4 terabytes of data per day, as an example; you think about creation of 8K video, surveillance, 1.1 terabytes. So this is about the velocity and volume of data being created by all of these new endpoints. Again, we talked about traditional endpoint creation. About 3 billion PCs and mobile phones will be out in the world generating data, but this new connected IoT ecosystem is an order of magnitude bigger. So when you think about the potential for data creation, that's the scale of the IoT infrastructure. That's the scale at which data will be created on a day-to-day basis, and when you look at the amount of data creation, by the year 2021, 850 zettabytes per year will be created.
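
[Illustrative aside: the per-endpoint daily volumes quoted above translate into sustained data rates with simple unit arithmetic. A small Python sketch, using only the terabyte-per-day figures from the talk:]

    # Convert quoted daily volumes into average sustained rates.
    SECONDS_PER_DAY = 86_400

    def sustained_mb_per_s(tb_per_day: float) -> float:
        """Average MB/s implied by a given TB/day (1 TB = 1,000,000 MB)."""
        return tb_per_day * 1_000_000 / SECONDS_PER_DAY

    for name, tb in [("autonomous car", 4.0), ("8K surveillance camera", 1.1)]:
        print(f"{name}: {tb} TB/day = {sustained_mb_per_s(tb):.0f} MB/s sustained")
    # autonomous car: ~46 MB/s; 8K surveillance: ~13 MB/s, around the clock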

Now this data point — you see a number of them published by different people, so let me take a minute and be clear about what this is. This is data generated by all endpoints, including endpoints that today are not connected. Those all represent opportunities in the future to be connected, to collect that data and then to make decisions in the future about how to make that data valuable. Okay, the emerging edge. Really, the edge is the enabler of this IoT infrastructure. Without it, without the ability to move that capability much closer to those endpoints, we would not be in a position to enable that broader industrial IoT infrastructure. Things like moving fast data, SSD-level performance, non-volatile memory, capabilities like virtualization, and the ability to move artificial intelligence and machine vision all the way down the ecosystem to those endpoints are critical capabilities. Moving that level of intelligence close to where decisions are made is really about enabling autonomy.

So we think about the obvious example, automotive and autonomous vehicles, but this applies elsewhere. Think about surveillance: the ability to make real-time decisions at the edge. All of those things require us to take the same cloud-based architectures, move them outside of hyperscale data centers and move them much closer to where that activity is, so we can make real-time decisions. All of this is really driven by the same physical factors. You have got latency, bandwidth and privacy concerns, all in the effort to enable this autonomy, and that needs to drive compute and storage closer to the edge. Again, reiterating this notion that 75% of workloads will actually be processed at the edge in the future.

So when we think about the growth of cloud, we see the traditional core cloud and endpoint markets growing nicely. We see accelerated growth as new applications like software-as-a-service and artificial intelligence are deployed. Those are major growth drivers within the cloud as well as the edge. And then ultimately, this endpoint evolution, moving more and more intelligence both into the edge and the endpoints themselves, will drive new growth markets beyond that as we create more capability at the edge.

Okay. Now I am going to talk about our markets. Before I do that: 2 years ago, we talked about a series of strategies we were going to deploy as we brought SanDisk together with Western Digital. And as we looked at our strategy by market, we wanted to accomplish a few things. One is we wanted to get portfolio breadth. We felt we were too narrow as we came into the early stages of the combination. Second, we wanted to get customer breadth. Again, we felt we were too narrow across many markets. And third, we wanted to get leading products. We want to pick our spots, but we wanted to have leadership products and not be a fast follower. Those were all strategic considerations we have been talking about on our earnings calls from time to time. We have been investing to make these things happen.

So what I am going to do now is go through each of our markets and talk about how we have done over the last 2 years and give you some specifics. Okay. First, let's talk in more detail about the core data center. Data center infrastructure represents about 37% of IT spend today. Obviously, we are seeing some trends like infrastructure-as-a-service driving the growth here, and this is driving massive growth within some of the hyperscale partners that we have.

The growth rate within this market continues to be substantial, around 13% on a revenue basis. Between now and 2023, we estimate that market to be $45 billion in size; exabyte growth, around 39%. And you can see the size of that market again by 2023. We are in an excellent position at Western Digital to capitalize on this market, because we really have expertise across all of the under — the storage underpinnings that service this, in both NAND and HDDs, which enable us to provide a breadth of capabilities and the broadest product portfolio in the industry.

Let me talk a little more specifically about where we are. So, first, capacity HDDs. That continues to be a pillar of our strategy. We lead in that marketplace. Let me just go through some specifics. We have announced our 15 terabyte. That product is nearing production. We also have talked about the fact that we are sampling a 16 terabyte product today. That product is based on energy-assist technology. You have heard us talk about that in the past as MAMR. And what we have learned as we have gone through the MAMR development is not only the more traditional effect that people understand, which is an alternating current spin torque oscillator effect. It's a tremendous benefit in terms of driving areal density growth, but we have discovered a number of additional effects.

Steve is going to talk about that in a bit more detail later. But we will be launching our 16-terabyte next calendar year with that technology included. And the thing I'll note is we feel like — and we are very confident of this — we are going to be the highest areal density provider in that timeframe at volume. And we do that by delivering the 16 terabyte drive with 8 platters, as opposed to competitors that are going to do it with 9. So we continue to be committed to this MAMR-based technology. We broadened the definition just to incorporate these other effects. So, we are clear about what it is we are doing. But this product is tracking to our expectations. We expect it will ramp next year. Beyond that, in 2020 and beyond, there will be a 20-plus terabyte offering. That will also be enabled by energy assist, and it will include multiple of these energy-assist effects. So that technology evolution continues to deliver on the promise. We continue to be confident about that. But just to be prudent, we continue to make our own investments in HAMR. It's an important energy-assisted technology. We need to be aware of what's going on there, and we are — so we are making good progress. But we continue to be confident that we made the right choice on productization around these energy-assisted, MAMR-based technologies, and we see a multigenerational benefit in deploying that technology.

Let me talk a little more about capacity enterprise and what it represents. When you think about the broader deployed storage industry, when we think about exabytes shipped as a measurement, today capacity enterprise represents about 35% of the industry's exabytes shipped, and that's all — that's flash and disk together. As we project out into 2023, that's going to grow. So the relevance of capacity enterprise, despite what some people say, continues to be very clear. Again, Steve will talk about this. It's driven by workload and use case-specific capability. It's driven by the fact that over this time, given the areal density scaling I'm talking about here, we will be able to maintain a 10x differential in cost per bit. So when you think about building infrastructure, cost at every tier is important, and you have got to deploy the right technology for the right job. Capacity enterprise continues to be that. When you look at growth in revenue, today it represents about 9% of storage industry revenue. It's going to grow to 17% by 2023. So it continues to be very strategic and very important.
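
[Illustrative aside: a minimal sketch of why a 10x cost-per-bit differential keeps the capacity tier on HDDs. The dollar-per-terabyte figures below are hypothetical placeholders, not company data; only the 10x ratio comes from the talk.]

    # Hypothetical tiering economics behind the 10x cost-per-bit point.
    HDD_COST_PER_TB = 20.0     # assumed capacity-HDD $/TB (placeholder)
    FLASH_COST_PER_TB = 200.0  # assumed flash $/TB, i.e. the 10x differential

    capacity_tb = 100_000      # a hypothetical 100 PB capacity tier
    print(f"HDD tier:   ${HDD_COST_PER_TB * capacity_tb:,.0f}")    # $2,000,000
    print(f"Flash tier: ${FLASH_COST_PER_TB * capacity_tb:,.0f}")  # $20,000,000
    # At 10x, putting cold data on flash costs ~$18M more per 100 PB in this
    # sketch, which is why deploying the right technology per tier matters.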

Let's talk about enterprise SSD. I would be remiss not to note that this has been an area of disappointment for us. Of all the great things that have happened, which I'm going to talk more about in some of the subsequent market segments, we've been disappointed with our performance in enterprise SSD. In particular, we've been disappointed with how we've done in mainstream NVMe.

Alright. That is the volume part of the market. The market is moving there. That's where the hyperscale providers are when they are buying finished product. And we have not participated in a significant way in calendar 2018. I am pleased to say, as we stand here today, that our in-house design — which is an internal controller development, internal architecture, firmware architecture — is now, this month, moving into qualification. So, we would expect to be ramping that product line in the first half of calendar '19, and then we will have subsequent generations coming behind that as we improve our relative position in this space. So, this is a lever for us to improve our performance in calendar '19, and we are on track to do that.

In addition to the NVMe, which has been the real focus of our internal development, we have made progress in other product categories. We are in the midst of qualifying our most recent SAS product line across our OEM accounts, and that is going well. So as we move into 2019, our enterprise SSD portfolio is strengthening. We believe we are going to be substantially more competitive in 2019 and beyond, and you will hear a little more from Ganesh later in terms of our progress there. So, lots of progress. It's certainly an area for us to improve our relative performance in 2019.

Alright. Client compute, so this is the PC space. It's a very big market, not growing but large. There are some interesting trends here that we are trying to capitalize on, and I think we are doing a nice job. One is what we are seeing in this very large but not growing market: there is some price mix up, meaning higher-end, more capable systems are becoming more desirable for PC gaming and other things. Obviously, it's continuing to trend to laptops and really embedding more sort of mobile-like instant-on features. So those are trends within the space — a $19 billion market growing at 5% on an exabyte level. We are extremely well positioned to work with our customers and our partners, and this has been an area where, as we have brought on our own PCIe product line, we have been able to gain market share on the flash side as we manage this transition from HDD to flash within the PC space.

So, a little bit more on our strategy. We have talked about how we deliver time-to-market, value-leading products. We really do that with a strategy of developing platforms. I have talked a little bit about that within our enterprise SSD portfolio, but the same strategy persists here in our client portfolio. We have got a core platform, code name Moonshot in our company, that is really designed to be able to scale from entry level all the way up to the performance part of the marketplace, and it's designed to scale and have lots of reuse, both in the hardware across NAND generations as well as in firmware. What that, over time, allows us to do is deliver leadership products from a performance standpoint, cost-optimized products where we need them, and do that with efficient development expense. So every time we go to a new product line, we are not having to rewrite 100% or the majority of our firmware, as an example.

So this is fundamental to our device strategy. One, have a very efficient development model so we can take core platforms and then scale them up and down within a market. And what you are going to hear from me in a few more slides is that we take that same platform and leverage it into adjacent markets. So this is about getting the most market access with the right features and functionality in the most efficient way. That's a fundamental strategy we've been deploying for the last 2 years across these device markets. The result of that — an example here is our WD Black. You've heard us talk about this on earnings calls. This product has allowed us to do a number of things. Number one, it's based on a 28-nanometer design. We are performance competitive with 16-nanometer designs. That talks about the focus and purposefulness of our internal controller design. So on a less capable logic node, we are actually able to deliver equivalent performance. So as we move to 16-nanometer with the same design, we are going to get that benefit on a relative basis — so tremendous work by our team doing this.

We also were able to expand our customer breadth, right. As we came together, our flash business was limited to around three large PC customers. We have now been able to expand this to 9. So going all the way back to our original strategy of saying, how do we get breadth in the product portfolio? We have now got it. We have got PCIe coverage, both in the entry level as well as in the performance segment. That was not there previously. And we've been able to, through that effort, get access to a much broader marketplace and participate with a much broader set of customers.

Moving to our direct-to-consumer business. As we have come together as a company, we were able to combine three amazing brands with unique value propositions. The WD brand remains in the market. That's really about preservation and smart storage. So you see us do some things with network-connected products there and some software and services that deliver incremental value. The SanDisk brand continues to be all about sort of freedom and mobility. Obviously, form factor and size and scale play into that. And then ultimately, G-Tech is our creative pro brand, and that's really workflow based. So, all three of these brands coexist in the marketplace. You see the size of this market, $10 billion. It's large. That $10 billion marketplace is really around the storage businesses that we engage in.

On the next slide, I am going to talk a little bit about the capability of our channel and the scale of our channel. There are opportunities to use that channel in other interesting ways in time. So we can, as things become more strategically appropriate for us, expand our relevant market and use that channel and those brands to give us market access. But ultimately, it's about differentiated products and trusted brands that win customers. And what's important within this, as you look at our channel scale, is our distribution reach: 550,000 stores reached globally. The number of products shipped annually is 330 million. That talks about the scale and reach of these brands. You combine that with the trusted brand value that we have, and that gives us an opportunity. As these categories in retail begin to consolidate, they consolidate around leading brands that have a lot of breadth. Another thing we have been able to achieve, within both brick-and-mortar and some e-tailers, is to take a larger market position, more shelf space, as we brought the companies together. So, tremendous progress there, tremendous relative performance within our direct-to-consumer businesses. Jim is going to talk a little bit more about that in the fireside chat.

Okay, alright. Mobile. So mobile: by 2023, we are talking about 1.7 billion smartphones, average capacity of greater than 200 gigabytes, and a confluence of technologies including things like 5G, which is going to drive higher performance, higher-resolution video and 8K. That requires us to deliver more performance around the storage layer — performance that I will talk about shortly, of greater than 500 megabytes per second write speeds. All of these are areas for us to innovate. The size of this market is very large, growing at 8%, $27 billion, and from an exabyte standpoint, growing about 35% to 330 exabytes, again, all by 2023. Our strategy here is similar to other parts of the portfolio: broaden our product breadth and deliver leadership products.

So, let's talk about an example of that. Our recently announced 3D TLC UFS 2.1 product is the first 3D NAND product shipping in this category. This product is capable of performing in a 5G world. So we are 5G ready today. That is uniquely differentiated. We are the only ones capable of doing that and shipping and sampling product with that capability. What have we achieved? We have grown our revenue in this category by 150% over the last 2 years, and we have expanded our customer portfolio within China — 4 of the 5 China OEMs are now buying, are customers of ours. And of the broader three largest handset providers, we have two of the three as customers as well. So our mobile participation continues to broaden. Again, it's being done in the same way I talked about previously. It's a common platform. It's about product leadership up and down, both entry level and performance, and then it's about breadth of customer participation, and we have been able to accomplish that within the mobile segment.

Okay, growing endpoints. This is beyond PC and mobile. So this is automotive. This is surveillance, smart cities, homes and smart factories. Over time, an increasing amount of data is going to be generated here and actually stored in both the endpoints and the edge. Principally, these are sensors of one form or another that are driving these marketplaces. The other thing to think about: these, in some instances, are not always connected. So they will be connected, and they will sometimes have to operate independently, which again drives the need to put storage and compute and capability all the way down — excuse me — down to the endpoint.

One factor, or one significant component here, again back to our strategy: this marketplace leverages our development for mobile and client compute. So these are derivative products. They have special requirements, but they're fundamentally based upon the technologies we develop for the scale markets of client compute and mobile. Within this category, we are the first to ship 3D NAND in an automotive-grade application. We've been certified in that regard. We have a well-positioned product. This has been an area, again, where we have seen nice growth over the last 2 years. The same principles apply: broaden our portfolio, leverage the technology into more customer engagements and grow revenue.

Coming back to the edge. The edge, as I have been talking about, continues to be about how we take these core technologies that have been developed in the cloud and move them closer to where the endpoints are. So the virtualization of the network, the virtualization of the storage layer — all things that are going to be moving down, streaming closer to the endpoint. Principally, this is the technology, the capability, the part of the ecosystem that really enables these machine-to-machine connections and allows them to perform and deliver the autonomy and real-time contextual decision-making that is required. We see the size of this marketplace on a go-forward basis as very large. And we have got the flexible platforms and technologies to engage it. So again, it's about repurposing technology that we are developing for other parts of the marketplace and making it appropriate and available to these emerging opportunities.

When we think about the size and scale of what edge computing represents, a recent study by McKinsey talks about the scale of this. Hardware alone by 2025 could be over $200 billion as we enable the edge. And you will see, for the first time — I have been talking largely about devices and our direct-to-consumer products, but on this slide you see our system products, which Phil will talk about just after me. In this marketplace, we really are situated not only to participate on a broad base of devices products, but we also will be able to add increasing value with our systems and platforms products as well.

So let me talk about some specific examples of customer engagements that have changed since we have come together as the new Western Digital, including SanDisk. We are now able to elevate our engagement with customers. We can talk to them on an end-to-end basis. Previously, our businesses were very siloed, focused on a particular design win. We did not see the entirety of their objectives in terms of how they are evolving the ecosystem, how our customers and partners are evolving the ecosystem. Now, with the breadth of products that we have and our commercial relevance, we have a different kind of engagement.

This is a ridesharing company — you might be able to guess who that is — that we have engaged across the end-to-end infrastructure they have. If you start on the left, they talk about location services and dispatch and payment. There is specialized infrastructure there. We deploy NVMe SSDs into that infrastructure to deliver the low-latency requirements of their database.

And then you look at analytics, batch processing, object storage. Today, that's our capacity enterprise. That's an opportunity in the future for our platforms and systems. Then you look within their machine learning, their big data analytics. They are running a different database on top of capacity enterprise and caching SSDs. So, this is a very different engagement for us. This is an ability to look across all of their infrastructure needs over time, obviously participate commercially, but also have a view into what the requirements are on the longer horizon and be able to develop products that are more purpose-built and differentiated, to our advantage in the future.

Okay. Here is another end-to-end example. This is a surveillance company. Again, when we look at them from an end-to-end standpoint, endpoints all the way to core, our products cover the broad range. So, we are putting flash technology into their endpoints. There is the first hop to the edge gateway, so the first aggregation. We put flash products there, and in some instances, depending on the scale, hard drives can play there as well. And then as you move back down to those regional nodes, there is room for broader-scale, more capable infrastructure to support the deep learning and the aggregation, storage and training requirements needed to push back to the edge to do inference. So that end-to-end capability, again, is similar to the ridesharing example. This is a large surveillance customer for us. We are able to see their end-to-end requirements, we are able to engage across them, and we are able to anticipate the needs of the future.

A few statistics on the scale of this: by 2021, an average smart camera will have 200 gigabytes in the endpoint itself. That's in recognition of two things. One is, if it's not connected, it needs local storage. But even when it is connected, there are going to be inferences that are pushed back from the training down to the ecosystem to allow it to make real-time decisions. So storage and compute are essential within the endpoint. The surveillance video recorder averages about 4 terabytes per unit, and these are widely distributed. So you get an idea and a feel for what this looks like.
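
[Illustrative aside: a rough sense of what 200 gigabytes buys inside a camera, assuming a hypothetical 4 Mbit/s surveillance stream; the bitrate is an assumption for illustration, not a figure from the talk.]

    # How long 200 GB of local storage lasts at an assumed stream bitrate.
    BITRATE_MBIT_S = 4.0                            # hypothetical surveillance stream
    GB_PER_HOUR = BITRATE_MBIT_S * 3600 / 8 / 1000  # Mbit/s -> GB per hour

    hours = 200 / GB_PER_HOUR
    print(f"{GB_PER_HOUR:.1f} GB/hour -> {hours:.0f} hours (~{hours / 24:.1f} days)")
    # ~1.8 GB/hour -> ~111 hours, roughly 4-5 days of continuous local recording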

Okay. Here is a third example: automotive. Again, this is a very large automotive partner of ours. And again, from end to end, you see us participating from the endpoint — which is the vehicle itself, where you are doing capture, inference and storage — all the way to the back-end core, where you are doing the big data analytics, and all the way up and down that chain. So again, we are participating with our devices products, but we are also participating with systems solutions in this marketplace. We are able to provide storage infrastructure across this ecosystem on an end-to-end basis, and the unique and compelling part about this that's different is not only the market access and the revenue TAM that we have access to, but the strategic engagement that comes with it.

Again, to give you a feel for the size and scale of this: the average consumer vehicle in this timeframe — by the way, there's a wide range — is about 500 gigabytes, and fleet vehicles are more like a terabyte. And then the edge core is obviously multiple exabytes. The scale of this is tremendous.

Okay, alright. This ecosystem is really all about an end-to-end connection between the core, the edge and the endpoint. The size and scale of this by 2023 is 3.2 zettabytes, and the revenue opportunity within this market is $146 billion, which includes around $110 billion for devices and $35 billion for systems and platforms. So our unmatched breadth and depth of technology across both big data and fast data gives us a uniquely differentiated position across these marketplaces, from devices to systems. Okay, alright.

And before I conclude, I want to spend a little time on the current market conditions. Since the quarter began and since our earnings announcement, we have continued to see challenging market conditions on a global basis. The overall macroeconomic volatility remains, and we continue to see challenges in many of our markets, including in Asia. The hyperscale capacity optimization cycle, which includes both technical optimizations as well as inventory run-off, continues, and it's translating for us into in-quarter TAM reductions, and we continue to see that as the quarter has progressed. We do see the hyperscale investment cycle reaccelerating in the second half of calendar 2019. And mobile phones — we continue to see slowing in that segment as well.

One positive trend in the quarter is that PCs are actually running marginally stronger than expected. And across all of our end markets, given the macroeconomic volatility, there is just a conservative inventory position being taken by customers and our channel partners. On a positive note, moving to the flash part of our market, we continue to feel very positive about the long-term growth rate of flash demand, which we believe will be in the 36% to 38% range on an annualized basis. But given the factors that we have talked about here, in terms of hyperscale investment and mobile phones in particular, our current view of calendar 2019 is that demand for flash will be below that long-term rate. But despite these short-term dynamics, as we have talked about — as I have talked about here — the long-term growth opportunity for Western Digital remains clear.

So with that, I will conclude, and I'd like to introduce Phil Bullinger. Phil? Phil joined us about 2 years ago now.

Phil Bullinger

2 years ago, yes.

Michael Cordano

And Phil joined us from Dell EMC, and we are going to take the opportunity now to unveil in a little more detail our systems business and the progress we have made there. Thanks, Phil.

Phil Bullinger

Thanks, Mike. Okay. Good morning. We have been looking forward to this day for some time. We have been working hard here at Western Digital building a systems business — a strong systems business, another component of the Western Digital portfolio — but maybe some context to begin with. As Mike said, I joined Western Digital approximately 2 years ago, not quite 2 years ago. Before that, for a number of years, I led one of EMC's strongest storage divisions — the engineering and operations, really all phases of the business — the scale-out NAS business at EMC. Prior to that, I was Senior Vice President and General Manager at Oracle, leading the storage business at Oracle as well as much of the engineering teams around the development of their storage business. And prior to that, I led the Engenio business at LSI for a long time, which was easily the largest OEM provider of system storage in the marketplace. So I get asked the question sometimes: you have been in storage a long time — I always say since the Reagan Administration — how does this time compare to other times in your career? And certainly, the storage industry, if you look back historically, is marked by technological leaps forward where we brought new products to market, new ways of doing things — the evolution from traditional client/server computing into the cloud era. There have been a lot of transitions in the industry, but fundamentally, this is the most exciting time in my career and in the storage industry because of the rate and pace at which things are changing. The velocity, the variety, the volume of data is staggering.

I am continually amazed at the size of the opportunities that we are competing in, just the enormity of the data opportunity in front of us. We've had longstanding technological standards in the storage industry that have held up for 30 years and are now being replaced — very aggressively being replaced — with whole new paradigms in terms of performance and latency and bandwidth and what's possible. And so it's an exciting time. It's really an exciting time to be in the storage business, and especially in the storage systems and platforms business. It certainly creates a landscape where there is tremendous change. There is a fluidity in the market today that encourages new entrants, encourages better solutions, encourages companies that can keep pace with the scale at which things are developing. As many of you know — and of course we have earned it over many years — Western Digital is long regarded as an innovator and leader in storage technology: at the core fundamental media layer, at the device layer and in client solutions. Many of you, however, are probably much less familiar with the scale and scope of the momentum and the capabilities that we are developing in our systems business in the company. So today, we would like to kind of pull back the curtain on that and give you some insight into the markets, the opportunities, our objectives for the business, our capabilities, the progress we are making with customers and how the business is growing.

So we'll get into this a little bit. Mike and Steve did a great job of framing what I would call the data universe, this incredible explosion of data that we have. And I think no one can really refute it, argue with it. I think we all accept the fact that data is growing at just unprecedented rates. Steve said, and it's very true, data has relatively quickly gone from kind of an artifact of our lives, something that we just deal with and that accumulates — we put it someplace and we store it. And for most companies, their data protection strategy is just keep everything forever, because they don't know what they're going to do with it eventually. That has gone from something that's just an artifact of doing business to the core engine of growth. Data today is what everything pivots around. And the reason we spend so much time as a company thinking about data, talking about data, understanding data is because it's the incredible engine of growth and it's creating tremendous opportunity. This explosion of data is creating tremendous opportunity for the global economy and certainly for Western Digital going forward.

I would like to narrow the lens a little bit. In this section, what I want to do is make it real in terms of the physicality of how we deal with data, from the core data center through regional and edge architectures out to the endpoints. I want to make it tangible enough that you can see some of the trends we are seeing, some of the things driving change in this marketplace. So there are a number of factors that are changing the way people think about data centers. Data centers used to be largely the bastion of very large core environments that big enterprises built. They built the brick-and-mortar themselves, they managed it themselves. It was filled with very traditional storage and very traditional compute architectures. It wasn't that long ago that most data existed in row-column spreadsheets on traditional compute and storage architectures in very traditional data centers. A lot of that is changing. It's significantly changing.

One of the biggest trends in the market, and it continues, is a tremendous movement of enterprise workloads from that traditional data center — brick-and-mortar that a company built and manages itself — to some kind of third-party data center. I don't necessarily mean the public cloud. The public cloud is one example of a third-party data center, but there are many, many more examples of that: from colo facilities, to facilities managed by companies that do a managed service offering, to private cloud providers, to managed service providers if you just want to pay for a service as you use or consume it. And then, of course, the public cloud providers of all sizes, right. There is just a tremendous shift of companies choosing not to go build another data center, but to leverage the infrastructure, the resources, the capabilities of companies who do this more for a living.

The second thing that we see is that most data center innovation today is not occurring in the public cloud, in the hyperscalers I would say, but at the edge, in these regional and edge data centers. It's the confluence of storage technology, compute technology, networking technology and certainly now, more and more, wireless technology. And as 5G emerges on the scene, this is going to become especially true. With this confluence of technologies, these universes really kind of meet each other. They meet up at the edge. And that's where we see a lot of the data center innovation happening. Today, I wouldn't say we are at the point where we have micro data centers underneath a cell tower, but that's coming. Very soon, we are going to see data centers start to show up at the bottom of cell towers and then everywhere in between. So most of the innovation in how we think about form factor, performance, capacity, latency, everything — most of the innovation is occurring in the regional and edge architectures and out to the endpoints.

Data latency is now driving data center architecture. Not that long ago, you could walk into almost any data center and they would say: there is my Oracle cluster, there is my SAP cluster, here is where I run my Microsoft Exchange applications. They don't talk like that anymore. If you walk into a data center today, they are pointing to data. They are saying: here is my data environment for this, here is my big data analytics environment, here is where I do my real-time analytics. They are not talking about application stacks. They are talking about data. So data is now driving architecture, placement, interconnect. It's largely driving the decisions about where and how and why a company chooses to invest in physical infrastructure in a given spot.

Artificial intelligence — we call it AI. There are a lot of buzzwords. As an industry, we like buzzwords, and now AI, IoT — trust me on this one, AI, artificial intelligence, has been for the last few years, and will absolutely continue to be, the seminal, the most transformative trend in our industry. This idea of bringing cognitive processes and data to every facet of business decision-making, of predictive modeling, of being more prescriptive about how we make decisions today and more insightful about how we make decisions going forward — this whole thing comes together around AI, and this is really what's driving much of the growth of data and certainly data center architecture. There is a direct correlation between the power and capabilities of endpoint devices and the amount of data being stored. If you remember, Mike presented a slide that talked about some of the data generated by endpoint devices. One of the examples he gave is a car. We are seeing more and more driver-assist and autonomous driving vehicles in development and on the road, and that will continue to progress. I think we are in that turbulent time where we are not quite at one extreme or the other, but we are in transition.

A lot of the data that we see points to a car generating maybe 2 terabytes, 4 terabytes of data a day. But the highly instrumented cars, the ones that are on the road today actually being used as R&D vehicles to capture data, to learn more about how to make decisions regarding following roads and signs and people and traffic and obstructions — these are generating 10 terabytes, 15 terabytes, 20 terabytes of data a day. The car companies today are swimming in data, but they know their ability to compete — it's even an existential question, their ability to even be in business in the future — depends on how well they can contend with that data, take advantage of it and make decisions around it. So that's just one example. As 5G emerges, as processor technology gets embedded deeper and deeper into endpoint devices, they generate more data, and they drive storage growth.

Finally, one of the last trends I will talk about on the slide is this idea of the cloud's effect on data center architecture. No doubt, the cloud, especially the hyperscalers, has had a tremendous influence on how people design, engineer, develop and deploy applications and infrastructure in the industry. This idea of scale, elasticity, workload mobility — all of these were really driven by the advent of large public clouds. But that technology is not just the purview of large public clouds anymore. In everything we do, we think about hybrid workflows, we think about application mobility, we think about how to scale applications and storage, how to make them even more elastic. And these technologies are deployed everywhere, even in what we would call traditional enterprise data centers. So it has really permeated the market and how these technologies are deployed.

The next point I want to make is a comment regarding hyperscale public cloud infrastructure versus what we would call maybe more traditional data center architecture. It's important because, as Western Digital thinks about investing in the business that I am leading, data center systems: if you ascribe to the model that the world is all going to the cloud, that everything in the future is going to be hyperscale public cloud, we probably wouldn't invest in this business, right? This business primarily is pointed at infrastructure, platforms and systems built for the non-hyperscale market. You will see through the course of the day — and certainly Mike talked about it — that we participate significantly as a company in the hyperscale market. The data center systems business is pointed at more of the traditional enterprise, the private cloud architectures, the edge data centers where, as I just described, most of the growth is occurring. And there are some fundamental drivers behind the continued investment in those architectures. So the industry is going to find its balance point, right. There's been a movement to the cloud.

And certainly, if you are a modern company that was born in the cloud era, you have no IT infrastructure, you are totally reliant on the cloud — that's a great architecture. But a lot of companies are making different decisions today, and in fact, we actually see what's been called in the industry the repatriation of data out of the public cloud back to more of these architectures and infrastructure. And there are really three drivers causing some of these transitional trends. The first one I would generally bucket as business drivers: things like business criticality, service responsiveness, the economics, data security. If something is really fundamentally critical to the existence of the company, if it's considered its most important asset from a data point of view, all of it, or at least a lot of it, is not going to be in the public cloud. It's going to be in infrastructure that we would call a little more traditional. And certainly, economics has a role. When you get to a certain size of dataset, it becomes very, very expensive to manage that in the public cloud.

The second one is a workload thing. Innovation is happening at a very rapid pace around all-flash architectures. The advent of a persistent storage layer based on transistors is changing the way people write applications. And increasingly, applications are written with the assumption that I can get to any byte of data in literally a couple of microseconds, anywhere I want to find it in a rack, and that is changing the way people think about building data centers. It's no longer sufficient to just build things at scale. If they are slow, the application is not going to deliver the value it was created for. So this workload notion of performance and latency is definitely driving data center architecture and decisions going forward.

The next thing is from an architecture point of view. As I mentioned on the first slide, data centers used to be constructed fundamentally from the application layer down. Now they are being constructed around the data layer. It's a data-centric view of the world. I mentioned flash first. This idea of data locality - Mike mentioned the speed-of-light problem. If you are capturing a tremendous amount of data at the edge of your enterprise, there is no time to move all of that to the cloud, run your analytics up there and then move the result back to the point where you can make decisions that affect the consumers of the data and the creators of the data. You have got to do it right there, very close. So this convergence of compute and data closer and closer together with wireless technologies - that's what's driving a lot of this architecture.

The last thing I will mention, and I will talk about it more when I talk about our portfolio, is this idea of composability. For quite a while now, a primary struggle in IT administration, in IT architecture, has been how on earth do you build flexible data center infrastructure out of fundamentally rigid building blocks? We think there is a better way to do it, and we're innovating significantly in this area with what's called OpenFlex. I'll talk about it as an architectural initiative of the company here in just a second.

So, what's the meta point here that I want to make? It's an exciting market for us. The reason we are investing in it - well, there are a number of reasons the company is investing in data center systems. It's a tremendous opportunity, and that opportunity is largely what compels us to innovate in this area, to bring products to this space, to be a credible, at-scale, relevant competitor in this space and to bring value to our customers. It also, of course, builds stronger, stickier, more resilient customer relationships, where the engagement of the company is not necessarily around the choice of a device purchase, but we are solving a problem for them at scale in the data center.

It's a big market. You can see how it breaks down - $35 billion total market. $18 billion is in traditional all-flash and hybrid storage, so part of our portfolio, our traditional purpose-built primary and secondary storage solutions, and I will talk about those. The middle $13 billion is in storage servers. This is the hyper-converged, converged infrastructure part of the market, obviously a fast-growing part of the market. We have platforms, building blocks, that enable this part of the market. And then $4 billion, but growing very quickly - what we think is going to be one of the most exciting areas of growth going forward - is this area of software composable infrastructure, the part of the market where physical resources can be composed through software very dynamically to address workloads.

Okay. Let me move to the next part of the presentation here. So our objective is to establish Western Digital as a top five strategic provider of data center solutions. That's the mission of the business. That gives you a sense of scale, a sense of relevance, a sense of the impact of the business on the company. It starts by focusing on emerging and high-growth markets and workloads. Our goal at Western Digital is not to be all things to all people in this market space. When we are building DCS, as we call it, data center systems, we are focused on those parts of the market that we see as pointing forward, as where the puck is moving to in terms of the high-growth opportunities in the data center space.

The second thing is that it's fundamentally important that we deliver unique value. Unique value - again, our purpose is not to just be another me-too EMC, to be another HP, to be another NetApp. The world has those, right. It's our job, it's our mission, to bring products to market that are very unique, that have unique value and exist significantly because we can create them and other people can't. And I will talk about what that means in terms of how we engineer these products, but a big tenet of our business is to bring unique value to the marketplace. The third thing, of course, from a business point of view: we are absolutely committed to a profitable business growing faster than the market. That is very, very important to us. This business needs to be a growth engine for the company, and that's a core tenet of our objectives.

Okay. The first thing I want to start out with is a slide that you probably expect me to present. And frankly, creating PowerPoint is easy. It's fine. It's graphics, right. Execution is the only sustainable advantage in high-tech. That's always been true. And it's our opportunity as a company to execute in an area that nobody else can. It's this idea of silicon-to-system innovation and engineering - the ability to start at the core fundamental technology layer, the media, whether it's manufacturing the aluminum platter that goes into the disk drive and the head assembly and everything else that goes into that, or fundamental innovation at the transistor layer - Steve is going to talk about our leadership in this space - and taking that all the way to the end customer experience in the data center.

We are uniquely positioned to do that. It starts at the component level, at the fundamental technology layer, in our NAND technology, in our controller technology. These layers of the technology stack are heavy, heavy in software. You may think of Western Digital as a hardware company. We build physical things, certainly, but much of the innovation that we focus on is in the software layers, and it starts at the fundamental foundation layer of persistent storage bits. It moves from there into the hard drive space, with heads and media and read channel, controller firmware, the mechanical design of these devices. We build on a technology stack now, moving more into the business that I am responsible for, in our platforms business, with devices, the electrical and mechanical design, the firmware, the diagnostics that we develop around these devices. The full expression of this vertical innovation, this vertical engineering, is at the system layer, where we are delivering purpose-built, complete storage solutions into at-scale data centers globally.

So what does this mean in practice? As I mentioned, execution is the only sustainable advantage. We have engineers today, for instance, in our primary data center product, our IntelliFlash product, sitting very close to Ganesh's team designing the next-generation enterprise SSD - not just talking about requirements, we would like it to do this, we would like you to do that, but co-engineering this thing. Talking about how capabilities at the device layer can be expressed in an all-flash NVMe primary storage product to reduce latency, to increase device durability, to increase performance over its lifecycle. Similarly, we have engineers from my team in our factories in Asia, where we are building hard disk drives at vast scale, looking at the test flow for those devices, and where we can intercept that flow, grab those devices and complete that tuning process in the systems that we actually ship.

This gives us the ability not only to ship a high-quality product, but to innovate in the software layers to take those devices and, dynamically, on the fly, in situ in the system, actually reformat the drive and, for instance, logically depopulate a head. If a head fails in the field, every other storage system in the market will expel that drive. It's a field service event costing hundreds of dollars. For us, we can logically drop that head out of the system, reformat the drive and bring it back into the pool as a brand new drive, with no physical access to the system required. So our ability to extend the durability, the lifetime, the performance over the life cycle of the media layer is significant.
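To make that recovery loop concrete, here is a minimal sketch of the sequence just described. Every name in it is hypothetical - this is not Western Digital's actual management API, just an illustration of the workflow.

```python
# Hypothetical illustration of the in-situ head-depopulation sequence
# described above; none of these objects or methods come from a real API.
def handle_head_failure(pool, drive, failed_head):
    """Recover from a head failure without a field-service visit."""
    pool.drain(drive)                    # migrate data off the affected drive
    drive.depopulate_head(failed_head)   # logically drop the failed head
    drive.reformat()                     # reformat at the reduced capacity
    pool.add(drive)                      # rejoin the pool as a "new" drive
```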

There are just tremendous opportunities in this area. I feel like we're just scratching the surface. We have tangible advantages we are bringing to market today, and I am extremely excited about what we can accomplish going forward, because we are the one company working from the transistor layer, the magnetic media layer, all the way to the end customer experience in the data center. It's what makes us unique, and our customers understand that.

I also want to talk about our capabilities. It's important to understand, as a data center systems business, what we, Western Digital, can bring to our customers from a breadth-of-capabilities point of view. It obviously starts with products. And as I said, we have chosen to invest in the growth parts of the market. Whether it's hybrid and all-flash storage - a very fast-growing part of the storage market - or systems and platforms, we want to deliver to customers both complete solutions and the building blocks that are increasingly being used at scale with software-defined stacks. We have also invested significantly in cloud-scale object storage. This is the fundamental architecture of storage that the cloud runs on. We have great technology at the complete system level that builds on that - another very, very fast-growing part of the storage market, and extremely well aligned with the capabilities of the company. We move from there into our core technology areas. I've talked about silicon-to-system engineering, this idea of vertical innovation. We also focus significantly on the software domain - system OS and management software. We do build physical enclosures and systems, yes, but most of our engineering research and development activity is in the software layers of the product. That's really what defines the personality, the capabilities, the value proposition of what we do.

Just as important as all of that is this idea of vertical integration. I didn't say innovation - vertical integration. Because we go all the way from the device layer to the end system layer, we are effectively eliminating a lot of overlapping value chains. When the company sells a device to third-party OEMs, there is a lot of duplication of validation, a lot of duplication of design and integration points. When we leverage our own devices in our own systems, we essentially shortcut a lot of that and collapse that overlapping value chain into a very, very direct, bright line between the manufacturer of a device and delivering that device to a customer in a data center, and that has great value. It allows us to optimize the supply chain significantly, because it's largely our supply chain. Customers also know that when they partner with us at the system layer, they are essentially going straight to the source. They are really not concerned if they need another petabyte of flash on Monday because their business just doubled in size or they picked up a new customer - like some of the very big online customers; I will show you an example of an online one that we have. They are doing 100 million transactions a day on top of our infrastructure. That company wakes up every day and worries about how they are going to get the next petabyte of flash, how they are going to get the next 10 petabytes of flash. They know that when they partner with us, they are going straight to the source. We are optimized to deliver capacity on time to our system customers.

The last area is flexible go-to-market. One of the things we have built here in San Jose, on our Great Oaks campus, is something we call our platform integration center - think of it as a hyper configure-to-order integration facility. It looks like a manufacturing operation for systems. What we do is offer a service to our customers where we can mix and match our technology with third-party technology - servers, storage - with third-party software, with their own look and feel to the product. We will assemble, whether it's at the system layer all the way up to a full rack, a product to their specifications. We'll test it. We will ship it into their logistics channel. We of course incorporate our own media, our HDDs and SSDs, into those products. But it's another level of flexible go-to-market capability that we offer our customers. And many of the companies you would point to as the darlings of the storage start-up world are actually our customers, using our service here in San Jose to deliver a very tailored solution into their channel. We are essentially not only their hardware partner, but their supply chain partner as well, in a holistic sense of the word.

And finally, of course, the world has changed from a consumption-option point of view. Customers want choice. Some people still want to buy storage primarily on a CapEx model, which is still the least expensive way to buy storage. But increasingly, people are saying, hey, my business model is that I get paid as I deliver a service, so I would like to pay for my storage as I deliver that service as well. I would like to pay you as I get paid. And we offer those options as well, from a flexible consumption point of view.

So here is the portfolio. I will just pause a little bit and spend some time on this. I don't have a separate slide for each area of the portfolio, so I will just spend a little time on this slide and give you a sense of the products that we have in our portfolio and bring to market. The first, on the left there, is our platforms business. Platforms for us are building blocks. They are enclosures that have either disk drives or SSDs or a hybrid combination of those, either just as raw capacity, or sometimes designs that include server motherboards in them as well, so they are a kind of server-storage, more of a node-based scale-out architecture. We deliver these to market in various form factors, configurations and capacity points. But one of the fast-growing parts of the storage market today is this area of, generally speaking, software-defined storage.

What that means is software storage stacks that are less coupled to the underlying hardware. A lot of these stacks are deployed in what I would call the xSP market - the managed service providers, the cloud service providers, the people building at-scale infrastructure. Think hundreds to thousands of racks of infrastructure in their data centers; they are deploying some of these software-defined stacks. And generally, these markets have been a jump ball at the device layer, right. They are just trying to mix and match devices, maybe with third-party or white-box enclosures, and build a data center out of them. And they are the ones who have to mix and match and integrate and make it all work. What we are providing increasingly with our platforms business is a more integrated option for that.

So we have a number of customers that buy, on a quarterly basis, a hundred thousand-plus disk drives from us, and they're building these kinds of data centers. Typically that burden, that responsibility for mixing and matching drives with enclosures, falls on them. What we provide to these customers is that integrated solution in our platforms business. Some great engineering has gone into these platforms. We have patented technology when it comes to vibration isolation and thermal management, because we understand the devices better than anybody - we designed them and built them. And so we can deliver this complete solution into markets that heretofore have predominantly been just discrete drive customers. What does it do for us? Well, it creates a stickier relationship - those devices are not jump balls. Number two, it increases revenue per spindle, revenue per device, for us, and it gives us greater insight into their use case, allowing us to make better decisions for our roadmap going forward. So our platforms business has been growing fast. It's a natural extension of all the capabilities of the company, and it is certainly an important part of our data center systems business going forward.

An extension of our platforms business is what we call composable infrastructure. You have seen, if you have been tracking the market, that HP and Dell and others have introduced what they call composable infrastructure, composable servers. These are predominantly modular, building-block approaches to a server architecture that simplify the purchasing experience; they allow customers to mix and match components of the server, bringing it all together into an integrated top-level assembly. We think that's great. We think that's an important step forward in the marketplace. It's a natural evolution from converged infrastructure, to hyper-converged infrastructure, to composable. But we think we need to - we want to - go further. We think that's largely an incomplete vision of where the storage and data infrastructure market is going. Our view of composable infrastructure is this notion of physically disaggregating compute, networking and storage into separate physical resource pools that can be, through software mechanisms only, configured into specific combinations of compute, networking and storage to deliver that capability to a particular workload, and then being able to reconfigure that over time.

Again, this is building, now for the first time, flexible data infrastructure out of flexible building blocks. If you managed a data center that had 1,000 racks of infrastructure, you would walk into that data center every morning worried whether your infrastructure was still optimally matched to the workloads and the customers and the environment that your company was in. With composable infrastructure, that worry largely goes away, because you can reconfigure it on the fly. You could completely change the mix of storage, whether it's high-performance flash or capacity disk, along with networking and compute resources, for different applications. And you could re-provision it tomorrow if you wanted to, again without physically touching the rack of infrastructure.

We really believe this architectural idea is how a lot of storage, a lot of data infrastructure, is going to be deployed going forward. And if you haven't looked at it out in the lobby, we have physical examples of products from our composable infrastructure, our OpenFlex product line, that are the first examples of this in the industry anywhere. So I would encourage you to take a look at them and get a sense of what these form factors look like. We created something that hasn't existed before. This is category creation - the notion of a fabric device. If you think about a disk drive or an SSD, it's media behind a controller talking to an interface. That interface heretofore has been an I/O bus, subservient to the CPU. Going forward, fabric devices are media behind a controller talking to an interface, but that interface is now Ethernet - ubiquitous Ethernet. And the intelligence now exists inside those fabric devices to self-virtualize. In other words, they can take their capacity and divide it up across a number of different servers and compute devices. So it gives tremendous flexibility in how infrastructure is put together. That's our OpenFlex composable infrastructure. It's a growth area of the business, and we are excited about products coming to market in the next quarter and growing from there.
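As a sketch of what "configured through software mechanisms only" could look like in practice: the endpoint, paths and field names below are hypothetical - this is not the actual OpenFlex API - but they mirror the workflow described above, carving a system out of shared pools with an API call and re-provisioning it later the same way.

```python
# Hypothetical composability API sketch; endpoint and schema are illustrative.
import requests

FABRIC_API = "https://composer.example.net/api"  # hypothetical endpoint

def compose_system(cpu_nodes: int, flash_tib: int, disk_tib: int) -> str:
    """Carve a virtual system out of disaggregated resource pools."""
    spec = {
        "compute": {"nodes": cpu_nodes},
        "storage": [
            {"class": "nvme-flash", "capacity_tib": flash_tib},
            {"class": "capacity-disk", "capacity_tib": disk_tib},
        ],
        "network": {"fabric": "ethernet"},
    }
    resp = requests.post(f"{FABRIC_API}/compositions", json=spec, timeout=30)
    resp.raise_for_status()
    return resp.json()["composition_id"]

# Re-provisioning tomorrow is just another API call; nobody touches the rack.
```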

Our cloud opportunity is really defined by our ActiveScale business. This is an object storage platform with an S3 interface that has had a decade of continuous innovation embedded in it. It came into the company through the 2015 acquisition of Amplidata. Amplidata was the company that really ushered in the second generation of object storage systems in the marketplace. They pioneered wide, efficient erasure coding architectures. This platform is built for scale, and it is now involved in many of the largest on-premises object storage opportunities in the market. It's a platform that is well matched to our value proposition and our capabilities in capacity enterprise disk and delivering products at scale.
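For readers unfamiliar with erasure coding, here is a toy sketch of the idea: split an object into fragments plus parity, so it survives fragment loss. Production systems like ActiveScale use wide Reed-Solomon-style codes that tolerate many simultaneous failures; this sketch uses a single XOR parity fragment, so it tolerates exactly one.

```python
# Toy erasure-coding demo: k data fragments plus one XOR parity fragment.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(obj: bytes, k: int) -> list:
    """Split obj into k equal fragments and append one XOR parity fragment."""
    size = -(-len(obj) // k)  # ceiling division
    frags = [obj[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    frags.append(reduce(xor, frags))  # parity = XOR of the k data fragments
    return frags

def recover(frags: list) -> list:
    """Rebuild a single missing fragment (marked None) from the survivors."""
    i = frags.index(None)
    frags[i] = reduce(xor, (f for f in frags if f is not None))
    return frags

frags = encode(b"hello object storage", k=4)
frags[2] = None                     # lose one fragment (a drive, or a site)
restored = recover(frags)
assert b"".join(restored[:4]).rstrip(b"\0") == b"hello object storage"
```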

Our primary data center product - this is the one with wide applicability, the product that fits almost every data center. 95% of the deployments of this product are in virtualized workloads, databases, performance block and file applications. It's hard to find a company that doesn't have a use case for this product. It's our IntelliFlash product - again, a product that has had almost a decade of continuous R&D invested in its capabilities. Last year we acquired the Tegile business and brought it into data center systems; IntelliFlash is our product identity for that family of products. So between these four lines of business, we really do cover a broad spectrum of the storage systems and platforms marketplace, very carefully selected for the growth parts of the market, the dynamic parts of the market, the markets where our capabilities at the device layer are extremely well suited to deliver those capabilities in their fullest expression at the system layer and the platform layer in this portfolio.

As I mentioned, a big part of what we do is software innovation. And I want to emphasize, as you look through the lens of DCS, just how significant our investment is in building core software competencies and capabilities. Whether it's extreme data durability - and again, this is where vertical innovation, vertical engineering, really pays off; it is about the bits, every bit matters, and in our systems and platforms, with the intimacy we have at the system and platform layer to the design of the devices, we can do this better than anybody else. Data protection and management - a lot of innovation is going into how we protect data and how we help customers manage data at scale. Collecting hundreds of petabytes of data is one thing, but making sense of it, making sense of the metadata, being able to index it and search it - these are the things we are investing in significantly. Simplicity at scale - helping our customers manage this deluge of data; a lot of software innovation there. And finally, it's very important that we don't exist as an island. One of the things we have invested a lot in is ecosystem development - independent software vendors, the relationships we have, the partnerships, the certification points of our products with the industry - very critical to the success of our business going forward.

I wanted to talk a little bit about workloads. It helps me convey where we are focused, what we concentrate on as a data center systems business. The first one, of course, is low-latency applications. These are applications defined by very low latency - in other words, real time: real-time analytics, database workloads, commercial high-performance computing. These are business applications where time equals money in its purest sense, and customers will pay for products that actually outperform and deliver a faster response than others. This is where innovations in NVMe, and the work I mentioned going on at the enterprise SSD level between our system teams and our device teams, will pay off - and are paying off - significantly. Virtual and container applications: the world today is virtualized in the enterprise data center environment. We work closely with VMware and the OpenStack community around integration with the predominant enterprise workloads and the virtualized infrastructure they run on.

Cloud-scale storage applications - I will talk about the markets we are in and some examples of wins. People bringing cloud-scale data sets into on-premises infrastructure is something we see as one of the most significant growth opportunities we have as a business. So we are building systems that address things like big data analytics and large-scale digital asset management. We have relationships with some of the big media companies in the industry, where their most important assets - the authoritative digital copies of their media assets, from things you grew up with as a kid all the way to the most recent TV shows - are stored and protected on our infrastructure.

And then finally, software-defined storage applications. I mentioned that the world is largely turning to this in at-scale service provider environments - whether it's scientific simulation, general build-and-test applications, test/dev, as well as just unstructured data in general - another workload focus of data center systems. And these are spread across a number of markets. So part of our mission in the company is to develop the resources and the expertise in vertical markets - understanding workloads, understanding use cases, understanding how customers are using our products from the device layer up. Some of the markets we are particularly strong in: the area of automotive - as I like to use the example of the automotive manufacturer swimming in data, this is an area of tremendous growth for us at scale. In life sciences - whether it's human genomic research, some of the largest institutes in the world are using our storage systems now to house their rapidly expanding libraries of human genomes. In finance - quantitative analysis, the classic definition of high-performance big data analytics; we are doing a lot of business there. And in retail - I mentioned an online retailer doing 100 million transactions a day. That's an example of the online retail environment running on scale-out infrastructure from Western Digital.

Okay. A window into the growth of the business - I am going to grab a little drink of water here. I wanted to give you some sense of the scope and scale of the business and the progress we have made. The company has been working on building data center systems for several years now, and over the course of the last 3 years, we have grown the revenue by 17x. So it's a fast growth rate on the business, and we expect that to continue. It's growing rapidly - very nice momentum in the business. We have reached more than 3,000 total customers now. Some people might look at DCS, as part of Western Digital, as kind of a startup business. It's not a startup business. As an early-stage business, I would say we are exiting early stage into at-scale operation now - 3,000 customers, 8,500 systems deployed in the market. So we have a nice market footprint that comes with that, and repeat sales momentum in the business, where we are no longer hunting every single PO that we win. With a larger installed base, we have more consistency and growth in our revenue stream.

Year-to-date in 2018 - these are calendar 2018 figures, so just up until this point - we have shipped more than 3 exabytes of capacity into the marketplace with our platforms and systems. Every quarter now we are adding about 150 new customers to the business, so our rate of customer acquisition has been increasing as we go forward. And just to give you a sense of the technical capability of the business, we have more than 400 R&D engineers now, most of them in the software disciplines.

Okay. What I wanted to do is give you a sort of glimpse into each one of these product lines in terms of some of the customer wins we have been able to achieve, because I think it's important - it's probably the most tangible way for you to get a sense of the momentum in the business and what we have been able to achieve so far. In the area of our IntelliFlash all-flash array - this is our primary data center product, a product built for performance, for low latency, for primary data center applications - the first example is a major vacation rental company. You can probably guess who that might be. The key values they were looking for in our product were very high performance and exceptional total cost of ownership characteristics. A very strong company, a very strong customer of our primary data center platform.

Another example is a Formula One racing team. Their IntelliFlash infrastructure provides the high-speed analysis capability they depend on, not only after the race, but during the race itself. As you know, Formula One is a highly digital sport now in terms of the analytics, the metrics, the real-time data coming from the cars, the micro-tuning they do to improve the capabilities and performance of the machine - our IntelliFlash systems are at the heart of one of the leading F1 racing teams. And finally, an example would be one of the major league sports franchises - a major league basketball team. They run all of their fan experience, their in-game fan experience capabilities, off of IntelliFlash. So high performance, very dependable, and it's built on all-flash. I gave three examples here, and they are pretty diverse, and that's a good illustration of the diversity of customers we have in our primary IntelliFlash business. It really does serve just about everybody in terms of a performance application, whether it's block or file, with a lot of data services to go with it. Most companies use IntelliFlash as the primary central storage asset of the company. It's what they would point to when they say, that's our most important data, that's what we depend on in real time to run the operations of our business. So we take that responsibility very seriously.

The second line of business I will talk about is the ActiveScale business. Again, this is our S3-protocol, cloud-scale object storage platform. Just some examples of wins in this business. The first one is in the bio-imaging, human genomics arena. They have 60 petabytes today, growing quickly. This is one of, if not the, largest genetic research institutes in Europe, and we are adding other customers in this space now as well. The application of storing, accessing and running analytics on top of human genomic data on object storage platforms is a great match, and we have a platform that can scale. This particular deployment is a three-geo scale-out implementation, where they have systems in three geographically separated sites that are all consistent with each other, so that researchers can access the same data irrespective of which site they are in. You can write data to any one of the locations and it's immediately available in all three, and it's resilient enough that they could lose an entire data center. You could completely remove one of the three sites and the data would still be accessible to everybody, and consistent.
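The access pattern being described is plain S3 against on-premises endpoints. A minimal sketch, assuming an S3-compatible interface (ActiveScale exposes S3, but the endpoint, bucket and key names here are hypothetical):

```python
# Writing at one site and reading from another of the three geo-spread sites.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.site-a.example.org",  # any of the sites
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write a genome fragment at site A...
s3.put_object(Bucket="genomes", Key="sample-0001/reads.bam", Body=b"...")

# ...and, because the store keeps all three sites consistent, a researcher
# pointing the same client at site B's endpoint reads the same object.
obj = s3.get_object(Bucket="genomes", Key="sample-0001/reads.bam")
data = obj["Body"].read()
```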

The second example is an emerging automotive manufacturer. This is a great example of the leading edge of driverless technology, driver-assist technology. They are at 30 petabytes today, again growing very, very quickly. It's one of the preeminent design wins in the industry this year. We competed with everybody to win this one. But the company turned to us because of our expertise in analytics - we were the ones who actually helped them architect their end-to-end big data analytics workflow on top of our products. And we were incredibly responsive. Their comment to us was, you guys work just like we work: if we need something, we are going to get it done today, we are going to get it done now, we are going to do it right, and we are going to bring the right expertise to the table. No endless series of meetings - we just get it done. And so we won this because of our technological capabilities in analytics and our responsiveness as a partner.

The final one is a hedge fund - the second largest hedge fund in the world. They have more than 140 petabytes on our storage. This is where they do all of their quantitative analysis, and our architecture is the fundamental storage layer for the data they consider most important. This is a rapidly growing relationship: every quarter they are purchasing more and more storage. Very successful company, very successful relationship. The last business I want to give you some customer vignettes on is our platforms business. Again, these are the building blocks of the storage market when it comes to software-defined technologies. The first one is an example I have given several times - a multi-petabyte e-commerce website. It's just staggering how fast these guys are growing. It's an international company - you would know the name - doing 100 million online transactions a day. They recently had a series of 4 days of selling activity where, I believe, they sold 100,000 washing machines in 1.5 days. The scale of customer activity on this site is tremendous. We actually put engineers in their data centers during that period of intensive activity, not only to learn the workloads, but to ensure that everything went well. And it was flawless - just completely flawless in terms of our ability to underpin that kind of activity.

Another customer of our platforms business is a company that does online gaming. This is another business that's just exploding - it's growing quickly; the gaming market is enormous. I am not a gamer, but for people who are, it's a significant investment in physical infrastructure in your home - a very high-performance PC - to take advantage of the performance of these games. Well, this company is changing that paradigm, delivering that kind of performance and latency over a WAN connection. So you don't have to buy a high-performance device - you could do it from your iPhone if you want. And we are the storage layer that underpins this company and their tremendous growth. The final example is a very big wireless provider, one of the largest wireless providers in the United States. They run a lot of their infrastructure on our storage platforms, particularly their backup and archive capabilities. So in this example, a little more pedestrian example of storage, we are working very closely with the storage application vendors to provide this capability.

Okay. Just to wrap up then. As we approach customers, they look to us for certain things. It's our job to create very innovative solutions - frankly, solutions that are disruptive to the current landscape and environment - and they look to the company for certain values and capabilities. They expect us to deliver better products, because we do have this capability of full-stack innovation. The breadth of our portfolio matches where we are trying to solve problems today - the most dynamic and fastest-growing areas of the on-premises storage landscape.

Vertical integration - customers know that working with us is, I wouldn't quite call it this, but the ultimate white-box play. We have a direct connection from the manufacturer of the device to the end customer experience in the data center, and that includes this notion of supply chain ownership. The assurance of partnering with us means we are largely in control of the entire value chain in delivering that capability for them. Our market reach - the fact that the world trusts us with its data - is one of the hallmarks of Western Digital, and of course there is our financial strength. We are going to be here a long time, and we are investing significantly in this business. So hopefully that gives you a bit of a window into data center systems and the progress we are making. It's an exciting time to be here at Western Digital in this business.

So with that, I think we are moving to a break, right? I don't know if it's the next slide. Yes. So we are going to take 15 minutes and then we are going to come back with the panel discussion. Thank you very much.

[Break]

Peter Andrew

Okay. If everyone could please take their seats, we are going to go ahead and get started. As I mentioned earlier, we are going to try something a little bit different today. As everyone here in the room was registering to attend this event, there was a little section where you could type in the key questions or key topics you would like to see discussed. What I did was consolidate all that information and bring up these individuals to whom we will put those questions - so these are real questions from those in the audience. What we are going to try to do is go across, ask each presenter here a question or two, and then we will turn it over to Siva to continue with the day.

So let's start with Mark. Mark, can you give us a little bit of your background before we start in the Q&A?

Mark Grace

Sure. My name is Mark Grace. I manage what we refer to as the Devices business, which is what you would probably traditionally think of as Western Digital. It's our commercial hard drive, SSD and card business - anything we ship onward for further integration, whether to our own emerging businesses or to our commercial customers - and it represents about 75% of the company's revenue. From a background standpoint, I am in my 35th year in the industry, the IT hardware industry in general - about 15 years at first with IBM in other parts of the IT hardware industry, and then about 20 years ago I started in the storage industry. So I have been in the storage industry for 20 years. My genealogy coming to this meeting today is through the HGST part of our history, and I have been in these kinds of market-facing roles for about 10 years now.

Question-and-Answer Session

A - Peter Andrew

Okay. So clearly you have been around the HDD industry for quite a long time, but with the SanDisk acquisition, you added a leading flash portfolio to your overall product line. Can you help explain to us why having both HDD and flash together has benefited your business?

Mark Grace

Yes. Mike touched on a lot of this. I was thinking, as he was talking with you, how to add to that. From this Devices business - from our big hard drive, SSD and devices portfolio - I think there are a couple of factors to consider, and then I would just add to what you heard earlier. One is that these businesses have been extremely complementary to each other. Even if you say three businesses - the HGST business, the Western Digital hard drive business and the SanDisk business - the focus of those companies was slightly different: they had strengths in different areas and customer relationships that were not completely overlapping. In other words, each company brought depth in customer relationships to the party as we integrated, and all three brought particular supply chain and/or customer-facing strengths. So I think the businesses, if you get below just the rotating magnetic storage and the underlying flash technology, brought many other dimensions of strength to the resulting company. There is another aspect you might think about, which is just how nicely we have come together over the last several years. We undertook to bring these companies together, and there were a lot of synergies we expected from the business. One of our fundamental design premises was to bring one face to the customers - to be technology agnostic as we dealt with them. And we have largely succeeded now in bringing all of those customer-facing functions together in a very efficient and very complementary manner, while learning best practices from each other in supply chain, in technical support, in service, or whatever. I think the last point is harder to measure than those.

The last point is about our ability to build customer intimacy with those core customers we value in our business. Take data center customers as an example: our relationships have become more than one-dimensional, more than a commodity hard drive supplier relationship, if you will. Even though we are not quite at the point where we talk about lots of revenue in our enterprise SSD space, we are deeply engaged with these customers on value propositions, special use cases and their own priorities. Take the PC space: we are well-rounded, we are the one-stop shop for storage solutions in the PC space. That has tremendously helped us manage our business priorities up and down as the mix of storage technology in the PC world has changed, and I think we are right on track to manage it properly. And then lastly, some of these other emerging markets that are tremendously exciting and often more interesting to talk about - such as the case studies Mike mentioned in surveillance or gaming or in-home entertainment or the connected home - provide us opportunities to stretch our wings and talk with these customers about a whole range of offerings up and down the ecosystem those markets are creating. So it's been tremendously exciting, and it's built that breadth and depth that Mike talked about.

Peter Andrew

Moving on to the next question. One of the things I get asked quite a bit by this community is: given the current dynamics in the flash market, where or what are you seeing from an elasticity perspective?

Mark Grace

Yes. If you are into this, it is tremendously exciting - we have got some PhDs upstairs who are studying elasticity all day long for us. What I would say about elasticity is, first of all, the straightforward answer is yes, we are seeing price elasticity in the marketplace today. We are seeing new demand being created. I would say two things. One is that price elasticity has been a feature of our industry for decades. We built this business, we built this industry, around continually bringing increased value each year at better economics. It creates a virtuous cycle of adoption, a virtuous cycle of economies. So we built this industry on essentially a long-running price elasticity equation. In the shorter term, price elasticity is a combination of those long-term new applications being developed and short-term incremental business that can be garnered inside of decision horizons. In the current year, we are seeing short-reaction elasticity in the PC space, in terms of both mixing up to higher capacity points as prices enable that, as well as some acceleration of the hard drive to flash substitution model that has been going on and that we have anticipated for some time. We have also seen some elasticity in the mobile phone space. But these things take time. Time is the governing factor in elasticity. It's not something you fertilize and the whole thing pops up right away. These are things that occur over time, through multiple platform cycles and multiple innovation cycles. So we are seeing it; time is part of the equation. And if we outpace the market's ability to react on its time cycle, that's where we end up in a little trouble. But we are seeing a reasonable amount of price elasticity of demand, particularly in those most sensitive areas.
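For reference, the elasticity being discussed is the standard price elasticity of demand. The worked numbers below are illustrative only, not figures from the talk.

```latex
\[
  \varepsilon \;=\; \frac{\%\,\Delta Q}{\%\,\Delta P}
  \;=\; \frac{\Delta Q / Q}{\Delta P / P}
\]
% Illustrative: if a 20% decline in $/GB lifts bits shipped by 30%, then
% \varepsilon = 0.30 / (-0.20) = -1.5, i.e. demand is elastic and revenue
% grows as prices fall.
```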

Peter Andrew

Okay, thank you. So let's move over to Jim. Jim, can you please give us a little bit of insight into your background?

Jim Welsh

Okay. I run the Client Solutions business, which is all of our business that goes directly to end users - products sold through our channels, whether that be e-tail, retail or the various customers. I have been with Western Digital for 13 years. When I started, the business was about $80 million a year; it has grown to its current state, a multibillion-dollar business. Prior to that, I was with Maxtor, where I launched the first meaningful external storage in the industry, followed very quickly by the well-known Maxtor OneTouch. I was with NEC before that, in the compute and display areas. And I started my career in retail in New York. So I have a real understanding - I think it's given me an understanding - of consumer behaviors and what drives their desire to engage with our products and solutions.

Peter Andrew

Okay. So let's follow on with Mark's question, where we talked about having both HDD and flash within your portfolio. How has the combination of those two technologies enabled your business to be successful in the consumer channels?

Jim Welsh

So it's the combination of the technologies. The business is evolving very quickly. The basis for all the brands mostly came from compute - the add-on storage on computers - and then from cameras, digital cameras. That formed the core base, which today, by the way, is pretty resilient. We have a movement in external HDD, and we have the add-on of external SSD. But IoT, with all these connected devices, mobile devices, is offering an even bigger opportunity. And with the technology, we are technology agnostic. In the past, when I just had the portfolio of hard drives, it kind of limited what we were trying to accomplish - we were trying to shoehorn a technology into a solution the customer wanted, but it was not a perfect match. Now we are totally agnostic to whatever technology is needed for that solution. You bind that with our knowledge of networking, wireless and our software, and we are in a really good position. And we are very relevant to our channel because of that; they want to engage more. When we look at the 2 years into this, compared to when we combined all three brands, each brand on its own kept on growing. We came from a very broad worldwide footprint - the addition of some channels for WD, the addition of some channels for SanDisk. So in consolidation, as we do more and more relevant things, our retailers, e-tailers and channel partners want to engage more with us.

Peter Andrew

Okay. So what about looking out into the future - what are some of the new emerging solutions your group is working on bringing to market?

Jim Welsh

So as I mentioned, with the movement toward more solutions, consumers want to do more with their content. Before, it was about preserving and storing it, but now they want to do more. They capture more; it's richer, it's more emotional; and they want to share it, they want to preserve it, they want to know where it is. So there is a big opportunity for us to engage more. Now, the key here is to really understand what they are doing. So we are focused intensely on what their major problems are and on solving them, and the big piece of it is really what we can do with software and with services. And we are really proud that people are engaging. We are getting a lot of four-star to five-star reviews on our software and services. And just think of the opportunity, because we have 330 million customers every year. We have already had 25,000 downloads of our applications, and we have, on a weekly basis, 2.5 million active users. So at that scale, we have a great platform to build on further. As Mike mentioned before, we have a great platform to engage more and do more for the consumers.

Peter Andrew

Okay, that's great. Thank you. So let's move over to Dennis next. Dennis, can you give us a little bit about your background?

Dennis Brown

Yes. So I will probably talk a little longer about my background than the other guys. I have been in the industry for 39 years - well, 39 years in the industry, 37 years in the hard drive business; I took a 2-year hiatus to do a startup company in an unrelated business. I started back in 1979 at a company called Dysan Corporation as a line worker at 18 years old, so I learned the technology from the ground up. Five years later, I was recruited to a startup company - I was the 14th employee - that was later acquired by Seagate Technology and became what they called Seagate Magnetics. I was a big part of the team that built the Fremont manufacturing facility for Seagate and ramped it. In 1990, I met Bill Watkins and he asked me to come over, and I joined Conner Peripherals to help them ramp the MINT technology, which they had purchased from Domain Technology. Then 6 years later, Seagate came and bought Conner and I was back at Seagate. During this timeframe - the '80s were all about building manufacturing in the U.S., and then the '90s were about transitioning that manufacturing over to low-cost countries. So I spent a couple of years in Singapore, built the facility in Singapore and ramped it. I came back to the U.S. and was recruited by Hitachi in 2005 to help with the integration of IBM and Hitachi. That went pretty well - it needed a little bit of energy; you had two conservative companies, Hitachi and IBM, so I came in with a little more aggressive approach, and it worked really well. We saw some great results. However, Hitachi overall wasn't really turning the business around, if you will, and I had an opportunity to go do a startup in an unrelated business - the lighting business. And then I got a call in 2009 from HGST, with the new management team, Steve Milligan and the team, and I joined that team as Vice President of Media Operations. In 2011, I took on HDD operations and head operations. And then recently, in the last couple of years, I have picked up HDD product development. So my current role is Global Operations along with HDD R&D.

Peter Andrew

Yes. And that's one of the key things I wanted to follow on with. The question was: you do have a bit of a unique role - you are responsible for Global Operations for both flash and HDD, plus you are Head of HDD R&D. So can you talk about how having visibility into both sides, the flash and the HDD exposure, has helped you?

Dennis Brown

Well, I think it takes a lot of the mystery out of things. When you are just an HDD team, you wonder what flash is doing, what the capability is there, what you should be doing from a product perspective. But after integrating with SanDisk, we now have visibility into the capability and the products, where flash fits more appropriately, and the speed at which we can enter those markets, so that we can actually make the appropriate investments or disinvestments and time them better, with confidence, rather than it being a guessing game.

Peter Andrew

Okay. So next, Ganesh - can you please give us a little bit of insight into your background?

Guruswamy Ganesh

Sure. Good morning. My name is Guruswamy Ganesh. I have been in the semiconductor industry for 30 years. I started my career designing microprocessors for Advanced Micro Devices, and then went on to do system-on-chip designs for Motorola, in multiple market segments from networking to wireless to automotive. And then I went off to work at SanDisk. So I moved to SanDisk, and that was my first experience in storage. One of the things I have seen in my career is that I could predict some trends. Around 2012, 2013, I could see that data was becoming the fulcrum of innovation. So when I joined SanDisk, it was a great opportunity for me to see how data was transforming the industry in multiple market segments - I had seen that happening on the networking, wireless and automotive side when I was working at Motorola and Freescale. I have been with SanDisk for 5 years now, and after WD acquired SanDisk, I got the opportunity to head all of flash product development. I have been responsible for all flash product development, from consumer to client, to mobile, to enterprise. It gives me a good view, from a semiconductor perspective, of how the storage industry is evolving and how we can benefit our end customers with our vertically integrated technology of controller and firmware - pretty much like a system on a chip for the controller space as well.

Peter Andrew

Okay. So, kind of following on the other questions - you obviously came from the flash side of the house. How has being part of the broader business, including HDD, helped you in your day-to-day operations?

Guruswamy Ganesh

Yes. It was a pretty interesting journey. When we were part of SanDisk, we thought, okay, flash is going to rule the world. And in my first leadership meeting, I figured out that it's going to take a long time, because of the amount of storage that exists in the world - for flash to go and enter that space is going to take a long time, in terms of both the capacity and the volume of data being generated. So we started to clearly see the trends: flash is going to play a unique role in fast data, in accessing data much faster, whereas for capacity, the hard drive is going to carry more of the whole data, the archival data - a much larger volume of data. And as we saw both of these, we could also understand what our enterprise customers wanted. When we were part of SanDisk, we didn't have deep reach into the enterprise hyperscale customers. With WD acquiring us - WD had a history of huge relationships with enterprise customers - we could actually see the pain points, what you need to architect from a solution perspective, and build those kinds of solutions for our enterprise customers.

Peter Andrew

Okay. So let's really address this question head-on: can you comment on how we are executing on the internal NVMe enterprise SSD roadmap, given the tight timelines that Mike just laid out a few minutes ago?

Ganesh Guruswamy

Yes. So we did have a hiccup. As I said, the enterprise team was formed by multiple acquisitions on both sides of the company. SanDisk had acquired multiple enterprise companies trying to make enterprise products, and so had WD. So we had multiple cultures, we had their IPs, but as we started integrating the IPs together, we could see that things were not integrating well, and we had some hiccups there. That's largely behind us now, and we will overcome it. As Mike said, starting this month we will be in qualification with some of our key customers, our enterprise customers, with PCIe NVMe products. And we believe we have a unique product offering that will excite our customers.

Peter Andrew

Okay. And here is a question that Mehdi brought up to me the other day – Mike, you and Mark might want to tag-team this one – but can you comment on how we are really going to differentiate our NVMe enterprise SSD versus the competition?

Mark Grace

Yes. So I will share this with Ganesh; he can do a much better job on the bits and bytes. But I would say the differentiation is going to come in two parts. One is, I mean, we are very excited about the actual product. It is going to leverage our NAND technology all the way through to our experience with the system interface very, very well; more to come on that. But the second piece to think about in terms of differentiation is the differentiation we have as a company among these customers. I would say, generally speaking and almost uniformly, the customers are anxiously awaiting us to enter this market. We have ongoing discussions about when and how fast we will be able to take a part of that market. We continue to be engaged with all of these same enterprise customers with what I would argue is the best forward-looking and current capacity hard drive portfolio on the planet. And these customers have, over a long period of time, come to understand the company we are, the kind of partner we are. We seek to be the most transparent, responsive, highest quality provider in the space, as part of what we try to build as our reputation in the market. These customers know us for that, and they know that kind of support, relationship and expectation on the product will carry over to the flash side of the business too. So I would say there is differentiation at the company level, in the way we approach the market and the way we differentiate ourselves. And we are very excited about the product itself.

Ganesh Guruswamy

Just adding on to Mark: I think our architecture is extremely modular and scalable. We believe we can serve everything from low capacities all the way to high-capacity enterprise markets. And we do have a PCIe product in the market, but it is a previous generation; what we are talking about here is the Gen 3 version of it. We believe we have low latency and very good QoS, and we think power is going to play a key role – a lot of the enterprise customers are very sensitive about power. I think we have a very interesting high-performance, low-power solution that we can take to customers.

Mark Grace

I will just back Ganesh up on that. At the time of the integration, we undertook two areas where we said we were going to develop the fundamental IP ourselves, relative to firmware and the ASIC technology: one was a platform primarily centered on the client compute space, and one was a platform for the enterprise space. Last year, Ganesh's team delivered the client platform. It has won awards right and left. We are competing extraordinarily well with that platform, and it is extensible into the generations in front of us. And that credibility carries forward into the enterprise space. I am very confident we will take the part of that market that we have lined up, in terms of data center customers and traditional OEMs, quite assertively in calendar '19.

Peter Andrew

Okay. So let me go ahead and wrap up the fireside chat. Again, we just wanted to get some relevant, timely questions addressed to the broader management team. There will be another Q&A session after Mark's presentation, where we will have mic runners so everyone in the room can ask their questions as well. But with that, let me transition to the next speaker. We have Siva Sivaram, EVP of Silicon Technology and Manufacturing.

Siva Sivaram

Good morning. So this is your mid-morning electrical engineering graduate seminar. By the time I am ready to administer the test at the end of the session, you will be able to tell me who or what a STED is. You will tell me how to measure an electron volt, and we will be able to easily convert from an angstrom to a nanometer and back. Alright, that is the objective today. Now, in all seriousness, we do want to talk technology – both the hard drive and the silicon technologies.

First and foremost, I want to make sure we establish for you our leadership in both HDD and flash technologies. You will see why we claim this mantle of technology leadership, why it is important to us, and what we are doing continuously to maintain it. The second is what Ganesh was talking about: the vertical innovation platform we have that can take these technologies and deliver them as solutions to customers. It is very important that technology leadership is not just for its own sake; it is for delivering value to the market and to the customers. Then there is what Dennis Brown was talking about: the manufacturing muscle, with the agility that comes with it. The platform and solutions Ganesh is creating – how do we get them into the customers' hands at the right time, with the flexibility needed in a market like this? And then we will talk about this unique, built-in structural advantage that we have in flash through the joint venture with our partner, Toshiba. Toshiba Memory and we have a long-standing partnership in flash manufacturing that provides us with some unique advantages.

The net of all of this is this: repeatedly, we will show you technology leadership, but you will also see how we deliver that technology leadership into the customers' hands at the right time. A technology by itself is not enough. It has to arrive at a time when the customer and the market can derive the maximum value from it. So you will see that as a thread running all the way through. And of course, you are talking to a company whose constituent companies have traditionally been pioneers in the field. The original hard drive was invented in one of our companies. The first flash SSD was introduced in this company. The first multilevel cell, the first helium drive, and so on and on. Across the entire storage space, you can go back and see that every seminal advancement was introduced in one of the companies that constitute Western Digital today. And of course, Steve talked about these 14,000 active patents; we continue to be an IP leader with a very highly valued intellectual property portfolio, and we continue to grow it further as we go along.

There was a lot of talk today, both from Mike and from Phil, about what is going on in the data center, what is going on with data, this data engineering that is going on. It is not just happening outside; it is happening within this company. We are eating our own dog food first. We, in our business, whether in development or in manufacturing, are a prime example of how data is transforming things: testers and workstations in multiple places, manufacturing lines, feeding streams of data into one of Phil's beautiful babies, the object stores where we keep our own information. A Hadoop stack sits on top of it, analyzing data with queries and real-time flows of information. All of this is happening within the company, and it is transforming the way we do our development. I will show you one example. On the left side, you see these three rings. Those are memory holes that we create. When I was a process engineer, a couple of years ago, I used to have a ruler and a slide rule to sit there and measure every last one of them and go, okay, how do I optimize the structure with these multiple layers? These days, hundreds of thousands of pictures, dark-field and bright-field, get fed into a modified convolutional neural network. It is learning by itself and coming back to tell me what is wrong with my own memory holes: orders of magnitude improvement in the pace with which we develop technology. We are a prime example of everything you heard today about how data is transforming our own development.

So with this, let me go to what we call our technology leadership foundation. Let me start with HDD. People have talked about this for a long time: perpendicular magnetic recording, the workhorse of the industry for a long time now, is running out of steam. As we speak, the magnetic coercivity is increasing, and the magnetic field strength needed to flip the bit – the energy needed to flip that bit – is getting harder to deliver. So we need some new technology, and that is where energy-assisted technology has come into play.

And what we have done, as Mike talked about, is build a platform. The platform on top that we are talking about is now starting to be sampled to customers. MAMR is a very complicated technology that is coming to you in multiple phases. Today, we are starting to sample the 16-terabyte, 8-disk drive – 2 terabytes a disk – unbelievable technology that is already going into the marketplace. Of course, this will continue to go further as we develop the technology, this 15% a year growth in areal density. For that, we are going to keep inventing, discovering and adding features, and that is how MAMR and energy-assisted technology will be delivered to you. I mean, 20 terabytes by 2020 – Mike already announced it. It sounded like a nice sound bite to come back and say, 20 terabytes in 2020. We probably will do better than that. Dennis is shaking his head saying yes. So this is already happening in hard drive technology.

Let me switch to the solid state side of it. The charge trap cell – the name sounds like something a hunter would use. A charge trap cell is not an easy thing to conceive of. This is something we started using in 2013. You see the vertically integrated 3D NAND, and inside it is a very conventional cell. What you are seeing is a standard blocking-oxide plus tunnel-oxide, charge-trap storage cell that has been built many times over. Our innovation was to integrate it in a vertical structure, but the beauty of it is that when this becomes cylindrically confined, the cylindrical confinement focuses the electric fields and gives us a tremendous new advantage in building multilevel cells.

And I want you to stay with me here for a second. This cell was introduced into volume manufacturing for the first time in 2013. By the end of 2019, it is projected to be the highest volume device ever shipped by mankind, period – not transistors, not resistors, not diodes, not capacitors, not DRAM cells. This charge trap cell, which is barely 5 years old, will be the highest volume device ever shipped by mankind. And Western Digital is the leader in this device. I will tell you why this device is so important. Traditionally, you can look at an SLC cell: a single level, it stores the charge, you flip the bit, you get a 0 and a 1 – a very simple device. But this device, because it is cylindrically confined, has enough margin that it can easily do two bits per cell. So 00, 01, 10, 11 – those are the four states that are needed, including the erased state, to get your two bits per cell, and the states are still separated widely enough that you can read the cell easily. For 3 bits per cell, you need 8 states. Of course, for 4 bits per cell, you need 16 states. Don't expect 5 bits per cell for quite some time yet; that needs 32 states, and it is still not that good. But because you have that good a control on the threshold voltage – electron volts, as we were saying – that threshold voltage distribution means that the top cell, the single-bit SLC, I can trade for very, very high endurance: 0.5 million, 1 million cycles of endurance. Or I can trade it off for very short access times: sub-1-microsecond read access times. Now we are talking about something interesting. So what you are seeing is that this cell is very, very versatile. And this versatility allows us to productize it in unique, interesting ways.
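
The state counts Siva is walking through are just powers of two: each extra bit per cell doubles the number of threshold-voltage levels the cell must resolve. A quick illustrative sketch of that arithmetic (the tier names are standard industry shorthand, not anything from the slide itself):

```python
# Number of threshold-voltage states a cell must distinguish grows as 2^bits.
for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC"), (5, "PLC")]:
    states = 2 ** bits  # includes the erased state
    print(f"{name}: {bits} bit(s)/cell -> {states} voltage states to distinguish")
# SLC needs only 2 states, which is why its wide margins can be traded for
# endurance or speed; PLC needs 32, which is why it is "still not that good".
```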

So, what have we done with this cell? In the last 5 years, since 2013: from our internal BiCS1, a 24-layer we never showed to anybody outside; to a 48-layer, which we shipped minimally in volume; to 64-layer, the darling of the industry everywhere, which is the highest volume product shipping right now; to being the world's first 96-layer. We introduced that product into the marketplace about a year ago. Since then we have been ramping it in volume, and as I will show you in a minute, BiCS4 today is the lowest cost bit in the world, bar none – the lowest cost technology in the world, because of the ramp and the volume. And of course, we are not stopping. We will have the next generation coming in. Just last night I saw the International Solid-State Circuits Conference proceedings coming up in February, and there, under the Western Digital name, is a 128-layer circuits-under-the-array paper; very nice to see. So we are taking the technologies further and further. And 128 layers with circuits under the array is an interesting idea. People talk about, oh, I have circuits under the array already. This is where I want to talk about technology leadership versus system solution products.

And I want you to spend a second watching this perspective: a BiCS4 being built up. You have about 1.7 trillion of those memory holes in a single wafer. And look at it as it goes vertical – how deep that structure is. Just as a technology, it is mind-boggling that something like this could be created this fast and in high volume. As I said, this is today the highest volume device being produced, and I do want to play the showman and actually show you how it looks. A lot of questions get asked about this: here is this 128-layer stack of a device, and I wanted you to see how deep an aspect ratio that is and why there is that thing glowing green in the middle. This is an innovation. You could go etch the whole thing in one go, as one big deep hole. But if you do it all in one go, then, say, a one-degree change in how vertical the etch is will blow up your die size. So it is actually a feature to come back and say, hey, I can build all the way up to here, move the cranes up on top and build the next storey of the skyscraper. Two-storey versus one-storey – people ask which is better. The two-storey approach lets you tightly control your die size.
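
The one-degree remark can be sanity-checked with basic trigonometry. In this sketch the etch depth and hole pitch are hypothetical round numbers chosen purely for illustration; the talk gives no actual dimensions:

```python
import math

depth_um = 6.0    # hypothetical memory-hole etch depth for a tall 3D NAND stack
pitch_nm = 150.0  # hypothetical spacing between adjacent memory holes

# A 1-degree deviation from vertical walks the hole sideways by depth * tan(1 deg).
drift_nm = depth_um * 1000 * math.tan(math.radians(1.0))
print(f"lateral drift: {drift_nm:.0f} nm ({drift_nm / pitch_nm:.1f}x the hole pitch)")
# ~105 nm of drift, on the order of the hole pitch itself -- which is why etching
# in two shorter tiers (two "storeys") keeps the die size under control.
```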

So, let me go back to where it is all headed. The innovation in 3D NAND is continuing dramatically, generation after generation. Whether we put circuits under the array, whether we put circuits next to the array, whether we do two tiers or one tier, it is all geared to only one thing: when do we deliver the right solution to the customer at the right time to produce the lowest cost bit, the highest performance bit, the highest endurance bit. That is the only thing that matters. In the end, what matters is what value I can deliver to the customer, and when. All of these technologies – for instance, circuits under the array: we produced about $400 million of revenue with circuits under the array in 2002. This technology has been around for a long time. So in 3D NAND we will introduce it at the right time, in the right place.

And I want you to come back to this idea of low latency flash. In this graph, at the top in yellow, you see what is happening with DRAM. DRAM cost per gigabyte is not going down anymore. It is not scaling. The device is not scaling anymore, so the cost reduction has stopped. Hard drive, on the other hand, as Dennis was saying earlier, routinely gives 15% areal density improvement. It is going steadily. NAND is matching it step for step, but it is still 10x more expensive. As fast as NAND is coming down, hard drive keeps pace, whereas DRAM does not. That is where we introduce low latency flash. Because that charge trap cell is so versatile, we can come back and say, aha, I can use it for other applications beyond mainstream TLC. So this device is able to bridge the gap between the two. Given that this continues to scale, low latency flash now starts to take on additional uses that were traditionally reserved for DRAM. And you can see why that is the case: low latency flash can give you access times, as I was saying, sometimes under a microsecond, but still 10x cheaper than DRAM. On the other end, an X4 device is now starting to approach hard drives. So this charge trap device gives you a breadth of usefulness that we are uniquely positioned to take advantage of. It is our expertise in productizing everything from low-latency flash all the way to very high-density flash in X4. That is our unique strength.
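
To visualize the gap being bridged here, a small illustrative table in Python. Only the relationships from the talk are used (sub-microsecond flash access, NAND roughly 10x cheaper than DRAM, hard drives roughly 10x cheaper again); the absolute figures are hypothetical placeholders, not Western Digital specifications:

```python
# (tier, rough access time, relative cost per bit with DRAM normalized to 1.0)
tiers = [
    ("DRAM",              "~100 ns", 1.00),  # cost/GB no longer scaling down
    ("low-latency flash", "<1 us",   0.10),  # ~10x cheaper than DRAM (per the talk)
    ("mainstream NAND",   "~100 us", 0.08),  # TLC/QLC; placeholder cost
    ("hard drive",        "~ms",     0.01),  # ~10x cheaper than NAND (per the talk)
]
for name, latency, cost in tiers:
    print(f"{name:<18} access {latency:<8} relative cost {cost:.2f}")
```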

Let me switch from component technology to how we transform these component technologies into a product. When HDDs and SSDs came together, and when WD and SanDisk came together, as both Mark and Ganesh talked about, there was a lot of knowledge sharing. How do we work with the customer? What does the customer need? How do we qualify with a customer? That knowledge intrinsically became part of our development methodology. In manufacturing, a lot of good habits from the very high volume manufacturing we have done in both hard drives and flash are coming together. Of course, supply chain: the scale and complexity of the supply chain, when we put them together, leverages the volume and the reach of the supplier base. Together, we create what we call our vertical innovation pyramid – whether it is the controller, the assembly, the firmware, the test or the system integration, these come together as vertical innovation. This is what we call a platform.

The synergies of hard drive and flash are in many ways reflected in how fast we can develop and deliver these to customers. So if I take this vertical innovation system, this pyramid, it becomes the core of our platforms. When Ganesh develops a platform for a retail, client, enterprise or mobile customer, he uses all of those to create the platform. The underlying memory, whether it is BiCS3, BiCS4, BiCS5 or BiCS6, feeds into it. Whether it is X3 or X4 or low-latency flash, it feeds into it, and he creates the flagship products based on these – flagship products such as the WD Black that we were talking about earlier, or the new NVMe enterprise drive that is going out. But what is more interesting is that right out of them follows a whole series of additional derivatives – an automotive product or a surveillance product falls out naturally with a small amount of application work on top of these flagship products. This is the integrated platform: the way we take a technology and deliver it as a solution.

So, I was talking about the flash nodes – when do we introduce them, and why? The 15-nanometer 1Z technology was our workhorse, the world's best 2D NAND technology, period. The 2D NAND technology we have had at 15 nanometers from 2015 on has been the benchmark against which you measure: the lowest cost die, the highest performance technology. When we introduced the 48-layer, the world was talking about 24-layer and 48-layer and claiming leadership in 3D NAND. We always maintained that it was not the right time to introduce it: 3D NAND did not make economic sense compared to 2D NAND. We waited. We waited until 64-layer became lower cost than 2D NAND, and then we introduced it – and we introduced it across the board. Multiple platforms, everything we were just talking about – retail, client, enterprise, mobile, everywhere – we went with 64-layer. Today, BiCS4, 96-layer, as I was telling you, is in high volume. It is the cheapest bit in the world.

So now we turn our attention to productizing BiCS4 across the board. But you can see the dilemma here. If I were still on 64-layer and I had an X4 – which is what people are doing; people are starting to productize X4, QLC, on 64-layer – I can make an X3 on 96-layer that is a much better product: cheaper, higher performance. So a premature X4 is not yet the right solution for the customers at the right time. We do expect X4 to be a very important technology, but more in the 96-layer to 1xx-layer technologies than in 64-layer. We introduced X4 at a conference a couple of years ago, but it is not the right time yet. So, just as with circuits under the array, or X4, or when we introduce the two-tier etch: we do it at the right time for the customers.

Which brings me to this graph. I was a bit nervous showing this graph, even though the facts are the facts: on cost per bit, the rest of the industry is a good 20% higher than us. And I put this graph up because I do have to come back and say, this is what we run the business for. You can go back and compare – on a 64-layer, on a 256-gigabit, on an X3 – it doesn't matter. The reason we are the lowest cost bit in the industry comes down to one thing: when the right technologies are available at the right cost, we ramp like crazy. We convert to 64-layer, and from 64-layer to 96-layer, when the technology is ready and the cost crosses over, very aggressively. And we have some structural advantages as to why we can do it so fast, which I will talk about later. But today – last year, this year, and it will continue next year – we are the lowest cost bit in the industry on an average across all the bits that we ship. Against the average for the entire industry, on an average for WD, we are the lowest cost bit.

So, this brings us to the productization I was talking about. 1Z: broad, full-spectrum technology productization. 48-layer: not so much. It was not the right technology; it did not have the right cost structure. 64-layer: on acquisition, Steve Milligan said to the entire company, there are only three priorities in the company – 3D NAND 64-layer, 3D NAND 64-layer, 3D NAND 64-layer – which is exactly what we did: 64-layer, all the way from 16 gigabytes to 32 terabytes, a full 50-plus product lines. And what are we working on now? Taking the platform Ganesh was talking about – that one product line that can take you from 16 gigabytes, 128 gigabits, to 64 terabytes across all platforms: retail, mobile, client, enterprise. This is what is happening right now; the 64-layer to 96-layer transformation is in full bore. At 96-layer today, we have the lowest cost and we lead the industry in the conversions. We have, bar none, the lowest cost leading edge into which we are converting.

Alright. I am going to stop here and talk a little bit about the industry – the supply-demand dynamics and the capital that we talk about a lot, which has a direct bearing on where costs are headed. Capital intensity in 3D NAND: this is not just raw CapEx. Raw CapEx to go from 2D NAND to 3D NAND and through successive generations is much higher. But when we switch from one generation to another, the number of bits per wafer is also growing. So what is shown here is the capital needed to produce an additional 1% of bits. This is roughly the range of industry estimates. It is high, and if you squint it is starting to level off a little, but it is still substantial. Successive generations of 3D NAND cost more to produce that extra bit. Capital intensity is growing – growing substantially as we went from 2D NAND through the first generations of 3D NAND, from 64-layer to 96-layer to the 1xx layers. Others have talked about this; I have just normalized it to the additional bit growth.

But more interestingly, this additional CapEx going in is not producing additional wafers. Look at the industry floor space: clean room space is growing at a robust 12% to 14% year-over-year. However, the number of wafers produced is not changing – a 1% CAGR. There are no new wafers. This is all about conversion. So the prior page I showed is really conversion, not greenfield, because there is not a lot of greenfield 3D NAND coming up. When greenfield does come up, there are corresponding wafers going offline. So net-net, there are not a lot of additional new wafers coming. We see it even for ourselves: in the last 2 years, we have introduced Fab 2 and we are halfway into filling up Fab 6, and in a couple of years we will be bringing up the new fab in Iwate. With all of that, the floor space goes up, but not the number of wafers. The combination of the two – no additional wafers, plus the capital intensity – is what is driving industry CapEx. Industry CapEx, when the big 3D NAND conversions happened in 2016 and '17, went out of hand. People were doing all the conversions but not quite getting the bits out of them yet. We got ahead of the curve – we produced 64-layer well ahead of everybody else, and everybody else caught up. Then you see the intensity of it, and so you start to see some stabilization of CapEx. But because the capital intensity is so high, the bit growth rate continues to come down. We did have a peak here reaching close to 45%. Over time, because of the capital intensity and overall industry CapEx being roughly fixed as a ratio of revenue, you start to see a stabilization of bit growth. Given the 38%, 39% demand that Mike was talking about, with demand at 38% to 39% and supply at 35% to 37%, that is what is going to lead to a bit more normalization of the demand/supply balance over the next few years.
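
The normalization argument is easy to model with the CAGRs quoted. A minimal sketch, using the midpoints of the ranges above and an assumed starting oversupply (the starting point is hypothetical; only the growth rates come from the talk):

```python
demand_growth = 0.385    # midpoint of the 38-39% demand CAGR cited
supply_growth = 0.36     # midpoint of the 35-37% supply CAGR cited
supply_vs_demand = 1.05  # assumed: supply starts 5% above demand

for year in range(1, 6):
    supply_vs_demand *= (1 + supply_growth) / (1 + demand_growth)
    print(f"year {year}: supply/demand = {supply_vs_demand:.3f}")
# With demand compounding a couple of points faster, a mid-single-digit
# oversupply works itself off within a few years -- the "normalization" above.
```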

Alright. From here, given this set of dynamics, what are we doing with our own factories? As you know, Dennis talked about our global footprint of factories. We have traditionally been a powerhouse manufacturing company. We have always been a strong manufacturer around the world, whether in hard drives or in flash. I will leave out the Western hemisphere – our Fremont wafer facilities and the Brazil contract manufacturing. Setting those aside, all of our concentration is here in Southeast Asia and East Asia. The wafer fab, of course, is in Yokkaichi in Japan, where we produce 9,000 wafers or so per day, plus what our partner adds on top of that. Our SSD plants are in Penang, and our retail and mobile plants that produce flash products are in Shanghai. Then add the rest of our manufacturing footprint – at least the big factories: the media plants in Penang, the heads in the Philippines, and the drive and head facilities in Thailand. Put them all together, and this is a very strong, powerful and agile manufacturing footprint around the world.

And I want to spend some time on our Yokkaichi manufacturing facility. This is one giant integrated manufacturing complex of fabs. A little over $30 billion has been invested in this factory between our partner and us. And unlike most fabs, this is one integrated fab: a wafer starting here in Fab 4 might end up in Fab 6 halfway through processing; a piece of equipment sitting idle in Fab 2 will immediately get a wafer out of Fab 3. I mean, if you go there, it is like shopping carts at Costco on a sale day – they are zipping around. And I want you to get a feel for the scale of these plants. This is the number of wafers coming out in one year: if I stacked the wafers one on top of another, the stack would be taller than Mount Fuji. Okay, I just want to make sure you get the image of the scale – and each of those wafers has over 1.7 trillion of those memory holes. So you get an idea of how big these operations are. And this fab complex is where, because of the proximity between the fabs and the development center, we end up transferring and converting very, very rapidly when the next-generation node is available. This unique partnership that we have with Toshiba Memory Corporation spans 19 years: 14 generations of technology have been developed, all the way from a gigabit chip to now a 1.3-terabit chip – a 1,300x growth in that same 100 square millimeters of silicon over the last 19 years, generation after generation. And you can see, like clockwork, every 18 months or so, 50% of all bits convert from the prior technology to the next-generation technology. This gives us some amazing scale advantages that others don't have, because about 40% of the world's flash comes from this site, between us and TMC together. We each leverage the other's scale. Combined efficiency of equipment, labor, learning – everything improves, because it is a much larger facility than either of us would have individually.
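
Siva's Mount Fuji image survives a back-of-the-envelope check. A minimal sketch, assuming the standard ~775 micron thickness of a 300 mm wafer (the talk itself gives only the 9,000 wafers-per-day figure and the mountain):

```python
wafers_per_day = 9_000      # from the talk (our share of the site alone)
wafer_thickness_mm = 0.775  # assumed: standard 300 mm wafer thickness
mount_fuji_m = 3_776

stack_m = wafers_per_day * 365 * wafer_thickness_mm / 1000
print(f"one year of wafers stacked: ~{stack_m:,.0f} m (Mount Fuji: {mount_fuji_m:,} m)")
# ~2,500 m for that share alone; adding the partner's output on the same site
# comfortably clears the summit.
```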

On technology leadership: the two companies are pioneers in the area. Our partner invented 3D NAND. We invented MLC, TLC, QLC. Together, we have created a technology powerhouse in which we share our IP. But even more important, it is not just the IP – we share cost. For half the R&D expense, you get twice the product development. Designs, they do their own and we do our own; the IP is shared. So there is big leverage in the fact that the fabs are right next to each other. The development site is in the middle of the fab complex. I don't have to develop on one continent and then transfer to another continent. I don't have to have a mother fab in one country and then transfer out to the next country, which does not speak the same language. Everything is done in the same place. But you know what is most important? The fab diversity. The two of us come at the same problem from two different directions. Often we look at the markets differently. In the end, we go yin, they go yang, and we come back – we come back to where we produce the technology leadership. So this gives us a structural advantage in the way we ramp flash.

So, to summarize what I have been talking about for the last 30 minutes: technologically, the lowest cost NAND, the highest areal density, and the highest density hard drives, period. Technology leadership: we lead the world in the productization of 96 layers; the lowest cost per bit comes from there, with the broadest portfolio of products. The charge trap cell, which I lovingly explained, has a versatility whose breadth of application will surprise you. The vertical innovation platforms we have developed can take this technology and deliver it as a product with the highest value to the customer. And there is the structural advantage built in through our joint venture with Toshiba Memory.

With that, let me invite my partner in crime, Martin Fink, who is hiding over there, to come and tell me where he is going to take all of this and what interesting new architectures he is going to build with it.

Martin Fink

Thank you, Siva. Am I on? Okay, good. I wish I had had a professor like Siva when I went to school; it is always captivating to hear Siva speak. You might have thought you were done with technology after Siva – you have to bear with me for just a little while longer. So let me set this up for you so you understand how I think about the notion of technology within Western Digital. I have been here for just about 2 years. I had spent over 30 years at HP, I had lived through all of the transitions and separations at HP, and I had retired. In fact, people questioned, did you really retire? And I said, yes, yes. No, no, I am not kidding. I have put my house up for sale. I am getting on a plane. My grandkids are in Colorado. I am out of here. And then I started a conversation with the team here at Western Digital, and what was interesting is that the conversation wasn't, hey, Siva is working on BiCS4, 5, 6 and 7 – can you come help him build BiCS17? That was not the conversation. As you just heard from Siva, he doesn't need help with that. We have the best teams, the best capabilities in the world to build that technology. What the team did say is that the industry architectures are changing, and we need to start thinking about how data fits in these new industry architectures. I had just spent the past 10 to 12 years of my life basically thinking about architectures – data center architectures – and how that world changes. And in the end, it was just too compelling. It was too attractive a carrot for me to continue my move to Colorado. I took the house off the market and said, sorry, honey, you've got to fly to go see the grandkids, and decided to stay here and work on this. And so that is why we talk here not about a next generation of NAND, but rather about technology architectures. You have seen a version of this slide already from some of the presenters.

And basically I like this construct of big data and fast data because it is a simplifying construct. While we can talk about all sorts of varieties of data, it is always good to have a simplifying construct, and big data is about scale. It is about amassing massive amounts of information. It is largely what we, as an industry, have been doing for the past 10 to 15 years – in the early days of Hadoop, for example – doing analytical work. But more recently, over the past few years, we have seen more of this construct of fast data, where the idea that all the data gets shipped to some data center in the cloud doesn't actually fit. It doesn't work all that well. My favorite little example: if you are driving an autonomous vehicle and you are doing processing, the idea of saying, hey, I just saw six pedestrians on the road, let me ship that data to the cloud, let me wait for the answer to come back – okay, should I hit them or not? That model just doesn't work for fast data. Fast data is about computing close to the data, and about the immediacy of being able to make decisions.

And so that is how we think about the idea of architecture around data. Now, the reality is that we have spent our entire lives, from a technology industry perspective, working with what we call general purpose architectures. All of the processing elements that we think of today, whether they are Intel or ARM or those kinds of things, essentially fall into this category of general purpose processing. They are a lowest-common-denominator effect, and that is not a bad thing, because for a lot of years we were able to leverage that for more and more workloads. But the reality is that we have kind of reached a saturation point, I will call it – and I will show you that in a second – where the idea that we can keep using these general purpose architectures to solve all these next-generation problems, from analytics to machine learning to AI, doesn't really work anymore. And you say, what is he talking about? Well, think about this, because you have seen evidence of it already. If you pay attention to the industry, you will have heard about Microsoft designing FPGAs to optimize a specific machine learning algorithm. Google introduces the TensorFlow processing unit, or TPU, to optimize their machine learning workload. Last week, I think it was, Amazon announced more processors for their machine learning workloads. What that is telling you is that what is generally available as a general purpose thing is not meeting the needs.

And the GPGPU is a special case. You might have heard of NVIDIA; NVIDIA started more as a gaming graphics processor company. And people said, hey, that is pretty cool, because I can use that for machine learning – it is actually better suited for my machine learning; there is a vector math kind of thing that graphics processors do. So this GPGPU thing falls in the middle, because GP means general purpose graphics processing unit. It falls in the category of general purpose, but it is also very focused on one very specific thing, which is vector arithmetic. What that means is that it is good at doing one thing, but it also comes with all sorts of extra baggage, because it really is trying to solve a broader problem. So why is it that we got to this point where the general purpose world doesn't work for us anymore? Well, think back in time. If you have been around the industry for a long time, through the '80s and '90s it was all about clock speed, right? When you bought a processor, say from Intel or AMD or whoever, you essentially went for maximum megahertz in those days. The first computer I bought at home was 4.7 megahertz – yes, megahertz. And they cranked that up, and cranked that up, and cranked that up, and around 2002-ish we hit a ceiling in the 3 to 4 gigahertz range.

So turning up the clock ran out of steam, but hey, some creative people said, let's go multi-core. Now what we are going to do is solve the problem in parallel: rather than make each one go faster, we are just going to use lots of them. The analogy I use to explain multi-core, if you are not familiar with this stuff, is basically this: instead of having one airplane take you from San Jose to Denver very, very quickly, all I am going to do is use multiple planes to get more people to Denver. But every plane goes the same speed. That is essentially what the world has done. And now there is no more clock speed, and you can't keep adding cores forever. So the only thing left is an architectural paradigm shift. We have to think about the architecture differently. Now, I should stress, this doesn't mean the general purpose world is going away. This is not an either/or thing; we are not trying to replace things. We are basically saying that when we think about data as the center of the universe for customers, the general purpose world is not fulfilling the need our customers are expressing going forward. And that is why we think about this data-centric area, where things are going to be more purpose-built and solve very specific problems. But you will also notice I put that line at today, to say it is already happening – and that is why I gave you examples of how it is already happening today.
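
Martin doesn't name it, but the textbook way to quantify why adding planes (cores) stops paying off is Amdahl's law. A short sketch with an assumed 95%-parallel workload (the fraction is illustrative, not from the talk):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    # Serial work cannot be split across cores, so it caps the total speedup.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.95  # assumed: 95% of the workload parallelizes cleanly
for cores in (1, 4, 16, 64, 256):
    print(f"{cores:4d} cores -> {amdahl_speedup(p, cores):5.1f}x speedup")
# Even with 256 cores the speedup stalls near 1/(1-p) = 20x -- one reason the
# industry is turning to purpose-built architectures instead of more cores.
```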

And so we also think about this construct along the big data, fast data split. Now we are going to get a little deeper into geek land. On the left side, with big data, what we have been doing so far is this notion of shipping data to wherever the compute happens to be. We basically have a huge volume of traffic on the Internet, or within data centers, essentially shipping all of our data to the big compute engine to do all of the processing. On the right-hand side, with fast data, what we are dealing with is the notion that the CPU has been the center of the universe. The design and architecture of the servers and systems you buy today are all built on the notion that the CPU is the center of the universe, and we think we need to make our customers' data the center of the universe. So when we think about how this model changes, we say: let's bring the compute close to the data. And then let's rethink the architecture so that rather than the CPU being at the center of the universe, memory – the customers' data – is at the center of the universe.

So, let's talk a little bit more about composability. Phil introduced the concept of composability, and I am going to take it a little further. What Phil was talking about with OpenFlex, and what we have done with composability, is the top part of this picture. Internally, I have used an analogy to help people get through this, because if you are new to the compute world, if you are not a super geek, composability makes you go: okay, what is he talking about? So let me offer an analogy that, at least so far, people have found resonant. The reason I came up with it is that when people think about composability or composition, music tends to come to mind; it is just natural. Music compositions are really made up of five elements: notes, volume, tempo, keys and instruments – that's it. All music, whether you are a classical music person who likes Bach, Tchaikovsky and Beethoven, or you are an "I am a Lady Gaga all day long" person, is really composed with those five things. And what the composers, the songwriters, do is mix and match these elements in order to create these beautiful works of art.

Now, imagine if I went to a music writer, a composer, and said: every time you crank up the volume, you must increase the tempo. You have no choice – you increase the volume, you've got to go faster. Music wouldn't be so great anymore. Well, now let's translate that to geek land. Servers are composed of processors, memory, fabrics and storage. And the world we have lived in has been this constrained world where, if you want more memory, you need to buy more CPU. If you want more storage, well, you are also going to need to buy more CPU. The CPU is the center of the universe, and these things are constrained and locked together. So when Phil talked about composability – the top part of this composability picture, the data fabric – what he is saying is that our OpenFlex is breaking down that barrier, this idea that things are locked together, so you can mix and match the amount of storage, compute and fabric or networking you need to optimize for your workload. It is your data, your workload, your application; you should be able to optimize for it. But there is one problem. Given today's technology, the one part that Phil – smart guy – cannot do is the bottom part of this picture. He cannot compose main memory. It is still locked to the CPU. Today, that takes the form of an Intel CPU with the memory attached through an interface you might have heard of, called DDR4. And Intel controls how much memory you can attach. And if you say, I want more memory, Intel will say: great idea, here is another processor to go with it.
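
To make the mix-and-match idea concrete, here is a minimal sketch of what composing resources independently could look like. This is hypothetical illustration code of my own, not the OpenFlex API:

```python
from dataclasses import dataclass

@dataclass
class ComposedSystem:
    cpu_cores: int
    memory_gb: int    # in a fully composable world, set independently of CPU count
    storage_tb: int
    fabric_gbps: int

# Conventional servers couple these knobs: more memory forces more CPUs.
# Composability lets each workload dial them independently, for example:
analytics = ComposedSystem(cpu_cores=16, memory_gb=4096, storage_tb=100, fabric_gbps=100)
ingest    = ComposedSystem(cpu_cores=64, memory_gb=256,  storage_tb=20,  fabric_gbps=200)
print(analytics, ingest, sep="\n")
```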

Now, for those of you who also follow the DRAM industry, the one good thing we have been living with is that the memory interface I mentioned, DDR4, is pretty much an industry standard. So when you want to buy memory for your server, you say: I am going to call Micron, I am going to call Samsung, I am going to call SK. I am going to have some choice, some cost competitiveness, etcetera, because it all connects to DDR4. Now let's imagine a scenario where Intel said: I have a new memory connector – I will just make it up – called DDRT. And they said: if you want to connect to my great new processor, you must connect to this new Intel DDRT bus. And the only thing that can attach to this new DDRT bus is Intel memory. Couldn't possibly happen, could it? Well, guess what, folks: that is exactly what is happening. We are drifting into this world – whether it is the DDRT thing from Intel, NVLink from NVIDIA, HyperTransport from AMD or others – of proprietary interfaces that limit your ability to connect processors together and to connect memory to processors. And we are passionate about unlocking that world for our customers so they can maximize how they use data. So if they say, I want one processor and petabytes of memory – because remember, Siva just talked about how we can do low latency flash and create petabytes of main memory – why shouldn't I be able to do that? The ratio of compute to memory should be up to me as a customer, as the owner of the data, to figure out as the optimal mix. And that is the architectural construct we are trying to change and put together.

So, what are we doing to actually do that? Well, it turns out that this morning I was at the Santa Clara Convention Center, where the RISC-V Summit is being held today. RISC-V is a completely open instruction set architecture for compute. I was doing the keynote speech this morning – my big challenge today was not to mix up my speeches between the two venues, and I will be heading back there this afternoon – but we made a set of announcements. Last year, I announced that Western Digital was going to transition all of the processor cores we use in our controllers, the stuff that Ganesh builds, to RISC-V over a period of time. And when we did the math, it turned out that we ship about 1 billion processor cores a year. So we said, we are going to transition all of those. This year, at the RISC-V Summit this morning – and if you check your newsfeed, you will see there was a press release – we actually announced our first RISC-V core. We called it SweRV. RV is for RISC-V. "We" is for two things: Western Digital, and "we" as in collaborative, working together and sharing. And the S – swerve – is about swerving around general purpose architectures to the purpose-built world. So we announced our first RISC-V processor core, and we announced that it would be completely open source.
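
One practical consequence of an open ISA is that the instruction encodings are public and stable, so anyone can build tooling against them. As a small illustration (my own example, not SweRV or Western Digital code), here is a Python decode of one RISC-V I-type instruction, addi x1, x0, 5:

```python
word = 0x00500093  # addi x1, x0, 5 -- loads the constant 5 into register x1

opcode =  word        & 0x7F  # bits 6:0   -> 0x13 = OP-IMM
rd     = (word >> 7)  & 0x1F  # bits 11:7  -> destination register
funct3 = (word >> 12) & 0x7   # bits 14:12 -> 0 = ADDI
rs1    = (word >> 15) & 0x1F  # bits 19:15 -> source register
imm    =  word >> 20          # bits 31:20 -> immediate value
if imm & 0x800:               # sign-extend the 12-bit immediate
    imm -= 0x1000

print(f"opcode={opcode:#x} rd=x{rd} funct3={funct3} rs1=x{rs1} imm={imm}")
# opcode=0x13 rd=x1 funct3=0 rs1=x0 imm=5
```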

So in effect, think of this as doing to the processor world what Linux did to operating systems. We are at the very early stages, so think of this as Linux 1999; this is not happening in a week, folks. We are in the Linux 1999 timeframe, but we are committed for the long term, because we see where all of this data is going – you saw the charts that say we have zettabytes of data coming – and we cannot make sense of how the existing architectures are going to solve our customers' problems at that volume of data. Something has to change, and we are happy to take the leadership role to go do that, but we are going to do it in a completely open, industry-standard way. And so we welcome the industry to download the source code to our core. The other thing I announced – remember, I just said this memory thing is kind of all proprietary depending on who you go to – well, guess what, we are going to deal with that one too. We announced a completely open source, open memory fabric: a coherent memory fabric, what is called a coherency fabric. I will be happy to give you a whiteboard conversation on coherency if you are interested, but we announced that as completely open source too.

Now, let me give you an interesting little data point. When we started this with RISC-V, developing our own cores rather than acquiring them from outside, we said: okay, we are on a marathon, not a sprint. We need modest goals to get going. So we said, let's just try to achieve parity with the cores we are using today – we ship 1 billion cores a year, and since we were learning something new by putting this together, our first modest goal was basically to achieve parity with what we have. That was the goal we set for ourselves. Here is what happened: a 30% improvement in power consumption, a 40% improvement in performance and a 25% reduction in footprint, in version 1.0. Not bad, right? So how was that possible? Let's go through a couple of the reasons. One is that we assembled a pretty smart team that, combined, has about 500 years of processor design experience – having a strong team is clearly very important.

Now, the question you should be asking is: Western Digital is not a processor company, Western Digital is not known as a processor company, so why does somebody with decades of processor development experience come to work at Western Digital? All of these people could easily get jobs at Apple, Qualcomm, Intel, ARM, you name it. Why would they come to Western Digital? If you go talk to them – you don't have to ask me, go talk to them – what they would tell you is this: this was the only opportunity in the industry to not just turn the crank. If they went anywhere else, all they would do is turn the crank on version 72 of the processor in whatever family they were on. To actually have the opportunity to design something brand new from the ground up, something that fundamentally alters the architecture of computing, was an opportunity available absolutely nowhere else, and that is why we were able to assemble that team. The other reason we were able to do this goes back to the very foundation of all of this: special purpose versus general purpose. The cores we were acquiring from third parties – great cores, nothing wrong with them – are general purpose. They are trying to solve a problem for many customers using one set of IP. When we set out to design our core, we were solving our problem or, more precisely, our customers' problem: how do we optimize the data path for our customers? And because we weren't constrained by all of the extra stuff we didn't need, this was the end result.

Now, there is a bubble on this slide that I normally never include, but because of the audience, I included it. It is the cost bubble, okay. There was zero part of the decision for us to adopt RISC-V and change architectures that was motivated by cost. At no point in the decision did we say, oh, it is going to be cheaper, so let's go do that. But at the same time, we do have to be responsible with shareholder dollars, with investor dollars, and we have to do this responsibly. Well, it turns out a couple of things come out of that. First of all, the reason we are doing all of this in open source is that we don't want to take on the whole burden. To use the operating system analogy: we don't want to do a full UNIX stack, top to bottom, at $250 million a year of development. While we are very much involved in the development of the Linux kernel, and we have teams that develop a lot of storage drivers for the Linux kernel, we don't develop all of Linux, so we don't pick up the cost of the entire thing. So we did some initial work. We are seeding the industry; we are seeding the ecosystem. But we don't want to pick up a massive development cost for the long term. And it turns out that, right now, this is costing us probably less than 1% of our overall platform investment. So if you play a disaster scenario – which is a good thing to do – you say: hey, Martin, all your notions about architecture are not going to work, RISC-V is not going to pan out, nobody in the industry is going to come play. Play every possible disaster scenario. The reality is, from where we sit, we have achieved these numbers and these results for our customers at a cost profile that is very manageable.

So we are quite happy. Obviously, we are not aiming for a disaster scenario; we are aiming for a leadership scenario, and the industry is coming on board. Just by the way, a quick stat: last year at the RISC-V Summit, the audience size was 400-something, and this year it was over 1,000 – which is why I had to move to the Santa Clara Convention Center. So the ecosystem is coming on board and we are getting a lot of traction. All of our trajectory, all of the data we have on the health of RISC-V, is up and to the right. I lived through the entire life of Linux, and this is happening at a much faster rate than Linux did, which is surprising to me, because software can typically move a lot faster than hardware – but open source was a new construct back then. This is happening at phenomenal speed. So with that, hopefully you get a sense of how we think about data, how we think about our customers' data, and the significance of the architectural paradigms that will need to shift over the next 5 to 10 years to allow our customers to fully monetize and take full advantage of their data.

So with that, I am going to turn it over to probably the one thing you really, really, really wanted to get to today, which is our CFO, Mark Long, to talk about the finances of the business. Thank you very much.

Mark Long

Thank you, Martin. Good morning and welcome, everyone. As we have covered throughout the day, we believe Western Digital is fundamental to an increasingly data-centric world. In this final presentation, I will discuss our financial profile, capital allocation and capital structure, as well as present some historical perspective on the NAND flash industry. Specifically, I will cover four main areas: first, our leadership in data infrastructure through the industry's broadest product portfolio, leveraging our technology strengths and delivered through relentless operational execution; second, the compelling long-term growth opportunities for our company; third, our financial model, designed to deliver long-term profitable growth while enabling the company to navigate periods of market volatility; and finally, our capital allocation strategy, with its focus on shareholder value and returns. I will also highlight the optimization of our capital structure and describe our operating discipline and efficient capital investment framework.

We have built a robust platform with multiple levers to create long-term shareholder value while enabling us to address the cyclical aspects of our business. There are strong secular growth drivers across the majority of the end markets we serve. We have strengthened our balance sheet and paid down approximately $6.3 billion, or 40%, of our debt since the closing of the SanDisk acquisition. We also recently paid down $500 million on our revolving line of credit. Shareholder returns have been one of our top priorities, and in the last 12 months we have returned over 80% of our free cash flow to shareholders in the form of dividends and share buybacks. That takes into account mandatory debt pay-downs.

As you have heard today from each of the presentations, the evolution of the data-centric economy creates massive opportunity for our company. Over the last decade, Western Digital has generated superior shareholder returns against the S&P 500. Our strategy has been to position the company to capitalize on the long-term opportunities and create compelling shareholder value, recognizing that we must also navigate periods of short-term volatility. The upward trend of this chart demonstrates the successful implementation of this strategy. We have the scale, technology engine and portfolio breadth required to meet the evolving needs of our customers and partners. We are fully vertically integrated in both hard drives and flash products. And just as importantly, we're able to offer unique technical expertise and architectural insights across both data infrastructure technologies.

I would like to spend a moment describing how we, as a management team, have significantly diversified our revenue base to focus on strategic, high-value products. As we discussed 2 years ago, we have transformed from a company with a significant dependence on client PC hard drives in fiscal '13 to a far more diversified business today, with client PC hard drives representing only 14% of our total revenue during the last 12 months. We have significantly expanded our flash portfolio, with flash now representing approximately 50% of our total revenue during the last 12 months. And as I just referenced, approximately 60% of our total revenue today is coming from high-value products, versus 27% in fiscal '13. This is a result of our focus on strategic growth markets and high-value applications.

Now, let me provide you with an update on our progress towards the strategic and financial goals we set during our Investor Day 2 years ago. As you can see on the slide, for fiscal '17 and fiscal '18, we have achieved and, in many instances, exceeded our targeted long-term financial model. We have successfully integrated both the HGST and SanDisk acquisitions. We have realized our near-term synergy targets for both transactions and remain on track for our long-term targets. We've continued to allocate our capital in a balanced way through de-leveraging, dividends and share buybacks. We have continued to reduce our gross debt, bringing the total leverage ratio below 2x. Over the last 12 months, we have paid $594 million in dividends and bought back $1.2 billion in stock. In addition, we have optimized our capital structure, significantly reducing our interest expense, enhancing our financial flexibility and improving our liquidity. While we continue to believe in the compelling long-term opportunities in the data infrastructure industry, the current industry dynamics create near-term operational and financial volatility, which our management systems and operating model are designed to mitigate.

With respect to our long-term financial model, it's a target model reflecting how we expect the company to perform in most market environments. At times, we will operate above the model, as we have done periodically over the last few years. And at times, we will operate below the model. Currently, the market environment is such that our near-term financial results are expected to be below the model. However, the cyclical aspects of our business are well understood, and our strategy and operating model are designed to enable the company to successfully navigate through the down phases of the cycles and be well positioned for leadership when we enter the up phases.

Let me begin with some historical perspective on our NAND flash business that builds on some of what you heard from Professor Siva earlier. The chart shows the past two NAND flash cycles in addition to the current one that began in the first quarter of this calendar year. What's represented through our fiscal fourth quarter of '16, or the dotted line, which is the SanDisk acquisition date, are SanDisk flash revenues and gross margins on a standalone basis, while the balance of the chart shows the combined Western Digital and SanDisk flash business. You will note that the dotted flash revenue trend line over this long-term period demonstrates an upward trajectory. With our portfolio breadth, we entered this cycle with nearly twice the scale that SanDisk had when they last navigated the cycle as a standalone company. As you can see on the chart, we recorded $2.6 billion in fiscal second quarter '18 flash revenue, versus the $1.3 billion recorded by SanDisk during fiscal third quarter '15 and $1.4 billion reported during fiscal first quarter '12. By combining both hard drives and flash, we have tempered the impact of the NAND flash industry's periods of volatility as a result of more stable hard drive gross margins. This enhances our financial model resiliency.

Although the NAND flash industry exhibits cyclical volatility, the industry has continued to become more economically rational as it matures. We believe one of the strongest barometers of the industry's long-term economic health is its return on investment. As you can see, over the last 10 years, the industry has delivered a robust upward ROI trajectory in spite of its periods of short-term volatility driven by transitory supply/demand dynamics. The other key barometer is elasticity of demand. Historical evidence demonstrates that as prices normalize, the NAND market has not only exhibited demand elasticity in existing segments, but has time and again enabled new opportunities and new market applications.

With this deeper understanding of the NAND industry dynamics, I would like to return to our total addressable market at an aggregate and sub-segment level. Client devices includes both hard drives and flash products for PCs and consumer electronics, and flash solutions for mobility. This also includes high-growth flash applications such as the Internet of Things, autonomous vehicles, AI and machine learning. Client solutions is our branded retail flash and hard drive business. Data center devices and solutions includes our data center and enterprise hard drive and flash products, as well as our platforms and systems business. In the next few slides, I'd like to offer some further insights into each of these end segments and how they contribute to our overall business opportunity. To put it all in context, we serve large, growing markets. They are forecasted to total $111 billion for our core business and $35 billion for our data center solutions business by fiscal '23. We have the portfolio breadth and depth to participate across all major sub-segments of this market, which enables our long-term revenue growth, operational efficiency and cash flow generation.

Let me highlight some of the important aspects of these markets. Client devices has a $57 billion TAM in fiscal '23, with a 4% CAGR. Flash is expected to grow at twice that rate, or 8%, to $51 billion, while the hard drive TAM is expected to decline at a 13% annual rate to approximately $5.6 billion as hard drives transition to flash, mainly in PC applications – all of which is factored into our product and operating plans. In client solutions, we have a large TAM of approximately $10 billion in fiscal '23, experiencing a slight decline of approximately 2% on an annual basis. With extensive worldwide distribution and leading consumer brands, we generate strong cash flow from this segment and continue to build on our leading position across the segment. Data center devices is expected to have the strongest growth during this period, with double-digit CAGRs in both hard drives and flash. The market is expected to be approximately $45 billion by fiscal '23. I'd like to highlight that capacity enterprise hard drives remain a key engine of growth for our company and one in which we've demonstrated technology and product leadership, with the industry's first helium-based hard drives and the recent announcements relating to our energy-assisted recording technology. Finally, for data center solutions, we see the $35 billion TAM in fiscal '23 as a compelling up-the-stack growth opportunity for us. This is another segment where we believe we're positioned favorably, thanks to our vertical integration and vertical innovation advantages, as Phil described earlier.

Overall, we expect a $111 billion TAM in fiscal '23 with a 6% CAGR, with flash growing at 8-plus percent to $84 billion and hard drives growing at 2% to $27 billion, again primarily driven by capacity enterprise. To enable greater understanding and modeling, I'd like to offer some additional commentary on each end segment, particularly the key trends for our business. In client hard drives and solid state drives, we expect increasing flash penetration in desktop and notebook applications, as well as a slight decline in PC units over the next 5 years. We expect average flash capacities per unit to increase to approximately 700 gigabytes by fiscal '23. In consumer electronics hard drives, we expect a growth opportunity in surveillance. In mobility, growth is primarily driven by the substantial increase in average capacities per smartphone unit, which is expected to reach 200 gigabytes per unit by fiscal '23, while unit growth is expected to be a modest 3% per year. The 5G rollout and the proliferation of smartphones in certain developing regions like India could provide additional tailwinds to this business. And finally, for other embedded flash, we expect significant demand for NAND flash in various applications, from surveillance and security to automotive, industrial IoT and gaming.

For our client solutions business, we expect slight declines in both retail hard drive and flash-based products as more consumers get comfortable with the cloud for their personal storage needs and as average smartphone and personal compute device capacities expand. However, that decline in retail TAM is offset by an 18% growth rate in removable flash cards, which go into high-growth IoT applications.

For enterprise hard drives, the growth is primarily driven by capacity enterprise. We expect the TAM to expand from $9.9 billion in fiscal '18 to $18.8 billion in fiscal '23, for a 14% CAGR. For enterprise flash, another one of our strongest growth opportunities, we expect significant demand from the continued transitions to the cloud and the major infrastructure build-outs required for AI and machine learning. We expect approximately 10x growth in PCIe enterprise SSD bit demand, from approximately 12 exabytes in fiscal '18 to 135 exabytes in fiscal '23. For data center systems, the proliferation of big data and fast data applications will fuel our ongoing growth. We also believe hyper-converged infrastructure customers are progressively seeking a reliable, cost-effective, white-box alternative as they move towards the composable infrastructure vision of the future.
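
As a quick check on the arithmetic behind those two figures – a sketch, where the only assumption is the five-year span from fiscal '18 to fiscal '23 – the standard CAGR formula reproduces both numbers:

\[
\mathrm{CAGR}=\left(\frac{V_{\mathrm{end}}}{V_{\mathrm{start}}}\right)^{1/n}-1,\qquad
\left(\frac{18.8}{9.9}\right)^{1/5}-1\approx 13.7\%,\qquad
\left(\frac{135}{12}\right)^{1/5}-1\approx 62\%.
\]

So the quoted 14% hard drive TAM CAGR is consistent with the stated endpoints, and the "approximately 10x" bit-demand growth corresponds to roughly 62% compounded annually.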

Our strategy has always been to focus on growing the company profitably. We've grown our revenue 4x since fiscal '07 through a series of acquisitions and strong operational execution. We've successfully transformed from a pure hard drive company in fiscal '07 to a global leader in data infrastructure, with over $20.5 billion of revenue in the last 12 months. Our non-GAAP operating margins have expanded from high single digits in fiscal '07 to the mid-20s in the last 12 months. As you can see on the slide, while the long-term trajectory of our revenues is upward, we have achieved this growth by navigating periods of near-term volatility.

Let me provide you a brief overview of our value creation from combining hard drives and flash. Since the acquisition of SanDisk and through fiscal '18, we've grown our revenues by 63%, our non-GAAP operating margin by 102% and our free cash flow by 74%. Our strategy of combining hard drives and flash has delivered greater profitability to our stakeholders and the opportunity for continued long-term value creation. Our strong financial performance has been driven by sound execution. We accelerated our revenue growth as we entered into a stronger NAND flash market following the SanDisk acquisition. As you can see in the middle section, we did this through prudent organic investment, with a focus on research and development and product innovation, while diligently managing our overall operating expenses. This has resulted in strong returns on invested capital for our shareholders. In the current environment, we expect our non-GAAP OpEx as a percentage of revenue to be slightly higher than our long-term financial model. We are focused on aggressively managing our expenses and investments without compromising our market leadership and our ability to serve our customers.

Next, I would like to discuss our cash flow generation capability on both a levered and un-levered basis. During fiscal years '17 and '18, we generated adjusted, or un-levered, free cash flow of $2.6 billion and $2.7 billion, or approximately 14% and 13% of revenue, respectively. Now, regarding capital expenditures: as we explained 2 years ago, we have an efficient capital investment model. As we stated in our long-term financial model update during our fiscal fourth quarter '18 earnings, we target a cash CapEx range of between 6% and 8% of revenue. The hard drive CapEx trends below this range, and the flash CapEx trends slightly above this range. Finally, as we guided in our recent Form 10-K for fiscal '18, we expect fiscal '19 cash CapEx to be between $1.5 billion and $1.9 billion.
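
As a rough consistency check – an editor's sketch, assuming the approximately $20.5 billion of trailing 12-month revenue cited earlier as the base – the 6% to 8% target band implies:

\[
0.06\times\$20.5\,\mathrm{B}\approx\$1.2\,\mathrm{B},\qquad
0.08\times\$20.5\,\mathrm{B}\approx\$1.6\,\mathrm{B},
\]

so the $1.5 billion to $1.9 billion fiscal '19 guide sits at or slightly above the top of the band, consistent with the comment that flash CapEx trends slightly above the range.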

Another area I would like to revisit from our last Investor Day, and highlight again, is the efficient cash CapEx model in our joint venture with Toshiba Memory. At a high level, the JV CapEx is funded from three sources: one, direct cash investments from us and Toshiba Memory; two, third-party equipment lease financing; and three, JV cash flow generated from selling wafers to Western Digital and Toshiba Memory. The wafers the JV sells to us have two cost components, fixed cost and variable cost, that are reflected in our COGS. By reducing wafer starts through the recent actions we described on our last earnings call, we eliminate the variable costs, resulting in cash savings. However, we still have to pay the fixed cost to the JV. The fixed cost for the wafer output we plan to reduce will be taken as a GAAP accounting charge.
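
To make the mechanics concrete – a stylized illustration with hypothetical numbers, not the actual JV economics – suppose a wafer carries $600 of fixed cost and $400 of variable cost. Cutting one wafer start then saves only the variable portion:

\[
C_{\mathrm{wafer}}=C_{\mathrm{fixed}}+C_{\mathrm{var}}=\$600+\$400,\qquad
\text{cash saved per avoided wafer}=C_{\mathrm{var}}=\$400,
\]

while the $600 of fixed cost is still owed to the JV and flows through as the GAAP accounting charge described above.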

As a management team, we continually focus on the external factors that may have the greatest impact on our business, operations and financial performance. Understanding these external factors helps us better operate the business and develop our future plans. This slide provides a list of eight external factors and indicates whether we believe the potential impact could be positive, negative or both. Many of these factors have both long-term and short-term implications. As this changes over time, we will provide commentary regarding how these factors are impacting our business and, where relevant, how we plan to mitigate any resulting risks.

Since the closing of our SanDisk acquisition, we have successfully executed on our capital structure optimization strategy, which is focused not only on de-leveraging but also on lowering our borrowing costs through a series of debt transactions, repricings, prepayments and pay-downs. Although this effort is ongoing, in just 2 years these highlighted events have resulted in a reduction of approximately $470 million of annual cash interest expense and a reduction of approximately 3.6% in our weighted average borrowing rate, which has been accomplished against a backdrop of rising interest rates. And finally, we have reduced our gross debt by $6.3 billion since the close of our SanDisk acquisition. The resulting impact on our business is greater flexibility and liquidity, critical for navigating near-term industry volatility and for positioning us for long-term profitable growth.

With respect to our capital structure today, we have a strong balance sheet, with liquidity of $6.5 billion as of September 28, including $4.3 billion of cash and equivalents and $2.25 billion of un-drawn revolver capacity. Our debt is now $10.8 billion on a gross basis and $6.5 billion on a net basis. We have significantly reduced our cost of debt in spite of a rising rate environment. Specifically, our effective interest rate in the most recent fiscal first quarter of '19 was 3.8%, versus the 5.6% recorded when we first closed the SanDisk acquisition. We will continue to optimize our capital structure over time.

I would like to share how we view overall capital management on an end-to-end basis. It starts with our robust business model, with strong operational and financial discipline, our efficient capital investment framework and a strong balance sheet, which allows us to invest in key areas of our business and return capital to stakeholders. Capital management is a priority for our company. This scorecard shows how we have performed over the last 12 months. We generated approximately $2.2 billion of free cash flow, or 11% of our revenue. Our days inventory outstanding of 87 days is high due to a combination of excess flash inventory and some strategic hard drive inventory we need to maintain as part of our Kuala Lumpur site closure. As we mentioned earlier, we have sufficient liquidity and have reduced our debt balance, after our recent $500 million revolver pay-down, to $10.8 billion. We will continue to focus on de-leveraging and maintaining strategic flexibility. Our cash CapEx is near the high end of our business model range due to investment in our flash business. We are in active discussion with our JV partner to manage our investments given the current market environment.

Also, as I mentioned earlier, we have returned a significant portion of our free cash flow to our shareholders in the form of dividends and buybacks. Just as we stated 2 years ago, we continue to be disciplined in our capital allocation, focused on long-term shareholder value creation. Our priorities include investing in next-generation technologies, products and solutions, many of which you have heard about today; staying committed to our quarterly dividend; executing our recently announced $5 billion share buyback program; optimizing our capital structure and continuing to de-lever; and finally, conducting mergers and acquisitions to acquire key technologies, teams, market access, intellectual property and everything necessary to achieve our strategic objectives.

Here you can see the results of our disciplined capital allocation. Since the start of our fiscal '13 year, which was the first full year after the acquisition of HGST, we have done the following. We invested approximately $22 billion organically in our business, in terms of both cash CapEx and operating expenses, the majority of which was in research and development. We delivered approximately $2.6 billion in dividends to our shareholders and bought back approximately $3.8 billion of our shares. We also paid down approximately $7 billion of debt – $7.1 billion, to be more precise. And we consummated almost $17 billion of strategic M&A, primarily through our transformational acquisition of SanDisk. Just as it has been since the beginning of fiscal '13, management's capital allocation strategy going forward will remain disciplined and balanced.

In conclusion, we believe we have a compelling platform for shareholder value creation, based on: a robust business model, vertically integrated in both hard drives and flash and designed to generate long-term profitable growth; our positioning in attractive growth segments, coupled with our scale, product portfolio breadth, technology leadership and the depth of our customer relationships; our ability to operate efficiently, identifying and pulling all appropriate levers in the current environment while investing judiciously in long-term growth; our much stronger liquidity and balance sheet today relative to 2 years ago and our continued focus on further optimization; and finally, our disciplined capital management and allocation.

Thank you. Now we are going to bring Steve and Mike back up to the stage and move to Q&A. Thanks.

Question-and-Answer Session

Steve Milligan

Alright. Fire away.

Mehdi Hosseini

Thank you. It's Mehdi Hosseini from Susquehanna. I have two questions, one for the team. A couple of times during the presentation, you talked about improving longer-term prospects, especially if you look at the second half of calendar '19, but I haven't heard anything about how bad it is going to get in the first half. So could you provide any update as to how you see NAND supply/demand and the weakness in nearline trending into the first half of '19? And then one follow-up for Martin. You did highlight the challenges in unlocking the CPU and GPU, but I didn't hear how you are actually going to implement it or how you are going to break the constraint you have. I think in the longer term, that's very promising, very exciting. But I still don't know how you, as a memory solution provider, are going to unlock the CPU and GPU lock?

Michael Cordano

Okay. Let me talk to the market dynamics question, talking about flash first. So I think our view, as I stated, just to reiterate, is that the demand profile on a long-term basis is in this 36% to 38% range. For end-market demand this calendar year, we think there are a few headwinds that are going to influence the front half of the year. A combination of that and, candidly, dealing with the inventory overhang that we will collectively, as an industry, be bringing in will take the demand rate for calendar '19 below that range. So certainly, we would see it as a seasonal market in general. So it will be stronger in the back half, weaker in the front half, but we do see the demand side of flash coming in below that 36% to 38%. Moving to the hyperscale and cloud build-out and the capacity enterprise story: yes, as we said on the earnings call, we would say that sort of consumption angle, or investment angle, is going to be somewhat less in the first half of the year. We do have clear indications that the investment cycle reaccelerates in the back half of the year, and so that's our current planning. So we think we'll have a much stronger second half of calendar '19 relative to capacity enterprise, and that will apply to flash as well as those guys begin to reinvest.

Unidentified Analyst

You said exabytes will be flattish in the first half. Is it still tracking to flat?

Michael Cordano

So exabytes, year-over-year, are roughly flat here. And remember, that was nearly a 100% growth rate if we look at the year-on-year compare. So yes, it's in that same range. It's not deteriorating, but it's going to be a slower investment cycle in the first half of '19.

Steve Milligan

Let me – actually, I am going to take a stab at the second question. And then you can correct me when I am wrong, right. But see, part of the thing is that we believe – and feel free to correct me if I misstate this – we believe that system architectures need to change in order to move to a data-centric world. Now, let's be honest. We are not going to do that all on our own, right. We see RISC-V as an opportunity for an ecosystem – if you want to call it that, a RISC-V ecosystem – to invest in something in a collective fashion that allows system architectures to move to a data-centric world. We also get a secondary benefit from that by very inexpensively redesigning the cores that we use internally in our products, which we talked about earlier, and we can optimize that performance for us. It's really – we are trying to help. We are not trying to build the CPU or GPU ourselves, and we are not trying to do it on our own, but we do believe that system architectures need to adjust, and we are trying to catalyze that through RISC-V, through a very modest investment level for us that actually provides us with a direct benefit that has nothing to do with moving to a data-centric world.

Martin Fink

So, let me just add to that, to reinforce, and I should have said this. So one is, we have no intent of going into the processor business and becoming a processor vendor. For us, processing is a means to an end, not the end in and of itself. And so that's what's different. To answer your question, there are a couple of different parts. So everything Steve said is exactly right: we are building out an ecosystem. And the work that we are doing right now is looking at what are the things that we can do to foster that ecosystem. One of the things that we are working on internally right now is a platform, codenamed Houdini, and that platform is a memory-centric platform that will allow you to have a RISC-V processor, an ASIC, an FPGA and anybody else who wants to come play along share one memory fabric through that OmniXtend fabric that I talked about. And in fact, today at the RISC-V Summit, if you want to go to the Santa Clara Convention Center, we are demonstrating the OmniXtend fabric, talking point-to-point between two nodes. And the other way in which we are doing this, that Steve didn't mention, is through investment in startups and other companies. So a good example is we invested in a company called Esperanto. Esperanto is building an NVIDIA-style graphics processor, right. And so we don't need to go build our own GPU if Esperanto is building a GPU and we can just use that. So that's the beauty of this sort of open source model, and why we are serious, and why we said we are open sourcing everything we do as it relates to RISC-V. So that's kind of how we are going through it, okay.

Amit Daryanani

Hi, Amit Daryanani, RBC Capital Markets. Thanks a lot for hosting the event. It's really helpful. I guess two questions for me, one on the NVMe SSD roadmap. I understand, for a host of reasons, you guys were behind; you will have a product in early '19. But maybe help me understand why a customer would want to pick the Western Digital solution when there is an equally good solution from Samsung and others. So what's so different about what Western Digital will do, beyond pricing? The second part: you guys talked a lot about near-term headwinds, near-term softness. I don't know if you guys updated the December quarter guide at all. Do we take that to mean that the numbers that are out there are perfectly fine and safe? Just a comment or a suggestion otherwise would be helpful. Thank you.

Michael Cordano

Alright. Let me handle the PCIe question for enterprise. So I think a couple of things. One is that the marketplace for enterprise is underserved. Although there are 5 or 6 flash players, depending on how you count, there are only a few that are serving that marketplace, full stop. So it's an underserved part of the market in general. Even as that changes, we're very confident in our architecture, because this is not a standardized marketplace. Increasingly, it's becoming more customized. So when you look at the platform I talked about, and our ability to efficiently deliver customized solutions to meet specific workload requirements, we think we are coming at this from a very good place. So the capabilities that I talked about, and that Ganesh talked about, will be deployed here. So this is not like the old days of enterprise systems. We are seeing very specific customized requirements coming down to us. Our ability to efficiently meet those requirements will be important. So we have a platform to do that, and we are going to invest heavily to create that capability. The other thing that's happening because of that specialization is that you can't service every single end-market requirement. So the way we'll evolve this – and it was referenced a little bit in my talk, but also by Mark and Ganesh – is we are going to have more, call it, joint development work, because, again, of the specialization. So people are going to partner up, because this is not about the industry saying, here is the standardized product for 2 years from now, everybody go develop that, and whoever gets there first wins. It's not like that in this marketplace. So it's increasingly customized. Our ability to do that efficiently off a strong platform is a competitive advantage. So underserved, number one. Number two, we think we have a strong platform that can meet those requirements over time.

Steve Milligan

And our customers want us there, too. That's the other thing. So I'll take the second question, on the guidance. So we are not updating guidance. Let me explain why, so that everybody understands it, just to be clear. Typically, in calendar Q4, which we are in right now, December represents – I don't know what the exact number is, but anywhere between 40% to 50% of our quarterly business; I mean, it's a big month. And so we are sitting here December 4; we still have a lot of the month left, a lot of the quarter left. So that's one consideration. The other thing is Mike's chart, where he highlighted different elements – and I don't think you had any yellows; I think it was either red or green, but it was either positives or negatives, right. The intent of that was to say, if you go back to when we set guidance, what's kind of changed? In other words, what's maybe gotten a little bit better, and where have there gotten to be a little bit more headwinds? That was the intent of that. And to be clear about that, what that does is point to more of a negative bias. It just shows more of a negative bias. And I don't think that, frankly, should be a surprise to anyone when you look at what's happening, from either what other companies have said or what's going on in the marketplace. And so there is a slight negative bias to those numbers. But to be clear, we are not updating our guidance at this point for the current quarter. Did I say that the right way? Did the lawyer give me the nod that everything is okay? He gave me a not-so-reassuring nod. Well.

Wamsi Mohan

Thank you. Wamsi Mohan, Bank of America/Merrill Lynch. Thanks for all the details you shared today on your long-term models. I was wondering, just philosophically, if you step back – and I mean, we heard from Professor Siva that you've got the lowest cost bit at 96 layers. Why then should you be one of the players cutting wafer starts? Why would that not come from the marginal cost player in the industry, while you could use this as a lever to take market share? I have a follow-up.

Steve Milligan

Well, I can't comment – I will comment on that, and you guys can chime in too if you would like. I can't comment on what our competitors are doing. The only thing that I can do is comment on what we are doing. And the reality of it is that whether you look at our inventory levels, which Mark talked about being elevated, or at our planned supply prior to the cutbacks versus the demand that we're seeing at prices that we consider acceptable, we're not seeing it. So we think that we need to cut our supply. I can only explain that from our perspective. And I feel like it's absolutely the right decision, but I can't comment on what our competitors are doing. And oh, by the way, having gone through this in the drive industry at certain points – whether it was the Maxtor guys back in the day – I can't explain what my competitors do all the time, even when it seems to me to be an obvious thing. They make different decisions, and that's fine. All we can do is manage what we control, and that's what we're doing.

Michael Cordano

I think, just to add to what Steve said: I mean, we see what our planned output rate would be. We also see what the end consumption rate is. In a time like this, the sufficiency ratio is quite important. This is a bit of a closed system, right. And so our view is – and Mark talked about this – this is an industry that's coming through a maturing phase. We are going to do our part in matching our output to what we see end-market demand is going to be for us, and we don't think about this on a contribution margin basis. We think about it in terms of long-term return on capital, and that's the way we are running the business.

Wamsi Mohan

Thanks for the color. And as a quick follow-up, I was wondering if you can comment on the fact that there was some concern amongst investors that you guys might be losing flash market share, based on the fact that some of your competitors are shipping multi-chip packages with both DRAM and NAND combined, versus you guys potentially not addressing some of that market. So, could you just comment on whether that's actually accurate, and whether you think that creates any structural impediments for you or not? That would be helpful.

Michael Cordano

Yes. So if we just go to the facts of it, we lost a little bit of share two quarters ago. And last quarter, we actually gained bit share. So we don't – yes, in that specific segment we do have a structural disadvantage, but we think we have enough diversification in our portfolio to work around that. So from a bit share standpoint, no, I don't think we are in a situation where we are systematically losing bit share.

Unidentified Analyst

[indiscernible]. I want to go back to the question about industry inventory. Where in the supply channel is the inventory? Is it in chips, dies, wafers, or modules or systems, or everywhere? That's one question. The second question is, you mentioned that there is a negative bias to what's happening in the industry, but at the same time, hopefully, Western Digital has also started cutting costs deeply – hopefully faster than the negative bias. That's it.

Michael Cordano

Alright. Let me take the inventory conversation. I think, unfortunately, "sort of everywhere" is the right answer. So, we see customers that are holding inventory; having been in a constrained position for a long period, they took on a more aggressive bias in terms of loading up their own inventory, so they are consuming it. Certainly, manufacturers have inventory. We obviously showed that on our own balance sheet. So it's really the whole supply chain making an adjustment.

Steve Milligan

So on the cost and expense side, we are obviously taking very aggressive action to manage our costs and expenses, no question about that. And we will continue to see progress in that regard. I do want to make one comment clear from an expectation standpoint as it relates to investors. The first thing is – and I don't know how else to say it, but this is the truth – given the downward bias in terms of flash pricing that we have seen recently, there is no way we can cut our costs and expenses rapidly enough to offset that. So, if there is a notion that we are going to be able to do that, it is incorrect. The other thing is that we are going to intelligently tackle our costs and expenses and not compromise the long-term future of the organization. That is not a smart way to run the railroad. And again, if there is an expectation from an investor standpoint that that's what we are going to do, it's not correct. But we absolutely will do everything that we can to intelligently manage our costs and expenses down to help offset some of the weakness that we are seeing.

Mark Long

Right. So you will see – as we have talked about, you will see the OpEx trajectory come down and reflect this period of volatility. But again, the majority of our OpEx is R&D. And while we are taking a close look at that and making sure we are focused on the right investments and the right projects, we are not going to put ourselves in a position where we lose our great technology advantages or our great productization capability, or where we're not prepared to push forward as a leader coming out of the cycle.

Steve Milligan

So I'll use one example just to give you a little bit of flavor on it. One of the things that we have announced is the closure of our Kuala Lumpur facility, the hard drive manufacturing facility. First off – I don't want this to – I mean, everybody says they have a hard job, right. I recognize that, but it's not easy to close a factory. There is a lot that goes into that. And you have to transition that manufacturing capability, that production capability, to other facilities. So that takes a little bit of time, and there is risk associated with it. Given what we are seeing from an overall market dynamic, we are looking at the opportunity to accelerate that closure and that transition faster than we previously expected. We haven't really committed to anything on that, so that's an example of where we will go look at something. But we have to recognize that if we go too fast, we could screw it up, too. So we have to be careful when we look at things. So that just speaks to an example of where we are looking at accelerating something, but we also have to understand that when we do that, there is a flipside to it that we have to consider as well. In other words, you do add increased risk that we have to manage.

Mark Long

Yes. And I guess the last point is, there have been players in the industry – and we've seen this in the past – where the move to cut OpEx in a draconian way to react to the cycle has resulted in the loss of the ability to continue to fund the right projects and the right products. So, we want to make sure we don't make that mistake, because that actually proved far more harmful than the short-term benefit they got from the OpEx reduction.

Karl Ackerman

Karl Ackerman from Cowen. I had two questions, please. You guys talked about 36% to 38% growth in NAND, I believe, for next year, and it sounds like that's going to be here for a while. But to what extent does wafer supply growth play into that 36% to 38% bit growth trajectory for your company, specifically? I know you said overall industry wafer adds are flattish and growth will be driven by conversions and tech transitions. But I think maybe you are a little bit further along in transitioning from planar to 3D NAND than some of your peers. So perhaps the industry dynamic of flattish net wafer adds might not apply as much to you. So your thoughts there...

Michael Cordano

Yes. Let me just clarify that. The 36% to 38% is end-market demand for flash. We have talked for a long time about this 35% to 45% growth rate on the supply side; Siva talked about that on his chart. So in 2018, it was in the sort of mid-40s, right, and now we are watching. Obviously, we have taken actions. We are watching what others are doing relative to what's going to happen this year. So our action is really trying to match our supply with our expected end-market demand. So that's the way I would characterize it.

Karl Ackerman

Got it. I guess, as my follow-up, Mike, I believe you talked earlier about being more vertically integrated in SSDs as you progress toward NVMe. I am curious how you think about using merchant versus in-house controllers across other SSD interfaces, particularly SATA or SAS? And if so, how should we think about the OpEx from here if the choice is to move toward those in-house solutions? Thank you.

Michael Cordano

So for what we believe are our strategic and fast-growing markets, we are almost exclusively in-house, and we are already there. So the OpEx you see from us today is enabling us to get there. As we look at other, maybe more niche, markets, we will evaluate whether we want to use a third-party partner to pursue that opportunity. So we will really increase what we believe is the core in-house IP development. That does a lot of things for us. Obviously, product differentiation is one. But ultimately, we get to a very clear time-to-market advantage, meaning we are able to launch products very close to the node ramp. So, all of those are things where we are advantaged by the internal development.

Steve Fox

Hi. Steve Fox with Cross Research. Just one question. So from an OpEx standpoint, you have given a lot of great updates on how the data center solutions roadmap has progressed the last year or 2. If you are still on a slippery slope with, say, where the bottom is in terms of the cycle, what's the commitment to continuing to invest, to continuing to do R&D with a customer? And is there some sort of near-term offset, with revenues accelerating further on this optionality on the core technology?

Steve Milligan

Are you – which area are you referring to?

Steve Fox

The data center solutions, so where you...

Steve Milligan

So Phil’s area.

Steve Fox

Yes, where you had a $35 billion TAM and you talked about the 4 million product.

Steve Milligan

Well, Phil's first goal – and Phil knows this – is to get off the payroll, okay? And he is very, very close to that. And so now, Phil, close your ears, because I still want to make sure you get off the payroll. But the drag from his business is insignificant at this point, alright? And so we think it's a good long-term investment for us. Several quarters ago, I couldn't have said that, but at a minimum, we want to get that business to a breakeven level. And like I said, at this point, the fact that it's not there is insignificant to our current financial situation. So that positions us to be able to make the right long-term investments, to take advantage of not only the product opportunity and the market opportunity, but to benefit from that financially as well.

Peter Andrew

I think we have time for one more.

Christian Schwab

Great. Thanks for taking my question. Christian Schwab from Craig-Hallum. So I was confused by the internal controller conversation. So you are moving to NVMe with your own internal controller. You have had a couple of partners for an extremely long period of time. Does that mean that you are trying to move away from working with them to develop your own controllers, or are you putting more of the content in-house while they are still fabbing those chips for you?

Michael Cordano

Yes. So let me just be clear. Within enterprise NVMe, that's an internal controller. That's well known. Our partners understand that. There are partners that provide us IP, but it's still our design and so on. And then, ultimately, how we do the fabbing of that depends on the particular part – same situation for client, same situation for our embedded products. Now, there are a number of other products that we have that are more sort of niche market. For example, our SATA client SSD continues to use an external controller. We don't see that as an emerging growth segment, hence that choice. The last point I'll make is on the enterprise side of things: our Intel joint development program goes on. So that's a controller we did jointly with them. So that gives you, hopefully, a little more clarity on where we are with our controller investments.

Steve Milligan

And he was talking about flash, obviously, not hard drives.

Michael Cordano

Yes. And hard drives are all in the traditional models; we have partners in that area that we work with.

Christian Schwab

Correct. And then I guess my last question then. Steve, we have been through a lot of cycles together.

Steve Milligan

Yes.

Christian Schwab

What is the timeframe in which you would logically expect to be back at the midpoint of your long-term target, give or take a few pluses and minuses?

Steve Milligan

Well, anything I say can be wrong. I mean, let me tell you what – I am going to answer your question, but it's going to be a non-answer. What we have been trying to execute to is that the market will begin to normalize, where margins start to get a bit better, as we move into the back half of next calendar year. Now, the challenge that we have is that the demand environment has incrementally gotten a little bit more negative. The signals are a little bit more negative recently – some of the smartphone volumes, all that stuff; I have to be a little careful, because customers don't like us talking about them. And so that could conceivably push things out a little bit; it's a little hard to say, because it is a dynamic environment. But when we talked about the wafer start cuts that we were doing, it was to get ourselves into an effectively balanced supply/demand situation as we exit the June quarter. I don't know what others are going to do, but that's what we were trying to orchestrate from our perspective. And we are going to continue to try to do that – to dial in supply and demand to the extent that we can. It's a little bit – now we are operating with these big, giant mega-fab kinds of things, and so you can't dial it in maybe as much as we historically have been able to in the drive business; it's a little bit more challenging. So that's the best I can do, Christian.

Christian Schwab

Thank you.

Steve Milligan

I am supposed to wrap it up. Well, anyway – sorry – thanks, everybody, for coming. We appreciate your interest in our company. We appreciate you all being here, and we are going to now exit for lunch. So you go out that way and then to your left. Alright. So thank you, everybody.
