000-077 Exam Dumps Source: xSeries Technical High Performance Servers V2
Test Code: 000-077
Test Name: xSeries Technical High Performance Servers V2
Vendor Name: IBM
Total: 147 Real Questions
Observed most of the 000-077 questions in the dumps that I prepared with.
Nice. I cleared the 000-077 exam. The killexams.com question bank helped a lot; very useful indeed. I cleared the 000-077 with 95%. I am sure anyone can pass the exam after completing your tests. The explanations were very helpful. Thanks. It was a great experience with killexams.com in terms of the collection of questions, their explanations, and the pattern in which you have set the papers. I am grateful to you and give full credit to you guys for my success.
000-077 exam prep turned out to be this easy.
I was about to give up on exam 000-077 because I wasn't confident whether I would pass or not. With just a week remaining, I decided to switch to killexams.com for my exam preparation. I never thought that the topics I had always run away from would be so much fun to study; its simple and short way of getting to the point made my preparation a lot easier. Full thanks to killexams.com; I never thought I would pass my exam, but I did pass with flying colors.
How many days of preparation are required to pass the 000-077 examination?
I have just passed my 000-077 exam. The questions are legitimate and accurate, which is the good news. I was assured a 99% pass rate and a money-back guarantee, but obviously I got excellent scores. That is the best news.
It is extraordinary! I got up-to-date dumps for the 000-077 examination.
I still remember the difficult time I had while studying for the 000-077 exam. I used to seek help from friends, but I felt most of the material was vague and overwhelming. Later, I found killexams.com and its material. Through that valuable material I learned everything from top to bottom. It was so precise. In the given questions, I answered all questions with the standard option. Thanks for bringing such lasting happiness into my profession.
Did you try this 000-077 real question bank and study guide?
I got several questions straight from this guide and made a fabulous 88% in my 000-077 exam. At that point, my colleague suggested I take the killexams.com dumps guide as a quick reference. It carefully covered all the material through short answers that were easy to remember. I had been worried about how to cover all of the material within a three-week window, and this settled it; my next step is to choose killexams.com for all my future tests.
Can you believe that all the 000-077 questions I studied were asked in the real test?
I would frequently miss classes, and that would have been a massive problem for me if my parents found out. I needed to cover my mistakes and make sure they could believe in me. I knew that one way to cover my errors was to do well in my 000-077 test, which was very near. If I did well in my 000-077 test, my parents would really love me again, and they did, because I was able to clear the test. It was killexams.com that gave me the precise instructions. Thank you.
That was terrific! I got up-to-date dumps for the 000-077 exam.
killexams.com is the top-class web page where my goals came true. Using the material for preparation genuinely brought the real spark to my studies, and I ended up obtaining quality marks in the 000-077 exam. It is quite simple to face any exam with the assistance of your test material. Thank you very much for everything. Keep up the top-class work, guys.
Is there any way to pass the 000-077 exam on the first attempt?
I had taken the 000-077 package from killexams.com as it offered a fair level of preparation, which ultimately gave me the best level of planning to reach a 92% score in the 000-077 exam. I really enjoyed the way the material presented the issues and cleared up the tricky techniques, and with its support I finally worked everything out. It made my preparation much simpler, and with the support of killexams.com I was prepared to do well in life.
These latest 000-077 dumps work great in the real test.
In the end, my 90% marks were more than I hoped for. When the 000-077 exam was only one week away, my planning was in a disorganized state. I expected that I would have to retake it in the event of failing to achieve 80% marks. Taking a colleague's advice, I bought the material from killexams.com and could take a light, steady course of preparation through the neatly composed material.
Genuinely first-class experience!
I am one of the high achievers in the 000-077 exam. What wonderful material they provided. Within a short time I grasped everything on all the topics. It was simply superb! I suffered a lot while preparing for my previous attempt, but this time I cleared my exam very easily, without tension and worries. It has truly been an admirable learning journey for me. Thanks a lot, killexams.com, for the real support.
IBM today announced that it is freeing its Watson-branded AI services, like the Watson Assistant for building conversational interfaces and Watson OpenScale for managing the AI life cycle, from its own cloud, allowing enterprises to take its platform and run it in their own data centers. In a way, you can think of this as Watson as a managed service.
"Customers are really struggling with infusing AI into their applications because the data is distributed in multiple places," IBM Watson's CTO and chief architect Ruchir Puri told me when I asked him for IBM's reasoning behind this move. "It's in these hybrid environments; they've got multiple cloud implementations, and they have data in their private cloud as well. They have been struggling because the suppliers of AI were trying to lock them into a specific implementation that isn't suited to this hybrid cloud environment."
So with this decision to bring Watson to any cloud, IBM wants to give these companies the option to bring AI to their data, which is considerably harder and more expensive to move, in any case. Puri also stressed that many organizations have long wanted to use AI to make their operations more productive, but they needed to run their AI tools in an environment they control and feel comfortable with.
At the core of the technical requirements for running Watson in a public or private cloud is IBM Cloud Private, the company's private cloud platform that uses open-source technologies for running tools and services like Kubernetes and Cloud Foundry. That's the platform that allows companies to then run Watson, too (which itself runs on containers).
Right now, the focus of this first release is on Watson Assistant and Watson OpenScale. "The capabilities we are releasing at the moment are based on our two flagship products. That addresses a really significant set of use cases that we come across," noted Puri. "In the remaining part of the year, we will bring the rest of the capabilities [to the platform]. For example, Watson Knowledge Studio will come along with it as well, and Watson's natural language understanding capabilities that we currently have available in our public cloud environment can be ported on to it as well."
With that, Puri argues, IBM will offer organizations a full spectrum of tools for developing and running AI models using structured and unstructured data, as well as a full monitoring and life cycle management suite.
In addition to this, IBM also today introduced a brand-new version of its Watson Machine Learning Accelerator, which brings high-performance GPU clustering to Power Systems and x86 systems and which promises to speed up AI performance by as much as 10x.
The company additionally announced IBM Business Automation Intelligence with Watson today, although it didn't really delve into the details. This new service, the company says, will give business leaders the ability "to apply AI directly to applications, strengthening the workforce, from clerical to knowledge workers, to intelligently automate work from the mundane to the complex." I'm not entirely sure what that means, but I'm sure the business leaders who buy this service will figure it out.
In this slidecast, Chris Porter and Jeff Kamiol from IBM describe how IBM High Performance Services deliver flexible, application-ready clusters in the cloud for organizations that need to quickly and economically add computing capacity for high performance application workloads.
IBM High Performance Services enable rapid deployment of technical computing, analytics or Hadoop workloads in the cloud. Organizations using the service can conveniently meet additional resource demands without the cost of purchasing or managing in-house infrastructure, minimizing their administrative burden and promptly addressing evolving business needs. The offerings include market-leading IBM Platform LSF and IBM Platform Symphony workload management software, IBM Spectrum Scale software-defined storage, IBM High Performance Services for Hadoop and the brand-new IBM High Performance Services for EDA. The software is integrated, provisioned and deployed as part of complete, integrated services that include bare-metal IBM SoftLayer infrastructure, optional InfiniBand interconnects and support from an experienced and dedicated cloud operations team. A global presence with a choice of data center location helps ensure that data regulations are met.
For now, AI systems are mostly machine learning-based and "narrow": powerful as they are by current standards, they are constrained to performing a few narrowly defined tasks. The AI of the next decade will leverage the greater power of deep learning and become broader, solving a wider array of more complex problems. Moreover, the general-purpose technologies used today for AI deployments will be replaced by an AI-specific, exponentially faster technology stack, and that's going to take a lot of money.
Seeking to take center stage in AI's unfolding, IBM, in combination with New York State and several technology heavyweights, is investing $2 billion in the IBM Research AI Hardware Center, focused on developing next-generation AI silicon, networking and manufacturing that will, IBM said, bring a 1,000x AI performance efficiency improvement over the next decade.
"Today, AI's ever-increasing sophistication is pushing the boundaries of the industry's existing hardware systems as users discover more ways to incorporate various sources of data from the edge, Internet of Things, and more," said Mukesh Khare, VP, IBM Research Semiconductor and AI Hardware Group, in a blog post announcing the project. "…Today's systems have achieved improved AI performance by infusing machine-learning capabilities with high-bandwidth CPUs and GPUs, specialized AI accelerators and high-performance networking equipment. To maintain this trajectory, new thinking is needed to accelerate AI performance scaling to match ever-increasing AI workload complexities."
IBM said the center will be the nucleus of a new ecosystem of research and commercial partners collaborating with IBM researchers. Partners announced today include Samsung for manufacturing and research; Mellanox Technologies for high-performance interconnect equipment; Synopsys for software platforms, emulation and prototyping, and IP for developing high-performance silicon chips; and semiconductor equipment companies Applied Materials and Tokyo Electron.
Hosted at SUNY Polytechnic Institute, Albany, New York, in collaboration with the neighboring Rensselaer Polytechnic Institute Center for Computational Innovations, the center and its partners will, IBM said, "develop a range of technologies from chip-level devices, materials, and architecture, to the software supporting AI workloads."
Big Blue said research at the center will focus on overcoming "present machine-learning limitations via approaches that include approximate computing through Digital AI Cores and in-memory computing through Analog AI Cores." These technologies will provide the thousand-fold increases in performance efficiency required for full realization of deep learning AI, the next major milestone in AI evolution, according to IBM.
"A key area of research and development will be systems that meet the demands of deep learning inference and training processes," Khare noted. "Such systems offer huge accuracy improvements over more conventional machine learning for unstructured data. These extreme processing demands will grow exponentially as algorithms become more complex in order to deliver AI systems with greater cognitive capability."
Khare said the research center will host R&D, emulation, prototyping, testing and simulation activities for new AI cores specifically designed for training and deploying advanced AI models, together with a test bed in which members can demonstrate improvements in real-world applications. Leading-edge wafer processing for the center will be done in Albany, with some support from IBM's Thomas J. Watson Research Center in Yorktown Heights, New York.
Obviously, it is a difficult task to pick solid certification question-and-answer resources with respect to review, reputation and validity, since individuals get scammed by choosing the wrong provider. Killexams.com makes it a point to serve its customers best with respect to exam dump updates and validity. The vast majority of customers who filed scam-report complaints against others come to us for the brain dumps and pass their exams cheerfully and effectively. We never compromise on our review, reputation and quality, because the killexams review, killexams reputation and killexams customer conviction are vital to us. In particular we take care of the killexams.com review, killexams.com reputation, killexams.com scam-report grievances, killexams.com trust, killexams.com validity, killexams.com reports and killexams.com scam claims. If you see any counterfeit report posted by our rivals under names like "killexams scam report grievance web," "killexams.com scam report," "killexams.com scam," "killexams.com complaint" or something like this, simply remember there are always terrible individuals damaging the reputation of good services for their own advantage. There are a great many satisfied clients who pass their exams using killexams.com brain dumps, killexams PDF questions, killexams practice questions and the killexams exam simulator. Visit killexams.com, our sample questions and test brain dumps, and our exam simulator, and you will realize that killexams.com is the best brain dumps site.
Pass your 000-077 exam on the first attempt!
killexams.com's brilliant 000-077 exam simulator is extremely empowering for our customers' exam prep. Tremendously crucial questions, points and definitions are highlighted in the brain dumps PDF. Gathering the data in a single place is a genuine help and lets you prepare for the IT certification exam within a short time span. The 000-077 exam covers the key points of interest. The killexams.com pass4sure dumps hold the fundamental questions and concepts of the 000-077 exam.
At killexams.com, we give completely vetted IBM 000-077 preparation assets, which are the best way to pass the 000-077 exam and to get certified by IBM. It is the best choice to speed up your career as an expert in the information technology industry. We are pleased with our reputation of helping individuals pass the 000-077 test on their first attempt. Our success rates over the previous two years have been excellent, thanks to our happy clients, who are now able to advance their careers in the fast track. killexams.com is the first choice among IT experts, particularly the ones who are hoping to move up through the progression levels more quickly in their individual organizations. IBM is the industry leader in information technology, and getting certified by them is an assured way to succeed in IT careers. We enable you to do exactly that with our superb IBM 000-077 preparation materials.
IBM 000-077 is ubiquitous all around the globe, and the business and software solutions provided by IBM are being embraced by nearly all organizations. They have helped drive a large number of organizations on the sure-shot path of success. Comprehensive knowledge of IBM products is viewed as a critical qualification, and the experts certified by IBM are exceptionally esteemed in all organizations.
We provide genuine 000-077 PDF exam questions and answers braindumps in two formats: PDF download and practice tests. Pass the IBM 000-077 real exam quickly and effectively. The 000-077 braindumps PDF format is available for reading and printing; you can print it and practice repeatedly. Our pass rate is as high as 98.9%, and the similarity rate between our 000-077 study guide and the real exam is 90%, in light of our seven years of teaching experience. Do you want success in the 000-077 exam in only one attempt? Start studying for the IBM 000-077 real exam right now.
killexams.com Huge Discount Coupons and Promo Codes are as follows:
WC2017 : 60% Discount Coupon for replete exams on website
PROF17 : 10% Discount Coupon for Orders greater than $69
DEAL17 : 15% Discount Coupon for Orders greater than $99
DECSPECIAL : 10% Special Discount Coupon for replete Orders
The only thing that is in any way important here is passing the 000-077 - xSeries Technical High Performance Servers V2 exam. All that you require is a high score on the IBM 000-077 exam. The only thing you have to do is download the 000-077 exam prep braindumps now. We will not let you down, with our unconditional guarantee. Our experts likewise keep pace with the most up-to-date exam in order to give you the most updated materials. You get three months' free access to updates from the date of purchase. Each applicant can afford the 000-077 exam dumps through killexams.com at a low cost. Frequently there is a discount for everyone as well.
Quality and Value for the 000-077 Exam: killexams.com practice exams for IBM 000-077 are written to the highest standards of technical accuracy, using only certified subject matter experts and published authors for development.
100% Guarantee to Pass Your 000-077 Exam: If you do not pass the IBM 000-077 exam using our killexams.com testing engine, we will give you a FULL REFUND of your purchase fee.
Downloadable, Interactive 000-077 Testing Engines: Our IBM 000-077 preparation material provides you with everything you will need to take the IBM 000-077 exam. Details are researched and produced by IBM certification experts who constantly use industry experience to produce accurate and logical material.
- Comprehensive questions and answers about the 000-077 exam
- 000-077 exam questions accompanied by exhibits
- Answers verified by experts and almost 100% correct
- 000-077 exam questions updated on a regular basis
- 000-077 exam preparation in multiple-choice questions (MCQs)
- Tested multiple times before publishing
- Try the free 000-077 exam demo before you decide to buy it from killexams.com
In-Depth: Scale the Datacenter with Windows Server SMB Direct
RDMA networking has enabled high-performance computing for years, but Windows Server 2012 R2 with SMB Direct is bringing it to the mainstream.
File-based storage has grown tremendously over the last several years, far outpacing block storage, even as both grow at double-digit rates. Cloud datacenters are deploying file-based protocols at an accelerating pace for virtualized environments, as well as for database infrastructure deployed for Big Data applications. The introduction of Server Message Block (SMB) 3.0, with the efficiency and performance of the SMB Direct protocol, has opened new opportunities for file storage in Windows-based datacenters. SMB Direct, a key component of SMB 3.0, can use networking based on the Remote Direct Memory Access (RDMA) protocol to deliver near-SAN-level performance and availability, with integrated data protection and optimized data transfer between storage and server (see Figure 1). Figure 1. RDMA networking allows high-speed client-to-file-service data transfers.
RDMA is a specification that has long provided a means of reducing latency in the transmission of data from one point to another by placing the data directly into its final destination memory, thereby eliminating unnecessary CPU and memory bus utilization. Used primarily in high-performance computing (HPC) for more than a decade, RDMA is now on the cusp of becoming a mainstream means of providing a scalable and high-performance infrastructure. A key factor fueling its growing use is that Windows Server 2012 R2 offers several RDMA networking options. I'll review and compare those options within Windows Server 2012 R2 environments.
Windows Scale-Out File Services
Windows Server 2012 R2 provides massive scale to transform datacenters into an elastic, always-on, cloud-like operation designed to run the largest workloads. The server OS provides automated protection and aims to offer cost-effective business continuity to ensure uptime. Windows Server 2012 R2 provides a rich set of storage features letting IT managers move to lower-cost industry-standard hardware rather than purpose-built storage devices, without having to compromise on performance or availability. A vital storage capability in Windows Server 2012 R2 is the Scale-Out File Server (SOFS), which allows the storage of server application data, such as Hyper-V virtual machine (VM) files, on SMB file shares. All file shares are online on all nodes simultaneously. This configuration is commonly referred to as an active-active cluster configuration.
A SOFS allows for continuously available file shares. Continuous availability tracks file operations on a highly available file system so that clients can fail over to another node of the cluster without interruption. This is also known as Transparent Failover.
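The failover behavior described above can be sketched as a retry loop that moves a file operation to the next active-active node when one becomes unreachable. This is only a conceptual illustration of the semantics; the class, node names and callback here are hypothetical, not the actual SMB client interface, which performs this resumption transparently in the kernel.

```python
import errno

class SofsClient:
    """Conceptual sketch of SOFS Transparent Failover semantics:
    retry a file operation against the next active-active cluster node."""

    def __init__(self, nodes):
        self.nodes = list(nodes)              # active-active cluster nodes

    def read(self, path, read_from_node):
        last_error = None
        for node in self.nodes:               # fail over node by node
            try:
                return read_from_node(node, path)
            except ConnectionError as exc:
                last_error = exc              # node down: try the next one
        raise OSError(errno.EHOSTUNREACH,
                      "all cluster nodes unreachable") from last_error

# Simulate node1 being down while node2 keeps serving the share.
def fake_read(node, path):
    if node == "node1":
        raise ConnectionError("node1 offline")
    return b"vm-disk-bytes"

data = SofsClient(["node1", "node2"]).read(r"\\sofs\share\vm.vhdx", fake_read)
```

The key property mirrored here is that the caller's read completes successfully even though the first node failed mid-operation.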
The Role of SMB Direct
The SMB 3.0 protocol in Windows Server 2012 R2 utilizes the Network Direct Kernel (NDK) layer within the Windows Server OS to leverage RDMA network adapters (see Figure 2). Using RDMA enables storage that rivals costly and infrastructure-intensive Fibre Channel SANs in efficiency, with lower latency, while operating over standard 10 Gbps and 40 Gbps Ethernet infrastructure. RDMA network adapters offer this performance capability by operating at line rate with very low latency, thanks to CPU bypass and zero copy (the ability to write directly to the memory of the remote storage node). To obtain these advantages, all transport protocol processing must be performed in the adapter hardware, completely bypassing the host OS. Figure 2. RDMA networking configuration on Windows Server 2012 R2.
With NDK, SMB can perform data transfers directly from memory, through the adapter, to the network, and over to the memory of the application requesting data from the file share. This capability is especially useful for I/O-intensive workloads such as Hyper-V or SQL Server, resulting in remote file server performance comparable to local storage.
In contrast, in traditional networking, a request from an application to a remote storage location must fade through numerous stages and buffers (involving data copies) on both the client and server side, such as the SMB client or server buffers, the transport protocol drivers in the networking stack, and the network card drivers.
With SMB Direct, the RDMA NIC transfers data straight from the SMB client buffer, through the client NIC to the server NIC, and up to the SMB server buffer, and vice versa. This direct transfer operation allows the application to access remote storage at the same performance as local storage.
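The buffered-versus-zero-copy distinction can be illustrated in miniature with Python's own buffer machinery. This is only a loose analogy for the data-path difference, not networking code: slicing a bytes object allocates and copies (like the staged buffer copies in the traditional stack), while a memoryview exposes a window onto the same memory with no copy (like an RDMA NIC placing data directly into the destination application's buffer).

```python
# Buffered path: slicing a bytes object allocates and copies, the way a
# traditional stack copies data through SMB buffers, the transport
# protocol driver and the NIC driver on each side.
payload = bytes(range(256)) * 4096           # ~1 MB of sample data
copied = payload[4096:8192]                  # this slice copies 4 KB

# Zero-copy path: a memoryview exposes a window onto the same buffer with
# no copy, loosely analogous to RDMA placing data directly into the
# destination application's memory.
view = memoryview(payload)[4096:8192]        # no bytes are copied here
```

Both paths yield the same bytes; the difference is that the memoryview still references the original buffer rather than a duplicate, which is the property RDMA exploits at hardware speed.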
Windows Server 2012 provides built-in support for using SMB Direct with Ethernet RDMA NICs, including iWARP (RDMA/TCP) and RoCE (RDMA/UDP) NICs, to support high-speed data transfers. These NICs implement RDMA in hardware so that they can transfer data between them without involving the host CPU. As a result, SMB Direct is extremely fast, with client-to-file-server performance almost equaling that of local storage.
RDMA NICs offload the server CPU, resulting in more efficient Microsoft virtualized datacenter installs. Windows Server 2012 SMB Direct over RDMA provides higher performance by giving direct access to data that resides on a remote file server, while the CPU reduction enables a larger number of VMs per Hyper-V server, resulting in CapEx and OpEx savings in power dissipation, system configuration and deployment scale throughout the life of the installation. Native system software support for RDMA networking in Windows Server 2012 R2 simplifies storage and VM management for enterprise and cloud IT administrators, with no network reconfiguration required.
Live migration is an important VM mobility feature, and improving the performance of live migration has been a consistent focus for Windows Server. In Windows Server 2012 R2, Microsoft took these performance improvements to the next level. Live migration with RDMA is a new feature; it delivers the highest performance for migrations by offloading data transfers to RDMA NIC hardware.
iWARP: RDMA over TCP/IP
iWARP is an implementation of RDMA using ubiquitous Ethernet-TCP/IP networking as the network transport. iWARP NICs implement a hardware TCP/IP stack that eliminates the inefficiencies associated with software TCP/IP processing, while preserving all the benefits of the proven TCP/IP protocol. On the wire, iWARP traffic is thus identical to other TCP/IP applications and requires no special support from switches and routers, or changes to network devices. Thanks to the hardware-offloaded TCP/IP, iWARP RDMA NICs offer high-performance and low-latency RDMA operation that's comparable to the latest InfiniBand speeds, plus native integration within today's large Ethernet-based networks and clouds.
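The layering contrast the article draws can be summarized as data, listed from the wire up. The iWARP layer names (MPA, DDP, RDMAP) come from the IETF iWARP RFCs; the stack lists themselves are an illustrative simplification, not an exhaustive protocol model. Because iWARP rides on TCP/IP, its packets cross ordinary IP routers, whereas RoCE v1 carries InfiniBand transport directly over Ethernet L2 and so cannot.

```python
# Rough wire-up view of the two Ethernet RDMA stacks discussed here.
IWARP_STACK = ["Ethernet", "IP", "TCP", "MPA", "DDP", "RDMAP"]
ROCE_V1_STACK = ["Ethernet", "InfiniBand transport"]

def ip_routable(stack):
    """Traffic can cross subnet boundaries only if it rides on an IP layer."""
    return "IP" in stack
```

Applying the check shows why iWARP needs no special support from routers while RoCE v1 was confined to a single subnet: `ip_routable(IWARP_STACK)` is true and `ip_routable(ROCE_V1_STACK)` is false.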
iWARP is able to dramatically improve upon the most common and widespread Ethernet communications in use today and deliver on the promise of a single, converged Ethernet network for carrying LAN, SAN and RDMA traffic with the unrestricted routability and scalability of TCP/IP. Today, 40 Gbps Ethernet (40 GbE) iWARP controllers and adapters are available from Chelsio Communications, while Intel Corp. has also announced plans for availability of iWARP Ethernet controllers integrated within upcoming Intel server chipsets.
The iWARP protocol is the open Internet Engineering Task Force (IETF) standard for RDMA over Ethernet. iWARP adapters are fully supported by the OpenFabrics Alliance Enterprise Software Distribution (OFED), with no changes needed for applications to migrate from specialized OFED-compliant RDMA fabrics such as InfiniBand to Ethernet.
Initially aimed at high-performance computing applications, iWARP is now also finding a home in datacenters, thanks to its availability on high-performance 40 GbE NICs and increased datacenter demand for low latency, high bandwidth and low server CPU utilization. It has also been integrated into server OSes such as Microsoft Windows Server 2012 with SMB Direct, which can seamlessly take advantage of iWARP RDMA without user intervention.
InfiniBand
InfiniBand is an I/O architecture designed to increase the communication speed between CPUs, devices within servers and subsystems located throughout a network. InfiniBand is a point-to-point, switched I/O fabric architecture. Both devices at each end of a link have full access to the communication path. To go beyond a point and traverse the network, switches come into play. By adding switches, multiple points can be interconnected to create a fabric. As more switches are added to a network, the aggregated bandwidth of the fabric increases.
High-performance clustering architectures have provided the main opportunity for InfiniBand deployment. Using the InfiniBand fabric as the cluster inter-process communications (IPC) interconnect may boost cluster performance and scalability while improving application response times. However, using InfiniBand requires deploying a separate infrastructure in addition to the requisite Ethernet network. The added costs in acquisition, maintenance and management have prompted interest in Ethernet-based RDMA alternatives such as iWARP.
Because it's layered on top of TCP, iWARP is fully compatible with existing Ethernet switching equipment, which is able to process iWARP traffic out of the box. In comparison, deploying InfiniBand requires environments where two separate network infrastructures are installed and managed, as well as specialized InfiniBand-to-Ethernet gateways for bridging between the two infrastructures.
RDMA over Converged Ethernet (RoCE)
The third RDMA networking option is RDMA over Converged Ethernet (RoCE), which essentially implements InfiniBand over Ethernet. RoCE NICs are offered by Mellanox Technologies. Though it utilizes Ethernet cabling, this approach does suffer from deployment difficulty and costs due to requiring support for complex and expensive "lossless" Ethernet fabrics and Data Center Bridging (DCB) protocols. In addition, RoCE for a long time lacked routability support, which limited its operation to a single Ethernet subnet.
Instead of using pervasive TCP/IP networking, RoCE relies on InfiniBand protocols at Layer 3 (L3) and higher layers, in combination with Ethernet at the Link Layer (L2) and Physical Layer (L1). RoCE leverages Converged Ethernet, also known as DCB or Converged Enhanced Ethernet, as a lossless physical-layer networking medium. RoCE is similar to the Fibre Channel over Ethernet (FCoE) protocols in relying on networking infrastructure with DCB protocols. However, such support has been viewed as a significant impediment to FCoE deployment, which raises similar concerns for RoCE.
The just-released version 2 of the RoCE protocol does away with the IB network layer, replacing it with the more commonly used (connectionless) UDP and IP layers to provide routability. However, RoCE v2 does not specify how lossless operation will be provided over an IP network, or how congestion control will be handled. RoCE v2 thus rests on an inconsistent premise: it continues to require DCB for Ethernet, while no longer operating within the confines of a single Ethernet network.
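The layering difference is concrete: RoCE v2 carries InfiniBand transport headers inside UDP datagrams, using the IANA-assigned UDP destination port 4791, whereas iWARP rides on TCP. The following Python fragment is only an illustrative sketch of that encapsulation (it packs a bare RFC 768 UDP header; it is not a working RDMA stack, and the payload that would follow, an InfiniBand Base Transport Header plus data, is omitted):

```python
import struct

ROCE_V2_UDP_PORT = 4791  # IANA-assigned UDP destination port for RoCE v2

def build_udp_header(src_port: int, payload_len: int) -> bytes:
    """Pack a minimal UDP header (RFC 768 layout: source port,
    destination port, length, checksum) as a RoCE v2 packet would
    carry it. Checksum is left at zero here for simplicity."""
    length = 8 + payload_len  # UDP header itself is 8 bytes
    checksum = 0
    return struct.pack("!HHHH", src_port, ROCE_V2_UDP_PORT, length, checksum)

header = build_udp_header(src_port=49152, payload_len=64)
print(struct.unpack("!HHHH", header))  # (49152, 4791, 72, 0)
```

Routers forward such packets like any other UDP/IP traffic, which is what gives RoCE v2 its routability; what UDP does not give it is TCP's loss recovery, hence the continued reliance on DCB.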
Developed by Datawire, Ambassador is an open source API gateway designed specifically for use with the Kubernetes container orchestration framework. At its core, Ambassador is a control plane tailored for edge/API configuration, managing the Envoy Proxy “data plane”. Envoy itself is a cloud native Layer 7 proxy and communication bus used for handling “edge” ingress and service-to-service networking communication. Although originating from Lyft, Envoy is rapidly becoming the de facto proxy for modern networking, and can be found in practically all of the public cloud vendors' offerings, as well as in bespoke usage by many large end-user organisations like eBay, Pinterest and Groupon.
This article provides an insight into the creation of Ambassador, and discusses the technical challenges and lessons learned from building a developer-focused control plane for managing ingress traffic within microservice-based applications that are deployed into a Kubernetes cluster.
The Emerging “Cloud Native” Fabric: Kubernetes and Envoy
Although the phrase “cloud native” is becoming as much of an overloaded term as “DevOps” and “microservices”, it is increasingly gaining traction throughout the IT industry. According to Gartner, the 2018 worldwide public cloud service revenue forecast was in the region of $175 billion, and this could grow by over 15% next year. Although the current public cloud market is dominated by only a few key players that offer mostly proprietary technologies (and increasingly, and sometimes controversially, open source-as-a-service), the Cloud Native Computing Foundation (CNCF) was founded in 2015 by the Linux Foundation to provide a place for discussion and hosting of "open source components of a full stack cloud native environment".
Possibly learning from the journey previously undertaken by the OpenStack community, the early projects supported by the CNCF were arguably less ambitious in scope, provided clearer (opinionated) abstractions, and were also proven in real-world usage (or inspired by real-world usage, in the case of Kubernetes). Two key platform components that have emerged from the CNCF are the Kubernetes container orchestration framework, originally contributed by Google, and the Envoy proxy for edge and service-to-service networking, originally donated by Lyft. Even when combined, these two technologies don't provide the full Platform-as-a-Service (PaaS) offering that many developers want. However, Kubernetes and Envoy are being included within many PaaS-like offerings.
Many PaaS vendors, and also end-user engineering teams, are treating these technologies as the “data plane” for cloud native systems: i.e. the part of the system that does the “heavy lifting”, such as orchestrating containers and routing traffic based on Layer 7 metadata (such as HTTP URIs and headers, or MongoDB protocol metadata). Accordingly, a lot of innovation and commercial opportunity is focused on creating an effective “control plane”, which is where the end-user interacts with the technology, specifies configuration to be enacted by the data plane, and observes any metrics or logging.
The Kubernetes control plane is largely focused around a series of well-specified REST-like APIs (known simply as “the Kubernetes API”), and the associated ‘kubectl’ CLI tool provides a human-friendly abstraction over these APIs. The Envoy v1 control plane was initially based around JSON config loaded from files, with several loosely-defined APIs that allowed selective updating. These APIs have subsequently evolved into the Envoy v2 API, which provides a series of gRPC-based APIs that are strongly typed via the use of Protocol Buffers. However, initially there wasn't an Envoy analogue to the Kubernetes kubectl tool, and this led to challenges in adoption by some teams. Where there are challenges, though, there are also opportunities within the implementation of a human-friendly control plane.
“Service Mesh-all-the-things”...Maybe?
If we focus on the networking control plane, it would be difficult to miss the emergence of the concept of the “service mesh”. Technologies like Istio, Linkerd and Consul Connect aim to manage cross-cutting service-to-service (“east-west”) traffic within microservice-based systems. Indeed, Istio itself is effectively a control plane that enables a user to manage Envoy Proxy as the underlying data plane for managing Layer 7 networking traffic across the mesh. Linkerd offers its own (now Rust-based) proxy as the data plane, and Consul Connect offers both a bespoke proxy and, more recently, support for Envoy.
Istio architecture, showing the Envoy Proxy data plane at the top half of the diagram, and the control plane below (image courtesy of Istio documentation)
The important thing to remember with a service mesh is the assumption that you typically exert a high degree of ownership and control over both parties communicating over the mesh. For example, two services may be built by separate engineering departments, but they will typically work for the same organisation; or one service may be a third-party application, but it is deployed within your trusted network boundary (which may span multiple data centers or Virtual Private Clouds). Here your operations team will typically agree on sensible communication defaults, and service teams will independently configure inter-service routing. In these scenarios you may not fully trust each service, and you most certainly will want to implement protections like rate limiting and circuit breaking, but fundamentally you can investigate and change any bad behaviour detected. This is not true, however, for managing edge or ingress (“north-south”) traffic that originates from outside your network boundary.
Cluster “ingress” traffic generally originates from sources outside of your direct control
Any communication originating from outside your trusted network can be from a bad actor, with motivations that are intentional (e.g. cyber criminals) or otherwise (e.g. a broken client library within a mobile app), and therefore you must put appropriate defenses in place. Here the operations team will specify sensible system defaults, and also adjust these in real time based on external events. In addition to rate limiting, you probably also want the ability to configure global and API-specific load shedding, for example if the backend services or datastores become overwhelmed, and also to implement DDoS protection (which may also be time- or geographically-specified). Service development teams also want access to the edge: to configure routing for a new API, to test or release a new service via traffic shadowing or canary releasing, or for other tasks.
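To make one of these edge safeguards concrete, rate limiting is commonly implemented as a token bucket: requests may burst up to the bucket's capacity, and tokens refill at a steady rate. The sketch below is a generic illustration of the idea, not Ambassador's or Envoy's actual rate-limiting implementation:

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter: allows short bursts up to
    `capacity` requests, refilling at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # shed this request

bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(6)]
print(results)  # typically five True values, then False once the burst is spent
```

A production gateway applies the same idea per client, per API, or globally, often backed by a shared store so that limits hold across gateway replicas.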
As a quick aside, for further discussion on the (sometimes confusing) role of API gateways, Christian Posta has recently published an interesting blog post, “API Gateways Are Going Through an Identity Crisis”. I have also written articles about the role of an API gateway during a cloud/container migration or digital transformation, and how API gateways can be integrated with modern continuous delivery patterns.
Although at first glance these service mesh and edge/API gateway use cases may appear very similar, we believe there are subtle (and not so subtle) differences, and this impacts the design of the associated inter-service and edge control planes.
Designing an Edge Control Plane
The choice of control plane is influenced heavily by the scope of control required, and the persona(s) of the primary people using it. My colleague Rafael Schloming has talked about this before at QCon San Francisco, where he discussed how the requirements to centralise or decentralise control, and also the development/operation lifecycle stage a service is currently at (prototype, mission critical, etc.), impact the implementation of the control plane.
As mentioned above, taking an edge proxy control plane as the example, a centralised operations or SRE team may want to specify globally sensible defaults and safeguards for all ingress traffic. However, the (multiple) decentralised product development teams working at the front line and releasing functionality will want fine-grained control for their services in isolation, and potentially (if they are embracing the “freedom and responsibility” model) the ability to override global safeguards locally.
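This split between centrally owned defaults and per-service overrides can be modelled as a simple layered merge. The sketch below is purely illustrative; the key names are invented and do not correspond to Ambassador's actual configuration schema:

```python
# Globally sensible defaults, as a centralised operations/SRE team
# might specify them for all ingress traffic (hypothetical keys).
GLOBAL_DEFAULTS = {"timeout_ms": 3000, "retries": 2, "rate_limit_rps": 100}

def effective_config(service_overrides: dict) -> dict:
    """Merge a service team's local overrides over the global defaults,
    so each service gets fine-grained control without restating everything."""
    merged = dict(GLOBAL_DEFAULTS)
    merged.update(service_overrides)
    return merged

# A product team overrides one safeguard locally for its own service.
print(effective_config({"timeout_ms": 500}))
# -> {'timeout_ms': 500, 'retries': 2, 'rate_limit_rps': 100}
```

Whether local overrides may loosen (rather than only tighten) the global safeguards is exactly the "freedom and responsibility" policy decision discussed above.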
A conscious choice made by the Ambassador community was that the primary persona targeted by the Ambassador control plane is the developer or application engineer, and therefore the focus of the control plane was on decentralised configuration. Ambassador was built to be Kubernetes-specific, and so a logical choice for specifying edge configuration was to stay close to the Kubernetes Service specifications that were contained within YAML files and loaded into Kubernetes via kubectl.
Options for specifying Ambassador configuration included using the Kubernetes Ingress object, writing custom Kubernetes annotations, or defining Custom Resource Definitions (CRDs). Ultimately the use of annotations was chosen, as they were simple and presented a minimal learning curve for the end-user. Using Ingress may have appeared to be the most obvious first choice, but unfortunately the specification for Ingress has been stuck in perpetual beta, and other than the “lowest common denominator” functionality for managing ingress traffic, not much else has been agreed upon.
An example of an Ambassador annotation that demonstrates simple endpoint-to-service routing on a Kubernetes Service can be seen here:

kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: my_service_mapping
      prefix: /my-service/
      service: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
The configuration within the getambassador.io/config block should be relatively self-explanatory to anyone who has configured an edge proxy, reverse proxy or API gateway before. Traffic sent to the prefix endpoint will be “mapped” or routed to the “my-service” Kubernetes service. As this article is primarily focused on the design and implementation of Ambassador, we won't cover all of the functionality that can be configured, such as routing (including traffic shadowing), canarying (with integration with Prometheus for monitoring) and rate limiting. Although Ambassador is focused on the developer persona, there is also extensive support for operators, and centralised configuration can be specified for authentication, TLS/SNI, tracing and service mesh integration.
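The routing behaviour a Mapping expresses, requests whose path starts with a given prefix going to a named Kubernetes service, amounts to longest-prefix matching over the set of configured prefixes. A toy model of that lookup (an illustration only, not Ambassador's code; the second mapping is hypothetical) looks like this:

```python
# Prefix -> upstream service, as two Mappings might configure it.
MAPPINGS = {
    "/my-service/": "my-service",
    "/my-service/admin/": "my-service-admin",  # hypothetical second mapping
}

def route(path: str):
    """Return the service whose prefix is the longest match for `path`,
    or None when no mapping applies (the proxy would answer 404)."""
    matches = [p for p in MAPPINGS if path.startswith(p)]
    return MAPPINGS[max(matches, key=len)] if matches else None

print(route("/my-service/users/42"))       # my-service
print(route("/my-service/admin/metrics"))  # my-service-admin
print(route("/other/"))                    # None
```

Longest-match semantics are what let a more specific mapping carve traffic out of a broader one without the two conflicting.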
Let’s now turn our attention back to the evolution of Ambassador over the past two years.
Ambassador < v0.40: Envoy v1 APIs, Templating, and Hot Restarts
Ambassador itself is deployed within a container as a Kubernetes service, and uses the annotations added to Kubernetes Services as its core configuration model. This approach enables application developers to manage routing as part of their Kubernetes service definition workflow (perhaps as part of a “GitOps” approach). Translating the simple Ambassador annotation config into valid Envoy v1 config is not a trivial task. By design, Ambassador's configuration isn't based on the same conceptual model as Envoy's configuration; we deliberately wanted to aggregate and simplify operations and config, and therefore a fair amount of logic within Ambassador translates from one set of concepts to the other.
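That translation step can be caricatured in a few lines: a hedged sketch that turns an Ambassador-style Mapping into an Envoy-v1-style route dictionary. Both dictionaries here are simplified stand-ins, not the real schemas, and the real translation does far more (validation, cluster generation, filters):

```python
def mapping_to_envoy_route(mapping: dict) -> dict:
    """Translate a simplified Ambassador Mapping into a simplified
    Envoy-style route entry. Illustrative only: real Ambassador
    aggregates many Mappings and emits full listener/cluster config."""
    return {
        "prefix": mapping["prefix"],
        # Envoy routes point at named clusters, so derive a cluster name.
        "cluster": "cluster_" + mapping["service"].replace("-", "_"),
        "timeout_ms": mapping.get("timeout_ms", 3000),  # invented default
    }

mapping = {"name": "my_service_mapping",
           "prefix": "/my-service/",
           "service": "my-service"}
print(mapping_to_envoy_route(mapping))
# -> {'prefix': '/my-service/', 'cluster': 'cluster_my_service', 'timeout_ms': 3000}
```

The point of the sketch is the shape of the problem: two different conceptual models, with deterministic logic in between, which is exactly where the intermediate representation discussed below lives.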
Specifically, when a user applies a Kubernetes manifest containing Ambassador annotations, the following steps occur:
There were many benefits to this initial implementation: the mechanics involved were fundamentally simple, the transformation of Ambassador config into Envoy config was reliable, and the file-based hot restart integration with Envoy was dependable.
However, there were also notable challenges with this version of Ambassador. First, although the hot restart was effective for the majority of use cases, it was not very fast, and some users (particularly those with large application deployments) found it was limiting the frequency with which they could change their configuration. Hot restart can also inappropriately drop connections, especially long-lived connections like WebSockets or gRPC streams.
More crucially, though, the first implementation of the Ambassador-to-Envoy intermediate representation (IR) allowed rapid prototyping but was primitive enough that it proved very difficult to make substantial changes. While this was a pain point from the beginning, it became a critical issue as Envoy shifted to the Envoy v2 API. It was clear that the v2 API would offer Ambassador many benefits, as Matt Klein outlined in his blog post, “The universal data plane API”, including access to new features and a solution to the connection-drop problem noted above, but it was also clear that the existing IR implementation was not capable of making the leap.
Ambassador Now: Envoy v2 APIs (with ADS), Intermediate Representations, and Testing with KAT
In consultation with the Ambassador community, the Datawire team (stewarded by Flynn, lead engineer for Ambassador) undertook a redesign of the internals of Ambassador in 2018. This was driven by two key goals. First, we wanted to integrate Envoy's v2 configuration format, which would enable support for features such as Server Name Indication (SNI), label-based rate limiting, and improved authentication. Second, we also wanted to do much more robust semantic validation of Envoy configuration, due to its increasing complexity (which was particularly apparent when configuring Envoy for use with large-scale application deployments).
We started by restructuring the Ambassador internals more along the lines of a multipass compiler. The class hierarchy was made to more closely mirror the separation of concerns between the Ambassador configuration resources, the IR, and the Envoy configuration resources. Core parts of Ambassador were also redesigned to facilitate contributions from the community outside Datawire. We decided to take this approach for several reasons. First, Envoy Proxy is a very fast-moving project, and we realised that we needed an approach where a seemingly minor Envoy configuration change didn't result in days of reengineering within Ambassador. In addition, we wanted to be able to provide semantic verification of configuration.
As we started working more closely with Envoy v2, a testing challenge was quickly identified. As more and more features were being supported in Ambassador, more and more bugs appeared in Ambassador's handling of less common but completely valid combinations of features. This led to the creation of a new testing requirement: Ambassador's test suite needed to be reworked to automatically manage many combinations of features, rather than relying on humans to write each test individually. Moreover, we wanted the test suite to be fast in order to maximise engineering productivity.
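Generating tests over combinations of features, rather than hand-writing each case, is mechanically simple. The sketch below is a generic illustration of the idea (the feature axes and their values are invented for the example, not KAT's actual API):

```python
import itertools

# Hypothetical feature axes an edge proxy test suite might combine.
TLS = [False, True]
AUTH = ["none", "basic", "external"]
CANARY_WEIGHT = [0, 50]

def generate_cases():
    """Yield one test case per combination of feature settings,
    so rarely-paired but valid combinations are never skipped."""
    for tls, auth, weight in itertools.product(TLS, AUTH, CANARY_WEIGHT):
        yield {"tls": tls, "auth": auth, "canary_weight": weight}

cases = list(generate_cases())
print(len(cases))  # 2 * 3 * 2 = 12 combinations
```

The payoff is exactly the class of bug described above: combinations no human would think to write a test for are exercised automatically, at the cost of a combinatorially growing suite, which is why suite speed became the next concern.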
This meant that as part of the Ambassador re-architecture, we also created the Kubernetes Acceptance Test (KAT) framework. KAT is an extensible test framework that:
KAT is designed for performance: it batches test setup upfront, and then runs all the queries in step 3 asynchronously with a high-performance HTTP client. The traffic driver in KAT runs locally using one of our other open source tools, Telepresence, which makes it easier to debug issues.
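The batch-then-run-concurrently pattern can be sketched with Python's asyncio. This is a generic illustration of the scheduling idea only; KAT's real driver issues actual HTTP requests against the cluster via a high-performance client, whereas here each query is a stand-in coroutine:

```python
import asyncio

async def run_query(query: str) -> str:
    """Stand-in for one HTTP request; real code would hit the cluster."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"{query}: ok"

async def run_batch(queries):
    # All queries are prepared upfront and awaited concurrently, so
    # total wall time is roughly one round trip rather than N of them.
    return await asyncio.gather(*(run_query(q) for q in queries))

results = asyncio.run(run_batch([f"GET /svc-{i}/" for i in range(100)]))
print(len(results))  # 100
```

Run sequentially, 100 queries at 10 ms each would take about a second; gathered concurrently they complete in roughly the time of the slowest one.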
With the KAT test framework in place, we quickly ran into some issues with Envoy v2 configuration and hot restart, which presented the opportunity to switch to using Envoy's Aggregated Discovery Service (ADS) APIs instead of hot restart. This completely eliminated the requirement for a process restart upon configuration changes, which previously we had found could lead to dropped connections under high load or with long-lived connections. We decided to use the Envoy go-control-plane to interface with the ADS. This did, however, introduce a Go-based dependency into the previously predominantly Python-based Ambassador codebase.
With a new test framework, a new IR generating valid Envoy v2 configuration, and the ADS, the major architectural changes in Ambassador 0.50 were complete. Now, when a user applies a Kubernetes manifest containing Ambassador annotations, the following steps occur:
Just before release we hit one more issue. On the Azure Kubernetes Service, Ambassador annotation changes were no longer being detected. Working with the highly responsive AKS engineering team, we were able to identify the issue: namely, the Kubernetes API server in AKS is exposed through a chain of proxies that was dropping some requests. The proper mitigation for this was to support calling the FQDN of the API server, which is provided through a mutating webhook in AKS. Unfortunately, support for this feature was not available in the official Kubernetes Python client. We therefore elected to switch to the Kubernetes Golang client, introducing yet another Go-based dependency.
Key Takeaways from Building an Envoy Control Plane (Twice!)
As Matt Klein mentioned at the inaugural EnvoyCon, with the current popularity of the Envoy Proxy in the cloud native technology domain, it's often easier to ask who isn't using Envoy. We know that Google's Istio has helped raise the profile of Envoy with Kubernetes users, and all of the other major cloud vendors are investing in Envoy, for example within AWS App Mesh and Azure Service Fabric Mesh. At EnvoyCon we also heard how several big players such as eBay, Pinterest and Groupon are migrating to using Envoy as their primary edge proxy. There are also several other open source Envoy-based edge proxy control planes emerging, such as Istio Gateway, Solo.io Gloo, and Heptio Contour. I would argue that Envoy is indeed becoming the universal data plane of cloud native communications, but there is much work still to be done within the domain of the control plane.
In this article we've discussed how the Datawire team and the Ambassador open source community have successfully migrated the Ambassador edge control plane to use the Envoy v2 configuration and ADS APIs. We've learned a lot in the process of building Ambassador 0.50, and we are keen to highlight our key takeaways as follows:
Migrating Ambassador to the Envoy v2 configuration and ADS APIs was a long and difficult journey that required lots of architecture and design discussion, and plenty of coding, but early feedback and results have been positive. Ambassador 0.50 is available now, so you can take it for a test run and share your feedback with the community on our Slack channel or on Twitter.
About the Author
Daniel Bryant is leading change within organisations and technology, and currently works as a freelance consultant, of which Datawire is a client. His current work includes enabling agility within organisations by introducing better requirement gathering and planning techniques, focusing on the relevance of architecture within agile development, and facilitating continuous integration/delivery. Daniel's current technical expertise focuses on 'DevOps' tooling, cloud/container platforms and microservice implementations. He is also a leader within the London Java Community (LJC), contributes to several open source projects, writes for well-known technical websites such as InfoQ, DZone and Voxxed, and regularly presents at international conferences such as QCon, JavaOne and Devoxx.