(19) United States
(12) Patent Application Publication — JIBAJA et al.
(10) Pub. No.: US 2023/0376390 A1
(43) Pub. Date: Nov. 23, 2023

(54) CREATING A CONTAINERIZED DATA ANALYTICS PIPELINE

(71) Applicant: PURE STORAGE, INC., Santa Clara, CA (US)

(72) Inventors: IVAN JIBAJA, San Jose, CA (US); CURTIS PULLEN, Victoria (CA); PRASHANT JAIKUMAR, Sunnyvale, CA (US); STEFAN DORSETT, San Jose, CA (US); GAURAV JAIN, Sunnyvale, CA (US); NEIL VACHHARAJANI; … CHELLAPPA, Sunnyvale, CA (US)

(21) Appl. No.: 18/362,81

(22) Filed: Jul. 31, 2023

Related U.S. Application Data

(63) Continuation of application filed on Feb. 10, 2022, now Pat. No. 11,714,728, which is a continuation of application No. 17/010,868, filed on Sep. 2, 2020, now Pat. No. 11,263,095, which is a continuation of application filed on Oct. 29, 2018, now Pat. No. 10,838,833.

(60) Provisional application No. 62/658,867, filed on Apr. 17, 2018; provisional application No. 62/650,736, filed on Mar. 30, 2018; provisional application No. 62/648,368, filed on Mar. 26, 2018.

Publication Classification

(51) Int. Cl.: G06F 11/20 (2006.01)
(52) U.S. Cl.: CPC G06F 11/2023 (2013.01); G06F 2201/85 (2013.01)

(57) ABSTRACT

Creating a containerized data analytics pipeline, including: creating a data analytics pipeline, where each component of the data analytics pipeline is deployed within a container that is connected to shared storage accessible by other components of the data analytics pipeline; and responsive to detecting that a component of the data analytics pipeline has failed, deploying another instance of the component within a failover container that is connected to shared storage accessible by other components of the data analytics pipeline.

[Drawings, 15 sheets: FIGS. 1A-1D depict example data storage systems (computing devices 164A-B, storage array controllers 110A-D and 101, storage drives 171A-F, and dual-PCI storage devices). FIGS. 2A-2G depict a storage cluster with storage nodes 150, non-volatile solid state storage 152, NVRAM 204, flash 206, blades 252, authorities 168, and fabric switches 146. FIGS. 3A-3B depict a storage system 306 coupled to a cloud services provider 302 and its resources. FIG. 4 is a flow chart with steps: create a data analytics pipeline, where each component of the data analytics pipeline is deployed within a container (420); create a failover container (422); detect that a component within the data analytics pipeline has failed (424); deploy the component within the data analytics pipeline in the failover container (426). FIG. 5 adds: remove the failed component from the data analytics pipeline (514); add the component contained in the failover container to the data analytics pipeline (516). FIG. 6 adds: send, from a first component to a second component, a pointer indicating a location within shared storage where the first component has stored its output (604); retrieve, by the second component, the output of the first component from the location within shared storage identified by the pointer (606).]
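Before the specification begins, the failover flow summarized in the abstract and in steps 420-426 of FIG. 4 can be pictured with a minimal sketch. The ContainerRuntime class, the component images, and the "/shared" mount path below are hypothetical stand-ins for whatever container engine and shared storage a real deployment would use; this is an illustration, not the disclosed implementation.

```python
# Minimal sketch of the failover flow of FIG. 4 (steps 420-426).
# ContainerRuntime and the "/shared" mount are hypothetical stand-ins.

import itertools
from dataclasses import dataclass

class ContainerRuntime:
    """Toy in-memory stand-in for a real container engine."""
    _ids = itertools.count(1)

    def __init__(self):
        self.healthy: dict[str, bool] = {}

    def start(self, image: str, shared_mount: str) -> str:
        cid = f"{image}-{next(self._ids)}"
        self.healthy[cid] = True            # new container starts healthy
        return cid

    def is_healthy(self, cid: str) -> bool:
        return self.healthy.get(cid, False)

@dataclass
class Component:
    name: str            # pipeline stage, e.g. "ingest" or "transform"
    image: str           # container image for that stage
    container_id: str = ""

def create_pipeline(rt: ContainerRuntime, parts: list[Component]) -> None:
    # Step 420: each component runs in its own container; every container
    # mounts the same shared storage, so no per-component replica is kept.
    for c in parts:
        c.container_id = rt.start(c.image, shared_mount="/shared")

def monitor_once(rt: ContainerRuntime, parts: list[Component]) -> None:
    # Steps 424-426: when a component fails, deploy another instance of it
    # in a failover container attached to the same shared storage.
    for c in parts:
        if not rt.is_healthy(c.container_id):
            c.container_id = rt.start(c.image, shared_mount="/shared")

if __name__ == "__main__":
    rt = ContainerRuntime()
    pipeline = [Component("ingest", "ingest-img"), Component("transform", "xform-img")]
    create_pipeline(rt, pipeline)
    rt.healthy[pipeline[0].container_id] = False    # simulate a failure
    monitor_once(rt, pipeline)
```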
CREATING A CONTAINERIZED DATA ANALYTICS PIPELINE

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This is a continuation application for patent entitled to a filing date and claiming the benefit of earlier-filed U.S. Pat. No. 11,714,728, issued Aug. 1, 2023, which is a continuation of U.S. Pat. No. 11,263,095, issued Mar. 1, 2022, which is a continuation of U.S. Pat. No. 10,838,833, issued Nov. 17, 2020, which claims priority from U.S. Provisional Patent Application No. 62/648,368, filed Mar. 26, 2018, U.S. Provisional Patent Application No. 62/650,736, filed Mar. 30, 2018, and U.S. Provisional Patent Application No. 62/658,867, filed Apr. 17, 2018, each of which is herein incorporated by reference in its entirety.

BRIEF DESCRIPTION OF DRAWINGS

[0002] FIG. 1A illustrates a first example system for data storage in accordance with some implementations.

[0003] FIG. 1B illustrates a second example system for data storage in accordance with some implementations.

[0004] FIG. 1C illustrates a third example system for data storage in accordance with some implementations.

[0005] FIG. 1D illustrates a fourth example system for data storage in accordance with some implementations.

[0006] FIG. 2A is a perspective view of a storage cluster with multiple storage nodes and internal storage coupled to each storage node to provide network attached storage, in accordance with some embodiments.

[0007] FIG. 2B is a block diagram showing an interconnect switch coupling multiple storage nodes in accordance with some embodiments.

[0008] FIG. 2C is a multiple level block diagram, showing contents of a storage node and contents of one of the non-volatile solid state storage units in accordance with some embodiments.

[0009] FIG. 2D shows a storage server environment, which uses embodiments of the storage nodes and storage units of some previous figures in accordance with some embodiments.

[0010] FIG. 2E is a blade hardware block diagram, showing a control plane, compute and storage planes, and authorities interacting with underlying physical resources, in accordance with some embodiments.

[0011] FIG. 2F depicts elasticity software layers in blades of a storage cluster, in accordance with some embodiments.

[0012] FIG. 2G depicts authorities and storage resources in blades of a storage cluster, in accordance with some embodiments.

[0013] FIG. 3A sets forth a diagram of a storage system that is coupled for data communications with a cloud services provider in accordance with some embodiments of the present disclosure.

[0014] FIG. 3B sets forth a diagram of a storage system in accordance with some embodiments of the present disclosure.

[0015] FIG. 4 sets forth a flow chart illustrating an additional example method of providing for high availability in a data analytics pipeline without replicas in accordance with some embodiments of the present disclosure.

[0016] FIG. 5 sets forth a flow chart illustrating an additional example method of providing for high availability in a data analytics pipeline without replicas in accordance with some embodiments of the present disclosure.

[0017] FIG. 6 sets forth a flow chart illustrating an additional example method of providing for high availability in a data analytics pipeline without replicas in accordance with some embodiments of the present disclosure.
DESCRIPTION OF EMBODIMENTS

[0018] Example methods, apparatus, and products for providing for high availability in a data analytics pipeline without replicas in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning with FIG. 1A. FIG. 1A illustrates an example system for data storage, in accordance with some implementations. System 100 (also referred to as "storage system" herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 100 may include the same, more, or fewer elements configured in the same or different manner in other implementations.

[0019] System 100 includes a number of computing devices 164A-B. Computing devices (also referred to as "client devices" herein) may be embodied, for example, as a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices 164A-B may be coupled for data communications to one or more storage arrays 102A-B through a storage area network ('SAN') 158 or a local area network ('LAN') 160.

[0020] The SAN 158 may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN 158 may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface ('SAS'), or the like. Data communications protocols for use with SAN 158 may include Advanced Technology Attachment ('ATA'), Fibre Channel Protocol, Small Computer System Interface ('SCSI'), Internet Small Computer System Interface ('iSCSI'), HyperSCSI, Non-Volatile Memory Express ('NVMe') over Fabrics, or the like. It may be noted that SAN 158 is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices 164A-B and storage arrays 102A-B.

[0021] The LAN 160 may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN 160 may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN 160 may include Transmission Control Protocol ('TCP'), User Datagram Protocol ('UDP'), Internet Protocol ('IP'), HyperText Transfer Protocol ('HTTP'), Wireless Access Protocol ('WAP'), Handheld Device Transport Protocol ('HDTP'), Session Initiation Protocol ('SIP'), Real Time Protocol ('RTP'), or the like.

[0022] Storage arrays 102A-B may provide persistent data storage for the computing devices 164A-B. Storage array 102A may be contained in a chassis (not shown), and storage array 102B may be contained in another chassis (not shown), in implementations. Storage arrays 102A and 102B may include one or more storage array controllers 110A-D (also referred to as "controller" herein). A storage array controller 110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers 110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices 164A-B to storage array 102A-B, erasing data from storage array 102A-B, retrieving data from storage array 102A-B and providing data to computing devices 164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives ('RAID') or RAID-like data redundancy operations, compressing data, encrypting data, and so forth.

[0023] Storage array controller 110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array ('FPGA'), a Programmable Logic Chip ('PLC'), an Application Specific Integrated Circuit ('ASIC'), a System-on-Chip ('SOC'), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller 110A-D may include, for example, a data communications adapter configured to support communications via the SAN 158 or LAN 160. In some implementations, storage array controller 110A-D may be independently coupled to the LAN 160. In implementations, storage array controller 110A-D may include an I/O controller or the like that couples the storage array controller 110A-D for data communications, through a midplane (not shown), to a persistent storage resource 170A-B (also referred to as a "storage resource" herein). The persistent storage resource 170A-B may include any number of storage drives 171A-F (also referred to as "storage devices" herein) and any number of non-volatile Random Access Memory ('NVRAM') devices (not shown).

[0024] In some implementations, the NVRAM devices of a persistent storage resource 170A-B may be configured to receive, from the storage array controller 110A-D, data to be stored in the storage drives 171A-F. In some examples, the data may originate from computing devices 164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive 171A-F. In implementations, the storage array controller 110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives 171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller 110A-D writes data directly to the storage drives 171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as "non-volatile" because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like. In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives 171A-F.

[0025] In implementations, storage drive 171A-F may refer to any device configured to record data persistently, where "persistently" or "persistent" refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive 171A-F may correspond to non-disk storage media. For example, the storage drive 171A-F may be one or more solid-state drives ('SSDs'), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive 171A-F may include mechanical or spinning hard disk, such as hard-disk drives ('HDD').
[0026] In some implementations, the storage array controllers 110A-D may be configured for offloading device management responsibilities from storage drive 171A-F in storage array 102A-B. For example, storage array controllers 110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives 171A-F. The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller 110A-D, the number of program-erase ('P/E') cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives 171A-F may be stored in one or more particular memory blocks of the storage drives 171A-F that are selected by the storage array controller 110A-D. The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers 110A-D in conjunction with storage drives 171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers 110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171A-F.

[0027] In implementations, storage array controllers 110A-D may offload device management responsibilities from storage drives 171A-F of storage array 102A-B by retrieving, from the storage drives 171A-F, control information describing the state of one or more memory blocks in the storage drives 171A-F. Retrieving the control information from the storage drives 171A-F may be carried out, for example, by the storage array controller 110A-D querying the storage drives 171A-F for the location of control information for a particular storage drive 171A-F. The storage drives 171A-F may be configured to execute instructions that enable the storage drive 171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive 171A-F and may cause the storage drive 171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives 171A-F. The storage drives 171A-F may respond by sending a response message to the storage array controller 110A-D that includes the location of control information for the storage drive 171A-F. Responsive to receiving the response message, storage array controllers 110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives 171A-F.
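The lookup described in paragraphs [0026]-[0027] can be sketched briefly. The tag value, block layout, and function names below are hypothetical, chosen only to make the two roles (drive-side scan, controller-side read) concrete; they are not part of the disclosure.

```python
# Illustrative sketch of the control-information lookup in [0026]-[0027]:
# blocks holding control information are tagged with an identifier, the
# drive scans for that tag on request, and the controller then reads the
# reported locations. The tag value and block layout are hypothetical.

CONTROL_INFO_TAG = b"CTRL"          # hypothetical per-block identifier

def locate_control_info(blocks: list[bytes]) -> list[int]:
    """Drive-side: scan a portion of each memory block for the tag and
    return the indices of blocks that store control information."""
    return [i for i, blk in enumerate(blocks) if blk.startswith(CONTROL_INFO_TAG)]

def fetch_control_info(blocks: list[bytes]) -> list[bytes]:
    """Controller-side: ask the drive where control information lives,
    then issue reads for those locations ([0027])."""
    locations = locate_control_info(blocks)       # the "response message"
    return [blocks[i][len(CONTROL_INFO_TAG):] for i in locations]

# Example: two ordinary blocks and one tagged block.
blocks = [b"user data", CONTROL_INFO_TAG + b"{bad_blocks: [7, 42]}", b"more data"]
print(fetch_control_info(blocks))   # [b'{bad_blocks: [7, 42]}']
```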
[0028] In other implementations, the storage array controllers 110A-D may further offload device management responsibilities from storage drives 171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive 171A-F (e.g., the controller (not shown) associated with a particular storage drive 171A-F). A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive 171A-F, ensuring that data is written to memory blocks within the storage drive 171A-F in such a way that adequate wear leveling is achieved, and so forth.

[0029] In implementations, storage array 102A-B may implement two or more storage array controllers 110A-D. For example, storage array 102A may include storage array controllers 110A and storage array controllers 110B. At a given instance, a single storage array controller 110A-D (e.g., storage array controller 110A) of a storage system 100 may be designated with primary status (also referred to as "primary controller" herein), and other storage array controllers 110A-D (e.g., storage array controller 110B) may be designated with secondary status (also referred to as "secondary controller" herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource 170A-B (e.g., writing data to persistent storage resource 170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller. For instance, the secondary controller may not have permission to alter data in persistent storage resource 170A-B when the primary controller has the right. The status of storage array controllers 110A-D may change. For example, storage array controller 110A may be designated with secondary status, and storage array controller 110B may be designated with primary status.

[0030] In some implementations, a primary controller, such as storage array controller 110A, may serve as the primary controller for one or more storage arrays 102A-B, and a second controller, such as storage array controller 110B, may serve as the secondary controller for the one or more storage arrays 102A-B. For example, storage array controller 110A may be the primary controller for storage array 102A and storage array 102B, and storage array controller 110B may be the secondary controller for storage array 102A and 102B. In some implementations, storage array controllers 110C and 110D (also referred to as "storage processing modules") may neither have primary or secondary status. Storage array controllers 110C and 110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers 110A and 110B, respectively) and storage array 102B. For example, storage array controller 110A of storage array 102A may send a write request, via SAN 158, to storage array 102B. The write request may be received by both storage array controllers 110C and 110D of storage array 102B. Storage array controllers 110C and 110D facilitate the communication, e.g., send the write request to the appropriate storage drive 171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers.
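The primary/secondary arrangement of paragraphs [0029]-[0030] can be pictured with a minimal sketch, assuming a single write permission tied to primary status. The class and method names are hypothetical and stand in for whatever mechanism enforces controller rights in practice.

```python
# Minimal sketch of the primary/secondary controller roles in [0029]-[0030]:
# only the controller holding primary status may alter data in the
# persistent storage resource, and status can be swapped.

class StorageArrayController:
    def __init__(self, name: str, status: str = "secondary"):
        self.name = name
        self.status = status            # "primary" or "secondary"

    def write(self, persistent_storage: dict, key: str, value: bytes) -> bool:
        if self.status != "primary":
            return False                # secondary lacks permission to alter data
        persistent_storage[key] = value
        return True

def swap_status(a: StorageArrayController, b: StorageArrayController) -> None:
    """E.g., 110A becomes secondary while 110B becomes primary."""
    a.status, b.status = b.status, a.status

storage = {}
ctrl_a = StorageArrayController("110A", "primary")
ctrl_b = StorageArrayController("110B")
print(ctrl_b.write(storage, "k", b"v"))   # False: secondary may not write
swap_status(ctrl_a, ctrl_b)
print(ctrl_b.write(storage, "k", b"v"))   # True: 110B is now primary
```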
[0031] In implementations, storage array controllers 110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives 171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array 102A-B. The storage array controllers 110A-D may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives 171A-F and the NVRAM devices via one or more data communications links. The data communications links described herein are collectively illustrated by data communications links 108A-D and may include a Peripheral Component Interconnect Express ('PCIe') bus, for example.

[0032] FIG. 1B illustrates an example system for data storage, in accordance with some implementations. Storage array controller 101 illustrated in FIG. 1B may be similar to the storage array controllers 110A-D described with respect to FIG. 1A. In one example, storage array controller 101 may be similar to storage array controller 110A or storage array controller 110B. Storage array controller 101 includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller 101 may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements of FIG. 1A may be included below to help illustrate features of storage array controller 101.

[0033] Storage array controller 101 may include one or more processing devices 104 and random access memory ('RAM') 111. Processing device 104 (or controller 101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 104 (or controller 101) may be a complex instruction set computing ('CISC') microprocessor, reduced instruction set computing ('RISC') microprocessor, very long instruction word ('VLIW') microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 104 (or controller 101) may also be one or more special-purpose processing devices such as an application specific integrated circuit ('ASIC'), a field programmable gate array ('FPGA'), a digital signal processor ('DSP'), or the like.

[0034] The processing device 104 may be connected to the RAM 111 via a data communications link 106, which may be embodied as a high speed memory bus such as a Double-Data Rate 4 ('DDR4') bus. Stored in RAM 111 is an operating system 112. In some implementations, instructions 113 are stored in RAM 111. Instructions 113 may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives.

[0035] In implementations, storage array controller 101 includes one or more host bus adapters 103A-C that are coupled to the processing device 104 via a data communications link 105A-C. In implementations, host bus adapters 103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other networks and storage arrays. In some examples, host bus adapters 103A-C may be a Fibre Channel adapter that enables the storage array controller 101 to connect to a SAN, an Ethernet adapter that enables the storage array controller 101 to connect to a LAN, or the like. Host bus adapters 103A-C may be coupled to the processing device 104 via a data communications link 105A-C such as, for example, a PCIe bus.

[0036] In implementations, storage array controller 101 may include a host bus adapter 114 that is coupled to an expander 115. The expander 115 may be used to attach a host system to a larger number of storage drives. The expander 115 may, for example, be a SAS expander utilized to enable the host bus adapter 114 to attach to storage drives in an implementation where the host bus adapter 114 is embodied as a SAS controller.

[0037] In implementations, storage array controller 101 may include a switch 116 coupled to the processing device 104 via a data communications link 109. The switch 116 may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch 116 may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link 109) and presents multiple PCIe connection points to the midplane.

[0038] In implementations, storage array controller 101 includes a data communications link 107 for coupling the storage array controller 101 to other storage array controllers. In some examples, data communications link 107 may be a QuickPath Interconnect ('QPI') interconnect.

[0039] A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed.

[0040] To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes. For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives.

[0041] The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system.
[0042] Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data, where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units and erasing the second data and marking the allocation units as being available for use for subsequent data. Thus, the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives.

[0043] Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system. In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive.

[0044] A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection.

[0045] FIG. 1C illustrates a third example system 117 for data storage in accordance with some implementations. System 117 (also referred to as "storage system" herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 117 may include the same, more, or fewer elements configured in the same or different manner in other implementations.

[0046] In one embodiment, system 117 includes a dual Peripheral Component Interconnect ('PCI') flash storage device 118 with separately addressable fast write storage. System 117 may include a storage device controller 119. In one embodiment, storage device controller 119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system 117 includes flash memory devices (e.g., including flash memory devices 120a-n), operatively coupled to various channels of the storage device controller 119. Flash memory devices 120a-n may be presented to the controller 119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller 119A-D to program and retrieve various aspects of the Flash. In one embodiment, storage device controller 119A-D may perform operations on flash memory devices 120a-n including storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc.

[0047] In one embodiment, system 117 may include RAM 121 to store separately addressable fast-write data. In one embodiment, RAM 121 may be one or more separate discrete devices. In another embodiment, RAM 121 may be integrated into storage device controller 119A-D or multiple storage device controllers. The RAM 121 may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller 119.

[0048] In one embodiment, system 117 may include a stored energy device 122, such as a capacitor. Stored energy device 122 may store energy sufficient to power the storage device controller 119, some amount of the RAM (e.g., RAM 121), and some amount of Flash memory (e.g., Flash memory 120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller 119A-D may write the contents of RAM to Flash memory if the storage device controller detects loss of external power.

[0049] In one embodiment, system 117 includes two data communications links 123a, 123b. In one embodiment, data communications links 123a, 123b may be PCI interfaces. In another embodiment, data communications links 123a, 123b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Data communications links 123a, 123b may be based on non-volatile memory express ('NVMe') or NVMe over fabrics ('NVMf') specifications that allow external connection to the storage device controller 119A-D from other components in the storage system 117. It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience.

[0050] System 117 may also include an external power source (not shown), which may be provided over one or both data communications links 123a, 123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM 121. The storage device controller 119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device 118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM 121. On power failure, the storage device controller 119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory 120a-n) for long-term persistent storage.
[0051] In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices 120a-n, where that presentation allows a storage system including a storage device 118 (e.g., storage system 117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc.

[0052] In one embodiment, the stored energy device 122 may be sufficient to ensure completion of in-progress operations to the Flash memory devices 120a-120n. The stored energy device 122 may power storage device controller 119A-D and associated Flash memory devices (e.g., 120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device 122 may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices 120a-n and/or the storage device controller 119. Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein.

[0053] Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device 122 to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy.

[0054] FIG. 1D illustrates a third example system 124 for data storage in accordance with some implementations. In one embodiment, system 124 includes storage controllers 125a, 125b. In one embodiment, storage controllers 125a, 125b are operatively coupled to Dual PCI storage devices 119a, 119b and 119c, 119d, respectively. Storage controllers 125a, 125b may be operatively coupled (e.g., via a storage network 130) to some number of host computers 127a-n.

[0055] In one embodiment, two storage controllers (e.g., 125a and 125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers 125a, 125b may provide services through some number of network interfaces (e.g., 126a-d) to host computers 127a-n outside of the storage system 124. Storage controllers 125a, 125b may provide integrated services or an application entirely within the storage system 124, forming a converged storage and compute system. The storage controllers 125a, 125b may utilize the fast write memory within or across storage devices 119a-d to journal in-progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system 124.
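The sizing logic implied by paragraphs [0052]-[0053] amounts to a simple energy budget: the advertised fast-write capacity must fit within what the stored energy device can destage after a power loss. The sketch below is a back-of-the-envelope illustration only; all numbers are made-up placeholders, not values from the specification.

```python
# Back-of-the-envelope sketch of the idea in [0052]-[0053]: the addressable
# fast-write capacity is limited to what the stored energy device can safely
# destage to Flash after power is lost. All constants are hypothetical.

HOLDUP_POWER_WATTS = 25.0            # controller + RAM + Flash draw during destage
FLASH_WRITE_BW     = 2 * 2**30       # bytes/second of aggregate destage bandwidth

def safe_fast_write_capacity(available_joules: float) -> int:
    """Bytes of fast-write data that can still be written safely ([0053])."""
    destage_seconds = available_joules / HOLDUP_POWER_WATTS
    return int(destage_seconds * FLASH_WRITE_BW)

# As the stored energy device ages and its usable energy drops, the
# advertised fast-write capacity is reduced accordingly.
for joules in (150.0, 100.0, 50.0):
    print(joules, "J ->", safe_fast_write_capacity(joules) // 2**30, "GiB")
```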
[0056] In one embodiment, controllers 125a, 125b operate as PCI masters to one or the other PCI buses 128a, 128b. In another embodiment, 128a and 128b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers 125a, 125b as multi-masters for both PCI buses 128a, 128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller 119a may be operable under direction from a storage controller 125a to synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM 121 of FIG. 1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to ensure improved safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g., 128a, 128b) from the storage controllers 125a, 125b. In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc.

[0057] In one embodiment, under direction from a storage controller 125a, 125b, a storage device controller 119a, 119b may be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM 121 of FIG. 1C) without involvement of the storage controllers 125a, 125b. This operation may be used to mirror data stored in one controller 125a to another controller 125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface 129a, 129b to the PCI bus 128a, 128b.

[0058] A storage device controller 119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device 118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly.

[0059] In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices.
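The reservation/exclusion primitive of paragraph [0058] can be pictured with a minimal sketch, assuming a single holder per device and a fencing operation used when one controller decides its peer is unhealthy. The class and method names are hypothetical.

```python
# Illustrative sketch of the reservation/exclusion primitive in [0058]:
# one controller "fences" its peer so the peer can no longer access the
# shared storage device. Names and semantics are hypothetical.

class StorageDeviceReservation:
    def __init__(self):
        self.holder = None            # controller currently holding the device

    def reserve(self, controller_id: str) -> bool:
        """Claim the device if it is free or already ours."""
        if self.holder in (None, controller_id):
            self.holder = controller_id
            return True
        return False

    def exclude_peer(self, controller_id: str, peer_id: str) -> None:
        """Fence the peer (e.g., after deciding it is not functioning)."""
        if self.holder == peer_id or self.holder is None:
            self.holder = controller_id

    def may_access(self, controller_id: str) -> bool:
        return self.holder in (None, controller_id)

res = StorageDeviceReservation()
res.reserve("ctrl-A")
res.exclude_peer("ctrl-A", peer_id="ctrl-B")
print(res.may_access("ctrl-B"))       # False: ctrl-B is fenced off
```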
[0060] In one embodiment, the storage controllers 125a, 125b may initiate the use of erase blocks within and across storage devices (e.g., 118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers 125a, 125b may initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance.

[0061] In one embodiment, the storage system 124 may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination.

[0062] The embodiments depicted with reference to FIGS. 2A-G illustrate a storage cluster that stores user data, such as user data originating from one or more users or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory. Control of storage locations and workloads are distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture described in more detail below allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server.
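The striping-with-redundancy idea described in paragraphs [0061]-[0062] can be sketched with a single-parity example. Real systems use stronger codes (the specification mentions erasure coding generally, and Reed-Solomon appears later in [0076]); the XOR parity below is only a compact stand-in, and the shard sizes are arbitrary.

```python
# Illustrative sketch of striping with redundancy per [0061]-[0062]: split a
# data segment into shards plus one XOR parity shard so that any single lost
# shard (e.g., one storage node) can be rebuilt. Single XOR parity is used
# only to keep the example short.

from functools import reduce

def make_stripe(segment: bytes, data_shards: int) -> list[bytes]:
    """Split a segment into data shards plus one XOR parity shard."""
    size = -(-len(segment) // data_shards)               # ceiling division
    shards = [segment[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(data_shards)]
    parity = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*shards))
    return shards + [parity]

def rebuild(stripe: list[bytes], lost: int) -> bytes:
    """Recover any single lost shard by XOR-ing the surviving shards."""
    survivors = [s for i, s in enumerate(stripe) if i != lost]
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*survivors))

stripe = make_stripe(b"user data distributed across storage nodes", 3)
assert rebuild(stripe, lost=1) == stripe[1]
```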
[0063] The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus, however, other technologies such as PCIe, InfiniBand, and others, are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system ('NFS'), common internet file system ('CIFS'), small computer system interface ('SCSI') or hypertext transfer protocol ('HTTP'). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node. In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades, each blade has a media access control ('MAC') address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments.

[0064] Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. One embodiment includes a single storage server in each storage node and between one to eight non-volatile solid state memory units, however this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power
… establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority 168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage 152 and a local identifier into the set of non-volatile solid state storage 152 that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage 152 are applied to locating data for writing to or reading from the non-volatile solid state storage 152 (in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage 152, which may include or be different from the non-volatile solid state storage 152 having the authority 168 for a particular data segment.

[0071] If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority 168 for that data segment should be consulted, at that non-volatile solid state storage 152 or storage node 150 having that authority 168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage 152 having the authority 168 for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number, to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage 152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage 152 having that authority 168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage 152 for an authority in the presence of a set of non-volatile solid state storage 152 that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage 152 that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority 168 may be consulted if a specific authority 168 is unavailable in some embodiments.

[0072] With reference to FIGS. 2A and 2B, two of the many tasks of the CPU 156 on a storage node 150 are to break up write data, and reassemble read data. When the system has determined that data is to be written, the authority 168 for that data is located as above. When the segment ID for data is already determined the request to write is forwarded to the non-volatile solid state storage 152 currently determined to be the host of the authority 168 determined from the segment. The host CPU 156 of the storage node 150, on which the non-volatile solid state storage 152 and corresponding authority 168 reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage 152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority 168 for the segment ID containing the data is located as described above. The host CPU 156 of the storage node 150 on which the non-volatile solid state storage 152 and corresponding authority 168 reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU 156 of storage node 150 then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage 152. In some embodiments, the segment host requests the data to be sent to storage node 150 by requesting pages from storage and then sending the data to the storage node making the original request.

[0073] In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and that the authority, in turn, contains entities.
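The two-stage lookup described in paragraph [0071] can be sketched in a few lines: a hash/bit-mask step from entity ID to authority ID, followed by an explicit table from authority ID to a storage unit. The authority count, hash function, and unit names below are hypothetical choices for illustration only.

```python
# Illustrative sketch of the two-stage lookup in [0071]: entity ID -> authority
# ID via a hash and bit mask, then authority ID -> non-volatile solid state
# storage unit via an explicit table. Constants and names are hypothetical.

import zlib

NUM_AUTHORITIES = 128                         # hypothetical authority count

def authority_for(entity_id: str) -> int:
    # Stage 1: hash the entity identifier and mask it down to an authority.
    return zlib.crc32(entity_id.encode()) & (NUM_AUTHORITIES - 1)

# Stage 2: explicit mapping from authority ID -> storage unit that hosts it.
authority_to_storage = {a: f"nvss-{a % 8}" for a in range(NUM_AUTHORITIES)}

def locate(entity_id: str) -> str:
    """Repeatable: the same entity ID always resolves to the same unit."""
    return authority_to_storage[authority_for(entity_id)]

print(locate("inode:4711"), locate("segment:42"))
```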
Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed, along with redundancy or parity information, in accordance with some embodiments.

[0075] A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128-bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit 152 may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage 152 is able to allocate addresses without synchronization with other non-volatile solid state storage 152.

[0076] Data and metadata are stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check ('LDPC') code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout.

[0077] In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing ('RUSH') family of hashes, including Controlled Replication Under Scalable Hashing ('CRUSH'). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change, so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes.
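The chain of address-space transformations in paragraph [0075] can be shown with a toy lookup. The table names, sizes, and identifiers below are assumptions for illustration only; a real system would maintain these mappings in on-device metadata rather than in-memory dictionaries.

```python
# Toy translation chain: directory entry -> inode -> medium address
# -> segment address -> physical flash location.
directory = {"/logs/app.log": 7}                              # filename -> inode
inode_to_medium = {7: 0x10_0000}                              # inode -> medium address
medium_to_segment = {0x10_0000: ("segment-91", 4096)}         # -> (segment, offset)
segment_to_flash = {"segment-91": ("nvss-152b", "die-3", 0x2_2000)}  # -> flash location

def resolve(path: str):
    """Walk the layers top to bottom and return the physical flash location."""
    inode = directory[path]
    medium = inode_to_medium[inode]
    segment, offset = medium_to_segment[medium]
    unit, die, flash_addr = segment_to_flash[segment]
    return unit, die, flash_addr + offset

print(resolve("/logs/app.log"))
```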
In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating, an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to conclude the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines.

[0078] Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss.

[0079] In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using an Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or fibre channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the internet or other long-distance networking links, such as a "metro scale" link or private link that does not traverse the internet.

[0080] Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location.
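A pseudorandom mapping from an authority to an ordered set of candidate owners, computable identically on every node given the same reachable set, can be sketched as below. Rendezvous (highest-random-weight) hashing is used here only as a stand-in for the CRUSH-related function the text names; the node names and copy count are assumptions.

```python
import hashlib

def candidate_owners(authority_id: int, reachable_nodes: list[str],
                     copies: int = 3) -> list[str]:
    """Rank reachable storage nodes by a pseudorandom score derived from the
    authority ID. Every node that knows the same reachable set computes the
    same ordered list, so all nodes agree on the owner and its backups."""
    def score(node: str) -> bytes:
        return hashlib.sha256(f"{authority_id}|{node}".encode()).digest()
    return sorted(reachable_nodes, key=score)[:copies]

nodes = ["node-150a", "node-150b", "node-150c", "node-150d"]
print(candidate_owners(authority_id=17, reachable_nodes=nodes))
```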
The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, by hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes, in accordance with some embodiments.

[0081] As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND.

[0082] Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, manufacturers, hardware supply chain and ongoing monitoring quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruption of client request processing, i.e., the system supports non-disruptive upgrades.

[0083] In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs, by removing the component from the critical path, in some embodiments.

[0084] FIG. 2C is a multiple level block diagram, showing contents of a storage node 150 and contents of a non-volatile solid state storage 152 of the storage node 150. Data is communicated to and from the storage node 150 by a network interface controller ('NIC') 202 in some embodiments. Each storage node 150 has a CPU 156, and one or more non-volatile solid state storage 152, as discussed above. Moving down one level in FIG. 2C, each non-volatile solid state storage 152 has a relatively fast non-volatile solid state memory, such as non-volatile random access memory ('NVRAM') 204, and flash memory 206.
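The durability-tier selection described in paragraph [0081] (latency-sensitive requests to replicated NVRAM first, background rebalancing directly to NAND) could look roughly like the sketch below. The message types and tier names are hypothetical and chosen only to mirror the two cases the text gives.

```python
from dataclasses import dataclass
from enum import Enum, auto

class MessageType(Enum):
    CLIENT_WRITE = auto()   # latency-sensitive client request
    REBALANCE = auto()      # background rebalancing operation

@dataclass
class PersistentMessage:
    msg_type: MessageType
    payload: bytes

def persist(msg: PersistentMessage) -> str:
    """Pick a durability tier by message type: latency-sensitive requests are
    persisted in replicated NVRAM (destaged to NAND later), while background
    rebalancing work is persisted directly to NAND."""
    if msg.msg_type is MessageType.CLIENT_WRITE:
        return "replicated-nvram"
    return "nand"

print(persist(PersistentMessage(MessageType.CLIENT_WRITE, b"update")))
print(persist(PersistentMessage(MessageType.REBALANCE, b"move segment")))
```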
In some embodiments, NVRAM 204 may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from. Moving down another level in FIG. 2C, the NVRAM 204 is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM) 216, backed up by energy reserve 218. Energy reserve 218 provides sufficient electrical power to keep the DRAM 216 powered long enough for contents to be transferred to the flash memory 206 in the event of power failure. In some embodiments, energy reserve 218 is a capacitor, super-capacitor, battery, or other device that supplies a suitable supply of energy sufficient to enable the transfer of the contents of DRAM 216 to a stable storage medium in the case of power loss. The flash memory 206 is implemented as multiple flash dies 222, which may be referred to as packages of flash dies 222 or an array of flash dies 222. It should be appreciated that the flash dies 222 could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e., multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage 152 has a controller 212 or other processor, and an input output (I/O) port 210 coupled to the controller 212. I/O port 210 is coupled to the CPU 156 and/or the network interface controller 202 of the flash storage node 150. Flash input output (I/O) port 220 is coupled to the flash dies 222, and a direct memory access unit (DMA) 214 is coupled to the controller 212, the DRAM 216 and the flash dies 222. In the embodiment shown, the I/O port 210, controller 212, DMA unit 214 and flash I/O port 220 are implemented on a programmable logic device ('PLD') 208, e.g., a field programmable gate array (FPGA). In this embodiment, each flash die 222 has pages, organized as sixteen kB (kilobyte) pages 224, and a register 226 through which data can be written to or read from the flash die 222. In further embodiments, other types of solid-state memory are used in place of, or in addition to, the flash memory illustrated within flash die 222.

[0085] Storage clusters 161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes 150 are part of a collection that creates the storage cluster 161. Each storage node 150 owns a slice of data and the computing required to provide the data. Multiple storage nodes 150 cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The storage units 152 described herein have multiple interfaces active simultaneously and serving multiple purposes.
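The role of the energy reserve 218, keeping DRAM 216 alive long enough to destage its contents to flash 206 and then recovering them on the next power-on, can be modeled with a short sketch. The class and method names are illustrative assumptions, not anything defined in the application.

```python
class NvramRegion:
    """Toy model of NVRAM 204: DRAM-resident contents that must reach flash
    before the energy reserve is exhausted after a power failure."""
    def __init__(self):
        self.dram: dict[int, bytes] = {}     # stands in for DRAM 216
        self.flash: dict[int, bytes] = {}    # stands in for flash memory 206

    def write(self, offset: int, data: bytes) -> None:
        self.dram[offset] = data             # normal-path writes land in DRAM

    def on_power_failure(self) -> None:
        """While the capacitor/battery holds the unit up, copy DRAM to flash."""
        self.flash.update(self.dram)

    def on_power_restore(self) -> None:
        """Recover the NVRAM contents from flash on the next power-on."""
        self.dram = dict(self.flash)

region = NvramRegion()
region.write(0, b"latency-sensitive update")
region.on_power_failure()
region.on_power_restore()
print(region.dram[0])
```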
In some embodiments, some of the functionality of a storage node 150 is shifted into a storage unit 152, transforming the storage unit 152 into a combination of storage unit 152 and storage node 150. Placing computing (relative to storage data) into the storage unit 152 places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf of storage devices. In a storage cluster 161, as described herein, multiple controllers in multiple storage units 152 and/or storage nodes 150 cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on).

[0086] FIG. 2D shows a storage server environment, which uses embodiments of the storage nodes 150 and storage units 152 of FIGS. 2A-C. In this version, each storage unit 152 has a processor such as controller 212 (see FIG. 2C), an FPGA (field programmable gate array), flash memory 206, and NVRAM 204 (which is super-capacitor backed DRAM 216, see FIGS. 2B and 2C) on a PCIe (peripheral component interconnect express) board in a chassis 138 (see FIG. 2A). The storage unit 152 may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units 152 may fail and the device will continue with no data loss.

[0087] The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM 204 is a contiguous block of reserved memory in the storage unit 152 DRAM 216, and is backed by NAND flash. NVRAM 204 is logically divided into multiple memory regions written for two as spool (e.g., spool_region). Space within the NVRAM 204 spools is managed by each authority 168 independently. Each device provides an amount of storage space to each authority 168. That authority 168 further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a storage unit 152 fails, onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM 204 are flushed to flash memory 206. On the next power-on, the contents of the NVRAM 204 are recovered from the flash memory 206.

[0088] As for the storage unit controller, the responsibility of the logical "controller" is distributed across each of the blades containing authorities 168. This distribution of logical control is shown in FIG. 2D as a host controller 242, mid-tier controller 244 and storage unit controller(s) 246. Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade. Each authority 168 effectively serves as an independent controller. Each authority 168 provides its own data and metadata structures, its own background workers, and maintains its own lifecycle.

[0089] FIG. 2E is a blade 252 hardware block diagram, showing a control plane 254, compute and storage planes 256, 258, and authorities 168 interacting with underlying physical resources, using embodiments of the storage nodes 150 and storage units 152 of FIGS. 2A-C in the storage server environment of FIG. 2D.
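Paragraph [0087] has each device granting an amount of NVRAM spool space to each authority 168, with the authority managing allocations and lifetimes within that grant. A minimal allocator sketch, with assumed class names and byte budgets, might look like this:

```python
class Spool:
    """Toy allocator for an NVRAM spool region: the device grants each
    authority a fixed budget, and the authority manages allocations and
    lifetimes within that budget independently."""
    def __init__(self, budgets: dict[int, int]):
        self.budgets = dict(budgets)                    # authority id -> bytes granted
        self.used: dict[int, int] = {a: 0 for a in budgets}

    def allocate(self, authority_id: int, nbytes: int) -> bool:
        if self.used[authority_id] + nbytes > self.budgets[authority_id]:
            return False                                # must reclaim space first
        self.used[authority_id] += nbytes
        return True

    def release(self, authority_id: int, nbytes: int) -> None:
        self.used[authority_id] = max(0, self.used[authority_id] - nbytes)

spool = Spool({168: 64 * 1024, 169: 32 * 1024})
print(spool.allocate(168, 48 * 1024))   # True
print(spool.allocate(168, 32 * 1024))   # False: over budget until space is released
```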
The control plane 254 is partitioned into a number of authorities 168 which can use the compute resources in the compute plane 256 to run on any of the blades 252. The storage plane 258 is partitioned into a set of devices, each of which provides access to flash 206 and NVRAM 204 resources.

[0090] In the compute and storage planes 256, 258 of FIG. 2E, the authorities 168 interact with the underlying physical resources (i.e., devices). From the point of view of an authority 168, its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities 168, irrespective of where the authorities happen to run. Each authority 168 has allocated or has been allocated one or more partitions 260 of storage memory in the storage units 152, e.g., partitions 260 in flash memory 206 and NVRAM 204. Each authority 168 uses those allocated partitions 260 that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority 168 could have a larger number of partitions 260 or larger sized partitions 260 in one or more storage units 152 than one or more other authorities 168.

[0091] FIG. 2F depicts elasticity software layers in blades 252 of a storage cluster, in accordance with some embodiments. In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module 270 runs the three identical layers of processes depicted in FIG. 2F. Storage managers 274 execute read and write requests from other blades 252 for data and metadata stored in local storage unit 152 NVRAM 204 and flash 206. Authorities 168 fulfill client requests by issuing the necessary reads and writes to the blades 252 on whose storage units 152 the corresponding data or metadata resides. Endpoints 272 parse client connection requests received from switch fabric 146 supervisory software, relay the client connection requests to the authorities 168 responsible for fulfillment, and relay the authorities' 168 responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking.

[0092] Still referring to FIG. 2F, authorities 168 running in the compute modules 270 of a blade 252 perform the internal operations required to fulfill client requests. One feature of elasticity is that authorities 168 are stateless, i.e., they cache active data and metadata in their own blades' 252 DRAMs for fast access, but the authorities store every update in their NVRAM 204 partitions on three separate blades 252 until the update has been written to flash 206. All the storage system writes to NVRAM 204 are in triplicate to partitions on three separate blades 252 in some embodiments. With triple-mirrored NVRAM 204 and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades 252 with no loss of data, metadata, or access to either.

[0093] Because authorities 168 are stateless, they can migrate between blades 252. Each authority 168 has a unique identifier.
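The three symmetric elasticity layers of paragraph [0091] (endpoints parsing and relaying requests, authorities deciding where data lives, storage managers executing local reads and writes) can be sketched as a toy request path. The class structure, the hash-based placement, and the key names are assumptions made only for this illustration.

```python
class StorageManager:
    """Layer 3: executes reads and writes against local NVRAM/flash (toy dict)."""
    def __init__(self):
        self.store: dict[str, bytes] = {}
    def write(self, key: str, value: bytes) -> None:
        self.store[key] = value
    def read(self, key: str) -> bytes:
        return self.store[key]

class Authority:
    """Layer 2: decides which blade's storage manager holds each key and
    issues the necessary reads and writes."""
    def __init__(self, managers: list[StorageManager]):
        self.managers = managers
    def put(self, key: str, value: bytes) -> None:
        self.managers[hash(key) % len(self.managers)].write(key, value)
    def get(self, key: str) -> bytes:
        return self.managers[hash(key) % len(self.managers)].read(key)

class Endpoint:
    """Layer 1: parses the client request, relays it to the responsible
    authority, and relays the response back to the client."""
    def __init__(self, authorities: list[Authority]):
        self.authorities = authorities
    def handle(self, op: str, key: str, value: bytes = b"") -> bytes:
        authority = self.authorities[hash(key) % len(self.authorities)]
        if op == "put":
            authority.put(key, value)
            return b"ok"
        return authority.get(key)

endpoint = Endpoint([Authority([StorageManager(), StorageManager()])])
endpoint.handle("put", "volume1/block9", b"payload")
print(endpoint.handle("get", "volume1/block9"))
```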
NVRAM 204 and flash 206 partitions are associated with authorities' 168 identifiers, not with the blades 252 on which they are running, in some embodiments. Thus, when an authority 168 migrates, the authority 168 continues to manage the same storage partitions from its new location. When a new blade 252 is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's 252 storage for use by the system's authorities 168, migrating selected authorities 168 to the new blade 252, starting endpoints 272 on the new blade 252 and including them in the switch fabric's 146 client connection distribution algorithm.

[0094] From their new locations, migrated authorities 168 persist the contents of their NVRAM 204 partitions on flash 206, process read and write requests from other authorities 168, and fulfill the client requests that endpoints 272 direct to them. Similarly, if a blade 252 fails or is removed, the system redistributes its authorities 168 among the system's remaining blades 252. The redistributed authorities 168 continue to perform their original functions from their new locations.

[0095] FIG. 2G depicts authorities 168 and storage resources in blades 252 of a storage cluster, in accordance with some embodiments. Each authority 168 is exclusively responsible for a partition of the flash 206 and NVRAM 204 on each blade 252. The authority 168 manages the content and integrity of its partitions independently of other authorities 168. Authorities 168 compress incoming data and preserve it temporarily in their NVRAM 204 partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash 206 partitions. As the authorities 168 write data to flash 206, storage managers 274 perform the necessary flash translation to optimize write performance and maximize media longevity. In the background, authorities 168 "garbage collect", or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities' 168 partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions.

[0096] The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database-based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory. In some embodiments, a network lock manager ('NLM') is utilized as a facility that works in cooperation with the Network File System ('NFS') to provide a System V style of advisory file and record locking over a network. The Server Message Block ('SMB') protocol, one version of which is also known as Common Internet File System ('CIFS'), may be integrated with the storage systems discussed herein.
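The rebalancing behavior described in paragraphs [0093] and [0094], where installing or removing a blade causes only a subset of authorities to migrate, can be illustrated with a deterministic placement sketch. The hashing scheme and blade names below are assumptions; they simply make the "only the affected authorities move" property visible.

```python
import hashlib

def place_authorities(authority_ids: list[int], blades: list[str]) -> dict[int, str]:
    """Deterministically spread authorities over the current blade set; when a
    blade is added or removed, recomputing the map moves only the authorities
    whose best-ranked blade changed."""
    def owner(auth: int) -> str:
        return min(blades, key=lambda b: hashlib.sha256(f"{auth}|{b}".encode()).digest())
    return {auth: owner(auth) for auth in authority_ids}

before = place_authorities(list(range(8)), ["blade-252a", "blade-252b"])
after = place_authorities(list(range(8)), ["blade-252a", "blade-252b", "blade-252c"])
moved = sorted(a for a in before if before[a] != after[a])
print(f"authorities migrated after adding a blade: {moved}")
```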
SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports, and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism. AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules. Each module addresses a particular underlying part of the transaction. The control of permissions provided with these embodiments, especially for object data, may include utilization of an access control list ('ACL'). The ACL is a list of permissions attached to an object, and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. The systems may utilize Internet Protocol version 6 ('IPv6'), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing ('ECMP'), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple "best paths" which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single route. The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key manage-