i'dWc@sdZddlZddlZddlZddlZddlZddlZddlZejd ddfkrddl m Z nddl m Z ddl m Z ddlmZddlmZdd lmZmZmZmZmZmZmZmZmZdd lmZmZdd lmZdd l m!Z!dd l"m#Z#ddl$m%Z%ddl&m'Z'ddl(m)Z*ddl+m,Z-ddl.m/Z/ddl0m1Z1ddl2m3Z3ddl4m5Z5m6Z6m7Z7m8Z8m9Z9m:Z:m;Z;m<Z<de*fdYZ=de*fdYZ>de*fdYZ?de*fdYZ@de*fdYZAd e*fd!YZBd"e*fd#YZCd$e*fd%YZDd&e*fd'YZEd(e*fd)YZFdS(*s2 Checkpoint classes which drive archive creation. iNii(t OrderedDict(tetree(t itemgetter(turlparse( tApplicationDatatCalledProcessErrortrunt DC_PERS_LABELtDC_LABELtINSTALL_SSL_DIRt_tSYSTEM_TEMP_DIRtPopen(tUnifiedArchivetutil(tAIISOImageBootMenu(tDataObjectDict(t CreateISO(tDistro(t InstallEngine(tAbstractCheckpoint(tINSTALL_LOGGER_NAME(t Filesystem(tSize(tTransferCPIOAttr(tCerttCA_CerttCredtKeytFiletSoftwaretSourcet HTTPAuthTokentInstantiateUnifiedArchivecBsAeZdZdddddZdZdZedZRS(sInstantiate a UnifiedArchive object from an existing OVF Unified Archive descriptor, including its list of ArchiveObjects, and insert it into the data object cache. cCsltt|j|||_tjj|_d|_||_ ||_ ||_ ||_ |j dS(s'Construct an instance of this checkpoint. Arguments: path The path to the UnifiedArchive (URI) key Credentials, for use with HTTPS URIs cert cacert http_auth_token For Glance image retrieval N(tsuperR!t__init__tpathRt get_instancetdoctNonet credentialstkeytcerttcacertthttp_auth_tokent_check_credentials(tselftnameR$R)R*R+R,((scheckpoints.pyR#Hs       cCsdS(sUReturns an estimate of the time this checkpoint will take in seconds i((R.((scheckpoints.pytget_progress_estimate^sc Cst|j|j|j|jgs|jjdtdd}d}|rg|dj t j t }n|dk r|j t j t }|j tjt}|rtjjtttj}|j tjt}|r|j||_n|j tjt}|r$|j||_n|j tjt}|rT|j||_n|j tjt} | r| j|_qqqntj|j|j|j|j|_dS(sAn AI client instance will store its credentials in the DOC. If none were passed, check there before moving on. If any are present, save them to local files and store the paths on the Unified Archive once instantiated. t class_typet max_countiiN( tanyR)R*R+R,R&tget_descendantsRR'tget_first_childRt SOURCE_LABELRt FILE_LABELRt CRED_LABELtosR$tjoinR tstrtgetpidRt ELEMENT_NAMEtsave_credentialRRR tHTTP_AUTH_TOKEN_LABELR t CredentialsR(( R.t soft_nodetsrct ua_file_spectcredtbasedirR)R*tca_certR,((scheckpoints.pyR-ds:$  !   cCs0|jjd|j|j|jdkr@ttdnt|j}|jd krt j |j |j |j |j|j rt jtd|jntjtj|j|j}|jj|jdt|jjjdt}|s t|j|j|j|jdS( sExecution of this Checkpoint instantiates a UnifiedArchive object which represents an existing archive file. The UnifiedArchive is then placed in the DOC. s5InstantiateUnifiedArchive: path [%s] credentials [%s]s#checkpoint requires a path argumentthttpthttpss)server error: '%s' does not accept rangestvolatileR1N(RGRH(tloggertdebugR$R(R't ValueErrorR RtschemeRtserver_accepts_rangestgeturlR)R*R+R,t ArchiveErrorthostnameRt fromstringR tget_descriptorR&timport_from_manifest_xmltitertTrueRIR5tAssertionErrort version_checktset_ondisk_attributes(R.tdry_runtparsetroottua((scheckpoints.pytexecutes&     N( t__name__t __module__t__doc__R'R#R0R-tFalseR^(((scheckpoints.pyR!Bs     4tInitializeUnifiedArchivecBs5eZdZdZdZdZedZRS(sCheckpoint which initializes the archive and its contents based upon user criteria. This is the first Checkpoint in the set used for Unified Archive creation. Note, this checkpoint requires an ApplicationData instance to be in the DOC and for its data_dict to contain the following: path: path to the file to create. this path may include a network service (e.g. ssh, nfs, etc). state: state data, used by archiveadm.guest to instantiate a UnifiedArchive object to a known state zones: list of zones (archive objects) to include. an empty list will drive a default set based upon archive type (recovery or clone) exclude_zones: list of zones to explicitly exclude from this archive embed_zones: force zone embedding regardless of archive type archive_type: either 'recovery' or 'clone' cCsktt|j|d|_d|_d|_d|_d|_d|_ d|_ d|_ d|_ dS(N( R"RcR#R'R&tstateR$tzonest exclude_zonest exclude_dst archive_typet root_onlyt embed_zones(R.R/((scheckpoints.pyR#s        cCsdS(sUReturns an estimate of the time this checkpoint will take in seconds i((R.((scheckpoints.pyR0scCstjj|_|jjjdt}|j}|jd|_|jdk rXdSy|d|_ |d|_ |jdpg|_ |jdpg|_ |jdpg|_|jd pt|_|jd pt|_Wn"tk r}td |nXdS( sSRetrieve data from data object cache to be used by this checkpoint R1RdNR$RhReRfRgRiRjsdata_dict lookup error: (RR%R&RIR5Rt data_dicttgetRdR'R$RhReRfRgRbRiRjtKeyErrort RuntimeError(R.tappdataRkte((scheckpoints.pyt _parse_docs    c Csu|j|jdk rdtj|j}|jjj||jj d|j |j |j dS|jj d|j |j |j |jtj}g}x@|j D]5}||kr|jj d||j|qqW|r tjtddj|nxR|j D]G}tj|}|dkr|jj d |||j|qqW|rtjtd dj|nx@|jD]5}||kr|jj d ||j|qqW|rtjtd dj|n|jr<t|jttjkr<tjtd n|j dkrt|j dkrd|j krtj|_tjjd}|r|jj|qn|jdk r.tj}g} x-|jD]"} | |kr| j| qqW| r.tjtddj| q.ntj|j |j |j |j|j |j!}|jjj|dS(sTExecution of this Checkpoint instantiates a new UnifiedArchive object and places it in the DOC. The new UnifiedArchive's ArchiveObject elements are initialized as well, in preparation for DatasetDiscovery. The main thrust of the initialization is to give subsequent checkpoints an object which indicates which Solaris instances from the system are to be archived. This selection logic is based upon user criteria. If 'archive_type' is set to 'recovery', then a single ArchiveObject is required. This single ArchiveObject would be of the 'global' zone, and is to contain all zones and datasets on the system, barring those passed in explicit exclude lists. If 'archive_type' is set to 'clone', then each of the Solaris instances (read as zones) on the system is archived separately, and only the active BE and related datasets are included. By default, of no zones are passed all zones are archived. As a special case, the guest protocol may set a state string in the configuration. This state string is a primitive serialization of a UnifiedArchive instance. If discovered, this checkpoint will initialize a new UnifiedArchive to the state described by the state and insert it into the DOC. s_InitializeUnifiedArchive: initialized from guest state: UUID [%s] archive_type [%s] zones [%s]NsUInitializeUnifiedArchive: path [%s] archive_type [%s] zones [%s] exclude_zones [%s]s6InitializeUnifiedArchive: included zone '%s' not foundszones not found: %st,t installedtrunnings1InitializeUnifiedArchive: zone '%s' state is '%s'szones must be installed: %ss6InitializeUnifiedArchive: excluded zone '%s' not foundsexcluded zones not found: %ss6all installed zones excluded, at least one is requiredtrecoveryitglobalt incompletes(invalid dataset exclusion: not found: %st (RsRt("RqRdR'R t deserializeR&RItinsert_childrenRJRKtuuidRhReR$RfRtget_configured_zonestappendRPR R:tget_zone_statetsettget_all_installed_zonestlentget_installed_zonestget_zones_with_stateRltextendRgtget_all_datasetstinit_newRjRi( R.RZR]t all_zonestinvalidtzoneRdt inc_zonest all_datasetstinvalid_datasetstds((scheckpoints.pyR^sz               !$    (R_R`RaR#R0RqRbR^(((scheckpoints.pyRcs   tDatasetDiscoverycBs5eZdZdZdZdZedZRS(s]Determine each ArchiveObject's list of datasets to include and exclude based upon the user criteria and the discovered ArchiveObjects. Clone archives include only the active BE from all desired zones and related datasets. For non-global zones, related datasets are the zone's delegated datasets, and the zonepath dataset. Recovery archives contain top-level replication streams for each pool on the system, and thus all datasets and boot environments are included. Users may optionally provide a dataset exclusion list. This provides a mechanism to exclude certain ZFS assets from the archive which might otherwise be included. Each element in this list is recursive. This list is stored in the ApplicationData's data_dict, on the 'exclude_ds' key. All archives exclude swap and dump devices by default. cCs,tt|j|d|_d|_dS(N(R"RR#R'R]Rg(R.R/((scheckpoints.pyR#s cCsdS(sUReturns an estimate of the time this checkpoint will take in seconds i((R.((scheckpoints.pyR0scCsUtjj}|jjdtj}|jd|_|jjdt |_ dS(sSRetrieve data from data object cache to be used by this checkpoint R1RgN( RR%R&RIR5RRkRlRgR R](R.R&Rk((scheckpoints.pyRqscCsd|j|jjd|jjx:|jjD],}|jjd|j|j|jq0WdS(sExecution of this checkpoint looks up the UnifiedArchive object which represents the archive being created and populates each of its ArchiveObject entries' include and exclude dataset lists. This is done in preparation for ZFS stream creation. s%DatasetDiscovery: UnifiedArchive [%s]s$DatasetDiscovery: ArchiveObject [%s]N(RqRJRKR]R{tarchive_objectstdataset_discoveryRg(R.RZtarchive_object((scheckpoints.pyR^s     (R_R`RaR#R0RqRbR^(((scheckpoints.pyRs    tPrepareArchiveImagecBs5eZdZdZdZdZedZRS(sUsed during new archive creation. For each ArchiveObject, prepare boot environments as needed. This checkpoint will take different actions depending upon the archive type and other configurable parameters. cCs,tt|j|d|_d|_dS(N(R"RR#R'R]t prepare_only(R.R/((scheckpoints.pyR#s cCsdS(sUReturns an estimate of the time this checkpoint will take in seconds i((R.((scheckpoints.pyR0scCsUtjj}|jjdt|_|jjdtj}|j d|_ dS(sSRetrieve data from data object cache to be used by this checkpoint R1RN( RR%R&RIR5R R]RRkRlR(R.R&Rk((scheckpoints.pyRqscCs|j|jjd|jj|jj|jrtjj dtjj d|jj j |jj j rtjj d|jj j nx[|jj D]J}|jjdkrtjj d|jj |j j |j jfqqWndS(sEach ArchiveObject requires some level of preparation before the archive snapshots and streams are created. All archives have a mimimum set of modifications, namely to have the active boot environment's path_to_inst file reverted, and to have the links in devfs cleaned up to the minimum required. This is required for archive portability. For clone archives, further actions are then taken. The system is unconfigured, SSH keys are destroyed, non-packaged files are removed, device configuration links are removed and the image is otherwise prepared for deployment on any number of same-ISA system. If the clone archive being created is of a global zone, zone configurations are removed for any zones which are a) not in a configured state and b) not included in the Unified Archive. This preserves configurations which might be used as zone templates, and removes all installed zones' configurations which are not included. s(PrepareArchiveImage: UnifiedArchive [%s]sArchive image prepared: sArchive BE: %s s!Archive BE rollback snapshot: %s s solaris-kzs %s|%s|%s N(RqRJRKR]R{tprepare_archive_imageRtsyststdouttwritet _archive_beR/trollbackRRtbrandt mountpoint(R.RZR((scheckpoints.pyR^s          (R_R`RaR#R0RqRbR^(((scheckpoints.pyRs    tCreateArchiveStreamscBs8eZdZedZdZdZedZRS(sqUsed during new archive creation. For each newly-minted ArchiveObject, create ZFS stream package files. cCs>tt|j|d|_d|_d|_||_dS(N(R"RR#R'R]t mock_streamst skip_checkt direct_clone(R.R/tdirect((scheckpoints.pyR# s    cCsdS(sUReturns an estimate of the time this checkpoint will take in seconds i((R.((scheckpoints.pyR0scCsstjj}|jjdt|_|jjdtj}|j dpQt |_ |j dpit |_ dS(sSRetrieve data from data object cache to be used by this checkpoint R1RRN( RR%R&RIR5R R]RRkRlRbRR(R.R&Rk((scheckpoints.pyRqs cCsO|j|jjd|jj|jjd|jd|jd|jdS(sExecution of this checkpoint results in ZFS stream packages being created for each of the ArchiveObjects in the under-construction Unified Archive. A recursive snapshot is created for each of the datasets in the 'include' list of each ArchiveObject. These snapshots are tagged with the ArchiveObject's UUID. Once created, each of the datasets listed in the 'exclude' list is recursively destroyed. This results in a named snapshot per ArchiveObject which represents the data we want in the UnifiedArchive. All snapshots are then streamed to the staging directory, which is the directory of the path set in the ApplicationData. Each ZFS stream is created in its own subprocess in order to parallelize the build. Once the streams are created, the snapshots are destroyed and each ArchiveObject's 'zfs_streams' list is updated. s)CreateArchiveStreams: UnifiedArchive [%s]Rtskip_cap_checkRN( RqRJRKR]R{tcreate_archive_streamsRRR(R.RZ((scheckpoints.pyR^%s     (R_R`RaRbR#R0RqR^(((scheckpoints.pyRs    tCreateZoneMediacBsKeZdZdZdZdZedZdefdYZ RS(s`Drive the creation of bootable install media for inclusion in the Unified Archive. A Solaris Automated Installer ISO image is created for each OS version in the list of systems being archived. This media allows for deployment portability in virtual environments where bootable media is expected (Solaris Kernel Zones, LDoms, etc). cCs>tt|j|d|_d|_d|_d|_dS(N(R"RR#R'R]tbuildtmockR(R.R/((scheckpoints.pyR#Ks    cCsdS(sUReturns an estimate of the time this checkpoint will take in seconds i((R.((scheckpoints.pyR0RscCsgtjj}|jjdt|_|jjdtj}|j d|_ |j d|_ dS(sSRetrieve data from data object cache to be used by this checkpoint R1t mock_mediaRN( RR%R&RIR5R R]RRkRlRR(R.R&Rk((scheckpoints.pyRqXs cCs|j|jjd|jjx|jjD]}|jjd krNq0n|jj j }dj |j dddd!}|jj j|}|d k r||_ q0nyq|j|||j|_|jr|jj}n|jj}||_ |jjd|jj|jWq0tk r}|jjd|jj||jd k r|jj|jjd k rtjj |jj|jj}tjj|rtj|qqntjt |q0Xq0W|jjd |jjd S( s'Execution of this checkpoint looks up the UnifiedArchive object which represents the archive being created and determines if any media files need to be created, and what version they need to be. The resulting ISO image files are placed in the staging directory. s$CreateZoneMedia: UnifiedArchive [%s]tlabeledt solaris10t.iis%UnifiedArchive [%s]: created media %ss%UnifiedArchive [%s]: media failure %ss-CreateZoneMedia: UnifiedArchive [%s] completeN(RR(!RqRJRKR]R{RRRtsystemt os_versiontbranchR:tsplittai_mediaRlR't MediaBuildRRRtcreateR/t Exceptiontteardowntiso_nameR9R$t _staging_dirtexiststunlinkRRPR (R.RZRt os_brancht os_updateRRptiso((scheckpoints.pyR^dsD   "         RcBsPeZdZdZdZdZdZdZdZdZ dZ RS( s Class to abstract the creation of a basic AI ISO file. Drops a reference to itself on the UnifiedArchive instance's ai_media list so that abort_create() can handle it in its thread. attributes; media_dir the final destination for the media file iso_name name of the ISO file os_version desired OS update (e.g. '5.12.0') methods: create creates the ISO mock creates a mock ISO teardown tears down the build environment and temporary resources s/usr/sbin/mounts/usr/sbin/umountcCs|jj|_||_d|_|j|_d|_d|_ d|_ d|_ d|_ d|_ ||_tjt|_|s|jn||jj_dS(N(R]Rt media_dirRR'RR{t_uuidt_dst _build_patht _build_tmpt _ai_sourcet _ai_source_dst _ai_mediat_aotloggingt getLoggertILNRJt_verify_staging_capacityt media_build(R.RRR((scheckpoints.pyR#s           cCsotdj}tj|j}|j|j}||krk|jjd|t t d|jndS(s5Determine if we can fit new media in the staging areat750mbsaMediaBuild._verify_staging_capacity: staging area available: %s insufficient capacity to proceeds0estimated media size larger than staging area %sN( Rt byte_valueR9tstatvfsRtf_blockstf_frsizeRJRKRnR (R.testimated_media_sizet statblocktavailable_size((scheckpoints.pyRs   cCstj}tdtj|jf|_|jjrYtt d|jj ny|jj Wn)t k r}tt d|nX|jj d|_tjj|jd|_i|jd6|jd6|jd6|jd 6}|jjjtt|id d 6d d 6}|jjjtt|d|jtjf|_t|jj dd}|jjj|dS(s;Prepare the build dataset and push DC elements into the DOCs %s/build_%ss'unique build dataset %s already exists?s%failed to create media build area: %sRttmpt pkg_img_pathtba_buildttmp_dirRs boot/bios.imgsbios-eltorito-imgs boot/uefi.imgsuefi-eltorito-imgs %s_ai_%s.isos.isoiN(!RR%RRt get_root_poolRRRRnR t full_nameRRRlRR9R$R:RRR&RIRzRRt persistentRRtplatformt processorRRtrsplit(R.tengineRptd((scheckpoints.pyt _prep_builds0 "          cCsd|jtjf}tjj|j|}t|d}|jdWdQXtjj |}t j ||}||j j j|j<|S(sMock an AI ISOs %s_ai_%s.isosw+ttestfileN(RRRR9R$R:RtopenRtgetsizeR tAIMediaRR]R(R.R/t path_to_mocktftsizeR((scheckpoints.pyR sc Cstj}y|jWn8tk rT}|jjd|ttdnXyttdd|jd|j }|j |j j j dt}|j}|jd|_|jjd|_Wn8tk r}|jjd |ttd nXtd }|j|_d |_d g|_d|_|j|_ttjj|jjd}|j dt!y|j Wn8tk r}|jjd|ttdnXt"j#dkrytj$tjj|jd|j%ddddtjj|jdtjj|jdg}t&|t'd} | j t&|j(tjj|jdgtj)tjj|jdWqtk r}|jjd|ttdqXny)t*ddd|jg} | j Wn8tk r<}|jjd |ttd!nX|j+d"|j-j._/|j0} tjj1tjj|j2|j0} t3j4| | } | |j-j.j5|j <| S(#sCreate an AI ISOs MediaBuild prep_build failed: %ss"Failed to prepare media build areasdownload-ai-packagetfstversionR1tai_image_datasetRs%MediaBuild unable to locate media: %ss6Failed to locate AI media, --exclude-media may be useds ISO transfertinstalls./s-pdmt tmp_pkg_imageRZs!MediaBuild unable to populate: %ssFailed to build base AI mediati386tusrs-otros-Fthsfss solaris.zlibtbootcfgs failed to create boot config: %ssFailed media boot configurationR/s create-isotexcludesfailed create-iso: %ssFailed final media creationN(6RR%RRnRJRKR tDownloadAIPackageRRR^R&RIR5RRkRlRRRRRtdsttactiontcontentst cpio_argsRBRR9R$R:R/tdestroyRbRRtmkdirtMOUNTRRtUMOUNTtrmdirRRR'RR]RRRRR RR(R.RRpt dl_ai_packageRoRktnodeRtcmdRRR/RR((scheckpoints.pyRs|                 %#   cCs|jdk r|jjr|jdk rtjj|jd}tjj|rtj|r|t|j |gq|qn|j dk r|j j dt n|jj dt dt ndS(sXTeardown and cleanup any temporary resources in use by this ISO RRZt recursiveN(RR'RRR9R$R:tlistdirRRRRRbRV(R.t build_usr((scheckpoints.pyRns( R_R`RaRRR#RRRRR(((scheckpoints.pyRs   * X( R_R`RaR#R0RqRbR^tobjectR(((scheckpoints.pyRCs    AssembleUnifiedArchive: UnifiedArchive [%s] assembly complete.N(RqRJRKR]R{tassemble_archive(R.RZ((scheckpoints.pyR^s      (R_R`RaR#R0RqRbR^(((scheckpoints.pyRs    RcBseZdZdZdZdZdZdZd d d d dZ dZ dZ d d d Z d Z d Zed ZRS(sSAttempt to find and download the solaris-auto-install package from the publishers configured on the system. For use in media creation. If a suitable image is found, this checkpoint will create a new dataset and create the pkg image in it. The pkg image dataset's name is set on the ApplicationData dictionary's "ai_image_dataset" key, and should be destroyed by the caller when no longer needed. Note this pkg image dataset's mountpoint should be passed in any exclusion lists for building images (i.e. as part of the 'exclude' list in CreateISO and CreateUSB). s/usr/sbin/mounts"install-image/solaris-auto-installs /usr/bin/pkgs/usr/bin/pkgrepos/usr/sbin/umountcCstt|j|||_||_|dk|dkArSttdn||_||_ ||_ t j j |_ d|_dS(s3Construct an instance of this checkpoint. Arguments: repo_uri URI of an IPS repository which provides the AI source key Credentials, for use with HTTPS IPS URIs cert version A specific update version to search for or validate sboth key and cert requiredN(R"RR#Rtrepo_uriR'RLR R)R*RRR%R&timagedir(R.R/RRR)R*R((scheckpoints.pyR#s      cCsdS(sUReturns an estimate of the time this checkpoint will take in seconds i((R.((scheckpoints.pyR0scCs|jjd|jg}|dkr|jdddd|dj|jg}|djdk r|djdk rd|djd|djg|d d +n|j|n|j|jddd|jgx|D]}|jd |jd d |jg}|jdk r@|jdk r@|j d|jd|jgn|j dk rh|j d|j gn|j||j|qWy/x|D]}t |qW|jjdWn2t k r}t jtd|jjnXdS(sUCreate a temporary pkg image at self.imagedir using the force flag as we may be doing this more than once while looking for an image. 'pub' is the name of the publisher to set in the pkg image and 'origins' is a list of util.OriginInfo namedtuples which describe said publisher's origin information. s+DownloadAIPackage: creating pkg image at %ss image-creates-fs--users-pis-ks-ciis-Rs set-publishers-gs--proxys-DownloadAIPackage: AI image creation completes-Unable to create a suitable package image: %sN(RJRKRR'tPKGturiR)R*R}RtproxyRRRRPR tpopentstderr(R.tpubtoriginstcmdsRtoriginterr((scheckpoints.pyt_create_pkg_images6   &*   "  c Cs|jd|j}|jddd||g}|dk rc|dk rcd|d|g|dd+nyt|d id d 6}Wntk rdSXg|jjD]}|jd ^qd }dj |j|gSdS(sSearch all packages on publisher(s) found at 'uri'. Return string of package FMRI of highest version matching our desired OS update. 'key' and 'cert' are the credentials needed to access 'uri' if it is a secure repo. s@*-tlists-Hs-ss--keys--certiitenvtCtLC_ALLiit@N( tPNAMERtPKGREPOR'RRRt splitlinesRR:( R.RR)R*R/Rtptltfmri((scheckpoints.pyt_ai_image_get_fmri!s 0cCstjj|jd}tjdt}zy&t|jdddd||gWn&t k r{t j t dn@Xtjj|d}tjj |st j t d nWd t|j|gd tjtj|Xd S( sFinal check to ensure archive support in the image. Mount the compressed install environment and verify that archive software is present. s solaris.zlibtdirs-FRs-oRs$unable to mount AI image environmentssbin/archiveadms,AI image does not support archive deploymentNt check_result(R9R$R:RttempfiletmkdtempR RRRRRPR RRR tANYR(R.t zlib_patht zlib_mounttcli((scheckpoints.pyt_ai_verify_archive_support5s& c sg}|jr|jjd|j|j|j|j|j|j}|d"krttj t d|jn|j d"tj |jd"|j|jg|fn|jjd|jyr|jd|j }t|jdddd|gd id d 6}g|jjD]}|jd ^q}Wn&tk r[tj t d nXg|D](}t|tdjdd^qc}gtj|td D]\} } t| d ^q}t|jddddgd id d 6}g|jjD]} | jd ^q|jdfdx3|D]+\} }|j | tj| |fqCWx1|D]\} } }|j| | t|jd|jddd|g|jjd|| d"k rd| nd| d jy|jWn/tj k r3}|jjd||qyqyX|jjd|| d"k rYd| nd| d jPqyW|jrtj t d ntj t d!d"S(#sDownload the AI images>DownloadAIPackage: checking publishers on '%s' for package: %ss%URI '%s' does not publish an AI images;DownloadAIPackage: searching all publishers for package: %ss@*-Rs-afs-vs-HRRRis+unable to find a supported AI image versionspkg://t/it publishers-FttsvR)csj|dS(Ni(tindex(tx(tall_pubs(scheckpoints.pytss-RRs-qs--accepts)DownloadAIPackage: installed '%s' from %ss pubisher '%s'sURI '%s'sDownloadAIPackage: '%s': %ss,DownloadAIPackage: using source '%s' from %sspublisher '%s's1IPS repo publishes an unsupported AI client images#unable to find a supported AI imageN( RRJRKRRR)R*R'RRPR R}t OriginInfoRRRRRRRttupleRt itertoolstgroupbyRRtsorttpkg_get_originsRRRR&(R.tai_pkgsRt name_patterntprocRt full_fmrisRtfmrist__tgroupRR R R((R,scheckpoints.pyt_image_downloadIsh      0 2 8,#    cCsttjj|jjd}|j|jd|_|j |j j j dt }|j}||dR#tdatasett ai_sourcetarchive_source(R.R/R?R@RA((scheckpoints.pyR#s   cCsdS(sUReturns an estimate of the time this checkpoint will take in seconds i<((R.((scheckpoints.pyR0scCstd}|j|_d|_dg|_d|_|jjdrtj dt dd}|j d d |j|g}t |||_ n |j|_ y|jWn)tk r}|jjt|nX|jjdrt |j|gnd S( s]Create a TransferCPIO object and copy the contents of the ISO to the dataset s ISO transferRs./s-pdms.isoRtprefixt ai_iso_mp_s-FRN(RR?RRRRR@tendswithR R!R RRRBR^RRJtcriticalR;R(R.RttempdirRR((scheckpoints.pytcopy_componentss$         cCsytjj|j|j}tjj|r=tj|ntj|j|t j |j tjj|jddS(sCopy the archive into the dataset and update the default AI manifest with one that looks for the archive on boot s archive.uarN( R9R$R:R?tDEFAULT_MANIFESTRRtsymlinktARCHIVE_MANIFESTtshutiltcopyRA(R.tdefault_manifest((scheckpoints.pyt add_archives  c Cstjtjj|jd|jjd|jddddtjj|jdtjj|jdg}t|dS( suMount solaris.zlib in the dataset so grub2 can use components to update the grub2 configuration file Rsmounting solaris.zlibs-oRs-FRs solaris.zlibN( R9RR$R:R?RJRKRR(R.R((scheckpoints.pyt mount_zlibs cCsY|jjd|j|jdk r6|jntjdkrU|jndS(s3Copy the Unified Archive into the media build area.s(=== Executing Add Archive Checkpoint ===RN( RJtinfoRGRAR'RNRRRO(R.RZ((scheckpoints.pyR^&s   N(R_R`RaRHRJRRR'R#R0RGRNRORbR^(((scheckpoints.pyR>s     (GRaR0RR9RRKRR t version_infot ordereddictRt collectionstlxmlRtoperatorRRtsolaris_installRRRRRR R R R tsolaris_install.archiveR Rtsolaris_install.boot.bootRt%solaris_install.data_object.data_dictRt3solaris_install.distro_const.checkpoints.create_isoRt(solaris_install.distro_const.distro_specRtsolaris_install.engineRt!solaris_install.engine.checkpointRt Checkpointtsolaris_install.loggerRRtsolaris_install.target.logicalRtsolaris_install.target.sizeRtsolaris_install.transfer.cpioRtsolaris_install.transfer.infoRRRRRRRR R!RcRRRRRRRR>(((scheckpoints.pytsL       @:~=F<;'$