diff --git a/404.html b/404.html new file mode 100644 index 00000000..c93949e0 --- /dev/null +++ b/404.html @@ -0,0 +1,2289 @@ + + + + + + + + + + + + + + + + OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
404 - Not found
+ + + + + + + + \ No newline at end of file diff --git a/assets/Access-Point.svg b/assets/Access-Point.svg new file mode 100644 index 00000000..d9e92657 --- /dev/null +++ b/assets/Access-Point.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/assets/Madison_Skyline.jpeg b/assets/Madison_Skyline.jpeg new file mode 100644 index 00000000..c978218a Binary files /dev/null and b/assets/Madison_Skyline.jpeg differ diff --git a/assets/OSGConnect_Logo.png b/assets/OSGConnect_Logo.png new file mode 100644 index 00000000..0d31e859 Binary files /dev/null and b/assets/OSGConnect_Logo.png differ diff --git a/assets/OSGConnect_Logo_Dark_BG.png b/assets/OSGConnect_Logo_Dark_BG.png new file mode 100644 index 00000000..f3f3a75c Binary files /dev/null and b/assets/OSGConnect_Logo_Dark_BG.png differ diff --git a/assets/OSG_Logo.png b/assets/OSG_Logo.png new file mode 100644 index 00000000..e69de29b diff --git a/assets/OSG_Logo.svg b/assets/OSG_Logo.svg new file mode 100644 index 00000000..b7a7598d --- /dev/null +++ b/assets/OSG_Logo.svg @@ -0,0 +1,15 @@ + + + + + + + + diff --git a/assets/OSG_Logo_Big.png b/assets/OSG_Logo_Big.png new file mode 100644 index 00000000..6db5cd99 Binary files /dev/null and b/assets/OSG_Logo_Big.png differ diff --git a/assets/OSG_Portal_Logo.png b/assets/OSG_Portal_Logo.png new file mode 100644 index 00000000..277c46f4 Binary files /dev/null and b/assets/OSG_Portal_Logo.png differ diff --git a/assets/OSPool_Stylized.svg b/assets/OSPool_Stylized.svg new file mode 100644 index 00000000..ef4581c6 --- /dev/null +++ b/assets/OSPool_Stylized.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/assets/PATh/registration/cilogon.png b/assets/PATh/registration/cilogon.png new file mode 100644 index 00000000..51103424 Binary files /dev/null and b/assets/PATh/registration/cilogon.png differ diff --git a/assets/PATh/registration/comanage-email-verification-form.png b/assets/PATh/registration/comanage-email-verification-form.png new file mode 100644 index 00000000..5985766d Binary files /dev/null and b/assets/PATh/registration/comanage-email-verification-form.png differ diff --git a/assets/PATh/registration/comanage-enrollment-form.png b/assets/PATh/registration/comanage-enrollment-form.png new file mode 100644 index 00000000..b5bb671f Binary files /dev/null and b/assets/PATh/registration/comanage-enrollment-form.png differ diff --git a/assets/PATh/registration/comanage-verified-email.png b/assets/PATh/registration/comanage-verified-email.png new file mode 100644 index 00000000..56e1568a Binary files /dev/null and b/assets/PATh/registration/comanage-verified-email.png differ diff --git a/assets/PATh/registration/ssh-add-key.png b/assets/PATh/registration/ssh-add-key.png new file mode 100644 index 00000000..16b65ab7 Binary files /dev/null and b/assets/PATh/registration/ssh-add-key.png differ diff --git a/assets/PATh/registration/ssh-authenticator-select.png b/assets/PATh/registration/ssh-authenticator-select.png new file mode 100644 index 00000000..18456375 Binary files /dev/null and b/assets/PATh/registration/ssh-authenticator-select.png differ diff --git a/assets/PATh/registration/ssh-edit-profile.png b/assets/PATh/registration/ssh-edit-profile.png new file mode 100644 index 00000000..e097ecb8 Binary files /dev/null and b/assets/PATh/registration/ssh-edit-profile.png differ diff --git a/assets/PATh/registration/ssh-homepage-dropdown.png b/assets/PATh/registration/ssh-homepage-dropdown.png new file mode 100644 index 00000000..ff53a47e Binary files /dev/null and 
b/assets/PATh/registration/ssh-homepage-dropdown.png differ diff --git a/assets/PATh/registration/ssh-key-list.png b/assets/PATh/registration/ssh-key-list.png new file mode 100644 index 00000000..ae7eac8c Binary files /dev/null and b/assets/PATh/registration/ssh-key-list.png differ diff --git a/assets/PATh_Logo_Primary_Color_Portal.png b/assets/PATh_Logo_Primary_Color_Portal.png new file mode 100644 index 00000000..7a0398fa Binary files /dev/null and b/assets/PATh_Logo_Primary_Color_Portal.png differ diff --git a/assets/PATh_Logo_Round_Color.png b/assets/PATh_Logo_Round_Color.png new file mode 100644 index 00000000..dc300f90 Binary files /dev/null and b/assets/PATh_Logo_Round_Color.png differ diff --git a/assets/PATh_Logo_Round_Color_White_Border.png b/assets/PATh_Logo_Round_Color_White_Border.png new file mode 100644 index 00000000..5d405046 Binary files /dev/null and b/assets/PATh_Logo_Round_Color_White_Border.png differ diff --git a/assets/ap7-images/cilogon.png b/assets/ap7-images/cilogon.png new file mode 100644 index 00000000..51103424 Binary files /dev/null and b/assets/ap7-images/cilogon.png differ diff --git a/assets/ap7-images/comanage-email-verification-form.png b/assets/ap7-images/comanage-email-verification-form.png new file mode 100644 index 00000000..d15a96b9 Binary files /dev/null and b/assets/ap7-images/comanage-email-verification-form.png differ diff --git a/assets/ap7-images/comanage-enrollment-form.png b/assets/ap7-images/comanage-enrollment-form.png new file mode 100644 index 00000000..8acd6669 Binary files /dev/null and b/assets/ap7-images/comanage-enrollment-form.png differ diff --git a/assets/ap7-images/ssh-authenticator-select.png b/assets/ap7-images/ssh-authenticator-select.png new file mode 100644 index 00000000..f6e8508c Binary files /dev/null and b/assets/ap7-images/ssh-authenticator-select.png differ diff --git a/assets/ap7-images/ssh-edit-profile.png b/assets/ap7-images/ssh-edit-profile.png new file mode 100644 index 00000000..e76bb05d Binary files /dev/null and b/assets/ap7-images/ssh-edit-profile.png differ diff --git a/assets/ap7-images/ssh-homepage-dropdown.png b/assets/ap7-images/ssh-homepage-dropdown.png new file mode 100644 index 00000000..13a74329 Binary files /dev/null and b/assets/ap7-images/ssh-homepage-dropdown.png differ diff --git a/assets/ap7-images/ssh-key-list.png b/assets/ap7-images/ssh-key-list.png new file mode 100644 index 00000000..71b7ccfa Binary files /dev/null and b/assets/ap7-images/ssh-key-list.png differ diff --git a/assets/file-text.svg b/assets/file-text.svg new file mode 100644 index 00000000..d6b8b8b9 --- /dev/null +++ b/assets/file-text.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/assets/gracc-screenshot.png b/assets/gracc-screenshot.png new file mode 100644 index 00000000..2a81a025 Binary files /dev/null and b/assets/gracc-screenshot.png differ diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 00000000..1cf13b9f Binary files /dev/null and b/assets/images/favicon.png differ diff --git a/assets/javascripts/bundle.c44cc438.min.js b/assets/javascripts/bundle.c44cc438.min.js new file mode 100644 index 00000000..66025568 --- /dev/null +++ b/assets/javascripts/bundle.c44cc438.min.js @@ -0,0 +1,29 @@ +(()=>{var ta=Object.create;var xr=Object.defineProperty;var ra=Object.getOwnPropertyDescriptor;var na=Object.getOwnPropertyNames,Rt=Object.getOwnPropertySymbols,oa=Object.getPrototypeOf,Sr=Object.prototype.hasOwnProperty,an=Object.prototype.propertyIsEnumerable;var 
on=(e,t,r)=>t in e?xr(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,$=(e,t)=>{for(var r in t||(t={}))Sr.call(t,r)&&on(e,r,t[r]);if(Rt)for(var r of Rt(t))an.call(t,r)&&on(e,r,t[r]);return e};var sn=(e,t)=>{var r={};for(var n in e)Sr.call(e,n)&&t.indexOf(n)<0&&(r[n]=e[n]);if(e!=null&&Rt)for(var n of Rt(e))t.indexOf(n)<0&&an.call(e,n)&&(r[n]=e[n]);return r};var vt=(e,t)=>()=>(t||e((t={exports:{}}).exports,t),t.exports);var ia=(e,t,r,n)=>{if(t&&typeof t=="object"||typeof t=="function")for(let o of na(t))!Sr.call(e,o)&&o!==r&&xr(e,o,{get:()=>t[o],enumerable:!(n=ra(t,o))||n.enumerable});return e};var Ke=(e,t,r)=>(r=e!=null?ta(oa(e)):{},ia(t||!e||!e.__esModule?xr(r,"default",{value:e,enumerable:!0}):r,e));var un=vt((wr,cn)=>{(function(e,t){typeof wr=="object"&&typeof cn!="undefined"?t():typeof define=="function"&&define.amd?define(t):t()})(wr,function(){"use strict";function e(r){var n=!0,o=!1,i=null,a={text:!0,search:!0,url:!0,tel:!0,email:!0,password:!0,number:!0,date:!0,month:!0,week:!0,time:!0,datetime:!0,"datetime-local":!0};function s(_){return!!(_&&_!==document&&_.nodeName!=="HTML"&&_.nodeName!=="BODY"&&"classList"in _&&"contains"in _.classList)}function c(_){var je=_.type,he=_.tagName;return!!(he==="INPUT"&&a[je]&&!_.readOnly||he==="TEXTAREA"&&!_.readOnly||_.isContentEditable)}function u(_){_.classList.contains("focus-visible")||(_.classList.add("focus-visible"),_.setAttribute("data-focus-visible-added",""))}function f(_){!_.hasAttribute("data-focus-visible-added")||(_.classList.remove("focus-visible"),_.removeAttribute("data-focus-visible-added"))}function p(_){_.metaKey||_.altKey||_.ctrlKey||(s(r.activeElement)&&u(r.activeElement),n=!0)}function l(_){n=!1}function d(_){!s(_.target)||(n||c(_.target))&&u(_.target)}function h(_){!s(_.target)||(_.target.classList.contains("focus-visible")||_.target.hasAttribute("data-focus-visible-added"))&&(o=!0,window.clearTimeout(i),i=window.setTimeout(function(){o=!1},100),f(_.target))}function b(_){document.visibilityState==="hidden"&&(o&&(n=!0),F())}function F(){document.addEventListener("mousemove",U),document.addEventListener("mousedown",U),document.addEventListener("mouseup",U),document.addEventListener("pointermove",U),document.addEventListener("pointerdown",U),document.addEventListener("pointerup",U),document.addEventListener("touchmove",U),document.addEventListener("touchstart",U),document.addEventListener("touchend",U)}function K(){document.removeEventListener("mousemove",U),document.removeEventListener("mousedown",U),document.removeEventListener("mouseup",U),document.removeEventListener("pointermove",U),document.removeEventListener("pointerdown",U),document.removeEventListener("pointerup",U),document.removeEventListener("touchmove",U),document.removeEventListener("touchstart",U),document.removeEventListener("touchend",U)}function U(_){_.target.nodeName&&_.target.nodeName.toLowerCase()==="html"||(n=!1,K())}document.addEventListener("keydown",p,!0),document.addEventListener("mousedown",l,!0),document.addEventListener("pointerdown",l,!0),document.addEventListener("touchstart",l,!0),document.addEventListener("visibilitychange",b,!0),F(),r.addEventListener("focus",d,!0),r.addEventListener("blur",h,!0),r.nodeType===Node.DOCUMENT_FRAGMENT_NODE&&r.host?r.host.setAttribute("data-js-focus-visible",""):r.nodeType===Node.DOCUMENT_NODE&&(document.documentElement.classList.add("js-focus-visible"),document.documentElement.setAttribute("data-js-focus-visible",""))}if(typeof window!="undefined"&&typeof 
document!="undefined"){window.applyFocusVisiblePolyfill=e;var t;try{t=new CustomEvent("focus-visible-polyfill-ready")}catch(r){t=document.createEvent("CustomEvent"),t.initCustomEvent("focus-visible-polyfill-ready",!1,!1,{})}window.dispatchEvent(t)}typeof document!="undefined"&&e(document)})});var fn=vt(Er=>{(function(e){var t=function(){try{return!!Symbol.iterator}catch(u){return!1}},r=t(),n=function(u){var f={next:function(){var p=u.shift();return{done:p===void 0,value:p}}};return r&&(f[Symbol.iterator]=function(){return f}),f},o=function(u){return encodeURIComponent(u).replace(/%20/g,"+")},i=function(u){return decodeURIComponent(String(u).replace(/\+/g," "))},a=function(){var u=function(p){Object.defineProperty(this,"_entries",{writable:!0,value:{}});var l=typeof p;if(l!=="undefined")if(l==="string")p!==""&&this._fromString(p);else if(p instanceof u){var d=this;p.forEach(function(K,U){d.append(U,K)})}else if(p!==null&&l==="object")if(Object.prototype.toString.call(p)==="[object Array]")for(var h=0;hd[0]?1:0}),u._entries&&(u._entries={});for(var p=0;p1?i(d[1]):"")}})})(typeof global!="undefined"?global:typeof window!="undefined"?window:typeof self!="undefined"?self:Er);(function(e){var t=function(){try{var o=new e.URL("b","http://a");return o.pathname="c d",o.href==="http://a/c%20d"&&o.searchParams}catch(i){return!1}},r=function(){var o=e.URL,i=function(c,u){typeof c!="string"&&(c=String(c)),u&&typeof u!="string"&&(u=String(u));var f=document,p;if(u&&(e.location===void 0||u!==e.location.href)){u=u.toLowerCase(),f=document.implementation.createHTMLDocument(""),p=f.createElement("base"),p.href=u,f.head.appendChild(p);try{if(p.href.indexOf(u)!==0)throw new Error(p.href)}catch(_){throw new Error("URL unable to set base "+u+" due to "+_)}}var l=f.createElement("a");l.href=c,p&&(f.body.appendChild(l),l.href=l.href);var d=f.createElement("input");if(d.type="url",d.value=c,l.protocol===":"||!/:/.test(l.href)||!d.checkValidity()&&!u)throw new TypeError("Invalid URL");Object.defineProperty(this,"_anchorElement",{value:l});var h=new e.URLSearchParams(this.search),b=!0,F=!0,K=this;["append","delete","set"].forEach(function(_){var je=h[_];h[_]=function(){je.apply(h,arguments),b&&(F=!1,K.search=h.toString(),F=!0)}}),Object.defineProperty(this,"searchParams",{value:h,enumerable:!0});var U=void 0;Object.defineProperty(this,"_updateSearchParams",{enumerable:!1,configurable:!1,writable:!1,value:function(){this.search!==U&&(U=this.search,F&&(b=!1,this.searchParams._fromString(this.search),b=!0))}})},a=i.prototype,s=function(c){Object.defineProperty(a,c,{get:function(){return this._anchorElement[c]},set:function(u){this._anchorElement[c]=u},enumerable:!0})};["hash","host","hostname","port","protocol"].forEach(function(c){s(c)}),Object.defineProperty(a,"search",{get:function(){return this._anchorElement.search},set:function(c){this._anchorElement.search=c,this._updateSearchParams()},enumerable:!0}),Object.defineProperties(a,{toString:{get:function(){var c=this;return function(){return c.href}}},href:{get:function(){return this._anchorElement.href.replace(/\?$/,"")},set:function(c){this._anchorElement.href=c,this._updateSearchParams()},enumerable:!0},pathname:{get:function(){return this._anchorElement.pathname.replace(/(^\/?)/,"/")},set:function(c){this._anchorElement.pathname=c},enumerable:!0},origin:{get:function(){var c={"http:":80,"https:":443,"ftp:":21}[this._anchorElement.protocol],u=this._anchorElement.port!=c&&this._anchorElement.port!=="";return 
this._anchorElement.protocol+"//"+this._anchorElement.hostname+(u?":"+this._anchorElement.port:"")},enumerable:!0},password:{get:function(){return""},set:function(c){},enumerable:!0},username:{get:function(){return""},set:function(c){},enumerable:!0}}),i.createObjectURL=function(c){return o.createObjectURL.apply(o,arguments)},i.revokeObjectURL=function(c){return o.revokeObjectURL.apply(o,arguments)},e.URL=i};if(t()||r(),e.location!==void 0&&!("origin"in e.location)){var n=function(){return e.location.protocol+"//"+e.location.hostname+(e.location.port?":"+e.location.port:"")};try{Object.defineProperty(e.location,"origin",{get:n,enumerable:!0})}catch(o){setInterval(function(){e.location.origin=n()},100)}}})(typeof global!="undefined"?global:typeof window!="undefined"?window:typeof self!="undefined"?self:Er)});var Rn=vt((Us,Pt)=>{/*! ***************************************************************************** +Copyright (c) Microsoft Corporation. + +Permission to use, copy, modify, and/or distribute this software for any +purpose with or without fee is hereby granted. + +THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH +REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, +INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR +OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +PERFORMANCE OF THIS SOFTWARE. +***************************************************************************** */var pn,ln,mn,dn,hn,bn,vn,gn,yn,kt,Or,xn,Sn,wn,rt,En,On,_n,Tn,Mn,Ln,An,Cn,Ht;(function(e){var t=typeof global=="object"?global:typeof self=="object"?self:typeof this=="object"?this:{};typeof define=="function"&&define.amd?define("tslib",["exports"],function(n){e(r(t,r(n)))}):typeof Pt=="object"&&typeof Pt.exports=="object"?e(r(t,r(Pt.exports))):e(r(t));function r(n,o){return n!==t&&(typeof Object.create=="function"?Object.defineProperty(n,"__esModule",{value:!0}):n.__esModule=!0),function(i,a){return n[i]=o?o(i,a):a}}})(function(e){var t=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(n,o){n.__proto__=o}||function(n,o){for(var i in o)Object.prototype.hasOwnProperty.call(o,i)&&(n[i]=o[i])};pn=function(n,o){if(typeof o!="function"&&o!==null)throw new TypeError("Class extends value "+String(o)+" is not a constructor or null");t(n,o);function i(){this.constructor=n}n.prototype=o===null?Object.create(o):(i.prototype=o.prototype,new i)},ln=Object.assign||function(n){for(var o,i=1,a=arguments.length;i=0;f--)(u=n[f])&&(c=(s<3?u(c):s>3?u(o,i,c):u(o,i))||c);return s>3&&c&&Object.defineProperty(o,i,c),c},hn=function(n,o){return function(i,a){o(i,a,n)}},bn=function(n,o){if(typeof Reflect=="object"&&typeof Reflect.metadata=="function")return Reflect.metadata(n,o)},vn=function(n,o,i,a){function s(c){return c instanceof i?c:new i(function(u){u(c)})}return new(i||(i=Promise))(function(c,u){function f(d){try{l(a.next(d))}catch(h){u(h)}}function p(d){try{l(a.throw(d))}catch(h){u(h)}}function l(d){d.done?c(d.value):s(d.value).then(f,p)}l((a=a.apply(n,o||[])).next())})},gn=function(n,o){var i={label:0,sent:function(){if(c[0]&1)throw c[1];return c[1]},trys:[],ops:[]},a,s,c,u;return u={next:f(0),throw:f(1),return:f(2)},typeof Symbol=="function"&&(u[Symbol.iterator]=function(){return this}),u;function f(l){return function(d){return p([l,d])}}function p(l){if(a)throw new 
TypeError("Generator is already executing.");for(;i;)try{if(a=1,s&&(c=l[0]&2?s.return:l[0]?s.throw||((c=s.return)&&c.call(s),0):s.next)&&!(c=c.call(s,l[1])).done)return c;switch(s=0,c&&(l=[l[0]&2,c.value]),l[0]){case 0:case 1:c=l;break;case 4:return i.label++,{value:l[1],done:!1};case 5:i.label++,s=l[1],l=[0];continue;case 7:l=i.ops.pop(),i.trys.pop();continue;default:if(c=i.trys,!(c=c.length>0&&c[c.length-1])&&(l[0]===6||l[0]===2)){i=0;continue}if(l[0]===3&&(!c||l[1]>c[0]&&l[1]=n.length&&(n=void 0),{value:n&&n[a++],done:!n}}};throw new TypeError(o?"Object is not iterable.":"Symbol.iterator is not defined.")},Or=function(n,o){var i=typeof Symbol=="function"&&n[Symbol.iterator];if(!i)return n;var a=i.call(n),s,c=[],u;try{for(;(o===void 0||o-- >0)&&!(s=a.next()).done;)c.push(s.value)}catch(f){u={error:f}}finally{try{s&&!s.done&&(i=a.return)&&i.call(a)}finally{if(u)throw u.error}}return c},xn=function(){for(var n=[],o=0;o1||f(b,F)})})}function f(b,F){try{p(a[b](F))}catch(K){h(c[0][3],K)}}function p(b){b.value instanceof rt?Promise.resolve(b.value.v).then(l,d):h(c[0][2],b)}function l(b){f("next",b)}function d(b){f("throw",b)}function h(b,F){b(F),c.shift(),c.length&&f(c[0][0],c[0][1])}},On=function(n){var o,i;return o={},a("next"),a("throw",function(s){throw s}),a("return"),o[Symbol.iterator]=function(){return this},o;function a(s,c){o[s]=n[s]?function(u){return(i=!i)?{value:rt(n[s](u)),done:s==="return"}:c?c(u):u}:c}},_n=function(n){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var o=n[Symbol.asyncIterator],i;return o?o.call(n):(n=typeof kt=="function"?kt(n):n[Symbol.iterator](),i={},a("next"),a("throw"),a("return"),i[Symbol.asyncIterator]=function(){return this},i);function a(c){i[c]=n[c]&&function(u){return new Promise(function(f,p){u=n[c](u),s(f,p,u.done,u.value)})}}function s(c,u,f,p){Promise.resolve(p).then(function(l){c({value:l,done:f})},u)}},Tn=function(n,o){return Object.defineProperty?Object.defineProperty(n,"raw",{value:o}):n.raw=o,n};var r=Object.create?function(n,o){Object.defineProperty(n,"default",{enumerable:!0,value:o})}:function(n,o){n.default=o};Mn=function(n){if(n&&n.__esModule)return n;var o={};if(n!=null)for(var i in n)i!=="default"&&Object.prototype.hasOwnProperty.call(n,i)&&Ht(o,n,i);return r(o,n),o},Ln=function(n){return n&&n.__esModule?n:{default:n}},An=function(n,o,i,a){if(i==="a"&&!a)throw new TypeError("Private accessor was defined without a getter");if(typeof o=="function"?n!==o||!a:!o.has(n))throw new TypeError("Cannot read private member from an object whose class did not declare it");return i==="m"?a:i==="a"?a.call(n):a?a.value:o.get(n)},Cn=function(n,o,i,a,s){if(a==="m")throw new TypeError("Private method is not writable");if(a==="a"&&!s)throw new TypeError("Private accessor was defined without a setter");if(typeof o=="function"?n!==o||!s:!o.has(n))throw new TypeError("Cannot write private member to an object whose class did not declare it");return a==="a"?s.call(n,i):s?s.value=i:o.set(n,i),i},e("__extends",pn),e("__assign",ln),e("__rest",mn),e("__decorate",dn),e("__param",hn),e("__metadata",bn),e("__awaiter",vn),e("__generator",gn),e("__exportStar",yn),e("__createBinding",Ht),e("__values",kt),e("__read",Or),e("__spread",xn),e("__spreadArrays",Sn),e("__spreadArray",wn),e("__await",rt),e("__asyncGenerator",En),e("__asyncDelegator",On),e("__asyncValues",_n),e("__makeTemplateObject",Tn),e("__importStar",Mn),e("__importDefault",Ln),e("__classPrivateFieldGet",An),e("__classPrivateFieldSet",Cn)})});var Yr=vt((Mt,Kr)=>{/*! 
+ * clipboard.js v2.0.10 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */(function(t,r){typeof Mt=="object"&&typeof Kr=="object"?Kr.exports=r():typeof define=="function"&&define.amd?define([],r):typeof Mt=="object"?Mt.ClipboardJS=r():t.ClipboardJS=r()})(Mt,function(){return function(){var e={686:function(n,o,i){"use strict";i.d(o,{default:function(){return ea}});var a=i(279),s=i.n(a),c=i(370),u=i.n(c),f=i(817),p=i.n(f);function l(I){try{return document.execCommand(I)}catch(M){return!1}}var d=function(M){var w=p()(M);return l("cut"),w},h=d;function b(I){var M=document.documentElement.getAttribute("dir")==="rtl",w=document.createElement("textarea");w.style.fontSize="12pt",w.style.border="0",w.style.padding="0",w.style.margin="0",w.style.position="absolute",w.style[M?"right":"left"]="-9999px";var W=window.pageYOffset||document.documentElement.scrollTop;return w.style.top="".concat(W,"px"),w.setAttribute("readonly",""),w.value=I,w}var F=function(M){var w=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body},W="";if(typeof M=="string"){var R=b(M);w.container.appendChild(R),W=p()(R),l("copy"),R.remove()}else W=p()(M),l("copy");return W},K=F;function U(I){return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?U=function(w){return typeof w}:U=function(w){return w&&typeof Symbol=="function"&&w.constructor===Symbol&&w!==Symbol.prototype?"symbol":typeof w},U(I)}var _=function(){var M=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},w=M.action,W=w===void 0?"copy":w,R=M.container,N=M.target,Oe=M.text;if(W!=="copy"&&W!=="cut")throw new Error('Invalid "action" value, use either "copy" or "cut"');if(N!==void 0)if(N&&U(N)==="object"&&N.nodeType===1){if(W==="copy"&&N.hasAttribute("disabled"))throw new Error('Invalid "target" attribute. Please use "readonly" instead of "disabled" attribute');if(W==="cut"&&(N.hasAttribute("readonly")||N.hasAttribute("disabled")))throw new Error(`Invalid "target" attribute. 
You can't cut text from elements with "readonly" or "disabled" attributes`)}else throw new Error('Invalid "target" value, use a valid Element');if(Oe)return K(Oe,{container:R});if(N)return W==="cut"?h(N):K(N,{container:R})},je=_;function he(I){return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?he=function(w){return typeof w}:he=function(w){return w&&typeof Symbol=="function"&&w.constructor===Symbol&&w!==Symbol.prototype?"symbol":typeof w},he(I)}function tt(I,M){if(!(I instanceof M))throw new TypeError("Cannot call a class as a function")}function nn(I,M){for(var w=0;w0&&arguments[0]!==void 0?arguments[0]:{};this.action=typeof R.action=="function"?R.action:this.defaultAction,this.target=typeof R.target=="function"?R.target:this.defaultTarget,this.text=typeof R.text=="function"?R.text:this.defaultText,this.container=he(R.container)==="object"?R.container:document.body}},{key:"listenClick",value:function(R){var N=this;this.listener=u()(R,"click",function(Oe){return N.onClick(Oe)})}},{key:"onClick",value:function(R){var N=R.delegateTarget||R.currentTarget,Oe=this.action(N)||"copy",Ct=je({action:Oe,container:this.container,target:this.target(N),text:this.text(N)});this.emit(Ct?"success":"error",{action:Oe,text:Ct,trigger:N,clearSelection:function(){N&&N.focus(),document.activeElement.blur(),window.getSelection().removeAllRanges()}})}},{key:"defaultAction",value:function(R){return yr("action",R)}},{key:"defaultTarget",value:function(R){var N=yr("target",R);if(N)return document.querySelector(N)}},{key:"defaultText",value:function(R){return yr("text",R)}},{key:"destroy",value:function(){this.listener.destroy()}}],[{key:"copy",value:function(R){var N=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body};return K(R,N)}},{key:"cut",value:function(R){return h(R)}},{key:"isSupported",value:function(){var R=arguments.length>0&&arguments[0]!==void 0?arguments[0]:["copy","cut"],N=typeof R=="string"?[R]:R,Oe=!!document.queryCommandSupported;return N.forEach(function(Ct){Oe=Oe&&!!document.queryCommandSupported(Ct)}),Oe}}]),w}(s()),ea=Zi},828:function(n){var o=9;if(typeof Element!="undefined"&&!Element.prototype.matches){var i=Element.prototype;i.matches=i.matchesSelector||i.mozMatchesSelector||i.msMatchesSelector||i.oMatchesSelector||i.webkitMatchesSelector}function a(s,c){for(;s&&s.nodeType!==o;){if(typeof s.matches=="function"&&s.matches(c))return s;s=s.parentNode}}n.exports=a},438:function(n,o,i){var a=i(828);function s(f,p,l,d,h){var b=u.apply(this,arguments);return f.addEventListener(l,b,h),{destroy:function(){f.removeEventListener(l,b,h)}}}function c(f,p,l,d,h){return typeof f.addEventListener=="function"?s.apply(null,arguments):typeof l=="function"?s.bind(null,document).apply(null,arguments):(typeof f=="string"&&(f=document.querySelectorAll(f)),Array.prototype.map.call(f,function(b){return s(b,p,l,d,h)}))}function u(f,p,l,d){return function(h){h.delegateTarget=a(h.target,p),h.delegateTarget&&d.call(f,h)}}n.exports=c},879:function(n,o){o.node=function(i){return i!==void 0&&i instanceof HTMLElement&&i.nodeType===1},o.nodeList=function(i){var a=Object.prototype.toString.call(i);return i!==void 0&&(a==="[object NodeList]"||a==="[object HTMLCollection]")&&"length"in i&&(i.length===0||o.node(i[0]))},o.string=function(i){return typeof i=="string"||i instanceof String},o.fn=function(i){var a=Object.prototype.toString.call(i);return a==="[object Function]"}},370:function(n,o,i){var a=i(879),s=i(438);function c(l,d,h){if(!l&&!d&&!h)throw new Error("Missing required 
arguments");if(!a.string(d))throw new TypeError("Second argument must be a String");if(!a.fn(h))throw new TypeError("Third argument must be a Function");if(a.node(l))return u(l,d,h);if(a.nodeList(l))return f(l,d,h);if(a.string(l))return p(l,d,h);throw new TypeError("First argument must be a String, HTMLElement, HTMLCollection, or NodeList")}function u(l,d,h){return l.addEventListener(d,h),{destroy:function(){l.removeEventListener(d,h)}}}function f(l,d,h){return Array.prototype.forEach.call(l,function(b){b.addEventListener(d,h)}),{destroy:function(){Array.prototype.forEach.call(l,function(b){b.removeEventListener(d,h)})}}}function p(l,d,h){return s(document.body,l,d,h)}n.exports=c},817:function(n){function o(i){var a;if(i.nodeName==="SELECT")i.focus(),a=i.value;else if(i.nodeName==="INPUT"||i.nodeName==="TEXTAREA"){var s=i.hasAttribute("readonly");s||i.setAttribute("readonly",""),i.select(),i.setSelectionRange(0,i.value.length),s||i.removeAttribute("readonly"),a=i.value}else{i.hasAttribute("contenteditable")&&i.focus();var c=window.getSelection(),u=document.createRange();u.selectNodeContents(i),c.removeAllRanges(),c.addRange(u),a=c.toString()}return a}n.exports=o},279:function(n){function o(){}o.prototype={on:function(i,a,s){var c=this.e||(this.e={});return(c[i]||(c[i]=[])).push({fn:a,ctx:s}),this},once:function(i,a,s){var c=this;function u(){c.off(i,u),a.apply(s,arguments)}return u._=a,this.on(i,u,s)},emit:function(i){var a=[].slice.call(arguments,1),s=((this.e||(this.e={}))[i]||[]).slice(),c=0,u=s.length;for(c;c{"use strict";/*! + * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var xs=/["'&<>]/;yi.exports=Ss;function Ss(e){var t=""+e,r=xs.exec(t);if(!r)return t;var n,o="",i=0,a=0;for(i=r.index;i0},enumerable:!1,configurable:!0}),t.prototype._trySubscribe=function(r){return this._throwIfClosed(),e.prototype._trySubscribe.call(this,r)},t.prototype._subscribe=function(r){return this._throwIfClosed(),this._checkFinalizedStatuses(r),this._innerSubscribe(r)},t.prototype._innerSubscribe=function(r){var n=this,o=this,i=o.hasError,a=o.isStopped,s=o.observers;return i||a?_r:(this.currentObservers=null,s.push(r),new Ae(function(){n.currentObservers=null,ke(s,r)}))},t.prototype._checkFinalizedStatuses=function(r){var n=this,o=n.hasError,i=n.thrownError,a=n.isStopped;o?r.error(i):a&&r.complete()},t.prototype.asObservable=function(){var r=new k;return r.source=this,r},t.create=function(r,n){return new zn(r,n)},t}(k);var zn=function(e){te(t,e);function t(r,n){var o=e.call(this)||this;return o.destination=r,o.source=n,o}return t.prototype.next=function(r){var n,o;(o=(n=this.destination)===null||n===void 0?void 0:n.next)===null||o===void 0||o.call(n,r)},t.prototype.error=function(r){var n,o;(o=(n=this.destination)===null||n===void 0?void 0:n.error)===null||o===void 0||o.call(n,r)},t.prototype.complete=function(){var r,n;(n=(r=this.destination)===null||r===void 0?void 0:r.complete)===null||n===void 0||n.call(r)},t.prototype._subscribe=function(r){var n,o;return(o=(n=this.source)===null||n===void 0?void 0:n.subscribe(r))!==null&&o!==void 0?o:_r},t}(O);var yt={now:function(){return(yt.delegate||Date).now()},delegate:void 0};var xt=function(e){te(t,e);function t(r,n,o){r===void 0&&(r=1/0),n===void 0&&(n=1/0),o===void 0&&(o=yt);var i=e.call(this)||this;return 
i._bufferSize=r,i._windowTime=n,i._timestampProvider=o,i._buffer=[],i._infiniteTimeWindow=!0,i._infiniteTimeWindow=n===1/0,i._bufferSize=Math.max(1,r),i._windowTime=Math.max(1,n),i}return t.prototype.next=function(r){var n=this,o=n.isStopped,i=n._buffer,a=n._infiniteTimeWindow,s=n._timestampProvider,c=n._windowTime;o||(i.push(r),!a&&i.push(s.now()+c)),this._trimBuffer(),e.prototype.next.call(this,r)},t.prototype._subscribe=function(r){this._throwIfClosed(),this._trimBuffer();for(var n=this._innerSubscribe(r),o=this,i=o._infiniteTimeWindow,a=o._buffer,s=a.slice(),c=0;c0?e.prototype.requestAsyncId.call(this,r,n,o):(r.actions.push(this),r._scheduled||(r._scheduled=st.requestAnimationFrame(function(){return r.flush(void 0)})))},t.prototype.recycleAsyncId=function(r,n,o){if(o===void 0&&(o=0),o!=null&&o>0||o==null&&this.delay>0)return e.prototype.recycleAsyncId.call(this,r,n,o);r.actions.some(function(i){return i.id===n})||(st.cancelAnimationFrame(n),r._scheduled=void 0)},t}(Vt);var Kn=function(e){te(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t.prototype.flush=function(r){this._active=!0;var n=this._scheduled;this._scheduled=void 0;var o=this.actions,i;r=r||o.shift();do if(i=r.execute(r.state,r.delay))break;while((r=o[0])&&r.id===n&&o.shift());if(this._active=!1,i){for(;(r=o[0])&&r.id===n&&o.shift();)r.unsubscribe();throw i}},t}(Nt);var Te=new Kn(Qn);var z=new k(function(e){return e.complete()});function zt(e){return e&&E(e.schedule)}function kr(e){return e[e.length-1]}function Fe(e){return E(kr(e))?e.pop():void 0}function ye(e){return zt(kr(e))?e.pop():void 0}function qt(e,t){return typeof kr(e)=="number"?e.pop():t}var ct=function(e){return e&&typeof e.length=="number"&&typeof e!="function"};function Qt(e){return E(e==null?void 0:e.then)}function Kt(e){return E(e[at])}function Yt(e){return Symbol.asyncIterator&&E(e==null?void 0:e[Symbol.asyncIterator])}function Bt(e){return new TypeError("You provided "+(e!==null&&typeof e=="object"?"an invalid object":"'"+e+"'")+" where a stream was expected. 
You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.")}function da(){return typeof Symbol!="function"||!Symbol.iterator?"@@iterator":Symbol.iterator}var Gt=da();function Jt(e){return E(e==null?void 0:e[Gt])}function Xt(e){return Pn(this,arguments,function(){var r,n,o,i;return It(this,function(a){switch(a.label){case 0:r=e.getReader(),a.label=1;case 1:a.trys.push([1,,9,10]),a.label=2;case 2:return[4,$t(r.read())];case 3:return n=a.sent(),o=n.value,i=n.done,i?[4,$t(void 0)]:[3,5];case 4:return[2,a.sent()];case 5:return[4,$t(o)];case 6:return[4,a.sent()];case 7:return a.sent(),[3,2];case 8:return[3,10];case 9:return r.releaseLock(),[7];case 10:return[2]}})})}function Zt(e){return E(e==null?void 0:e.getReader)}function V(e){if(e instanceof k)return e;if(e!=null){if(Kt(e))return ha(e);if(ct(e))return ba(e);if(Qt(e))return va(e);if(Yt(e))return Yn(e);if(Jt(e))return ga(e);if(Zt(e))return ya(e)}throw Bt(e)}function ha(e){return new k(function(t){var r=e[at]();if(E(r.subscribe))return r.subscribe(t);throw new TypeError("Provided object does not correctly implement Symbol.observable")})}function ba(e){return new k(function(t){for(var r=0;r=2,!0))}function ae(e){e===void 0&&(e={});var t=e.connector,r=t===void 0?function(){return new O}:t,n=e.resetOnError,o=n===void 0?!0:n,i=e.resetOnComplete,a=i===void 0?!0:i,s=e.resetOnRefCountZero,c=s===void 0?!0:s;return function(u){var f=null,p=null,l=null,d=0,h=!1,b=!1,F=function(){p==null||p.unsubscribe(),p=null},K=function(){F(),f=l=null,h=b=!1},U=function(){var _=f;K(),_==null||_.unsubscribe()};return v(function(_,je){d++,!b&&!h&&F();var he=l=l!=null?l:r();je.add(function(){d--,d===0&&!b&&!h&&(p=Ur(U,c))}),he.subscribe(je),f||(f=new it({next:function(tt){return he.next(tt)},error:function(tt){b=!0,F(),p=Ur(K,o,tt),he.error(tt)},complete:function(){h=!0,F(),p=Ur(K,a),he.complete()}}),ne(_).subscribe(f))})(u)}}function Ur(e,t){for(var r=[],n=2;ne.next(document)),e}function G(e,t=document){return Array.from(t.querySelectorAll(e))}function Q(e,t=document){let r=ue(e,t);if(typeof r=="undefined")throw new ReferenceError(`Missing element: expected "${e}" to be present`);return r}function ue(e,t=document){return t.querySelector(e)||void 0}function Ne(){return document.activeElement instanceof HTMLElement&&document.activeElement||void 0}function nr(e){return C(y(document.body,"focusin"),y(document.body,"focusout")).pipe(Xe(1),m(()=>{let t=Ne();return typeof t!="undefined"?e.contains(t):!1}),q(e===Ne()),Y())}function ze(e){return{x:e.offsetLeft,y:e.offsetTop}}function ho(e){return C(y(window,"load"),y(window,"resize")).pipe(He(0,Te),m(()=>ze(e)),q(ze(e)))}function bo(e){return{x:e.scrollLeft,y:e.scrollTop}}function or(e){return C(y(e,"scroll"),y(window,"resize")).pipe(He(0,Te),m(()=>bo(e)),q(bo(e)))}var go=function(){if(typeof Map!="undefined")return Map;function e(t,r){var n=-1;return t.some(function(o,i){return o[0]===r?(n=i,!0):!1}),n}return function(){function t(){this.__entries__=[]}return Object.defineProperty(t.prototype,"size",{get:function(){return this.__entries__.length},enumerable:!0,configurable:!0}),t.prototype.get=function(r){var n=e(this.__entries__,r),o=this.__entries__[n];return o&&o[1]},t.prototype.set=function(r,n){var o=e(this.__entries__,r);~o?this.__entries__[o][1]=n:this.__entries__.push([r,n])},t.prototype.delete=function(r){var 
n=this.__entries__,o=e(n,r);~o&&n.splice(o,1)},t.prototype.has=function(r){return!!~e(this.__entries__,r)},t.prototype.clear=function(){this.__entries__.splice(0)},t.prototype.forEach=function(r,n){n===void 0&&(n=null);for(var o=0,i=this.__entries__;o0},e.prototype.connect_=function(){!zr||this.connected_||(document.addEventListener("transitionend",this.onTransitionEnd_),window.addEventListener("resize",this.refresh),Da?(this.mutationsObserver_=new MutationObserver(this.refresh),this.mutationsObserver_.observe(document,{attributes:!0,childList:!0,characterData:!0,subtree:!0})):(document.addEventListener("DOMSubtreeModified",this.refresh),this.mutationEventsAdded_=!0),this.connected_=!0)},e.prototype.disconnect_=function(){!zr||!this.connected_||(document.removeEventListener("transitionend",this.onTransitionEnd_),window.removeEventListener("resize",this.refresh),this.mutationsObserver_&&this.mutationsObserver_.disconnect(),this.mutationEventsAdded_&&document.removeEventListener("DOMSubtreeModified",this.refresh),this.mutationsObserver_=null,this.mutationEventsAdded_=!1,this.connected_=!1)},e.prototype.onTransitionEnd_=function(t){var r=t.propertyName,n=r===void 0?"":r,o=Wa.some(function(i){return!!~n.indexOf(i)});o&&this.refresh()},e.getInstance=function(){return this.instance_||(this.instance_=new e),this.instance_},e.instance_=null,e}(),yo=function(e,t){for(var r=0,n=Object.keys(t);r0},e}(),So=typeof WeakMap!="undefined"?new WeakMap:new go,wo=function(){function e(t){if(!(this instanceof e))throw new TypeError("Cannot call a class as a function.");if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");var r=Va.getInstance(),n=new Xa(t,r,this);So.set(this,n)}return e}();["observe","unobserve","disconnect"].forEach(function(e){wo.prototype[e]=function(){var t;return(t=So.get(this))[e].apply(t,arguments)}});var Za=function(){return typeof ir.ResizeObserver!="undefined"?ir.ResizeObserver:wo}(),Eo=Za;var Oo=new O,es=P(()=>H(new Eo(e=>{for(let t of e)Oo.next(t)}))).pipe(x(e=>C(xe,H(e)).pipe(L(()=>e.disconnect()))),X(1));function Ce(e){return{width:e.offsetWidth,height:e.offsetHeight}}function ge(e){return es.pipe(S(t=>t.observe(e)),x(t=>Oo.pipe(T(({target:r})=>r===e),L(()=>t.unobserve(e)),m(()=>Ce(e)))),q(Ce(e)))}function cr(e){return{width:e.scrollWidth,height:e.scrollHeight}}var _o=new O,ts=P(()=>H(new IntersectionObserver(e=>{for(let t of e)_o.next(t)},{threshold:0}))).pipe(x(e=>C(xe,H(e)).pipe(L(()=>e.disconnect()))),X(1));function To(e){return ts.pipe(S(t=>t.observe(e)),x(t=>_o.pipe(T(({target:r})=>r===e),L(()=>t.unobserve(e)),m(({isIntersecting:r})=>r))))}function Mo(e,t=16){return or(e).pipe(m(({y:r})=>{let n=Ce(e),o=cr(e);return r>=o.height-n.height-t}),Y())}var ur={drawer:Q("[data-md-toggle=drawer]"),search:Q("[data-md-toggle=search]")};function Lo(e){return ur[e].checked}function qe(e,t){ur[e].checked!==t&&ur[e].click()}function mt(e){let t=ur[e];return y(t,"change").pipe(m(()=>t.checked),q(t.checked))}function rs(e,t){switch(e.constructor){case HTMLInputElement:return e.type==="radio"?/^Arrow/.test(t):!0;case HTMLSelectElement:case HTMLTextAreaElement:return!0;default:return e.isContentEditable}}function Ao(){return y(window,"keydown").pipe(T(e=>!(e.metaKey||e.ctrlKey)),m(e=>({mode:Lo("search")?"search":"global",type:e.key,claim(){e.preventDefault(),e.stopPropagation()}})),T(({mode:e,type:t})=>{if(e==="global"){let r=Ne();if(typeof r!="undefined")return!rs(r,t)}return!0}),ae())}function Se(){return new URL(location.href)}function 
fr(e){location.href=e.href}function Co(){return new O}function Ro(e,t){if(typeof t=="string"||typeof t=="number")e.innerHTML+=t.toString();else if(t instanceof Node)e.appendChild(t);else if(Array.isArray(t))for(let r of t)Ro(e,r)}function A(e,t,...r){let n=document.createElement(e);if(t)for(let o of Object.keys(t))typeof t[o]!="boolean"?n.setAttribute(o,t[o]):t[o]&&n.setAttribute(o,"");for(let o of r)Ro(n,o);return n}function ko(e,t){let r=t;if(e.length>r){for(;e[r]!==" "&&--r>0;);return`${e.substring(0,r)}...`}return e}function pr(e){if(e>999){let t=+((e-950)%1e3>99);return`${((e+1e-6)/1e3).toFixed(t)}k`}else return e.toString()}function Ho(){return location.hash.substring(1)}function Po(e){let t=A("a",{href:e});t.addEventListener("click",r=>r.stopPropagation()),t.click()}function ns(){return y(window,"hashchange").pipe(m(Ho),q(Ho()),T(e=>e.length>0),X(1))}function Io(){return ns().pipe(m(e=>ue(`[id="${e}"]`)),T(e=>typeof e!="undefined"))}function qr(e){let t=matchMedia(e);return tr(r=>t.addListener(()=>r(t.matches))).pipe(q(t.matches))}function $o(){let e=matchMedia("print");return C(y(window,"beforeprint").pipe(Z(!0)),y(window,"afterprint").pipe(Z(!1))).pipe(q(e.matches))}function Qr(e,t){return e.pipe(x(r=>r?t():z))}function lr(e,t={credentials:"same-origin"}){return ne(fetch(`${e}`,t)).pipe(T(r=>r.status===200),De(()=>z))}function Re(e,t){return lr(e,t).pipe(x(r=>r.json()),X(1))}function jo(e,t){let r=new DOMParser;return lr(e,t).pipe(x(n=>n.text()),m(n=>r.parseFromString(n,"text/xml")),X(1))}function Fo(e){let t=A("script",{src:e});return P(()=>(document.head.appendChild(t),C(y(t,"load"),y(t,"error").pipe(x(()=>Hr(()=>new ReferenceError(`Invalid script: ${e}`))))).pipe(Z(void 0),L(()=>document.head.removeChild(t)),re(1))))}function Uo(){return{x:Math.max(0,scrollX),y:Math.max(0,scrollY)}}function Wo(){return C(y(window,"scroll",{passive:!0}),y(window,"resize",{passive:!0})).pipe(m(Uo),q(Uo()))}function Do(){return{width:innerWidth,height:innerHeight}}function Vo(){return y(window,"resize",{passive:!0}).pipe(m(Do),q(Do()))}function No(){return B([Wo(),Vo()]).pipe(m(([e,t])=>({offset:e,size:t})),X(1))}function mr(e,{viewport$:t,header$:r}){let n=t.pipe(J("size")),o=B([n,r]).pipe(m(()=>ze(e)));return B([r,t,o]).pipe(m(([{height:i},{offset:a,size:s},{x:c,y:u}])=>({offset:{x:a.x-c,y:a.y-u+i},size:s})))}function zo(e,{tx$:t}){let r=y(e,"message").pipe(m(({data:n})=>n));return t.pipe(Tt(()=>r,{leading:!0,trailing:!0}),S(n=>e.postMessage(n)),_t(r),ae())}var os=Q("#__config"),dt=JSON.parse(os.textContent);dt.base=`${new URL(dt.base,Se())}`;function de(){return dt}function ce(e){return dt.features.includes(e)}function ee(e,t){return typeof t!="undefined"?dt.translations[e].replace("#",t.toString()):dt.translations[e]}function we(e,t=document){return Q(`[data-md-component=${e}]`,t)}function oe(e,t=document){return G(`[data-md-component=${e}]`,t)}var ti=Ke(Yr());function qo(e){return A("aside",{class:"md-annotation",tabIndex:0},A("div",{class:"md-annotation__inner md-tooltip"},A("div",{class:"md-tooltip__inner md-typeset"})),A("span",{class:"md-annotation__index"},A("span",{"data-md-annotation-id":e})))}function Qo(e){return A("button",{class:"md-clipboard md-icon",title:ee("clipboard.copy"),"data-clipboard-target":`#${e} > code`})}function Br(e,t){let r=t&2,n=t&1,o=Object.keys(e.terms).filter(a=>!e.terms[a]).reduce((a,s)=>[...a,A("del",null,s)," "],[]).slice(0,-1),i=new URL(e.location);return 
ce("search.highlight")&&i.searchParams.set("h",Object.entries(e.terms).filter(([,a])=>a).reduce((a,[s])=>`${a} ${s}`.trim(),"")),A("a",{href:`${i}`,class:"md-search-result__link",tabIndex:-1},A("article",{class:["md-search-result__article",...r?["md-search-result__article--document"]:[]].join(" "),"data-md-score":e.score.toFixed(2)},r>0&&A("div",{class:"md-search-result__icon md-icon"}),A("h1",{class:"md-search-result__title"},e.title),n>0&&e.text.length>0&&A("p",{class:"md-search-result__teaser"},ko(e.text,320)),e.tags&&e.tags.map(a=>A("span",{class:"md-tag"},a)),n>0&&o.length>0&&A("p",{class:"md-search-result__terms"},ee("search.result.term.missing"),": ",o)))}function Ko(e){let t=e[0].score,r=[...e],n=r.findIndex(u=>!u.location.includes("#")),[o]=r.splice(n,1),i=r.findIndex(u=>u.scoreBr(u,1)),...s.length?[A("details",{class:"md-search-result__more"},A("summary",{tabIndex:-1},s.length>0&&s.length===1?ee("search.result.more.one"):ee("search.result.more.other",s.length)),s.map(u=>Br(u,1)))]:[]];return A("li",{class:"md-search-result__item"},c)}function Yo(e){return A("ul",{class:"md-source__facts"},Object.entries(e).map(([t,r])=>A("li",{class:`md-source__fact md-source__fact--${t}`},typeof r=="number"?pr(r):r)))}function Bo(e){return A("div",{class:"md-typeset__scrollwrap"},A("div",{class:"md-typeset__table"},e))}function is(e){let t=de(),r=new URL(`../${e.version}/`,t.base);return A("li",{class:"md-version__item"},A("a",{href:r.toString(),class:"md-version__link"},e.title))}function Go(e,t){return A("div",{class:"md-version"},A("button",{class:"md-version__current","aria-label":ee("select.version.title")},t.title),A("ul",{class:"md-version__list"},e.map(is)))}function as(e,t){let r=P(()=>B([ho(e),or(t)])).pipe(m(([{x:n,y:o},i])=>{let{width:a}=Ce(e);return{x:n-i.x+a/2,y:o-i.y}}));return nr(e).pipe(x(n=>r.pipe(m(o=>({active:n,offset:o})),re(+!n||1/0))))}function Jo(e,t){return P(()=>{let r=new O;r.subscribe({next({offset:i}){e.style.setProperty("--md-tooltip-x",`${i.x}px`),e.style.setProperty("--md-tooltip-y",`${i.y}px`)},complete(){e.style.removeProperty("--md-tooltip-x"),e.style.removeProperty("--md-tooltip-y")}}),r.pipe(Vr(500,Te),m(()=>t.getBoundingClientRect()),m(({x:i})=>i)).subscribe({next(i){i?e.style.setProperty("--md-tooltip-0",`${-i}px`):e.style.removeProperty("--md-tooltip-0")},complete(){e.style.removeProperty("--md-tooltip-0")}});let n=Q(":scope > :last-child",e),o=y(n,"mousedown",{once:!0});return r.pipe(x(({active:i})=>i?o:z),S(i=>i.preventDefault())).subscribe(()=>e.blur()),as(e,t).pipe(S(i=>r.next(i)),L(()=>r.complete()),m(i=>$({ref:e},i)))})}function ss(e){let t=[];for(let r of G(".c, .c1, .cm",e)){let n,o=r.firstChild;if(o instanceof Text)for(;n=/\((\d+)\)/.exec(o.textContent);){let i=o.splitText(n.index);o=i.splitText(n[0].length),t.push(i)}}return t}function Xo(e,t){t.append(...Array.from(e.childNodes))}function Zo(e,t,{print$:r}){let n=new Map;for(let o of ss(t)){let[,i]=o.textContent.match(/\((\d+)\)/);ue(`li:nth-child(${i})`,e)&&(n.set(+i,qo(+i)),o.replaceWith(n.get(+i)))}return n.size===0?z:P(()=>{let o=new O;return r.pipe(se(o.pipe(pe(1)))).subscribe(i=>{e.hidden=!i;for(let[a,s]of n){let c=Q(".md-typeset",s),u=Q(`li:nth-child(${a})`,e);i?Xo(c,u):Xo(u,c)}}),C(...[...n].map(([,i])=>Jo(i,t))).pipe(L(()=>o.complete()),ae())})}var cs=0;function ri(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return ri(t)}}function ei(e){return 
ge(e).pipe(m(({width:t})=>({scrollable:cr(e).width>t})),J("scrollable"))}function ni(e,t){let{matches:r}=matchMedia("(hover)"),n=P(()=>{let o=new O;if(o.subscribe(({scrollable:a})=>{a&&r?e.setAttribute("tabindex","0"):e.removeAttribute("tabindex")}),ti.default.isSupported()){let a=e.closest("pre");a.id=`__code_${++cs}`,a.insertBefore(Qo(a.id),e)}let i=e.closest([":not(td):not(.code) > .highlight",".highlighttable"].join(", "));if(i instanceof HTMLElement){let a=ri(i);if(typeof a!="undefined"&&(i.classList.contains("annotate")||ce("content.code.annotate"))){let s=Zo(a,e,t);return ei(e).pipe(S(c=>o.next(c)),L(()=>o.complete()),m(c=>$({ref:e},c)),Ze(ge(i).pipe(se(o.pipe(pe(1))),m(({width:c,height:u})=>c&&u),Y(),x(c=>c?s:z))))}}return ei(e).pipe(S(a=>o.next(a)),L(()=>o.complete()),m(a=>$({ref:e},a)))});return To(e).pipe(T(o=>o),re(1),x(()=>n))}var oi=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:transparent}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel rect,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel rect{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color)}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}#flowchart-circleEnd,#flowchart-circleStart,#flowchart-crossEnd,#flowchart-crossStart,#flowchart-pointEnd,#flowchart-pointStart{stroke:none}g.classGroup line,g.classGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}#compositionEnd,#compositionStart,#dependencyEnd,#dependencyStart,#extensionEnd,#extensionStart{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}#aggregationEnd,#aggregationStart{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup .composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node circle.state-end,.node circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node 
circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.entityBox{fill:var(--md-mermaid-label-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityLabel{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.relationshipLabelBox{fill:var(--md-mermaid-label-bg-color);fill-opacity:1;background-color:var(--md-mermaid-label-bg-color);opacity:1}.relationshipLabel{fill:var(--md-mermaid-label-fg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}#ONE_OR_MORE_END *,#ONE_OR_MORE_START *,#ONLY_ONE_END *,#ONLY_ONE_START *,#ZERO_OR_MORE_END *,#ZERO_OR_MORE_START *,#ZERO_OR_ONE_END *,#ZERO_OR_ONE_START *{stroke:var(--md-mermaid-edge-color)!important}#ZERO_OR_MORE_END circle,#ZERO_OR_MORE_START circle,.actor{fill:var(--md-mermaid-label-bg-color)}.actor{stroke:var(--md-mermaid-node-fg-color)}text.actor>tspan{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-default-fg-color--lighter)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-edge-color)}.loopText>tspan,.messageText{font-family:var(--md-mermaid-font-family)!important}#arrowhead path,.loopText>tspan,.messageText{fill:var(--md-mermaid-edge-color);stroke:none}.loopLine{stroke:var(--md-mermaid-node-fg-color)}.labelBox,.loopLine{fill:var(--md-mermaid-node-bg-color)}.labelBox{stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-node-fg-color);font-family:var(--md-mermaid-font-family)}";var Gr,fs=0;function ps(){return typeof mermaid=="undefined"||mermaid instanceof Element?Fo("https://unpkg.com/mermaid@8.13.3/dist/mermaid.min.js"):H(void 0)}function ii(e){return e.classList.remove("mermaid"),Gr||(Gr=ps().pipe(S(()=>mermaid.initialize({startOnLoad:!1,themeCSS:oi})),Z(void 0),X(1))),Gr.subscribe(()=>{e.classList.add("mermaid");let t=`__mermaid_${fs++}`,r=A("div",{class:"mermaid"});mermaid.mermaidAPI.render(t,e.textContent,n=>{let o=r.attachShadow({mode:"closed"});o.innerHTML=n,e.replaceWith(r)})}),Gr.pipe(Z({ref:e}))}function ls(e,{target$:t,print$:r}){let n=!0;return C(t.pipe(m(o=>o.closest("details:not([open])")),T(o=>e===o),Z({action:"open",reveal:!0})),r.pipe(T(o=>o||!n),S(()=>n=e.open),m(o=>({action:o?"open":"close"}))))}function ai(e,t){return P(()=>{let r=new O;return r.subscribe(({action:n,reveal:o})=>{n==="open"?e.setAttribute("open",""):e.removeAttribute("open"),o&&e.scrollIntoView()}),ls(e,t).pipe(S(n=>r.next(n)),L(()=>r.complete()),m(n=>$({ref:e},n)))})}var si=A("table");function ci(e){return e.replaceWith(si),si.replaceWith(Bo(e)),H({ref:e})}function ms(e){let t=G(":scope > input",e),r=t.find(n=>n.checked)||t[0];return C(...t.map(n=>y(n,"change").pipe(Z({active:Q(`label[for=${n.id}]`)})))).pipe(q({active:Q(`label[for=${r.id}]`)}))}function ui(e){let t=Q(".tabbed-labels",e);return P(()=>{let r=new O;return B([r,ge(e)]).pipe(He(1,Te),se(r.pipe(pe(1)))).subscribe({next([{active:n}]){let 
o=ze(n),{width:i}=Ce(n);e.style.setProperty("--md-indicator-x",`${o.x}px`),e.style.setProperty("--md-indicator-width",`${i}px`),t.scrollTo({behavior:"smooth",left:o.x})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),ms(e).pipe(S(n=>r.next(n)),L(()=>r.complete()),m(n=>$({ref:e},n)))}).pipe(Ge(le))}function fi(e,{target$:t,print$:r}){return C(...G("pre:not(.mermaid) > code",e).map(n=>ni(n,{print$:r})),...G("pre.mermaid",e).map(n=>ii(n)),...G("table:not([class])",e).map(n=>ci(n)),...G("details",e).map(n=>ai(n,{target$:t,print$:r})),...G("[data-tabs]",e).map(n=>ui(n)))}function ds(e,{alert$:t}){return t.pipe(x(r=>C(H(!0),H(!1).pipe(Ie(2e3))).pipe(m(n=>({message:r,active:n})))))}function pi(e,t){let r=Q(".md-typeset",e);return P(()=>{let n=new O;return n.subscribe(({message:o,active:i})=>{r.textContent=o,i?e.setAttribute("data-md-state","open"):e.removeAttribute("data-md-state")}),ds(e,t).pipe(S(o=>n.next(o)),L(()=>n.complete()),m(o=>$({ref:e},o)))})}function hs({viewport$:e}){if(!ce("header.autohide"))return H(!1);let t=e.pipe(m(({offset:{y:o}})=>o),Me(2,1),m(([o,i])=>[oMath.abs(i-o.y)>100),m(([,[o]])=>o),Y()),n=mt("search");return B([e,n]).pipe(m(([{offset:o},i])=>o.y>400&&!i),Y(),x(o=>o?r:H(!1)),q(!1))}function li(e,t){return P(()=>{let r=getComputedStyle(e);return H(r.position==="sticky"||r.position==="-webkit-sticky")}).pipe(Ve(ge(e),hs(t)),m(([r,{height:n},o])=>({height:r?n:0,sticky:r,hidden:o})),Y((r,n)=>r.sticky===n.sticky&&r.height===n.height&&r.hidden===n.hidden),X(1))}function mi(e,{header$:t,main$:r}){return P(()=>{let n=new O;return n.pipe(J("active"),Ve(t)).subscribe(([{active:o},{hidden:i}])=>{o?e.setAttribute("data-md-state",i?"hidden":"shadow"):e.removeAttribute("data-md-state")}),r.subscribe(n),t.pipe(se(n.pipe(pe(1))),m(o=>$({ref:e},o)))})}function bs(e,{viewport$:t,header$:r}){return mr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:n}})=>{let{height:o}=Ce(e);return{active:n>=o}}),J("active"))}function di(e,t){return P(()=>{let r=new O;r.subscribe(({active:o})=>{o?e.setAttribute("data-md-state","active"):e.removeAttribute("data-md-state")});let n=ue("article h1");return typeof n=="undefined"?z:bs(n,t).pipe(S(o=>r.next(o)),L(()=>r.complete()),m(o=>$({ref:e},o)))})}function hi(e,{viewport$:t,header$:r}){let n=r.pipe(m(({height:i})=>i),Y()),o=n.pipe(x(()=>ge(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),J("bottom"))));return B([n,o,t]).pipe(m(([i,{top:a,bottom:s},{offset:{y:c},size:{height:u}}])=>(u=Math.max(0,u-Math.max(0,a-c,i)-Math.max(0,u+c-s)),{offset:a-i,height:u,active:a-i<=c})),Y((i,a)=>i.offset===a.offset&&i.height===a.height&&i.active===a.active))}function vs(e){let t=__md_get("__palette")||{index:e.findIndex(r=>matchMedia(r.getAttribute("data-md-color-media")).matches)};return H(...e).pipe(ie(r=>y(r,"change").pipe(Z(r))),q(e[Math.max(0,t.index)]),m(r=>({index:e.indexOf(r),color:{scheme:r.getAttribute("data-md-color-scheme"),primary:r.getAttribute("data-md-color-primary"),accent:r.getAttribute("data-md-color-accent")}})),X(1))}function bi(e){return P(()=>{let t=new O;t.subscribe(n=>{for(let[o,i]of Object.entries(n.color))document.body.setAttribute(`data-md-color-${o}`,i);for(let o=0;ot.next(n)),L(()=>t.complete()),m(n=>$({ref:e},n)))})}var Jr=Ke(Yr());function gs(e){e.setAttribute("data-md-copying","");let t=e.innerText;return e.removeAttribute("data-md-copying"),t}function vi({alert$:e}){Jr.default.isSupported()&&new k(t=>{new Jr.default("[data-clipboard-target], 
[data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||gs(Q(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(S(t=>{t.trigger.focus()}),Z(ee("clipboard.copied"))).subscribe(e)}function ys(e){if(e.length<2)return[""];let[t,r]=[...e].sort((o,i)=>o.length-i.length).map(o=>o.replace(/[^/]+$/,"")),n=0;if(t===r)n=t.length;else for(;t.charCodeAt(n)===r.charCodeAt(n);)n++;return e.map(o=>o.replace(t.slice(0,n),""))}function dr(e){let t=__md_get("__sitemap",sessionStorage,e);if(t)return H(t);{let r=de();return jo(new URL("sitemap.xml",e||r.base)).pipe(m(n=>ys(G("loc",n).map(o=>o.textContent))),Pe([]),S(n=>__md_set("__sitemap",n,sessionStorage,e)))}}function gi({document$:e,location$:t,viewport$:r}){let n=de();if(location.protocol==="file:")return;"scrollRestoration"in history&&(history.scrollRestoration="manual",y(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}));let o=ue("link[rel=icon]");typeof o!="undefined"&&(o.href=o.href);let i=dr().pipe(m(u=>u.map(f=>`${new URL(f,n.base)}`)),x(u=>y(document.body,"click").pipe(T(f=>!f.metaKey&&!f.ctrlKey),x(f=>{if(f.target instanceof Element){let p=f.target.closest("a");if(p&&!p.target){let l=new URL(p.href);if(l.search="",l.hash="",l.pathname!==location.pathname&&u.includes(l.toString()))return f.preventDefault(),H({url:new URL(p.href)})}}return xe}))),ae()),a=y(window,"popstate").pipe(T(u=>u.state!==null),m(u=>({url:new URL(location.href),offset:u.state})),ae());C(i,a).pipe(Y((u,f)=>u.url.href===f.url.href),m(({url:u})=>u)).subscribe(t);let s=t.pipe(J("pathname"),x(u=>lr(u.href).pipe(De(()=>(fr(u),xe)))),ae());i.pipe(pt(s)).subscribe(({url:u})=>{history.pushState({},"",`${u}`)});let c=new DOMParser;s.pipe(x(u=>u.text()),m(u=>c.parseFromString(u,"text/html"))).subscribe(e),e.pipe($e(1)).subscribe(u=>{for(let f of["title","link[rel=canonical]","meta[name=author]","meta[name=description]","[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...ce("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let p=ue(f),l=ue(f,u);typeof p!="undefined"&&typeof l!="undefined"&&p.replaceWith(l)}}),e.pipe($e(1),m(()=>we("container")),x(u=>G("script",u)),$r(u=>{let f=A("script");if(u.src){for(let p of u.getAttributeNames())f.setAttribute(p,u.getAttribute(p));return u.replaceWith(f),new k(p=>{f.onload=()=>p.complete()})}else return f.textContent=u.textContent,u.replaceWith(f),z})).subscribe(),C(i,a).pipe(pt(e)).subscribe(({url:u,offset:f})=>{u.hash&&!f?Po(u.hash):window.scrollTo(0,(f==null?void 0:f.y)||0)}),r.pipe(Ot(i),Xe(250),J("offset")).subscribe(({offset:u})=>{history.replaceState(u,"")}),C(i,a).pipe(Me(2,1),T(([u,f])=>u.url.pathname===f.url.pathname),m(([,u])=>u)).subscribe(({offset:u})=>{window.scrollTo(0,(u==null?void 0:u.y)||0)})}var ws=Ke(Xr());var xi=Ke(Xr());function Zr(e,t){let r=new RegExp(e.separator,"img"),n=(o,i,a)=>`${i}${a}`;return o=>{o=o.replace(/[\s*+\-:~^]+/g," ").trim();let i=new RegExp(`(^|${e.separator})(${o.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return a=>(t?(0,xi.default)(a):a).replace(i,n).replace(/<\/mark>(\s+)]*>/img,"$1")}}function Si(e){return e.split(/"([^"]+)"/g).map((t,r)=>r&1?t.replace(/^\b|^(?![^\x00-\x7F]|$)|\s+/g," +"):t).join("").replace(/"|(?:^|\s+)[*+\-:^~]+(?=\s+|$)/g,"").trim()}function ht(e){return e.type===1}function wi(e){return e.type===2}function bt(e){return e.type===3}function 
Os({config:e,docs:t}){e.lang.length===1&&e.lang[0]==="en"&&(e.lang=[ee("search.config.lang")]),e.separator==="[\\s\\-]+"&&(e.separator=ee("search.config.separator"));let n={pipeline:ee("search.config.pipeline").split(/\s*,\s*/).filter(Boolean),suggestions:ce("search.suggest")};return{config:e,docs:t,options:n}}function Ei(e,t){let r=de(),n=new Worker(e),o=new O,i=zo(n,{tx$:o}).pipe(m(a=>{if(bt(a))for(let s of a.data.items)for(let c of s)c.location=`${new URL(c.location,r.base)}`;return a}),ae());return ne(t).pipe(m(a=>({type:0,data:Os(a)}))).subscribe(o.next.bind(o)),{tx$:o,rx$:i}}function Oi({document$:e}){let t=de(),r=Re(new URL("../versions.json",t.base)),n=r.pipe(m(o=>{let[,i]=t.base.match(/([^/]+)\/?$/);return o.find(({version:a,aliases:s})=>a===i||s.includes(i))||o[0]}));B([r,n]).pipe(m(([o,i])=>new Map(o.filter(a=>a!==i).map(a=>[`${new URL(`../${a.version}/`,t.base)}`,a]))),x(o=>y(document.body,"click").pipe(T(i=>!i.metaKey&&!i.ctrlKey),x(i=>{if(i.target instanceof Element){let a=i.target.closest("a");if(a&&!a.target&&o.has(a.href))return i.preventDefault(),H(a.href)}return z}),x(i=>{let{version:a}=o.get(i);return dr(new URL(i)).pipe(m(s=>{let u=Se().href.replace(t.base,"");return s.includes(u)?new URL(`../${a}/${u}`,t.base):new URL(i)}))})))).subscribe(o=>fr(o)),B([r,n]).subscribe(([o,i])=>{Q(".md-header__topic").appendChild(Go(o,i))}),e.pipe(_t(n)).subscribe(o=>{var a;let i=__md_get("__outdated",sessionStorage);if(i===null){let s=((a=t.version)==null?void 0:a.default)||"latest";i=!o.aliases.includes(s),__md_set("__outdated",i,sessionStorage)}if(i)for(let s of oe("outdated"))s.hidden=!1})}function _s(e,{rx$:t}){let r=(__search==null?void 0:__search.transform)||Si,{searchParams:n}=Se();n.has("q")&&qe("search",!0);let o=t.pipe(T(ht),re(1),m(()=>n.get("q")||""));mt("search").pipe(T(s=>!s),re(1)).subscribe(()=>{let s=new URL(location.href);s.searchParams.delete("q"),history.replaceState({},"",`${s}`)}),o.subscribe(s=>{s&&(e.value=s)});let i=nr(e),a=C(y(e,"keyup"),y(e,"focus").pipe(Ie(1)),o).pipe(m(()=>r(e.value)),q(""),Y());return B([a,i]).pipe(m(([s,c])=>({value:s,focus:c})),X(1))}function _i(e,{tx$:t,rx$:r}){let n=new O;return n.pipe(J("value"),m(({value:o})=>({type:2,data:o}))).subscribe(t.next.bind(t)),n.pipe(J("focus")).subscribe(({focus:o})=>{o?(qe("search",o),e.placeholder=""):e.placeholder=ee("search.placeholder")}),y(e.form,"reset").pipe(se(n.pipe(pe(1)))).subscribe(()=>e.focus()),_s(e,{tx$:t,rx$:r}).pipe(S(o=>n.next(o)),L(()=>n.complete()),m(o=>$({ref:e},o)))}function Ti(e,{rx$:t},{query$:r}){let n=new O,o=Mo(e.parentElement).pipe(T(Boolean)),i=Q(":scope > :first-child",e),a=Q(":scope > :last-child",e),s=t.pipe(T(ht),re(1));return n.pipe(Le(r),Ot(s)).subscribe(([{items:u},{value:f}])=>{if(f)switch(u.length){case 0:i.textContent=ee("search.result.none");break;case 1:i.textContent=ee("search.result.one");break;default:i.textContent=ee("search.result.other",pr(u.length))}else i.textContent=ee("search.result.placeholder")}),n.pipe(S(()=>a.innerHTML=""),x(({items:u})=>C(H(...u.slice(0,10)),H(...u.slice(10)).pipe(Me(4),Nr(o),x(([f])=>f))))).subscribe(u=>a.appendChild(Ko(u))),t.pipe(T(bt),m(({data:u})=>u)).pipe(S(u=>n.next(u)),L(()=>n.complete()),m(u=>$({ref:e},u)))}function Ts(e,{query$:t}){return t.pipe(m(({value:r})=>{let n=Se();return n.hash="",n.searchParams.delete("h"),n.searchParams.set("q",r),{url:n}}))}function Mi(e,t){let r=new O;return 
r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),y(e,"click").subscribe(n=>n.preventDefault()),Ts(e,t).pipe(S(n=>r.next(n)),L(()=>r.complete()),m(n=>$({ref:e},n)))}function Li(e,{rx$:t},{keyboard$:r}){let n=new O,o=we("search-query"),i=C(y(o,"keydown"),y(o,"focus")).pipe(Be(le),m(()=>o.value),Y());return n.pipe(Ve(i),m(([{suggestions:s},c])=>{let u=c.split(/([\s-]+)/);if((s==null?void 0:s.length)&&u[u.length-1]){let f=s[s.length-1];f.startsWith(u[u.length-1])&&(u[u.length-1]=f)}else u.length=0;return u})).subscribe(s=>e.innerHTML=s.join("").replace(/\s/g," ")),r.pipe(T(({mode:s})=>s==="search")).subscribe(s=>{switch(s.type){case"ArrowRight":e.innerText.length&&o.selectionStart===o.value.length&&(o.value=e.innerText);break}}),t.pipe(T(bt),m(({data:s})=>s)).pipe(S(s=>n.next(s)),L(()=>n.complete()),m(()=>({ref:e})))}function Ai(e,{index$:t,keyboard$:r}){let n=de();try{let o=(__search==null?void 0:__search.worker)||n.search,i=Ei(o,t),a=we("search-query",e),s=we("search-result",e),{tx$:c,rx$:u}=i;c.pipe(T(wi),pt(u.pipe(T(ht))),re(1)).subscribe(c.next.bind(c)),r.pipe(T(({mode:l})=>l==="search")).subscribe(l=>{let d=Ne();switch(l.type){case"Enter":if(d===a){let h=new Map;for(let b of G(":first-child [href]",s)){let F=b.firstElementChild;h.set(b,parseFloat(F.getAttribute("data-md-score")))}if(h.size){let[[b]]=[...h].sort(([,F],[,K])=>K-F);b.click()}l.claim()}break;case"Escape":case"Tab":qe("search",!1),a.blur();break;case"ArrowUp":case"ArrowDown":if(typeof d=="undefined")a.focus();else{let h=[a,...G(":not(details) > [href], summary, details[open] [href]",s)],b=Math.max(0,(Math.max(0,h.indexOf(d))+h.length+(l.type==="ArrowUp"?-1:1))%h.length);h[b].focus()}l.claim();break;default:a!==Ne()&&a.focus()}}),r.pipe(T(({mode:l})=>l==="global")).subscribe(l=>{switch(l.type){case"f":case"s":case"/":a.focus(),a.select(),l.claim();break}});let f=_i(a,i),p=Ti(s,i,{query$:f});return C(f,p).pipe(Ze(...oe("search-share",e).map(l=>Mi(l,{query$:f})),...oe("search-suggest",e).map(l=>Li(l,i,{keyboard$:r}))))}catch(o){return e.hidden=!0,xe}}function Ci(e,{index$:t,location$:r}){return B([t,r.pipe(q(Se()),T(n=>!!n.searchParams.get("h")))]).pipe(m(([n,o])=>Zr(n.config,!0)(o.searchParams.get("h"))),m(n=>{var a;let o=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let s=i.nextNode();s;s=i.nextNode())if((a=s.parentElement)!=null&&a.offsetHeight){let c=s.textContent,u=n(c);u.length>c.length&&o.set(s,u)}for(let[s,c]of o){let{childNodes:u}=A("span",null,c);s.replaceWith(...Array.from(u))}return{ref:e,nodes:o}}))}function Ms(e,{viewport$:t,main$:r}){let n=e.parentElement,o=n.offsetTop-n.parentElement.offsetTop;return B([r,t]).pipe(m(([{offset:i,height:a},{offset:{y:s}}])=>(a=a+Math.min(o,Math.max(0,s-i))-o,{height:a,locked:s>=i+o})),Y((i,a)=>i.height===a.height&&i.locked===a.locked))}function en(e,n){var o=n,{header$:t}=o,r=sn(o,["header$"]);let i=Q(".md-sidebar__scrollwrap",e),{y:a}=ze(i);return P(()=>{let s=new O;return s.pipe(He(0,Te),Le(t)).subscribe({next([{height:c},{height:u}]){i.style.height=`${c-2*a}px`,e.style.top=`${u}px`},complete(){i.style.height="",e.style.top=""}}),Ms(e,r).pipe(S(c=>s.next(c)),L(()=>s.complete()),m(c=>$({ref:e},c)))})}function Ri(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return wt(Re(`${r}/releases/latest`).pipe(m(n=>({version:n.tag_name})),Pe({})),Re(r).pipe(m(n=>({stars:n.stargazers_count,forks:n.forks_count})),Pe({}))).pipe(m(([n,o])=>$($({},n),o)))}else{let r=`https://api.github.com/users/${e}`;return 
Re(r).pipe(m(n=>({repositories:n.public_repos})),Pe({}))}}function ki(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return Re(r).pipe(m(({star_count:n,forks_count:o})=>({stars:n,forks:o})),Pe({}))}function Hi(e){let[t]=e.match(/(git(?:hub|lab))/i)||[];switch(t.toLowerCase()){case"github":let[,r,n]=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);return Ri(r,n);case"gitlab":let[,o,i]=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i);return ki(o,i);default:return z}}var Ls;function As(e){return Ls||(Ls=P(()=>{let t=__md_get("__source",sessionStorage);return t?H(t):Hi(e.href).pipe(S(r=>__md_set("__source",r,sessionStorage)))}).pipe(De(()=>z),T(t=>Object.keys(t).length>0),m(t=>({facts:t})),X(1)))}function Pi(e){let t=Q(":scope > :last-child",e);return P(()=>{let r=new O;return r.subscribe(({facts:n})=>{t.appendChild(Yo(n)),t.setAttribute("data-md-state","done")}),As(e).pipe(S(n=>r.next(n)),L(()=>r.complete()),m(n=>$({ref:e},n)))})}function Cs(e,{viewport$:t,header$:r}){return ge(document.body).pipe(x(()=>mr(e,{header$:r,viewport$:t})),m(({offset:{y:n}})=>({hidden:n>=10})),J("hidden"))}function Ii(e,t){return P(()=>{let r=new O;return r.subscribe({next({hidden:n}){n?e.setAttribute("data-md-state","hidden"):e.removeAttribute("data-md-state")},complete(){e.removeAttribute("data-md-state")}}),(ce("navigation.tabs.sticky")?H({hidden:!1}):Cs(e,t)).pipe(S(n=>r.next(n)),L(()=>r.complete()),m(n=>$({ref:e},n)))})}function Rs(e,{viewport$:t,header$:r}){let n=new Map,o=G("[href^=\\#]",e);for(let s of o){let c=decodeURIComponent(s.hash.substring(1)),u=ue(`[id="${c}"]`);typeof u!="undefined"&&n.set(s,u)}let i=r.pipe(J("height"),m(({height:s})=>{let c=we("main"),u=Q(":scope > :first-child",c);return s+.8*(u.offsetTop-c.offsetTop)}),ae());return ge(document.body).pipe(J("height"),x(s=>P(()=>{let c=[];return H([...n].reduce((u,[f,p])=>{for(;c.length&&n.get(c[c.length-1]).tagName>=p.tagName;)c.pop();let l=p.offsetTop;for(;!l&&p.parentElement;)p=p.parentElement,l=p.offsetTop;return u.set([...c=[...c,f]].reverse(),l)},new Map))}).pipe(m(c=>new Map([...c].sort(([,u],[,f])=>u-f))),Ve(i),x(([c,u])=>t.pipe(Fr(([f,p],{offset:{y:l},size:d})=>{let h=l+d.height>=Math.floor(s.height);for(;p.length;){let[,b]=p[0];if(b-u=l&&!h)p=[f.pop(),...p];else break}return[f,p]},[[],[...c]]),Y((f,p)=>f[0]===p[0]&&f[1]===p[1])))))).pipe(m(([s,c])=>({prev:s.map(([u])=>u),next:c.map(([u])=>u)})),q({prev:[],next:[]}),Me(2,1),m(([s,c])=>s.prev.length{let o=new O;return o.subscribe(({prev:i,next:a})=>{for(let[s]of a)s.removeAttribute("data-md-state"),s.classList.remove("md-nav__link--active");for(let[s,[c]]of i.entries())c.setAttribute("data-md-state","blur"),c.classList.toggle("md-nav__link--active",s===i.length-1)}),ce("navigation.tracking")&&t.pipe(se(o.pipe(pe(1))),J("offset"),Xe(250),$e(1),se(n.pipe($e(1))),Et({delay:250}),Le(o)).subscribe(([,{prev:i}])=>{let a=Se(),s=i[i.length-1];if(s&&s.length){let[c]=s,{hash:u}=new URL(c.href);a.hash!==u&&(a.hash=u,history.replaceState({},"",`${a}`))}else a.hash="",history.replaceState({},"",`${a}`)}),Rs(e,{viewport$:t,header$:r}).pipe(S(i=>o.next(i)),L(()=>o.complete()),m(i=>$({ref:e},i)))})}function ks(e,{viewport$:t,main$:r,target$:n}){let o=t.pipe(m(({offset:{y:a}})=>a),Me(2,1),m(([a,s])=>a>s&&s>0),Y()),i=r.pipe(m(({active:a})=>a));return B([i,o]).pipe(m(([a,s])=>!(a&&s)),Y(),se(n.pipe($e(1))),rr(!0),Et({delay:250}),m(a=>({hidden:a})))}function ji(e,{viewport$:t,header$:r,main$:n,target$:o}){let i=new O;return 
i.subscribe({next({hidden:a}){a?(e.setAttribute("data-md-state","hidden"),e.setAttribute("tabindex","-1"),e.blur()):(e.removeAttribute("data-md-state"),e.removeAttribute("tabindex"))},complete(){e.style.top="",e.setAttribute("data-md-state","hidden"),e.removeAttribute("tabindex")}}),r.pipe(se(i.pipe(rr(0),pe(1))),J("height")).subscribe(({height:a})=>{e.style.top=`${a+16}px`}),ks(e,{viewport$:t,main$:n,target$:o}).pipe(S(a=>i.next(a)),L(()=>i.complete()),m(a=>$({ref:e},a)))}function Fi({document$:e,tablet$:t}){e.pipe(x(()=>G("[data-md-state=indeterminate]")),S(r=>{r.indeterminate=!0,r.checked=!1}),ie(r=>y(r,"change").pipe(Wr(()=>r.hasAttribute("data-md-state")),Z(r))),Le(t)).subscribe(([r,n])=>{r.removeAttribute("data-md-state"),n&&(r.checked=!1)})}function Hs(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function Ui({document$:e}){e.pipe(x(()=>G("[data-md-scrollfix]")),S(t=>t.removeAttribute("data-md-scrollfix")),T(Hs),ie(t=>y(t,"touchstart").pipe(Z(t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function Wi({viewport$:e,tablet$:t}){B([mt("search"),t]).pipe(m(([r,n])=>r&&!n),x(r=>H(r).pipe(Ie(r?400:100))),Le(e)).subscribe(([r,{offset:{y:n}}])=>{if(r)document.body.setAttribute("data-md-state","lock"),document.body.style.top=`-${n}px`;else{let o=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-state"),document.body.style.top="",o&&window.scrollTo(0,o)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let n=e[r];typeof n!="object"?n=document.createTextNode(n):n.parentNode&&n.parentNode.removeChild(n),r?t.insertBefore(this.previousSibling,n):t.replaceChild(n,this)}}}));document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var et=mo(),br=Co(),Lt=Io(),tn=Ao(),Ee=No(),vr=qr("(min-width: 960px)"),Vi=qr("(min-width: 1220px)"),Ni=$o(),zi=de(),qi=document.forms.namedItem("search")?(__search==null?void 0:__search.index)||Re(new URL("search/search_index.json",zi.base)):xe,rn=new O;vi({alert$:rn});ce("navigation.instant")&&gi({document$:et,location$:br,viewport$:Ee});var Di;((Di=zi.version)==null?void 0:Di.provider)==="mike"&&Oi({document$:et});C(br,Lt).pipe(Ie(125)).subscribe(()=>{qe("drawer",!1),qe("search",!1)});tn.pipe(T(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=ue("[href][rel=prev]");typeof t!="undefined"&&t.click();break;case"n":case".":let r=ue("[href][rel=next]");typeof r!="undefined"&&r.click();break}});Fi({document$:et,tablet$:vr});Ui({document$:et});Wi({viewport$:Ee,tablet$:vr});var 
Qe=li(we("header"),{viewport$:Ee}),hr=et.pipe(m(()=>we("main")),x(e=>hi(e,{viewport$:Ee,header$:Qe})),X(1)),Ps=C(...oe("dialog").map(e=>pi(e,{alert$:rn})),...oe("header").map(e=>mi(e,{viewport$:Ee,header$:Qe,main$:hr})),...oe("palette").map(e=>bi(e)),...oe("search").map(e=>Ai(e,{index$:qi,keyboard$:tn})),...oe("source").map(e=>Pi(e))),Is=P(()=>C(...oe("content").map(e=>fi(e,{target$:Lt,print$:Ni})),...oe("content").map(e=>ce("search.highlight")?Ci(e,{index$:qi,location$:br}):z),...oe("header-title").map(e=>di(e,{viewport$:Ee,header$:Qe})),...oe("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?Qr(Vi,()=>en(e,{viewport$:Ee,header$:Qe,main$:hr})):Qr(vr,()=>en(e,{viewport$:Ee,header$:Qe,main$:hr}))),...oe("tabs").map(e=>Ii(e,{viewport$:Ee,header$:Qe})),...oe("toc").map(e=>$i(e,{viewport$:Ee,header$:Qe,target$:Lt})),...oe("top").map(e=>ji(e,{viewport$:Ee,header$:Qe,main$:hr,target$:Lt})))),Qi=et.pipe(x(()=>Is),Ze(Ps),X(1));Qi.subscribe();window.document$=et;window.location$=br;window.target$=Lt;window.keyboard$=tn;window.viewport$=Ee;window.tablet$=vr;window.screen$=Vi;window.print$=Ni;window.alert$=rn;window.component$=Qi;})(); +//# sourceMappingURL=bundle.c44cc438.min.js.map + diff --git a/assets/javascripts/bundle.c44cc438.min.js.map b/assets/javascripts/bundle.c44cc438.min.js.map new file mode 100644 index 00000000..13182229 --- /dev/null +++ b/assets/javascripts/bundle.c44cc438.min.js.map @@ -0,0 +1,8 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/url-polyfill/url-polyfill.js", "node_modules/rxjs/node_modules/tslib/tslib.js", "node_modules/clipboard/dist/clipboard.js", "node_modules/escape-html/index.js", "node_modules/array-flat-polyfill/index.mjs", "src/assets/javascripts/bundle.ts", "node_modules/unfetch/polyfill/index.js", "node_modules/rxjs/node_modules/tslib/modules/index.js", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", 
"node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/concatMap.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", 
"node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/takeLast.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/sample.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/switchMapTo.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", "src/assets/javascripts/browser/document/index.ts", "src/assets/javascripts/browser/element/_/index.ts", "src/assets/javascripts/browser/element/focus/index.ts", "src/assets/javascripts/browser/element/offset/_/index.ts", "src/assets/javascripts/browser/element/offset/content/index.ts", "node_modules/resize-observer-polyfill/dist/ResizeObserver.es.js", "src/assets/javascripts/browser/element/size/_/index.ts", "src/assets/javascripts/browser/element/size/content/index.ts", "src/assets/javascripts/browser/element/visibility/index.ts", "src/assets/javascripts/browser/toggle/index.ts", "src/assets/javascripts/browser/keyboard/index.ts", "src/assets/javascripts/browser/location/_/index.ts", "src/assets/javascripts/utilities/h/index.ts", "src/assets/javascripts/utilities/string/index.ts", "src/assets/javascripts/browser/location/hash/index.ts", "src/assets/javascripts/browser/media/index.ts", "src/assets/javascripts/browser/request/index.ts", "src/assets/javascripts/browser/script/index.ts", "src/assets/javascripts/browser/viewport/offset/index.ts", "src/assets/javascripts/browser/viewport/size/index.ts", "src/assets/javascripts/browser/viewport/_/index.ts", "src/assets/javascripts/browser/viewport/at/index.ts", "src/assets/javascripts/browser/worker/index.ts", "src/assets/javascripts/_/index.ts", "src/assets/javascripts/components/_/index.ts", "src/assets/javascripts/components/content/code/_/index.ts", "src/assets/javascripts/templates/annotation/index.tsx", "src/assets/javascripts/templates/clipboard/index.tsx", "src/assets/javascripts/templates/search/index.tsx", "src/assets/javascripts/templates/source/index.tsx", "src/assets/javascripts/templates/table/index.tsx", "src/assets/javascripts/templates/version/index.tsx", "src/assets/javascripts/components/content/annotation/_/index.ts", "src/assets/javascripts/components/content/annotation/list/index.ts", "src/assets/javascripts/components/content/code/mermaid/index.ts", "src/assets/javascripts/components/content/details/index.ts", 
"src/assets/javascripts/components/content/table/index.ts", "src/assets/javascripts/components/content/tabs/index.ts", "src/assets/javascripts/components/content/_/index.ts", "src/assets/javascripts/components/dialog/index.ts", "src/assets/javascripts/components/header/_/index.ts", "src/assets/javascripts/components/header/title/index.ts", "src/assets/javascripts/components/main/index.ts", "src/assets/javascripts/components/palette/index.ts", "src/assets/javascripts/integrations/clipboard/index.ts", "src/assets/javascripts/integrations/sitemap/index.ts", "src/assets/javascripts/integrations/instant/index.ts", "src/assets/javascripts/integrations/search/document/index.ts", "src/assets/javascripts/integrations/search/highlighter/index.ts", "src/assets/javascripts/integrations/search/query/transform/index.ts", "src/assets/javascripts/integrations/search/worker/message/index.ts", "src/assets/javascripts/integrations/search/worker/_/index.ts", "src/assets/javascripts/integrations/version/index.ts", "src/assets/javascripts/components/search/query/index.ts", "src/assets/javascripts/components/search/result/index.ts", "src/assets/javascripts/components/search/share/index.ts", "src/assets/javascripts/components/search/suggest/index.ts", "src/assets/javascripts/components/search/_/index.ts", "src/assets/javascripts/components/search/highlight/index.ts", "src/assets/javascripts/components/sidebar/index.ts", "src/assets/javascripts/components/source/facts/github/index.ts", "src/assets/javascripts/components/source/facts/gitlab/index.ts", "src/assets/javascripts/components/source/facts/_/index.ts", "src/assets/javascripts/components/source/_/index.ts", "src/assets/javascripts/components/tabs/index.ts", "src/assets/javascripts/components/toc/index.ts", "src/assets/javascripts/components/top/index.ts", "src/assets/javascripts/patches/indeterminate/index.ts", "src/assets/javascripts/patches/scrollfix/index.ts", "src/assets/javascripts/patches/scrolllock/index.ts", "src/assets/javascripts/polyfills/index.ts"], + "sourceRoot": "../../../..", + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. 
mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. 
This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "(function(global) {\r\n /**\r\n * Polyfill URLSearchParams\r\n *\r\n * Inspired from : https://github.com/WebReflection/url-search-params/blob/master/src/url-search-params.js\r\n */\r\n\r\n var checkIfIteratorIsSupported = function() {\r\n try {\r\n return !!Symbol.iterator;\r\n } catch (error) {\r\n return false;\r\n }\r\n };\r\n\r\n\r\n var iteratorSupported = checkIfIteratorIsSupported();\r\n\r\n var createIterator = function(items) {\r\n var iterator = {\r\n next: function() {\r\n var value = items.shift();\r\n return { done: value === void 0, value: value };\r\n }\r\n };\r\n\r\n if (iteratorSupported) {\r\n iterator[Symbol.iterator] = function() {\r\n return iterator;\r\n };\r\n }\r\n\r\n return iterator;\r\n };\r\n\r\n /**\r\n * Search param name and values should be encoded according to https://url.spec.whatwg.org/#urlencoded-serializing\r\n * encodeURIComponent() produces the same result except encoding spaces as `%20` instead of `+`.\r\n */\r\n var serializeParam = function(value) {\r\n return encodeURIComponent(value).replace(/%20/g, '+');\r\n };\r\n\r\n var deserializeParam = function(value) {\r\n return decodeURIComponent(String(value).replace(/\\+/g, ' '));\r\n };\r\n\r\n var polyfillURLSearchParams = function() {\r\n\r\n var URLSearchParams = function(searchString) {\r\n Object.defineProperty(this, '_entries', { writable: true, value: {} });\r\n var typeofSearchString = typeof searchString;\r\n\r\n if (typeofSearchString === 'undefined') {\r\n // do nothing\r\n } else if (typeofSearchString === 'string') {\r\n if (searchString !== '') {\r\n this._fromString(searchString);\r\n }\r\n } else if (searchString instanceof URLSearchParams) {\r\n var _this = this;\r\n searchString.forEach(function(value, name) {\r\n _this.append(name, value);\r\n });\r\n } else if ((searchString !== null) && (typeofSearchString === 'object')) {\r\n if (Object.prototype.toString.call(searchString) === '[object Array]') {\r\n for (var i = 0; i < searchString.length; i++) {\r\n var entry = searchString[i];\r\n if ((Object.prototype.toString.call(entry) === '[object Array]') || (entry.length !== 2)) {\r\n this.append(entry[0], entry[1]);\r\n } else {\r\n throw new TypeError('Expected [string, any] as entry at index ' + i + ' of URLSearchParams\\'s input');\r\n }\r\n }\r\n } else {\r\n for (var key in searchString) {\r\n if (searchString.hasOwnProperty(key)) {\r\n this.append(key, searchString[key]);\r\n }\r\n }\r\n }\r\n } else {\r\n throw new TypeError('Unsupported input\\'s type for URLSearchParams');\r\n }\r\n };\r\n\r\n var proto = URLSearchParams.prototype;\r\n\r\n proto.append = function(name, value) 
{\r\n if (name in this._entries) {\r\n this._entries[name].push(String(value));\r\n } else {\r\n this._entries[name] = [String(value)];\r\n }\r\n };\r\n\r\n proto.delete = function(name) {\r\n delete this._entries[name];\r\n };\r\n\r\n proto.get = function(name) {\r\n return (name in this._entries) ? this._entries[name][0] : null;\r\n };\r\n\r\n proto.getAll = function(name) {\r\n return (name in this._entries) ? this._entries[name].slice(0) : [];\r\n };\r\n\r\n proto.has = function(name) {\r\n return (name in this._entries);\r\n };\r\n\r\n proto.set = function(name, value) {\r\n this._entries[name] = [String(value)];\r\n };\r\n\r\n proto.forEach = function(callback, thisArg) {\r\n var entries;\r\n for (var name in this._entries) {\r\n if (this._entries.hasOwnProperty(name)) {\r\n entries = this._entries[name];\r\n for (var i = 0; i < entries.length; i++) {\r\n callback.call(thisArg, entries[i], name, this);\r\n }\r\n }\r\n }\r\n };\r\n\r\n proto.keys = function() {\r\n var items = [];\r\n this.forEach(function(value, name) {\r\n items.push(name);\r\n });\r\n return createIterator(items);\r\n };\r\n\r\n proto.values = function() {\r\n var items = [];\r\n this.forEach(function(value) {\r\n items.push(value);\r\n });\r\n return createIterator(items);\r\n };\r\n\r\n proto.entries = function() {\r\n var items = [];\r\n this.forEach(function(value, name) {\r\n items.push([name, value]);\r\n });\r\n return createIterator(items);\r\n };\r\n\r\n if (iteratorSupported) {\r\n proto[Symbol.iterator] = proto.entries;\r\n }\r\n\r\n proto.toString = function() {\r\n var searchArray = [];\r\n this.forEach(function(value, name) {\r\n searchArray.push(serializeParam(name) + '=' + serializeParam(value));\r\n });\r\n return searchArray.join('&');\r\n };\r\n\r\n\r\n global.URLSearchParams = URLSearchParams;\r\n };\r\n\r\n var checkIfURLSearchParamsSupported = function() {\r\n try {\r\n var URLSearchParams = global.URLSearchParams;\r\n\r\n return (\r\n (new URLSearchParams('?a=1').toString() === 'a=1') &&\r\n (typeof URLSearchParams.prototype.set === 'function') &&\r\n (typeof URLSearchParams.prototype.entries === 'function')\r\n );\r\n } catch (e) {\r\n return false;\r\n }\r\n };\r\n\r\n if (!checkIfURLSearchParamsSupported()) {\r\n polyfillURLSearchParams();\r\n }\r\n\r\n var proto = global.URLSearchParams.prototype;\r\n\r\n if (typeof proto.sort !== 'function') {\r\n proto.sort = function() {\r\n var _this = this;\r\n var items = [];\r\n this.forEach(function(value, name) {\r\n items.push([name, value]);\r\n if (!_this._entries) {\r\n _this.delete(name);\r\n }\r\n });\r\n items.sort(function(a, b) {\r\n if (a[0] < b[0]) {\r\n return -1;\r\n } else if (a[0] > b[0]) {\r\n return +1;\r\n } else {\r\n return 0;\r\n }\r\n });\r\n if (_this._entries) { // force reset because IE keeps keys index\r\n _this._entries = {};\r\n }\r\n for (var i = 0; i < items.length; i++) {\r\n this.append(items[i][0], items[i][1]);\r\n }\r\n };\r\n }\r\n\r\n if (typeof proto._fromString !== 'function') {\r\n Object.defineProperty(proto, '_fromString', {\r\n enumerable: false,\r\n configurable: false,\r\n writable: false,\r\n value: function(searchString) {\r\n if (this._entries) {\r\n this._entries = {};\r\n } else {\r\n var keys = [];\r\n this.forEach(function(value, name) {\r\n keys.push(name);\r\n });\r\n for (var i = 0; i < keys.length; i++) {\r\n this.delete(keys[i]);\r\n }\r\n }\r\n\r\n searchString = searchString.replace(/^\\?/, '');\r\n var attributes = searchString.split('&');\r\n var attribute;\r\n for (var i = 0; i < 
attributes.length; i++) {\r\n attribute = attributes[i].split('=');\r\n this.append(\r\n deserializeParam(attribute[0]),\r\n (attribute.length > 1) ? deserializeParam(attribute[1]) : ''\r\n );\r\n }\r\n }\r\n });\r\n }\r\n\r\n // HTMLAnchorElement\r\n\r\n})(\r\n (typeof global !== 'undefined') ? global\r\n : ((typeof window !== 'undefined') ? window\r\n : ((typeof self !== 'undefined') ? self : this))\r\n);\r\n\r\n(function(global) {\r\n /**\r\n * Polyfill URL\r\n *\r\n * Inspired from : https://github.com/arv/DOM-URL-Polyfill/blob/master/src/url.js\r\n */\r\n\r\n var checkIfURLIsSupported = function() {\r\n try {\r\n var u = new global.URL('b', 'http://a');\r\n u.pathname = 'c d';\r\n return (u.href === 'http://a/c%20d') && u.searchParams;\r\n } catch (e) {\r\n return false;\r\n }\r\n };\r\n\r\n\r\n var polyfillURL = function() {\r\n var _URL = global.URL;\r\n\r\n var URL = function(url, base) {\r\n if (typeof url !== 'string') url = String(url);\r\n if (base && typeof base !== 'string') base = String(base);\r\n\r\n // Only create another document if the base is different from current location.\r\n var doc = document, baseElement;\r\n if (base && (global.location === void 0 || base !== global.location.href)) {\r\n base = base.toLowerCase();\r\n doc = document.implementation.createHTMLDocument('');\r\n baseElement = doc.createElement('base');\r\n baseElement.href = base;\r\n doc.head.appendChild(baseElement);\r\n try {\r\n if (baseElement.href.indexOf(base) !== 0) throw new Error(baseElement.href);\r\n } catch (err) {\r\n throw new Error('URL unable to set base ' + base + ' due to ' + err);\r\n }\r\n }\r\n\r\n var anchorElement = doc.createElement('a');\r\n anchorElement.href = url;\r\n if (baseElement) {\r\n doc.body.appendChild(anchorElement);\r\n anchorElement.href = anchorElement.href; // force href to refresh\r\n }\r\n\r\n var inputElement = doc.createElement('input');\r\n inputElement.type = 'url';\r\n inputElement.value = url;\r\n\r\n if (anchorElement.protocol === ':' || !/:/.test(anchorElement.href) || (!inputElement.checkValidity() && !base)) {\r\n throw new TypeError('Invalid URL');\r\n }\r\n\r\n Object.defineProperty(this, '_anchorElement', {\r\n value: anchorElement\r\n });\r\n\r\n\r\n // create a linked searchParams which reflect its changes on URL\r\n var searchParams = new global.URLSearchParams(this.search);\r\n var enableSearchUpdate = true;\r\n var enableSearchParamsUpdate = true;\r\n var _this = this;\r\n ['append', 'delete', 'set'].forEach(function(methodName) {\r\n var method = searchParams[methodName];\r\n searchParams[methodName] = function() {\r\n method.apply(searchParams, arguments);\r\n if (enableSearchUpdate) {\r\n enableSearchParamsUpdate = false;\r\n _this.search = searchParams.toString();\r\n enableSearchParamsUpdate = true;\r\n }\r\n };\r\n });\r\n\r\n Object.defineProperty(this, 'searchParams', {\r\n value: searchParams,\r\n enumerable: true\r\n });\r\n\r\n var search = void 0;\r\n Object.defineProperty(this, '_updateSearchParams', {\r\n enumerable: false,\r\n configurable: false,\r\n writable: false,\r\n value: function() {\r\n if (this.search !== search) {\r\n search = this.search;\r\n if (enableSearchParamsUpdate) {\r\n enableSearchUpdate = false;\r\n this.searchParams._fromString(this.search);\r\n enableSearchUpdate = true;\r\n }\r\n }\r\n }\r\n });\r\n };\r\n\r\n var proto = URL.prototype;\r\n\r\n var linkURLWithAnchorAttribute = function(attributeName) {\r\n Object.defineProperty(proto, attributeName, {\r\n get: function() {\r\n return 
this._anchorElement[attributeName];\r\n },\r\n set: function(value) {\r\n this._anchorElement[attributeName] = value;\r\n },\r\n enumerable: true\r\n });\r\n };\r\n\r\n ['hash', 'host', 'hostname', 'port', 'protocol']\r\n .forEach(function(attributeName) {\r\n linkURLWithAnchorAttribute(attributeName);\r\n });\r\n\r\n Object.defineProperty(proto, 'search', {\r\n get: function() {\r\n return this._anchorElement['search'];\r\n },\r\n set: function(value) {\r\n this._anchorElement['search'] = value;\r\n this._updateSearchParams();\r\n },\r\n enumerable: true\r\n });\r\n\r\n Object.defineProperties(proto, {\r\n\r\n 'toString': {\r\n get: function() {\r\n var _this = this;\r\n return function() {\r\n return _this.href;\r\n };\r\n }\r\n },\r\n\r\n 'href': {\r\n get: function() {\r\n return this._anchorElement.href.replace(/\\?$/, '');\r\n },\r\n set: function(value) {\r\n this._anchorElement.href = value;\r\n this._updateSearchParams();\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'pathname': {\r\n get: function() {\r\n return this._anchorElement.pathname.replace(/(^\\/?)/, '/');\r\n },\r\n set: function(value) {\r\n this._anchorElement.pathname = value;\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'origin': {\r\n get: function() {\r\n // get expected port from protocol\r\n var expectedPort = { 'http:': 80, 'https:': 443, 'ftp:': 21 }[this._anchorElement.protocol];\r\n // add port to origin if, expected port is different than actual port\r\n // and it is not empty f.e http://foo:8080\r\n // 8080 != 80 && 8080 != ''\r\n var addPortToOrigin = this._anchorElement.port != expectedPort &&\r\n this._anchorElement.port !== '';\r\n\r\n return this._anchorElement.protocol +\r\n '//' +\r\n this._anchorElement.hostname +\r\n (addPortToOrigin ? (':' + this._anchorElement.port) : '');\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'password': { // TODO\r\n get: function() {\r\n return '';\r\n },\r\n set: function(value) {\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'username': { // TODO\r\n get: function() {\r\n return '';\r\n },\r\n set: function(value) {\r\n },\r\n enumerable: true\r\n },\r\n });\r\n\r\n URL.createObjectURL = function(blob) {\r\n return _URL.createObjectURL.apply(_URL, arguments);\r\n };\r\n\r\n URL.revokeObjectURL = function(url) {\r\n return _URL.revokeObjectURL.apply(_URL, arguments);\r\n };\r\n\r\n global.URL = URL;\r\n\r\n };\r\n\r\n if (!checkIfURLIsSupported()) {\r\n polyfillURL();\r\n }\r\n\r\n if ((global.location !== void 0) && !('origin' in global.location)) {\r\n var getOrigin = function() {\r\n return global.location.protocol + '//' + global.location.hostname + (global.location.port ? (':' + global.location.port) : '');\r\n };\r\n\r\n try {\r\n Object.defineProperty(global.location, 'origin', {\r\n get: getOrigin,\r\n enumerable: true\r\n });\r\n } catch (e) {\r\n setInterval(function() {\r\n global.location.origin = getOrigin();\r\n }, 100);\r\n }\r\n }\r\n\r\n})(\r\n (typeof global !== 'undefined') ? global\r\n : ((typeof window !== 'undefined') ? window\r\n : ((typeof self !== 'undefined') ? self : this))\r\n);\r\n", "/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation.\r\n\r\nPermission to use, copy, modify, and/or distribute this software for any\r\npurpose with or without fee is hereby granted.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\r\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\r\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\r\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\r\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\r\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\r\nPERFORMANCE OF THIS SOFTWARE.\r\n***************************************************************************** */\r\n/* global global, define, System, Reflect, Promise */\r\nvar __extends;\r\nvar __assign;\r\nvar __rest;\r\nvar __decorate;\r\nvar __param;\r\nvar __metadata;\r\nvar __awaiter;\r\nvar __generator;\r\nvar __exportStar;\r\nvar __values;\r\nvar __read;\r\nvar __spread;\r\nvar __spreadArrays;\r\nvar __spreadArray;\r\nvar __await;\r\nvar __asyncGenerator;\r\nvar __asyncDelegator;\r\nvar __asyncValues;\r\nvar __makeTemplateObject;\r\nvar __importStar;\r\nvar __importDefault;\r\nvar __classPrivateFieldGet;\r\nvar __classPrivateFieldSet;\r\nvar __createBinding;\r\n(function (factory) {\r\n var root = typeof global === \"object\" ? global : typeof self === \"object\" ? self : typeof this === \"object\" ? this : {};\r\n if (typeof define === \"function\" && define.amd) {\r\n define(\"tslib\", [\"exports\"], function (exports) { factory(createExporter(root, createExporter(exports))); });\r\n }\r\n else if (typeof module === \"object\" && typeof module.exports === \"object\") {\r\n factory(createExporter(root, createExporter(module.exports)));\r\n }\r\n else {\r\n factory(createExporter(root));\r\n }\r\n function createExporter(exports, previous) {\r\n if (exports !== root) {\r\n if (typeof Object.create === \"function\") {\r\n Object.defineProperty(exports, \"__esModule\", { value: true });\r\n }\r\n else {\r\n exports.__esModule = true;\r\n }\r\n }\r\n return function (id, v) { return exports[id] = previous ? previous(id, v) : v; };\r\n }\r\n})\r\n(function (exporter) {\r\n var extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\r\n\r\n __extends = function (d, b) {\r\n if (typeof b !== \"function\" && b !== null)\r\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());\r\n };\r\n\r\n __assign = Object.assign || function (t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n };\r\n\r\n __rest = function (s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n };\r\n\r\n __decorate = function (decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? 
desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n };\r\n\r\n __param = function (paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n };\r\n\r\n __metadata = function (metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n };\r\n\r\n __awaiter = function (thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n };\r\n\r\n __generator = function (thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n };\r\n\r\n __exportStar = function(m, o) {\r\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\r\n };\r\n\r\n __createBinding = Object.create ? 
(function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });\r\n }) : (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n o[k2] = m[k];\r\n });\r\n\r\n __values = function (o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? \"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n };\r\n\r\n __read = function (o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n };\r\n\r\n /** @deprecated */\r\n __spread = function () {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n };\r\n\r\n /** @deprecated */\r\n __spreadArrays = function () {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n };\r\n\r\n __spreadArray = function (to, from, pack) {\r\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\r\n if (ar || !(i in from)) {\r\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\r\n ar[i] = from[i];\r\n }\r\n }\r\n return to.concat(ar || Array.prototype.slice.call(from));\r\n };\r\n\r\n __await = function (v) {\r\n return this instanceof __await ? (this.v = v, this) : new __await(v);\r\n };\r\n\r\n __asyncGenerator = function (thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n };\r\n\r\n __asyncDelegator = function (o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n };\r\n\r\n __asyncValues = function (o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? 
m.call(o) : (o = typeof __values === \"function\" ? __values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n };\r\n\r\n __makeTemplateObject = function (cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n };\r\n\r\n var __setModuleDefault = Object.create ? (function(o, v) {\r\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\r\n }) : function(o, v) {\r\n o[\"default\"] = v;\r\n };\r\n\r\n __importStar = function (mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\r\n __setModuleDefault(result, mod);\r\n return result;\r\n };\r\n\r\n __importDefault = function (mod) {\r\n return (mod && mod.__esModule) ? mod : { \"default\": mod };\r\n };\r\n\r\n __classPrivateFieldGet = function (receiver, state, kind, f) {\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\r\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\r\n };\r\n\r\n __classPrivateFieldSet = function (receiver, state, value, kind, f) {\r\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\r\n return (kind === \"a\" ? f.call(receiver, value) : f ? 
f.value = value : state.set(receiver, value)), value;\r\n };\r\n\r\n exporter(\"__extends\", __extends);\r\n exporter(\"__assign\", __assign);\r\n exporter(\"__rest\", __rest);\r\n exporter(\"__decorate\", __decorate);\r\n exporter(\"__param\", __param);\r\n exporter(\"__metadata\", __metadata);\r\n exporter(\"__awaiter\", __awaiter);\r\n exporter(\"__generator\", __generator);\r\n exporter(\"__exportStar\", __exportStar);\r\n exporter(\"__createBinding\", __createBinding);\r\n exporter(\"__values\", __values);\r\n exporter(\"__read\", __read);\r\n exporter(\"__spread\", __spread);\r\n exporter(\"__spreadArrays\", __spreadArrays);\r\n exporter(\"__spreadArray\", __spreadArray);\r\n exporter(\"__await\", __await);\r\n exporter(\"__asyncGenerator\", __asyncGenerator);\r\n exporter(\"__asyncDelegator\", __asyncDelegator);\r\n exporter(\"__asyncValues\", __asyncValues);\r\n exporter(\"__makeTemplateObject\", __makeTemplateObject);\r\n exporter(\"__importStar\", __importStar);\r\n exporter(\"__importDefault\", __importDefault);\r\n exporter(\"__classPrivateFieldGet\", __classPrivateFieldGet);\r\n exporter(\"__classPrivateFieldSet\", __classPrivateFieldSet);\r\n});\r\n", "/*!\n * clipboard.js v2.0.10\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n 
fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n var fakeElement = createFakeElement(target);\n options.container.appendChild(fakeElement);\n selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? 
actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if (self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? 
Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n document.activeElement.blur();\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && 
value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName 
=== 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) {\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n 
var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "Array.prototype.flat||Object.defineProperty(Array.prototype,\"flat\",{configurable:!0,value:function r(){var t=isNaN(arguments[0])?1:Number(arguments[0]);return t?Array.prototype.reduce.call(this,function(a,e){return Array.isArray(e)?a.push.apply(a,r.call(e,t-1)):a.push(e),a},[]):Array.prototype.slice.call(this)},writable:!0}),Array.prototype.flatMap||Object.defineProperty(Array.prototype,\"flatMap\",{configurable:!0,value:function(r){return Array.prototype.map.apply(this,arguments).flat()},writable:!0})\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"array-flat-polyfill\"\nimport \"focus-visible\"\nimport \"unfetch/polyfill\"\nimport \"url-polyfill\"\n\nimport {\n EMPTY,\n NEVER,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getOptionalElement,\n requestJSON,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountBackToTop,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantLoading,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget()\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? 
__search?.index || requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up instant loading, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantLoading({ document$, location$, viewport$ })\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"[href][rel=prev]\")\n if (typeof prev !== \"undefined\")\n prev.click()\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"[href][rel=next]\")\n if (typeof next !== \"undefined\")\n next.click()\n break\n }\n })\n\n/* Set up patches */\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? 
at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, { viewport$, header$, target$ })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.component$ = component$ /* Component observable */\n", "self.fetch||(self.fetch=function(e,n){return n=n||{},new Promise(function(t,s){var r=new XMLHttpRequest,o=[],u=[],i={},a=function(){return{ok:2==(r.status/100|0),statusText:r.statusText,status:r.status,url:r.responseURL,text:function(){return Promise.resolve(r.responseText)},json:function(){return Promise.resolve(r.responseText).then(JSON.parse)},blob:function(){return Promise.resolve(new Blob([r.response]))},clone:a,headers:{keys:function(){return o},entries:function(){return u},get:function(e){return i[e.toLowerCase()]},has:function(e){return e.toLowerCase()in i}}}};for(var c in r.open(n.method||\"get\",e,!0),r.onload=function(){r.getAllResponseHeaders().replace(/^(.*?):[^\\S\\n]*([\\s\\S]*?)$/gm,function(e,n,t){o.push(n=n.toLowerCase()),u.push([n,t]),i[n]=i[n]?i[n]+\",\"+t:t}),t(a())},r.onerror=s,r.withCredentials=\"include\"==n.credentials,n.headers)r.setRequestHeader(c,n.headers[c]);r.send(n.body||null)})});\n", "import tslib from '../tslib.js';\r\nconst {\r\n __extends,\r\n __assign,\r\n __rest,\r\n __decorate,\r\n __param,\r\n __metadata,\r\n __awaiter,\r\n __generator,\r\n __exportStar,\r\n __createBinding,\r\n __values,\r\n __read,\r\n __spread,\r\n __spreadArrays,\r\n __spreadArray,\r\n __await,\r\n __asyncGenerator,\r\n __asyncDelegator,\r\n __asyncValues,\r\n __makeTemplateObject,\r\n __importStar,\r\n __importDefault,\r\n __classPrivateFieldGet,\r\n __classPrivateFieldSet,\r\n} = tslib;\r\nexport {\r\n __extends,\r\n __assign,\r\n __rest,\r\n __decorate,\r\n __param,\r\n __metadata,\r\n __awaiter,\r\n __generator,\r\n __exportStar,\r\n __createBinding,\r\n __values,\r\n __read,\r\n __spread,\r\n __spreadArrays,\r\n __spreadArray,\r\n __await,\r\n __asyncGenerator,\r\n __asyncDelegator,\r\n __asyncValues,\r\n __makeTemplateObject,\r\n __importStar,\r\n __importDefault,\r\n __classPrivateFieldGet,\r\n __classPrivateFieldSet,\r\n};\r\n", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, 
null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n ReplaySubject,\n Subject,\n fromEvent\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch document\n *\n * Documents are implemented as subjects, so all downstream observables are\n * automatically updated when a new document is emitted.\n *\n * @returns Document subject\n */\nexport function watchDocument(): Subject {\n const document$ = new ReplaySubject(1)\n fromEvent(document, \"DOMContentLoaded\", { once: true })\n .subscribe(() => document$.next(document))\n\n /* Return document */\n return document$\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve all elements matching the query selector\n *\n * @template T - Element type\n *\n * @param selector - Query selector\n * @param node - Node of reference\n *\n * @returns Elements\n */\nexport function getElements(\n selector: T, node?: ParentNode\n): HTMLElementTagNameMap[T][]\n\nexport function getElements(\n selector: string, node?: ParentNode\n): T[]\n\nexport function getElements(\n selector: string, node: ParentNode = document\n): T[] {\n return Array.from(node.querySelectorAll(selector))\n}\n\n/**\n * Retrieve an element matching a query selector or throw a reference error\n *\n * Note that this function assumes that the element is present. If unsure if an\n * element is existent, use the `getOptionalElement` function instead.\n *\n * @template T - Element type\n *\n * @param selector - Query selector\n * @param node - Node of reference\n *\n * @returns Element\n */\nexport function getElement(\n selector: T, node?: ParentNode\n): HTMLElementTagNameMap[T]\n\nexport function getElement(\n selector: string, node?: ParentNode\n): T\n\nexport function getElement(\n selector: string, node: ParentNode = document\n): T {\n const el = getOptionalElement(selector, node)\n if (typeof el === \"undefined\")\n throw new ReferenceError(\n `Missing element: expected \"${selector}\" to be present`\n )\n\n /* Return element */\n return el\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Retrieve an optional element matching the query selector\n *\n * @template T - Element type\n *\n * @param selector - Query selector\n * @param node - Node of reference\n *\n * @returns Element or nothing\n */\nexport function getOptionalElement(\n selector: T, node?: ParentNode\n): HTMLElementTagNameMap[T] | undefined\n\nexport function getOptionalElement(\n selector: string, node?: ParentNode\n): T | undefined\n\nexport function getOptionalElement(\n selector: string, node: ParentNode = document\n): T | undefined {\n return node.querySelector(selector) || undefined\n}\n\n/**\n * Retrieve the currently active element\n *\n * @returns Element or nothing\n */\nexport function getActiveElement(): HTMLElement | undefined {\n return document.activeElement instanceof HTMLElement\n ? 
document.activeElement || undefined\n : undefined\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n debounceTime,\n distinctUntilChanged,\n fromEvent,\n map,\n merge,\n startWith\n} from \"rxjs\"\n\nimport { getActiveElement } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch element focus\n *\n * Previously, this function used `focus` and `blur` events to determine whether\n * an element is focused, but this doesn't work if there are focusable elements\n * within the elements itself. A better solutions are `focusin` and `focusout`\n * events, which bubble up the tree and allow for more fine-grained control.\n *\n * `debounceTime` is necessary, because when a focus change happens inside an\n * element, the observable would first emit `false` and then `true` again.\n *\n * @param el - Element\n *\n * @returns Element focus observable\n */\nexport function watchElementFocus(\n el: HTMLElement\n): Observable {\n return merge(\n fromEvent(document.body, \"focusin\"),\n fromEvent(document.body, \"focusout\")\n )\n .pipe(\n debounceTime(1),\n map(() => {\n const active = getActiveElement()\n return typeof active !== \"undefined\"\n ? el.contains(active)\n : false\n }),\n startWith(el === getActiveElement()),\n distinctUntilChanged()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n animationFrameScheduler,\n auditTime,\n fromEvent,\n map,\n merge,\n startWith\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Element offset\n */\nexport interface ElementOffset {\n x: number /* Horizontal offset */\n y: number /* Vertical offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element offset\n *\n * @param el - Element\n *\n * @returns Element offset\n */\nexport function getElementOffset(\n el: HTMLElement\n): ElementOffset {\n return {\n x: el.offsetLeft,\n y: el.offsetTop\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch element offset\n *\n * @param el - Element\n *\n * @returns Element offset observable\n */\nexport function watchElementOffset(\n el: HTMLElement\n): Observable {\n return merge(\n fromEvent(window, \"load\"),\n fromEvent(window, \"resize\")\n )\n .pipe(\n auditTime(0, animationFrameScheduler),\n map(() => getElementOffset(el)),\n startWith(getElementOffset(el))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n animationFrameScheduler,\n auditTime,\n fromEvent,\n map,\n merge,\n startWith\n} from \"rxjs\"\n\nimport { ElementOffset } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element content offset (= scroll offset)\n *\n * @param el - Element\n *\n * @returns Element content offset\n */\nexport function getElementContentOffset(\n el: HTMLElement\n): ElementOffset {\n return {\n x: el.scrollLeft,\n y: el.scrollTop\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch element content offset\n *\n * @param el - Element\n *\n * @returns Element content offset observable\n */\nexport function watchElementContentOffset(\n el: HTMLElement\n): Observable {\n return merge(\n fromEvent(el, \"scroll\"),\n fromEvent(window, \"resize\")\n )\n .pipe(\n auditTime(0, animationFrameScheduler),\n map(() => getElementContentOffset(el)),\n startWith(getElementContentOffset(el))\n )\n}\n", "/**\r\n * A collection of shims that provide minimal functionality of the ES6 collections.\r\n *\r\n * These implementations are not meant to be used outside of the ResizeObserver\r\n * modules as they cover only a limited range of use cases.\r\n */\r\n/* eslint-disable require-jsdoc, valid-jsdoc */\r\nvar MapShim = (function () {\r\n if (typeof Map !== 'undefined') {\r\n return Map;\r\n }\r\n /**\r\n * Returns index in provided array that matches the specified key.\r\n *\r\n * @param {Array} arr\r\n * @param {*} key\r\n * @returns {number}\r\n */\r\n function getIndex(arr, key) {\r\n var result = -1;\r\n arr.some(function (entry, index) {\r\n if (entry[0] === key) {\r\n result = index;\r\n return true;\r\n }\r\n return false;\r\n });\r\n return result;\r\n }\r\n return /** @class */ (function () {\r\n function class_1() {\r\n this.__entries__ = [];\r\n }\r\n Object.defineProperty(class_1.prototype, \"size\", {\r\n /**\r\n * @returns {boolean}\r\n */\r\n get: function () {\r\n return this.__entries__.length;\r\n },\r\n enumerable: true,\r\n configurable: true\r\n });\r\n /**\r\n * @param {*} key\r\n * @returns {*}\r\n */\r\n class_1.prototype.get = function (key) {\r\n var index = getIndex(this.__entries__, key);\r\n var entry = this.__entries__[index];\r\n return entry && entry[1];\r\n };\r\n /**\r\n * @param {*} key\r\n * @param {*} value\r\n * @returns {void}\r\n */\r\n class_1.prototype.set = function (key, value) {\r\n var index = getIndex(this.__entries__, key);\r\n if (~index) {\r\n this.__entries__[index][1] = value;\r\n }\r\n else {\r\n this.__entries__.push([key, value]);\r\n }\r\n };\r\n /**\r\n * @param {*} key\r\n * @returns {void}\r\n */\r\n class_1.prototype.delete = function (key) {\r\n var entries = this.__entries__;\r\n var index = getIndex(entries, key);\r\n if (~index) {\r\n entries.splice(index, 1);\r\n }\r\n };\r\n /**\r\n * @param {*} key\r\n * @returns {void}\r\n */\r\n class_1.prototype.has = function (key) {\r\n return !!~getIndex(this.__entries__, key);\r\n };\r\n /**\r\n * @returns {void}\r\n */\r\n class_1.prototype.clear = function () {\r\n this.__entries__.splice(0);\r\n };\r\n 
/**\r\n * @param {Function} callback\r\n * @param {*} [ctx=null]\r\n * @returns {void}\r\n */\r\n class_1.prototype.forEach = function (callback, ctx) {\r\n if (ctx === void 0) { ctx = null; }\r\n for (var _i = 0, _a = this.__entries__; _i < _a.length; _i++) {\r\n var entry = _a[_i];\r\n callback.call(ctx, entry[1], entry[0]);\r\n }\r\n };\r\n return class_1;\r\n }());\r\n})();\n\n/**\r\n * Detects whether window and document objects are available in current environment.\r\n */\r\nvar isBrowser = typeof window !== 'undefined' && typeof document !== 'undefined' && window.document === document;\n\n// Returns global object of a current environment.\r\nvar global$1 = (function () {\r\n if (typeof global !== 'undefined' && global.Math === Math) {\r\n return global;\r\n }\r\n if (typeof self !== 'undefined' && self.Math === Math) {\r\n return self;\r\n }\r\n if (typeof window !== 'undefined' && window.Math === Math) {\r\n return window;\r\n }\r\n // eslint-disable-next-line no-new-func\r\n return Function('return this')();\r\n})();\n\n/**\r\n * A shim for the requestAnimationFrame which falls back to the setTimeout if\r\n * first one is not supported.\r\n *\r\n * @returns {number} Requests' identifier.\r\n */\r\nvar requestAnimationFrame$1 = (function () {\r\n if (typeof requestAnimationFrame === 'function') {\r\n // It's required to use a bounded function because IE sometimes throws\r\n // an \"Invalid calling object\" error if rAF is invoked without the global\r\n // object on the left hand side.\r\n return requestAnimationFrame.bind(global$1);\r\n }\r\n return function (callback) { return setTimeout(function () { return callback(Date.now()); }, 1000 / 60); };\r\n})();\n\n// Defines minimum timeout before adding a trailing call.\r\nvar trailingTimeout = 2;\r\n/**\r\n * Creates a wrapper function which ensures that provided callback will be\r\n * invoked only once during the specified delay period.\r\n *\r\n * @param {Function} callback - Function to be invoked after the delay period.\r\n * @param {number} delay - Delay after which to invoke callback.\r\n * @returns {Function}\r\n */\r\nfunction throttle (callback, delay) {\r\n var leadingCall = false, trailingCall = false, lastCallTime = 0;\r\n /**\r\n * Invokes the original callback function and schedules new invocation if\r\n * the \"proxy\" was called during current request.\r\n *\r\n * @returns {void}\r\n */\r\n function resolvePending() {\r\n if (leadingCall) {\r\n leadingCall = false;\r\n callback();\r\n }\r\n if (trailingCall) {\r\n proxy();\r\n }\r\n }\r\n /**\r\n * Callback invoked after the specified delay. 
It will further postpone\r\n * invocation of the original function delegating it to the\r\n * requestAnimationFrame.\r\n *\r\n * @returns {void}\r\n */\r\n function timeoutCallback() {\r\n requestAnimationFrame$1(resolvePending);\r\n }\r\n /**\r\n * Schedules invocation of the original function.\r\n *\r\n * @returns {void}\r\n */\r\n function proxy() {\r\n var timeStamp = Date.now();\r\n if (leadingCall) {\r\n // Reject immediately following calls.\r\n if (timeStamp - lastCallTime < trailingTimeout) {\r\n return;\r\n }\r\n // Schedule new call to be in invoked when the pending one is resolved.\r\n // This is important for \"transitions\" which never actually start\r\n // immediately so there is a chance that we might miss one if change\r\n // happens amids the pending invocation.\r\n trailingCall = true;\r\n }\r\n else {\r\n leadingCall = true;\r\n trailingCall = false;\r\n setTimeout(timeoutCallback, delay);\r\n }\r\n lastCallTime = timeStamp;\r\n }\r\n return proxy;\r\n}\n\n// Minimum delay before invoking the update of observers.\r\nvar REFRESH_DELAY = 20;\r\n// A list of substrings of CSS properties used to find transition events that\r\n// might affect dimensions of observed elements.\r\nvar transitionKeys = ['top', 'right', 'bottom', 'left', 'width', 'height', 'size', 'weight'];\r\n// Check if MutationObserver is available.\r\nvar mutationObserverSupported = typeof MutationObserver !== 'undefined';\r\n/**\r\n * Singleton controller class which handles updates of ResizeObserver instances.\r\n */\r\nvar ResizeObserverController = /** @class */ (function () {\r\n /**\r\n * Creates a new instance of ResizeObserverController.\r\n *\r\n * @private\r\n */\r\n function ResizeObserverController() {\r\n /**\r\n * Indicates whether DOM listeners have been added.\r\n *\r\n * @private {boolean}\r\n */\r\n this.connected_ = false;\r\n /**\r\n * Tells that controller has subscribed for Mutation Events.\r\n *\r\n * @private {boolean}\r\n */\r\n this.mutationEventsAdded_ = false;\r\n /**\r\n * Keeps reference to the instance of MutationObserver.\r\n *\r\n * @private {MutationObserver}\r\n */\r\n this.mutationsObserver_ = null;\r\n /**\r\n * A list of connected observers.\r\n *\r\n * @private {Array}\r\n */\r\n this.observers_ = [];\r\n this.onTransitionEnd_ = this.onTransitionEnd_.bind(this);\r\n this.refresh = throttle(this.refresh.bind(this), REFRESH_DELAY);\r\n }\r\n /**\r\n * Adds observer to observers list.\r\n *\r\n * @param {ResizeObserverSPI} observer - Observer to be added.\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.addObserver = function (observer) {\r\n if (!~this.observers_.indexOf(observer)) {\r\n this.observers_.push(observer);\r\n }\r\n // Add listeners if they haven't been added yet.\r\n if (!this.connected_) {\r\n this.connect_();\r\n }\r\n };\r\n /**\r\n * Removes observer from observers list.\r\n *\r\n * @param {ResizeObserverSPI} observer - Observer to be removed.\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.removeObserver = function (observer) {\r\n var observers = this.observers_;\r\n var index = observers.indexOf(observer);\r\n // Remove observer if it's present in registry.\r\n if (~index) {\r\n observers.splice(index, 1);\r\n }\r\n // Remove listeners if controller has no connected observers.\r\n if (!observers.length && this.connected_) {\r\n this.disconnect_();\r\n }\r\n };\r\n /**\r\n * Invokes the update of observers. 
It will continue running updates insofar\r\n * it detects changes.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.refresh = function () {\r\n var changesDetected = this.updateObservers_();\r\n // Continue running updates if changes have been detected as there might\r\n // be future ones caused by CSS transitions.\r\n if (changesDetected) {\r\n this.refresh();\r\n }\r\n };\r\n /**\r\n * Updates every observer from observers list and notifies them of queued\r\n * entries.\r\n *\r\n * @private\r\n * @returns {boolean} Returns \"true\" if any observer has detected changes in\r\n * dimensions of it's elements.\r\n */\r\n ResizeObserverController.prototype.updateObservers_ = function () {\r\n // Collect observers that have active observations.\r\n var activeObservers = this.observers_.filter(function (observer) {\r\n return observer.gatherActive(), observer.hasActive();\r\n });\r\n // Deliver notifications in a separate cycle in order to avoid any\r\n // collisions between observers, e.g. when multiple instances of\r\n // ResizeObserver are tracking the same element and the callback of one\r\n // of them changes content dimensions of the observed target. Sometimes\r\n // this may result in notifications being blocked for the rest of observers.\r\n activeObservers.forEach(function (observer) { return observer.broadcastActive(); });\r\n return activeObservers.length > 0;\r\n };\r\n /**\r\n * Initializes DOM listeners.\r\n *\r\n * @private\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.connect_ = function () {\r\n // Do nothing if running in a non-browser environment or if listeners\r\n // have been already added.\r\n if (!isBrowser || this.connected_) {\r\n return;\r\n }\r\n // Subscription to the \"Transitionend\" event is used as a workaround for\r\n // delayed transitions. This way it's possible to capture at least the\r\n // final state of an element.\r\n document.addEventListener('transitionend', this.onTransitionEnd_);\r\n window.addEventListener('resize', this.refresh);\r\n if (mutationObserverSupported) {\r\n this.mutationsObserver_ = new MutationObserver(this.refresh);\r\n this.mutationsObserver_.observe(document, {\r\n attributes: true,\r\n childList: true,\r\n characterData: true,\r\n subtree: true\r\n });\r\n }\r\n else {\r\n document.addEventListener('DOMSubtreeModified', this.refresh);\r\n this.mutationEventsAdded_ = true;\r\n }\r\n this.connected_ = true;\r\n };\r\n /**\r\n * Removes DOM listeners.\r\n *\r\n * @private\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.disconnect_ = function () {\r\n // Do nothing if running in a non-browser environment or if listeners\r\n // have been already removed.\r\n if (!isBrowser || !this.connected_) {\r\n return;\r\n }\r\n document.removeEventListener('transitionend', this.onTransitionEnd_);\r\n window.removeEventListener('resize', this.refresh);\r\n if (this.mutationsObserver_) {\r\n this.mutationsObserver_.disconnect();\r\n }\r\n if (this.mutationEventsAdded_) {\r\n document.removeEventListener('DOMSubtreeModified', this.refresh);\r\n }\r\n this.mutationsObserver_ = null;\r\n this.mutationEventsAdded_ = false;\r\n this.connected_ = false;\r\n };\r\n /**\r\n * \"Transitionend\" event handler.\r\n *\r\n * @private\r\n * @param {TransitionEvent} event\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.onTransitionEnd_ = function (_a) {\r\n var _b = _a.propertyName, propertyName = _b === void 0 ? 
'' : _b;\r\n // Detect whether transition may affect dimensions of an element.\r\n var isReflowProperty = transitionKeys.some(function (key) {\r\n return !!~propertyName.indexOf(key);\r\n });\r\n if (isReflowProperty) {\r\n this.refresh();\r\n }\r\n };\r\n /**\r\n * Returns instance of the ResizeObserverController.\r\n *\r\n * @returns {ResizeObserverController}\r\n */\r\n ResizeObserverController.getInstance = function () {\r\n if (!this.instance_) {\r\n this.instance_ = new ResizeObserverController();\r\n }\r\n return this.instance_;\r\n };\r\n /**\r\n * Holds reference to the controller's instance.\r\n *\r\n * @private {ResizeObserverController}\r\n */\r\n ResizeObserverController.instance_ = null;\r\n return ResizeObserverController;\r\n}());\n\n/**\r\n * Defines non-writable/enumerable properties of the provided target object.\r\n *\r\n * @param {Object} target - Object for which to define properties.\r\n * @param {Object} props - Properties to be defined.\r\n * @returns {Object} Target object.\r\n */\r\nvar defineConfigurable = (function (target, props) {\r\n for (var _i = 0, _a = Object.keys(props); _i < _a.length; _i++) {\r\n var key = _a[_i];\r\n Object.defineProperty(target, key, {\r\n value: props[key],\r\n enumerable: false,\r\n writable: false,\r\n configurable: true\r\n });\r\n }\r\n return target;\r\n});\n\n/**\r\n * Returns the global object associated with provided element.\r\n *\r\n * @param {Object} target\r\n * @returns {Object}\r\n */\r\nvar getWindowOf = (function (target) {\r\n // Assume that the element is an instance of Node, which means that it\r\n // has the \"ownerDocument\" property from which we can retrieve a\r\n // corresponding global object.\r\n var ownerGlobal = target && target.ownerDocument && target.ownerDocument.defaultView;\r\n // Return the local global object if it's not possible extract one from\r\n // provided element.\r\n return ownerGlobal || global$1;\r\n});\n\n// Placeholder of an empty content rectangle.\r\nvar emptyRect = createRectInit(0, 0, 0, 0);\r\n/**\r\n * Converts provided string to a number.\r\n *\r\n * @param {number|string} value\r\n * @returns {number}\r\n */\r\nfunction toFloat(value) {\r\n return parseFloat(value) || 0;\r\n}\r\n/**\r\n * Extracts borders size from provided styles.\r\n *\r\n * @param {CSSStyleDeclaration} styles\r\n * @param {...string} positions - Borders positions (top, right, ...)\r\n * @returns {number}\r\n */\r\nfunction getBordersSize(styles) {\r\n var positions = [];\r\n for (var _i = 1; _i < arguments.length; _i++) {\r\n positions[_i - 1] = arguments[_i];\r\n }\r\n return positions.reduce(function (size, position) {\r\n var value = styles['border-' + position + '-width'];\r\n return size + toFloat(value);\r\n }, 0);\r\n}\r\n/**\r\n * Extracts paddings sizes from provided styles.\r\n *\r\n * @param {CSSStyleDeclaration} styles\r\n * @returns {Object} Paddings box.\r\n */\r\nfunction getPaddings(styles) {\r\n var positions = ['top', 'right', 'bottom', 'left'];\r\n var paddings = {};\r\n for (var _i = 0, positions_1 = positions; _i < positions_1.length; _i++) {\r\n var position = positions_1[_i];\r\n var value = styles['padding-' + position];\r\n paddings[position] = toFloat(value);\r\n }\r\n return paddings;\r\n}\r\n/**\r\n * Calculates content rectangle of provided SVG element.\r\n *\r\n * @param {SVGGraphicsElement} target - Element content rectangle of which needs\r\n * to be calculated.\r\n * @returns {DOMRectInit}\r\n */\r\nfunction getSVGContentRect(target) {\r\n var bbox = target.getBBox();\r\n 
return createRectInit(0, 0, bbox.width, bbox.height);\r\n}\r\n/**\r\n * Calculates content rectangle of provided HTMLElement.\r\n *\r\n * @param {HTMLElement} target - Element for which to calculate the content rectangle.\r\n * @returns {DOMRectInit}\r\n */\r\nfunction getHTMLElementContentRect(target) {\r\n // Client width & height properties can't be\r\n // used exclusively as they provide rounded values.\r\n var clientWidth = target.clientWidth, clientHeight = target.clientHeight;\r\n // By this condition we can catch all non-replaced inline, hidden and\r\n // detached elements. Though elements with width & height properties less\r\n // than 0.5 will be discarded as well.\r\n //\r\n // Without it we would need to implement separate methods for each of\r\n // those cases and it's not possible to perform a precise and performance\r\n // effective test for hidden elements. E.g. even jQuery's ':visible' filter\r\n // gives wrong results for elements with width & height less than 0.5.\r\n if (!clientWidth && !clientHeight) {\r\n return emptyRect;\r\n }\r\n var styles = getWindowOf(target).getComputedStyle(target);\r\n var paddings = getPaddings(styles);\r\n var horizPad = paddings.left + paddings.right;\r\n var vertPad = paddings.top + paddings.bottom;\r\n // Computed styles of width & height are being used because they are the\r\n // only dimensions available to JS that contain non-rounded values. It could\r\n // be possible to utilize the getBoundingClientRect if only it's data wasn't\r\n // affected by CSS transformations let alone paddings, borders and scroll bars.\r\n var width = toFloat(styles.width), height = toFloat(styles.height);\r\n // Width & height include paddings and borders when the 'border-box' box\r\n // model is applied (except for IE).\r\n if (styles.boxSizing === 'border-box') {\r\n // Following conditions are required to handle Internet Explorer which\r\n // doesn't include paddings and borders to computed CSS dimensions.\r\n //\r\n // We can say that if CSS dimensions + paddings are equal to the \"client\"\r\n // properties then it's either IE, and thus we don't need to subtract\r\n // anything, or an element merely doesn't have paddings/borders styles.\r\n if (Math.round(width + horizPad) !== clientWidth) {\r\n width -= getBordersSize(styles, 'left', 'right') + horizPad;\r\n }\r\n if (Math.round(height + vertPad) !== clientHeight) {\r\n height -= getBordersSize(styles, 'top', 'bottom') + vertPad;\r\n }\r\n }\r\n // Following steps can't be applied to the document's root element as its\r\n // client[Width/Height] properties represent viewport area of the window.\r\n // Besides, it's as well not necessary as the itself neither has\r\n // rendered scroll bars nor it can be clipped.\r\n if (!isDocumentElement(target)) {\r\n // In some browsers (only in Firefox, actually) CSS width & height\r\n // include scroll bars size which can be removed at this step as scroll\r\n // bars are the only difference between rounded dimensions + paddings\r\n // and \"client\" properties, though that is not always true in Chrome.\r\n var vertScrollbar = Math.round(width + horizPad) - clientWidth;\r\n var horizScrollbar = Math.round(height + vertPad) - clientHeight;\r\n // Chrome has a rather weird rounding of \"client\" properties.\r\n // E.g. for an element with content width of 314.2px it sometimes gives\r\n // the client width of 315px and for the width of 314.7px it may give\r\n // 314px. And it doesn't happen all the time. 
So just ignore this delta\r\n // as a non-relevant.\r\n if (Math.abs(vertScrollbar) !== 1) {\r\n width -= vertScrollbar;\r\n }\r\n if (Math.abs(horizScrollbar) !== 1) {\r\n height -= horizScrollbar;\r\n }\r\n }\r\n return createRectInit(paddings.left, paddings.top, width, height);\r\n}\r\n/**\r\n * Checks whether provided element is an instance of the SVGGraphicsElement.\r\n *\r\n * @param {Element} target - Element to be checked.\r\n * @returns {boolean}\r\n */\r\nvar isSVGGraphicsElement = (function () {\r\n // Some browsers, namely IE and Edge, don't have the SVGGraphicsElement\r\n // interface.\r\n if (typeof SVGGraphicsElement !== 'undefined') {\r\n return function (target) { return target instanceof getWindowOf(target).SVGGraphicsElement; };\r\n }\r\n // If it's so, then check that element is at least an instance of the\r\n // SVGElement and that it has the \"getBBox\" method.\r\n // eslint-disable-next-line no-extra-parens\r\n return function (target) { return (target instanceof getWindowOf(target).SVGElement &&\r\n typeof target.getBBox === 'function'); };\r\n})();\r\n/**\r\n * Checks whether provided element is a document element ().\r\n *\r\n * @param {Element} target - Element to be checked.\r\n * @returns {boolean}\r\n */\r\nfunction isDocumentElement(target) {\r\n return target === getWindowOf(target).document.documentElement;\r\n}\r\n/**\r\n * Calculates an appropriate content rectangle for provided html or svg element.\r\n *\r\n * @param {Element} target - Element content rectangle of which needs to be calculated.\r\n * @returns {DOMRectInit}\r\n */\r\nfunction getContentRect(target) {\r\n if (!isBrowser) {\r\n return emptyRect;\r\n }\r\n if (isSVGGraphicsElement(target)) {\r\n return getSVGContentRect(target);\r\n }\r\n return getHTMLElementContentRect(target);\r\n}\r\n/**\r\n * Creates rectangle with an interface of the DOMRectReadOnly.\r\n * Spec: https://drafts.fxtf.org/geometry/#domrectreadonly\r\n *\r\n * @param {DOMRectInit} rectInit - Object with rectangle's x/y coordinates and dimensions.\r\n * @returns {DOMRectReadOnly}\r\n */\r\nfunction createReadOnlyRect(_a) {\r\n var x = _a.x, y = _a.y, width = _a.width, height = _a.height;\r\n // If DOMRectReadOnly is available use it as a prototype for the rectangle.\r\n var Constr = typeof DOMRectReadOnly !== 'undefined' ? 
DOMRectReadOnly : Object;\r\n var rect = Object.create(Constr.prototype);\r\n // Rectangle's properties are not writable and non-enumerable.\r\n defineConfigurable(rect, {\r\n x: x, y: y, width: width, height: height,\r\n top: y,\r\n right: x + width,\r\n bottom: height + y,\r\n left: x\r\n });\r\n return rect;\r\n}\r\n/**\r\n * Creates DOMRectInit object based on the provided dimensions and the x/y coordinates.\r\n * Spec: https://drafts.fxtf.org/geometry/#dictdef-domrectinit\r\n *\r\n * @param {number} x - X coordinate.\r\n * @param {number} y - Y coordinate.\r\n * @param {number} width - Rectangle's width.\r\n * @param {number} height - Rectangle's height.\r\n * @returns {DOMRectInit}\r\n */\r\nfunction createRectInit(x, y, width, height) {\r\n return { x: x, y: y, width: width, height: height };\r\n}\n\n/**\r\n * Class that is responsible for computations of the content rectangle of\r\n * provided DOM element and for keeping track of it's changes.\r\n */\r\nvar ResizeObservation = /** @class */ (function () {\r\n /**\r\n * Creates an instance of ResizeObservation.\r\n *\r\n * @param {Element} target - Element to be observed.\r\n */\r\n function ResizeObservation(target) {\r\n /**\r\n * Broadcasted width of content rectangle.\r\n *\r\n * @type {number}\r\n */\r\n this.broadcastWidth = 0;\r\n /**\r\n * Broadcasted height of content rectangle.\r\n *\r\n * @type {number}\r\n */\r\n this.broadcastHeight = 0;\r\n /**\r\n * Reference to the last observed content rectangle.\r\n *\r\n * @private {DOMRectInit}\r\n */\r\n this.contentRect_ = createRectInit(0, 0, 0, 0);\r\n this.target = target;\r\n }\r\n /**\r\n * Updates content rectangle and tells whether it's width or height properties\r\n * have changed since the last broadcast.\r\n *\r\n * @returns {boolean}\r\n */\r\n ResizeObservation.prototype.isActive = function () {\r\n var rect = getContentRect(this.target);\r\n this.contentRect_ = rect;\r\n return (rect.width !== this.broadcastWidth ||\r\n rect.height !== this.broadcastHeight);\r\n };\r\n /**\r\n * Updates 'broadcastWidth' and 'broadcastHeight' properties with a data\r\n * from the corresponding properties of the last observed content rectangle.\r\n *\r\n * @returns {DOMRectInit} Last observed content rectangle.\r\n */\r\n ResizeObservation.prototype.broadcastRect = function () {\r\n var rect = this.contentRect_;\r\n this.broadcastWidth = rect.width;\r\n this.broadcastHeight = rect.height;\r\n return rect;\r\n };\r\n return ResizeObservation;\r\n}());\n\nvar ResizeObserverEntry = /** @class */ (function () {\r\n /**\r\n * Creates an instance of ResizeObserverEntry.\r\n *\r\n * @param {Element} target - Element that is being observed.\r\n * @param {DOMRectInit} rectInit - Data of the element's content rectangle.\r\n */\r\n function ResizeObserverEntry(target, rectInit) {\r\n var contentRect = createReadOnlyRect(rectInit);\r\n // According to the specification following properties are not writable\r\n // and are also not enumerable in the native implementation.\r\n //\r\n // Property accessors are not being used as they'd require to define a\r\n // private WeakMap storage which may cause memory leaks in browsers that\r\n // don't support this type of collections.\r\n defineConfigurable(this, { target: target, contentRect: contentRect });\r\n }\r\n return ResizeObserverEntry;\r\n}());\n\nvar ResizeObserverSPI = /** @class */ (function () {\r\n /**\r\n * Creates a new instance of ResizeObserver.\r\n *\r\n * @param {ResizeObserverCallback} callback - Callback function that is invoked\r\n * 
when one of the observed elements changes it's content dimensions.\r\n * @param {ResizeObserverController} controller - Controller instance which\r\n * is responsible for the updates of observer.\r\n * @param {ResizeObserver} callbackCtx - Reference to the public\r\n * ResizeObserver instance which will be passed to callback function.\r\n */\r\n function ResizeObserverSPI(callback, controller, callbackCtx) {\r\n /**\r\n * Collection of resize observations that have detected changes in dimensions\r\n * of elements.\r\n *\r\n * @private {Array}\r\n */\r\n this.activeObservations_ = [];\r\n /**\r\n * Registry of the ResizeObservation instances.\r\n *\r\n * @private {Map}\r\n */\r\n this.observations_ = new MapShim();\r\n if (typeof callback !== 'function') {\r\n throw new TypeError('The callback provided as parameter 1 is not a function.');\r\n }\r\n this.callback_ = callback;\r\n this.controller_ = controller;\r\n this.callbackCtx_ = callbackCtx;\r\n }\r\n /**\r\n * Starts observing provided element.\r\n *\r\n * @param {Element} target - Element to be observed.\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.observe = function (target) {\r\n if (!arguments.length) {\r\n throw new TypeError('1 argument required, but only 0 present.');\r\n }\r\n // Do nothing if current environment doesn't have the Element interface.\r\n if (typeof Element === 'undefined' || !(Element instanceof Object)) {\r\n return;\r\n }\r\n if (!(target instanceof getWindowOf(target).Element)) {\r\n throw new TypeError('parameter 1 is not of type \"Element\".');\r\n }\r\n var observations = this.observations_;\r\n // Do nothing if element is already being observed.\r\n if (observations.has(target)) {\r\n return;\r\n }\r\n observations.set(target, new ResizeObservation(target));\r\n this.controller_.addObserver(this);\r\n // Force the update of observations.\r\n this.controller_.refresh();\r\n };\r\n /**\r\n * Stops observing provided element.\r\n *\r\n * @param {Element} target - Element to stop observing.\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.unobserve = function (target) {\r\n if (!arguments.length) {\r\n throw new TypeError('1 argument required, but only 0 present.');\r\n }\r\n // Do nothing if current environment doesn't have the Element interface.\r\n if (typeof Element === 'undefined' || !(Element instanceof Object)) {\r\n return;\r\n }\r\n if (!(target instanceof getWindowOf(target).Element)) {\r\n throw new TypeError('parameter 1 is not of type \"Element\".');\r\n }\r\n var observations = this.observations_;\r\n // Do nothing if element is not being observed.\r\n if (!observations.has(target)) {\r\n return;\r\n }\r\n observations.delete(target);\r\n if (!observations.size) {\r\n this.controller_.removeObserver(this);\r\n }\r\n };\r\n /**\r\n * Stops observing all elements.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.disconnect = function () {\r\n this.clearActive();\r\n this.observations_.clear();\r\n this.controller_.removeObserver(this);\r\n };\r\n /**\r\n * Collects observation instances the associated element of which has changed\r\n * it's content rectangle.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.gatherActive = function () {\r\n var _this = this;\r\n this.clearActive();\r\n this.observations_.forEach(function (observation) {\r\n if (observation.isActive()) {\r\n _this.activeObservations_.push(observation);\r\n }\r\n });\r\n };\r\n /**\r\n * Invokes initial callback function with a list of ResizeObserverEntry\r\n * instances 
collected from active resize observations.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.broadcastActive = function () {\r\n // Do nothing if observer doesn't have active observations.\r\n if (!this.hasActive()) {\r\n return;\r\n }\r\n var ctx = this.callbackCtx_;\r\n // Create ResizeObserverEntry instance for every active observation.\r\n var entries = this.activeObservations_.map(function (observation) {\r\n return new ResizeObserverEntry(observation.target, observation.broadcastRect());\r\n });\r\n this.callback_.call(ctx, entries, ctx);\r\n this.clearActive();\r\n };\r\n /**\r\n * Clears the collection of active observations.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.clearActive = function () {\r\n this.activeObservations_.splice(0);\r\n };\r\n /**\r\n * Tells whether observer has active observations.\r\n *\r\n * @returns {boolean}\r\n */\r\n ResizeObserverSPI.prototype.hasActive = function () {\r\n return this.activeObservations_.length > 0;\r\n };\r\n return ResizeObserverSPI;\r\n}());\n\n// Registry of internal observers. If WeakMap is not available use current shim\r\n// for the Map collection as it has all required methods and because WeakMap\r\n// can't be fully polyfilled anyway.\r\nvar observers = typeof WeakMap !== 'undefined' ? new WeakMap() : new MapShim();\r\n/**\r\n * ResizeObserver API. Encapsulates the ResizeObserver SPI implementation\r\n * exposing only those methods and properties that are defined in the spec.\r\n */\r\nvar ResizeObserver = /** @class */ (function () {\r\n /**\r\n * Creates a new instance of ResizeObserver.\r\n *\r\n * @param {ResizeObserverCallback} callback - Callback that is invoked when\r\n * dimensions of the observed elements change.\r\n */\r\n function ResizeObserver(callback) {\r\n if (!(this instanceof ResizeObserver)) {\r\n throw new TypeError('Cannot call a class as a function.');\r\n }\r\n if (!arguments.length) {\r\n throw new TypeError('1 argument required, but only 0 present.');\r\n }\r\n var controller = ResizeObserverController.getInstance();\r\n var observer = new ResizeObserverSPI(callback, controller, this);\r\n observers.set(this, observer);\r\n }\r\n return ResizeObserver;\r\n}());\r\n// Expose public methods of ResizeObserver.\r\n[\r\n 'observe',\r\n 'unobserve',\r\n 'disconnect'\r\n].forEach(function (method) {\r\n ResizeObserver.prototype[method] = function () {\r\n var _a;\r\n return (_a = observers.get(this))[method].apply(_a, arguments);\r\n };\r\n});\n\nvar index = (function () {\r\n // Export existing implementation if available.\r\n if (typeof global$1.ResizeObserver !== 'undefined') {\r\n return global$1.ResizeObserver;\r\n }\r\n return ResizeObserver;\r\n})();\n\nexport default index;\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 
MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport ResizeObserver from \"resize-observer-polyfill\"\nimport {\n NEVER,\n Observable,\n Subject,\n defer,\n filter,\n finalize,\n map,\n merge,\n of,\n shareReplay,\n startWith,\n switchMap,\n tap\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Element offset\n */\nexport interface ElementSize {\n width: number /* Element width */\n height: number /* Element height */\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Resize observer entry subject\n */\nconst entry$ = new Subject()\n\n/**\n * Resize observer observable\n *\n * This observable will create a `ResizeObserver` on the first subscription\n * and will automatically terminate it when there are no more subscribers.\n * It's quite important to centralize observation in a single `ResizeObserver`,\n * as the performance difference can be quite dramatic, as the link shows.\n *\n * @see https://bit.ly/3iIYfEm - Google Groups on performance\n */\nconst observer$ = defer(() => of(\n new ResizeObserver(entries => {\n for (const entry of entries)\n entry$.next(entry)\n })\n))\n .pipe(\n switchMap(observer => merge(NEVER, of(observer))\n .pipe(\n finalize(() => observer.disconnect())\n )\n ),\n shareReplay(1)\n )\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element size\n *\n * @param el - Element\n *\n * @returns Element size\n */\nexport function getElementSize(\n el: HTMLElement\n): ElementSize {\n return {\n width: el.offsetWidth,\n height: el.offsetHeight\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch element size\n *\n * This function returns an observable that subscribes to a single internal\n * instance of `ResizeObserver` upon subscription, and emit resize events until\n * termination. Note that this function should not be called with the same\n * element twice, as the first unsubscription will terminate observation.\n *\n * Sadly, we can't use the `DOMRect` objects returned by the observer, because\n * we need the emitted values to be consistent with `getElementSize`, which will\n * return the used values (rounded) and not actual values (unrounded). Thus, we\n * use the `offset*` properties. 
See the linked GitHub issue.\n *\n * @see https://bit.ly/3m0k3he - GitHub issue\n *\n * @param el - Element\n *\n * @returns Element size observable\n */\nexport function watchElementSize(\n el: HTMLElement\n): Observable {\n return observer$\n .pipe(\n tap(observer => observer.observe(el)),\n switchMap(observer => entry$\n .pipe(\n filter(({ target }) => target === el),\n finalize(() => observer.unobserve(el)),\n map(() => getElementSize(el))\n )\n ),\n startWith(getElementSize(el))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { ElementSize } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element content size (= scroll width and height)\n *\n * @param el - Element\n *\n * @returns Element content size\n */\nexport function getElementContentSize(\n el: HTMLElement\n): ElementSize {\n return {\n width: el.scrollWidth,\n height: el.scrollHeight\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n NEVER,\n Observable,\n Subject,\n defer,\n distinctUntilChanged,\n filter,\n finalize,\n map,\n merge,\n of,\n shareReplay,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport {\n getElementContentSize,\n getElementSize,\n watchElementContentOffset\n} from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Intersection observer entry subject\n */\nconst entry$ = new Subject()\n\n/**\n * Intersection observer observable\n *\n * This observable will create an `IntersectionObserver` on first subscription\n * and will automatically terminate it when there are no more subscribers.\n *\n * @see https://bit.ly/3iIYfEm - Google Groups on performance\n */\nconst observer$ = defer(() => of(\n new IntersectionObserver(entries => {\n for (const entry of entries)\n entry$.next(entry)\n }, {\n threshold: 0\n })\n))\n .pipe(\n switchMap(observer => merge(NEVER, of(observer))\n .pipe(\n finalize(() => observer.disconnect())\n )\n ),\n shareReplay(1)\n )\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch element visibility\n *\n * @param el - Element\n *\n * @returns Element visibility observable\n */\nexport function watchElementVisibility(\n el: HTMLElement\n): Observable {\n return observer$\n .pipe(\n tap(observer => observer.observe(el)),\n switchMap(observer => entry$\n .pipe(\n filter(({ target }) => target === el),\n finalize(() => observer.unobserve(el)),\n map(({ isIntersecting }) => isIntersecting)\n )\n )\n )\n}\n\n/**\n * Watch element boundary\n *\n * This function returns an observable which emits whether the bottom content\n * boundary (= scroll offset) of an element is within a certain threshold.\n *\n * @param el - Element\n * @param threshold - Threshold\n *\n * @returns Element boundary observable\n */\nexport function watchElementBoundary(\n el: HTMLElement, threshold = 16\n): Observable {\n return watchElementContentOffset(el)\n .pipe(\n map(({ y }) => {\n const visible = getElementSize(el)\n const content = getElementContentSize(el)\n return y >= (\n content.height - visible.height - threshold\n )\n }),\n distinctUntilChanged()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n startWith\n} from \"rxjs\"\n\nimport { getElement } from \"../element\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Toggle\n */\nexport type Toggle =\n | \"drawer\" /* Toggle for drawer */\n | \"search\" /* Toggle for search */\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Toggle map\n */\nconst toggles: Record = {\n drawer: getElement(\"[data-md-toggle=drawer]\"),\n search: getElement(\"[data-md-toggle=search]\")\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve the value of a toggle\n *\n * @param name - Toggle\n *\n * @returns Toggle value\n */\nexport function getToggle(name: Toggle): boolean {\n return toggles[name].checked\n}\n\n/**\n * Set toggle\n *\n * Simulating a click event seems to be the most cross-browser compatible way\n * of changing the value while also emitting a `change` event. Before, Material\n * used `CustomEvent` to programmatically change the value of a toggle, but this\n * is a much simpler and cleaner solution which doesn't require a polyfill.\n *\n * @param name - Toggle\n * @param value - Toggle value\n */\nexport function setToggle(name: Toggle, value: boolean): void {\n if (toggles[name].checked !== value)\n toggles[name].click()\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch toggle\n *\n * @param name - Toggle\n *\n * @returns Toggle value observable\n */\nexport function watchToggle(name: Toggle): Observable {\n const el = toggles[name]\n return fromEvent(el, \"change\")\n .pipe(\n map(() => el.checked),\n startWith(el.checked)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n filter,\n fromEvent,\n map,\n share\n} from \"rxjs\"\n\nimport { getActiveElement } from \"../element\"\nimport { getToggle } from \"../toggle\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Keyboard mode\n */\nexport type KeyboardMode =\n | \"global\" /* Global */\n | \"search\" /* Search is open */\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Keyboard\n */\nexport interface Keyboard {\n mode: KeyboardMode /* Keyboard mode */\n type: string /* Key type */\n claim(): void /* Key claim */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Check whether an element may receive keyboard input\n *\n * @param el - Element\n * @param type - Key type\n *\n * @returns Test result\n */\nfunction isSusceptibleToKeyboard(\n el: HTMLElement, type: string\n): boolean {\n switch (el.constructor) {\n\n /* Input elements */\n case HTMLInputElement:\n /* @ts-expect-error - omit unnecessary type cast */\n if (el.type === \"radio\")\n return /^Arrow/.test(type)\n else\n return true\n\n /* Select element and textarea */\n case HTMLSelectElement:\n case HTMLTextAreaElement:\n return true\n\n /* Everything else */\n default:\n return el.isContentEditable\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch keyboard\n *\n * @returns Keyboard observable\n */\nexport function watchKeyboard(): Observable {\n return fromEvent(window, \"keydown\")\n .pipe(\n filter(ev => !(ev.metaKey || ev.ctrlKey)),\n map(ev => ({\n mode: getToggle(\"search\") ? \"search\" : \"global\",\n type: ev.key,\n claim() {\n ev.preventDefault()\n ev.stopPropagation()\n }\n } as Keyboard)),\n filter(({ mode, type }) => {\n if (mode === \"global\") {\n const active = getActiveElement()\n if (typeof active !== \"undefined\")\n return !isSusceptibleToKeyboard(active, type)\n }\n return true\n }),\n share()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Subject } from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve location\n *\n * This function returns a `URL` object (and not `Location`) to normalize the\n * typings across the application. Furthermore, locations need to be tracked\n * without setting them and `Location` is a singleton which represents the\n * current location.\n *\n * @returns URL\n */\nexport function getLocation(): URL {\n return new URL(location.href)\n}\n\n/**\n * Set location\n *\n * @param url - URL to change to\n */\nexport function setLocation(url: URL): void {\n location.href = url.href\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch location\n *\n * @returns Location subject\n */\nexport function watchLocation(): Subject {\n return new Subject()\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { JSX as JSXInternal } from \"preact\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * HTML attributes\n */\ntype Attributes =\n & JSXInternal.HTMLAttributes\n & JSXInternal.SVGAttributes\n & Record\n\n/**\n * Child element\n */\ntype Child =\n | HTMLElement\n | Text\n | string\n | number\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Append a child node to an element\n *\n * @param el - Element\n * @param child - Child node(s)\n */\nfunction appendChild(el: HTMLElement, child: Child | Child[]): void {\n\n /* Handle primitive types (including raw HTML) */\n if (typeof child === \"string\" || typeof child === \"number\") {\n el.innerHTML += child.toString()\n\n /* Handle nodes */\n } else if (child instanceof Node) {\n el.appendChild(child)\n\n /* Handle nested children */\n } else if (Array.isArray(child)) {\n for (const node of child)\n appendChild(el, node)\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * JSX factory\n *\n * @template T - Element type\n *\n * @param tag - HTML tag\n * @param attributes - HTML attributes\n * @param children - Child elements\n *\n * @returns Element\n */\nexport function h(\n tag: T, attributes?: Attributes | null, ...children: Child[]\n): HTMLElementTagNameMap[T]\n\nexport function h(\n tag: string, attributes?: Attributes | null, ...children: Child[]\n): T\n\nexport function h(\n tag: string, attributes?: Attributes | null, ...children: Child[]\n): T {\n const el = document.createElement(tag)\n\n /* Set attributes, if any */\n if (attributes)\n for (const attr of Object.keys(attributes))\n if (typeof attributes[attr] !== \"boolean\")\n el.setAttribute(attr, attributes[attr])\n else if (attributes[attr])\n el.setAttribute(attr, \"\")\n\n /* Append child nodes */\n for (const child of children)\n appendChild(el, child)\n\n /* Return element */\n return el as T\n}\n\n/* ----------------------------------------------------------------------------\n * Namespace\n * ------------------------------------------------------------------------- */\n\nexport declare namespace h {\n namespace JSX {\n type Element = HTMLElement\n type IntrinsicElements = JSXInternal.IntrinsicElements\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE 
IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Truncate a string after the given number of characters\n *\n * This is not a very reasonable approach, since the summaries kind of suck.\n * It would be better to create something more intelligent, highlighting the\n * search occurrences and making a better summary out of it, but this note was\n * written three years ago, so who knows if we'll ever fix it.\n *\n * @param value - Value to be truncated\n * @param n - Number of characters\n *\n * @returns Truncated value\n */\nexport function truncate(value: string, n: number): string {\n let i = n\n if (value.length > i) {\n while (value[i] !== \" \" && --i > 0) { /* keep eating */ }\n return `${value.substring(0, i)}...`\n }\n return value\n}\n\n/**\n * Round a number for display with repository facts\n *\n * This is a reverse-engineered version of GitHub's weird rounding algorithm\n * for stars, forks and all other numbers. While all numbers below `1,000` are\n * returned as-is, bigger numbers are converted to fixed numbers:\n *\n * - `1,049` => `1k`\n * - `1,050` => `1.1k`\n * - `1,949` => `1.9k`\n * - `1,950` => `2k`\n *\n * @param value - Original value\n *\n * @returns Rounded value\n */\nexport function round(value: number): string {\n if (value > 999) {\n const digits = +((value - 950) % 1000 > 99)\n return `${((value + 0.000001) / 1000).toFixed(digits)}k`\n } else {\n return value.toString()\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n filter,\n fromEvent,\n map,\n shareReplay,\n startWith\n} from \"rxjs\"\n\nimport { getOptionalElement } from \"~/browser\"\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve location hash\n *\n * @returns Location hash\n */\nexport function getLocationHash(): string {\n return location.hash.substring(1)\n}\n\n/**\n * Set location hash\n *\n * Setting a new fragment identifier via `location.hash` will have no effect\n * if the value doesn't change. When a new fragment identifier is set, we want\n * the browser to target the respective element at all times, which is why we\n * use this dirty little trick.\n *\n * @param hash - Location hash\n */\nexport function setLocationHash(hash: string): void {\n const el = h(\"a\", { href: hash })\n el.addEventListener(\"click\", ev => ev.stopPropagation())\n el.click()\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch location hash\n *\n * @returns Location hash observable\n */\nexport function watchLocationHash(): Observable {\n return fromEvent(window, \"hashchange\")\n .pipe(\n map(getLocationHash),\n startWith(getLocationHash()),\n filter(hash => hash.length > 0),\n shareReplay(1)\n )\n}\n\n/**\n * Watch location target\n *\n * @returns Location target observable\n */\nexport function watchLocationTarget(): Observable {\n return watchLocationHash()\n .pipe(\n map(id => getOptionalElement(`[id=\"${id}\"]`)!),\n filter(el => typeof el !== \"undefined\")\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n fromEvent,\n fromEventPattern,\n mapTo,\n merge,\n startWith,\n switchMap\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch media query\n *\n * Note that although `MediaQueryList.addListener` is deprecated we have to\n * use it, because it's the only way to ensure proper downward compatibility.\n *\n * @see https://bit.ly/3dUBH2m - GitHub issue\n *\n * @param query - Media query\n *\n * @returns Media observable\n */\nexport function watchMedia(query: string): Observable {\n const media = matchMedia(query)\n return fromEventPattern(next => (\n media.addListener(() => next(media.matches))\n ))\n .pipe(\n startWith(media.matches)\n )\n}\n\n/**\n * Watch print mode\n *\n * @returns Print observable\n */\nexport function watchPrint(): Observable {\n const media = matchMedia(\"print\")\n return merge(\n fromEvent(window, \"beforeprint\").pipe(mapTo(true)),\n fromEvent(window, \"afterprint\").pipe(mapTo(false))\n )\n .pipe(\n startWith(media.matches)\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Toggle an observable with a media observable\n *\n * @template T - Data type\n *\n * @param query$ - Media observable\n * @param factory - Observable factory\n *\n * @returns Toggled observable\n */\nexport function at(\n query$: Observable, factory: () => Observable\n): Observable {\n return query$\n .pipe(\n switchMap(active => active ? factory() : EMPTY)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n catchError,\n filter,\n from,\n map,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch the given URL\n *\n * If the request fails (e.g. 
when dispatched from `file://` locations), the\n * observable will complete without emitting a value.\n *\n * @param url - Request URL\n * @param options - Options\n *\n * @returns Response observable\n */\nexport function request(\n url: URL | string, options: RequestInit = { credentials: \"same-origin\" }\n): Observable {\n return from(fetch(`${url}`, options))\n .pipe(\n filter(res => res.status === 200),\n catchError(() => EMPTY)\n )\n}\n\n/**\n * Fetch JSON from the given URL\n *\n * @template T - Data type\n *\n * @param url - Request URL\n * @param options - Options\n *\n * @returns Data observable\n */\nexport function requestJSON(\n url: URL | string, options?: RequestInit\n): Observable {\n return request(url, options)\n .pipe(\n switchMap(res => res.json()),\n shareReplay(1)\n )\n}\n\n/**\n * Fetch XML from the given URL\n *\n * @param url - Request URL\n * @param options - Options\n *\n * @returns Data observable\n */\nexport function requestXML(\n url: URL | string, options?: RequestInit\n): Observable {\n const dom = new DOMParser()\n return request(url, options)\n .pipe(\n switchMap(res => res.text()),\n map(res => dom.parseFromString(res, \"text/xml\")),\n shareReplay(1)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n defer,\n finalize,\n fromEvent,\n mapTo,\n merge,\n switchMap,\n take,\n throwError\n} from \"rxjs\"\n\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create and load a `script` element\n *\n * This function returns an observable that will emit when the script was\n * successfully loaded, or throw an error if it didn't.\n *\n * @param src - Script URL\n *\n * @returns Script observable\n */\nexport function watchScript(src: string): Observable {\n const script = h(\"script\", { src })\n return defer(() => {\n document.head.appendChild(script)\n return merge(\n fromEvent(script, \"load\"),\n fromEvent(script, \"error\")\n .pipe(\n switchMap(() => (\n throwError(() => new ReferenceError(`Invalid script: ${src}`))\n ))\n )\n )\n .pipe(\n mapTo(undefined),\n finalize(() => document.head.removeChild(script)),\n take(1)\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n merge,\n startWith\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport offset\n */\nexport interface ViewportOffset {\n x: number /* Horizontal offset */\n y: number /* Vertical offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve viewport offset\n *\n * On iOS Safari, viewport offset can be negative due to overflow scrolling.\n * As this may induce strange behaviors downstream, we'll just limit it to 0.\n *\n * @returns Viewport offset\n */\nexport function getViewportOffset(): ViewportOffset {\n return {\n x: Math.max(0, scrollX),\n y: Math.max(0, scrollY)\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport offset\n *\n * @returns Viewport offset observable\n */\nexport function watchViewportOffset(): Observable {\n return merge(\n fromEvent(window, \"scroll\", { passive: true }),\n fromEvent(window, \"resize\", { passive: true })\n )\n .pipe(\n map(getViewportOffset),\n startWith(getViewportOffset())\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n startWith\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport size\n */\nexport interface ViewportSize {\n width: number /* Viewport width */\n height: number /* Viewport height */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve viewport size\n *\n * @returns Viewport size\n */\nexport function getViewportSize(): ViewportSize {\n return {\n width: innerWidth,\n height: innerHeight\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport size\n *\n * @returns Viewport size observable\n */\nexport function watchViewportSize(): Observable {\n return fromEvent(window, \"resize\", { passive: true })\n .pipe(\n map(getViewportSize),\n startWith(getViewportSize())\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n map,\n shareReplay\n} from \"rxjs\"\n\nimport {\n ViewportOffset,\n watchViewportOffset\n} from \"../offset\"\nimport {\n ViewportSize,\n watchViewportSize\n} from \"../size\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport\n */\nexport interface Viewport {\n offset: ViewportOffset /* Viewport offset */\n size: ViewportSize /* Viewport size */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport\n *\n * @returns Viewport observable\n */\nexport function watchViewport(): Observable {\n return combineLatest([\n watchViewportOffset(),\n watchViewportSize()\n ])\n .pipe(\n map(([offset, size]) => ({ offset, size })),\n shareReplay(1)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n distinctUntilKeyChanged,\n map\n} from \"rxjs\"\n\nimport { Header } from \"~/components\"\n\nimport { getElementOffset } from \"../../element\"\nimport { Viewport } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
/* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport relative to element\n *\n * @param el - Element\n * @param options - Options\n *\n * @returns Viewport observable\n */\nexport function watchViewportAt(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n const size$ = viewport$\n .pipe(\n distinctUntilKeyChanged(\"size\")\n )\n\n /* Compute element offset */\n const offset$ = combineLatest([size$, header$])\n .pipe(\n map(() => getElementOffset(el))\n )\n\n /* Compute relative viewport, return hot observable */\n return combineLatest([header$, viewport$, offset$])\n .pipe(\n map(([{ height }, { offset, size }, { x, y }]) => ({\n offset: {\n x: offset.x - x,\n y: offset.y - y + height\n },\n size\n }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n fromEvent,\n map,\n share,\n switchMapTo,\n tap,\n throttle\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Worker message\n */\nexport interface WorkerMessage {\n type: unknown /* Message type */\n data?: unknown /* Message data */\n}\n\n/**\n * Worker handler\n *\n * @template T - Message type\n */\nexport interface WorkerHandler<\n T extends WorkerMessage\n> {\n tx$: Subject /* Message transmission subject */\n rx$: Observable /* Message receive observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n *\n * @template T - Worker message type\n */\ninterface WatchOptions {\n tx$: Observable /* Message transmission observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch a web worker\n *\n * This function returns an observable that sends all values emitted by the\n * message observable to the web worker. Web worker communication is expected\n * to be bidirectional (request-response) and synchronous. 
Messages that are\n * emitted during a pending request are throttled, the last one is emitted.\n *\n * @param worker - Web worker\n * @param options - Options\n *\n * @returns Worker message observable\n */\nexport function watchWorker(\n worker: Worker, { tx$ }: WatchOptions\n): Observable {\n\n /* Intercept messages from worker-like objects */\n const rx$ = fromEvent(worker, \"message\")\n .pipe(\n map(({ data }) => data as T)\n )\n\n /* Send and receive messages, return hot observable */\n return tx$\n .pipe(\n throttle(() => rx$, { leading: true, trailing: true }),\n tap(message => worker.postMessage(message)),\n switchMapTo(rx$),\n share()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { getElement, getLocation } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Feature flag\n */\nexport type Flag =\n | \"content.code.annotate\" /* Code annotations */\n | \"header.autohide\" /* Hide header */\n | \"navigation.expand\" /* Automatic expansion */\n | \"navigation.indexes\" /* Section pages */\n | \"navigation.instant\" /* Instant loading */\n | \"navigation.sections\" /* Section navigation */\n | \"navigation.tabs\" /* Tabs navigation */\n | \"navigation.tabs.sticky\" /* Tabs navigation (sticky) */\n | \"navigation.top\" /* Back-to-top button */\n | \"navigation.tracking\" /* Anchor tracking */\n | \"search.highlight\" /* Search highlighting */\n | \"search.share\" /* Search sharing */\n | \"search.suggest\" /* Search suggestions */\n | \"toc.integrate\" /* Integrated table of contents */\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Translation\n */\nexport type Translation =\n | \"clipboard.copy\" /* Copy to clipboard */\n | \"clipboard.copied\" /* Copied to clipboard */\n | \"search.config.lang\" /* Search language */\n | \"search.config.pipeline\" /* Search pipeline */\n | \"search.config.separator\" /* Search separator */\n | \"search.placeholder\" /* Search */\n | \"search.result.placeholder\" /* Type to start searching */\n | \"search.result.none\" /* No matching documents */\n | \"search.result.one\" /* 1 matching document */\n | \"search.result.other\" /* # matching documents */\n | \"search.result.more.one\" /* 1 more on this page */\n | \"search.result.more.other\" /* # more 
on this page */\n | \"search.result.term.missing\" /* Missing */\n | \"select.version.title\" /* Version selector */\n\n/**\n * Translations\n */\nexport type Translations = Record\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Versioning\n */\nexport interface Versioning {\n provider: \"mike\" /* Version provider */\n default?: string /* Default version */\n}\n\n/**\n * Configuration\n */\nexport interface Config {\n base: string /* Base URL */\n features: Flag[] /* Feature flags */\n translations: Translations /* Translations */\n search: string /* Search worker URL */\n version?: Versioning /* Versioning */\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve global configuration and make base URL absolute\n */\nconst script = getElement(\"#__config\")\nconst config: Config = JSON.parse(script.textContent!)\nconfig.base = `${new URL(config.base, getLocation())}`\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve global configuration\n *\n * @returns Global configuration\n */\nexport function configuration(): Config {\n return config\n}\n\n/**\n * Check whether a feature flag is enabled\n *\n * @param flag - Feature flag\n *\n * @returns Test result\n */\nexport function feature(flag: Flag): boolean {\n return config.features.includes(flag)\n}\n\n/**\n * Retrieve the translation for the given key\n *\n * @param key - Key to be translated\n * @param value - Positional value, if any\n *\n * @returns Translation\n */\nexport function translation(\n key: Translation, value?: string | number\n): string {\n return typeof value !== \"undefined\"\n ? config.translations[key].replace(\"#\", value.toString())\n : config.translations[key]\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { getElement, getElements } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Component type\n */\nexport type ComponentType =\n | \"announce\" /* Announcement bar */\n | \"container\" /* Container */\n | \"content\" /* Content */\n | \"dialog\" /* Dialog */\n | \"header\" /* Header */\n | \"header-title\" /* Header title */\n | \"header-topic\" /* Header topic */\n | \"main\" /* Main area */\n | \"outdated\" /* Version warning */\n | \"palette\" /* Color palette */\n | \"search\" /* Search */\n | \"search-query\" /* Search input */\n | \"search-result\" /* Search results */\n | \"search-share\" /* Search sharing */\n | \"search-suggest\" /* Search suggestions */\n | \"sidebar\" /* Sidebar */\n | \"skip\" /* Skip link */\n | \"source\" /* Repository information */\n | \"tabs\" /* Navigation tabs */\n | \"toc\" /* Table of contents */\n | \"top\" /* Back-to-top button */\n\n/**\n * Component\n *\n * @template T - Component type\n * @template U - Reference type\n */\nexport type Component<\n T extends {} = {},\n U extends HTMLElement = HTMLElement\n> =\n T & {\n ref: U /* Component reference */\n }\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Component type map\n */\ninterface ComponentTypeMap {\n \"announce\": HTMLElement /* Announcement bar */\n \"container\": HTMLElement /* Container */\n \"content\": HTMLElement /* Content */\n \"dialog\": HTMLElement /* Dialog */\n \"header\": HTMLElement /* Header */\n \"header-title\": HTMLElement /* Header title */\n \"header-topic\": HTMLElement /* Header topic */\n \"main\": HTMLElement /* Main area */\n \"outdated\": HTMLElement /* Version warning */\n \"palette\": HTMLElement /* Color palette */\n \"search\": HTMLElement /* Search */\n \"search-query\": HTMLInputElement /* Search input */\n \"search-result\": HTMLElement /* Search results */\n \"search-share\": HTMLAnchorElement /* Search sharing */\n \"search-suggest\": HTMLElement /* Search suggestions */\n \"sidebar\": HTMLElement /* Sidebar */\n \"skip\": HTMLAnchorElement /* Skip link */\n \"source\": HTMLAnchorElement /* Repository information */\n \"tabs\": HTMLElement /* Navigation tabs */\n \"toc\": HTMLElement /* Table of contents */\n \"top\": HTMLAnchorElement /* Back-to-top button */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve the element for a given component or throw a reference error\n *\n * @template T - Component type\n *\n * @param type - Component type\n * @param node - Node of reference\n *\n * @returns Element\n */\nexport function getComponentElement(\n type: T, node: ParentNode = document\n): ComponentTypeMap[T] {\n return getElement(`[data-md-component=${type}]`, node)\n}\n\n/**\n * Retrieve all elements for a given component\n *\n * @template T - Component type\n *\n * @param type - Component type\n * @param node - Node of reference\n 
*\n * @returns Elements\n */\nexport function getComponentElements(\n type: T, node: ParentNode = document\n): ComponentTypeMap[T][] {\n return getElements(`[data-md-component=${type}]`, node)\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport ClipboardJS from \"clipboard\"\nimport {\n EMPTY,\n Observable,\n Subject,\n defer,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n finalize,\n map,\n mergeWith,\n switchMap,\n take,\n takeLast,\n takeUntil,\n tap\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n getElementContentSize,\n watchElementSize,\n watchElementVisibility\n} from \"~/browser\"\nimport { renderClipboardButton } from \"~/templates\"\n\nimport { Component } from \"../../../_\"\nimport {\n Annotation,\n mountAnnotationList\n} from \"../../annotation\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Code block\n */\nexport interface CodeBlock {\n scrollable: boolean /* Code block overflows */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n print$: Observable /* Media print observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Global sequence number for Clipboard.js integration\n */\nlet sequence = 0\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Find candidate list element directly following a code block\n *\n * @param el - Code block element\n *\n * @returns List element or nothing\n */\nfunction findCandidateList(el: HTMLElement): HTMLElement | undefined {\n if (el.nextElementSibling) {\n const sibling = el.nextElementSibling as HTMLElement\n if (sibling.tagName === \"OL\")\n return sibling\n\n /* Skip empty paragraphs - see https://bit.ly/3r4ZJ2O */\n else if (sibling.tagName === \"P\" && !sibling.children.length)\n return findCandidateList(sibling)\n }\n\n /* Everything else */\n return undefined\n}\n\n/* 
----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch code block\n *\n * This function monitors size changes of the viewport, as well as switches of\n * content tabs with embedded code blocks, as both may trigger overflow.\n *\n * @param el - Code block element\n *\n * @returns Code block observable\n */\nexport function watchCodeBlock(\n el: HTMLElement\n): Observable {\n return watchElementSize(el)\n .pipe(\n map(({ width }) => {\n const content = getElementContentSize(el)\n return {\n scrollable: content.width > width\n }\n }),\n distinctUntilKeyChanged(\"scrollable\")\n )\n}\n\n/**\n * Mount code block\n *\n * This function ensures that an overflowing code block is focusable through\n * keyboard, so it can be scrolled without a mouse to improve on accessibility.\n * Furthermore, if code annotations are enabled, they are mounted if and only\n * if the code block is currently visible, e.g., not in a hidden content tab.\n *\n * @param el - Code block element\n * @param options - Options\n *\n * @returns Code block and annotation component observable\n */\nexport function mountCodeBlock(\n el: HTMLElement, options: MountOptions\n): Observable> {\n const { matches: hover } = matchMedia(\"(hover)\")\n\n /* Defer mounting of code block - see https://bit.ly/3vHVoVD */\n const factory$ = defer(() => {\n const push$ = new Subject()\n push$.subscribe(({ scrollable }) => {\n if (scrollable && hover)\n el.setAttribute(\"tabindex\", \"0\")\n else\n el.removeAttribute(\"tabindex\")\n })\n\n /* Render button for Clipboard.js integration */\n if (ClipboardJS.isSupported()) {\n const parent = el.closest(\"pre\")!\n parent.id = `__code_${++sequence}`\n parent.insertBefore(\n renderClipboardButton(parent.id),\n el\n )\n }\n\n /* Handle code annotations */\n const container = el.closest([\n \":not(td):not(.code) > .highlight\",\n \".highlighttable\"\n ].join(\", \"))\n if (container instanceof HTMLElement) {\n const list = findCandidateList(container)\n\n /* Mount code annotations, if enabled */\n if (typeof list !== \"undefined\" && (\n container.classList.contains(\"annotate\") ||\n feature(\"content.code.annotate\")\n )) {\n const annotations$ = mountAnnotationList(list, el, options)\n\n /* Create and return component */\n return watchCodeBlock(el)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state })),\n mergeWith(\n watchElementSize(container)\n .pipe(\n takeUntil(push$.pipe(takeLast(1))),\n map(({ width, height }) => width && height),\n distinctUntilChanged(),\n switchMap(active => active ? 
annotations$ : EMPTY)\n )\n )\n )\n }\n }\n\n /* Create and return component */\n return watchCodeBlock(el)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n\n /* Mount code block on first sight */\n return watchElementVisibility(el)\n .pipe(\n filter(visible => visible),\n take(1),\n switchMap(() => factory$)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render an empty annotation\n *\n * @param id - Annotation identifier\n *\n * @returns Element\n */\nexport function renderAnnotation(id: number): HTMLElement {\n return (\n \n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { translation } from \"~/_\"\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a 'copy-to-clipboard' button\n *\n * @param id - Unique identifier\n *\n * @returns Element\n */\nexport function renderClipboardButton(id: string): HTMLElement {\n return (\n code`}\n >\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { ComponentChild } from \"preact\"\n\nimport { feature, translation } from \"~/_\"\nimport {\n SearchDocument,\n SearchMetadata,\n SearchResultItem\n} from \"~/integrations/search\"\nimport { h, truncate } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Render flag\n */\nconst enum Flag {\n TEASER = 1, /* Render teaser */\n PARENT = 2 /* Render as parent */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper function\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a search document\n *\n * @param document - Search document\n * @param flag - Render flags\n *\n * @returns Element\n */\nfunction renderSearchDocument(\n document: SearchDocument & SearchMetadata, flag: Flag\n): HTMLElement {\n const parent = flag & Flag.PARENT\n const teaser = flag & Flag.TEASER\n\n /* Render missing query terms */\n const missing = Object.keys(document.terms)\n .filter(key => !document.terms[key])\n .reduce((list, key) => [\n ...list, {key}, \" \"\n ], [])\n .slice(0, -1)\n\n /* Assemble query string for highlighting */\n const url = new URL(document.location)\n if (feature(\"search.highlight\"))\n url.searchParams.set(\"h\", Object.entries(document.terms)\n .filter(([, match]) => match)\n .reduce((highlight, [value]) => `${highlight} ${value}`.trim(), \"\")\n )\n\n /* Render article or section, depending on flags */\n return (\n \n \n {parent > 0 &&
}\n

{document.title}

\n {teaser > 0 && document.text.length > 0 &&\n

\n {truncate(document.text, 320)}\n

\n }\n {document.tags && document.tags.map(tag => (\n {tag}\n ))}\n {teaser > 0 && missing.length > 0 &&\n

\n {translation(\"search.result.term.missing\")}: {...missing}\n

\n }\n \n
\n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a search result\n *\n * @param result - Search result\n *\n * @returns Element\n */\nexport function renderSearchResultItem(\n result: SearchResultItem\n): HTMLElement {\n const threshold = result[0].score\n const docs = [...result]\n\n /* Find and extract parent article */\n const parent = docs.findIndex(doc => !doc.location.includes(\"#\"))\n const [article] = docs.splice(parent, 1)\n\n /* Determine last index above threshold */\n let index = docs.findIndex(doc => doc.score < threshold)\n if (index === -1)\n index = docs.length\n\n /* Partition sections */\n const best = docs.slice(0, index)\n const more = docs.slice(index)\n\n /* Render children */\n const children = [\n renderSearchDocument(article, Flag.PARENT | +(!parent && index === 0)),\n ...best.map(section => renderSearchDocument(section, Flag.TEASER)),\n ...more.length ? [\n
\n \n {more.length > 0 && more.length === 1\n ? translation(\"search.result.more.one\")\n : translation(\"search.result.more.other\", more.length)\n }\n \n {...more.map(section => renderSearchDocument(section, Flag.TEASER))}\n
\n ] : []\n ]\n\n /* Render search result */\n return (\n
  • \n {children}\n
  • \n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SourceFacts } from \"~/components\"\nimport { h, round } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render repository facts\n *\n * @param facts - Repository facts\n *\n * @returns Element\n */\nexport function renderSourceFacts(facts: SourceFacts): HTMLElement {\n return (\n
      \n {Object.entries(facts).map(([key, value]) => (\n
    • \n {typeof value === \"number\" ? round(value) : value}\n
    • \n ))}\n
    \n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a table inside a wrapper to improve scrolling on mobile\n *\n * @param table - Table element\n *\n * @returns Element\n */\nexport function renderTable(table: HTMLElement): HTMLElement {\n return (\n
    \n
    \n {table}\n
    \n
    \n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { configuration, translation } from \"~/_\"\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Version\n */\nexport interface Version {\n version: string /* Version identifier */\n title: string /* Version title */\n aliases: string[] /* Version aliases */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a version\n *\n * @param version - Version\n *\n * @returns Element\n */\nfunction renderVersion(version: Version): HTMLElement {\n const config = configuration()\n\n /* Ensure trailing slash, see https://bit.ly/3rL5u3f */\n const url = new URL(`../${version.version}/`, config.base)\n return (\n
  • \n \n {version.title}\n \n
  • \n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a version selector\n *\n * @param versions - Versions\n * @param active - Active version\n *\n * @returns Element\n */\nexport function renderVersionSelector(\n versions: Version[], active: Version\n): HTMLElement {\n return (\n
    \n \n {active.title}\n \n
      \n {versions.map(renderVersion)}\n
    \n
    \n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n Subject,\n animationFrameScheduler,\n combineLatest,\n defer,\n finalize,\n fromEvent,\n map,\n switchMap,\n take,\n tap,\n throttleTime\n} from \"rxjs\"\n\nimport {\n ElementOffset,\n getElement,\n getElementSize,\n watchElementContentOffset,\n watchElementFocus,\n watchElementOffset\n} from \"~/browser\"\n\nimport { Component } from \"../../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Annotation\n */\nexport interface Annotation {\n active: boolean /* Annotation is active */\n offset: ElementOffset /* Annotation offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch annotation\n *\n * @param el - Annotation element\n * @param container - Containing element\n *\n * @returns Annotation observable\n */\nexport function watchAnnotation(\n el: HTMLElement, container: HTMLElement\n): Observable {\n const offset$ = defer(() => combineLatest([\n watchElementOffset(el),\n watchElementContentOffset(container)\n ]))\n .pipe(\n map(([{ x, y }, scroll]) => {\n const { width } = getElementSize(el)\n return ({\n x: x - scroll.x + width / 2,\n y: y - scroll.y\n })\n })\n )\n\n /* Actively watch annotation on focus */\n return watchElementFocus(el)\n .pipe(\n switchMap(active => offset$\n .pipe(\n map(offset => ({ active, offset })),\n take(+!active || Infinity)\n )\n )\n )\n}\n\n/**\n * Mount annotation\n *\n * @param el - Annotation element\n * @param container - Containing element\n *\n * @returns Annotation component observable\n */\nexport function mountAnnotation(\n el: HTMLElement, container: HTMLElement\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe({\n\n /* Handle emission */\n next({ offset }) {\n el.style.setProperty(\"--md-tooltip-x\", `${offset.x}px`)\n el.style.setProperty(\"--md-tooltip-y\", `${offset.y}px`)\n },\n\n /* Handle complete */\n complete() {\n el.style.removeProperty(\"--md-tooltip-x\")\n el.style.removeProperty(\"--md-tooltip-y\")\n }\n })\n\n /* Track relative origin of tooltip */\n push$\n .pipe(\n throttleTime(500, animationFrameScheduler),\n map(() => 
container.getBoundingClientRect()),\n map(({ x }) => x)\n )\n .subscribe({\n\n /* Handle emission */\n next(origin) {\n if (origin)\n el.style.setProperty(\"--md-tooltip-0\", `${-origin}px`)\n else\n el.style.removeProperty(\"--md-tooltip-0\")\n },\n\n /* Handle complete */\n complete() {\n el.style.removeProperty(\"--md-tooltip-0\")\n }\n })\n\n /* Close open annotation on click */\n const index = getElement(\":scope > :last-child\", el)\n const blur$ = fromEvent(index, \"mousedown\", { once: true })\n push$\n .pipe(\n switchMap(({ active }) => active ? blur$ : EMPTY),\n tap(ev => ev.preventDefault())\n )\n .subscribe(() => el.blur())\n\n /* Create and return component */\n return watchAnnotation(el, container)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n Subject,\n defer,\n finalize,\n merge,\n share,\n takeLast,\n takeUntil\n} from \"rxjs\"\n\nimport {\n getElement,\n getElements,\n getOptionalElement\n} from \"~/browser\"\nimport { renderAnnotation } from \"~/templates\"\n\nimport { Component } from \"../../../_\"\nimport {\n Annotation,\n mountAnnotation\n} from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n print$: Observable /* Media print observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Find all annotation markers in the given code block\n *\n * @param container - Containing element\n *\n * @returns Annotation markers\n */\nfunction findAnnotationMarkers(container: HTMLElement): Text[] {\n const markers: Text[] = []\n for (const comment of getElements(\".c, .c1, .cm\", container)) {\n let match: RegExpExecArray | null\n\n /* Split text at marker and add to list */\n let text = comment.firstChild as Text\n if (text instanceof Text)\n while ((match = /\\((\\d+)\\)/.exec(text.textContent!))) {\n const marker = text.splitText(match.index)\n text = marker.splitText(match[0].length)\n markers.push(marker)\n }\n }\n return markers\n}\n\n/**\n * Swap the child nodes of two 
elements\n *\n * @param source - Source element\n * @param target - Target element\n */\nfunction swap(source: HTMLElement, target: HTMLElement): void {\n target.append(...Array.from(source.childNodes))\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount annotation list\n *\n * This function analyzes the containing code block and checks for markers\n * referring to elements in the given annotation list. If no markers are found,\n * the list is left untouched. Otherwise, list elements are rendered as\n * annotations inside the code block.\n *\n * @param el - Annotation list element\n * @param container - Containing element\n * @param options - Options\n *\n * @returns Annotation component observable\n */\nexport function mountAnnotationList(\n el: HTMLElement, container: HTMLElement, { print$ }: MountOptions\n): Observable> {\n\n /* Find and replace all markers with empty annotations */\n const annotations = new Map()\n for (const marker of findAnnotationMarkers(container)) {\n const [, id] = marker.textContent!.match(/\\((\\d+)\\)/)!\n if (getOptionalElement(`li:nth-child(${id})`, el)) {\n annotations.set(+id, renderAnnotation(+id))\n marker.replaceWith(annotations.get(+id)!)\n }\n }\n\n /* Keep list if there are no annotations to render */\n if (annotations.size === 0)\n return EMPTY\n\n /* Create and return component */\n return defer(() => {\n const done$ = new Subject()\n\n /* Handle print mode - see https://bit.ly/3rgPdpt */\n print$\n .pipe(\n takeUntil(done$.pipe(takeLast(1)))\n )\n .subscribe(active => {\n el.hidden = !active\n\n /* Show annotations in code block or list (print) */\n for (const [id, annotation] of annotations) {\n const inner = getElement(\".md-typeset\", annotation)\n const child = getElement(`li:nth-child(${id})`, el)\n if (!active)\n swap(child, inner)\n else\n swap(inner, child)\n }\n })\n\n /* Create and return component */\n return merge(...[...annotations]\n .map(([, annotation]) => (\n mountAnnotation(annotation, container)\n ))\n )\n .pipe(\n finalize(() => done$.complete()),\n share()\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n mapTo,\n of,\n shareReplay,\n tap\n} from \"rxjs\"\n\nimport { watchScript } from \"~/browser\"\nimport { h } from \"~/utilities\"\n\nimport { Component } from \"../../../_\"\n\nimport themeCSS from \"./index.css\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mermaid diagram\n */\nexport interface Mermaid {}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Mermaid instance observable\n */\nlet mermaid$: Observable\n\n/**\n * Global index for Mermaid integration\n */\nlet index = 0\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch Mermaid script\n *\n * @returns Mermaid scripts observable\n */\nfunction fetchScripts(): Observable {\n return typeof mermaid === \"undefined\" || mermaid instanceof Element\n ? watchScript(\"https://unpkg.com/mermaid@8.13.3/dist/mermaid.min.js\")\n : of(undefined)\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount Mermaid diagram\n *\n * @param el - Code block element\n *\n * @returns Mermaid diagram component observable\n */\nexport function mountMermaid(\n el: HTMLElement\n): Observable> {\n el.classList.remove(\"mermaid\") // Hack: mitigate https://bit.ly/3CiN6Du\n mermaid$ ||= fetchScripts()\n .pipe(\n tap(() => mermaid.initialize({\n startOnLoad: false,\n themeCSS\n })),\n mapTo(undefined),\n shareReplay(1)\n )\n\n /* Render diagram */\n mermaid$.subscribe(() => {\n el.classList.add(\"mermaid\") // Hack: mitigate https://bit.ly/3CiN6Du\n const id = `__mermaid_${index++}`\n const host = h(\"div\", { class: \"mermaid\" })\n mermaid.mermaidAPI.render(id, el.textContent, (svg: string) => {\n\n /* Create a shadow root and inject diagram */\n const shadow = host.attachShadow({ mode: \"closed\" })\n shadow.innerHTML = svg\n\n /* Replace code block with diagram */\n el.replaceWith(host)\n })\n })\n\n /* Create and return component */\n return mermaid$\n .pipe(\n mapTo({ ref: el })\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS 
FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n defer,\n filter,\n finalize,\n map,\n mapTo,\n merge,\n tap\n} from \"rxjs\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Details\n */\nexport interface Details {\n action: \"open\" | \"close\" /* Details state */\n reveal?: boolean /* Details is revealed */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch details\n *\n * @param el - Details element\n * @param options - Options\n *\n * @returns Details observable\n */\nexport function watchDetails(\n el: HTMLDetailsElement, { target$, print$ }: WatchOptions\n): Observable
    {\n let open = true\n return merge(\n\n /* Open and focus details on location target */\n target$\n .pipe(\n map(target => target.closest(\"details:not([open])\")!),\n filter(details => el === details),\n mapTo
    ({ action: \"open\", reveal: true })\n ),\n\n /* Open details on print and close afterwards */\n print$\n .pipe(\n filter(active => active || !open),\n tap(() => open = el.open),\n map(active => ({\n action: active ? \"open\" : \"close\"\n }) as Details)\n )\n )\n}\n\n/**\n * Mount details\n *\n * This function ensures that `details` tags are opened on anchor jumps and\n * prior to printing, so the whole content of the page is visible.\n *\n * @param el - Details element\n * @param options - Options\n *\n * @returns Details component observable\n */\nexport function mountDetails(\n el: HTMLDetailsElement, options: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject
    ()\n push$.subscribe(({ action, reveal }) => {\n if (action === \"open\")\n el.setAttribute(\"open\", \"\")\n else\n el.removeAttribute(\"open\")\n if (reveal)\n el.scrollIntoView()\n })\n\n /* Create and return component */\n return watchDetails(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, of } from \"rxjs\"\n\nimport { renderTable } from \"~/templates\"\nimport { h } from \"~/utilities\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Data table\n */\nexport interface DataTable {}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Sentinel for replacement\n */\nconst sentinel = h(\"table\")\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount data table\n *\n * This function wraps a data table in another scrollable container, so it can\n * be smoothly scrolled on smaller screen sizes and won't break the layout.\n *\n * @param el - Data table element\n *\n * @returns Data table component observable\n */\nexport function mountDataTable(\n el: HTMLElement\n): Observable> {\n el.replaceWith(sentinel)\n sentinel.replaceWith(renderTable(el))\n\n /* Create and return component */\n return of({ ref: el })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", 
WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n animationFrameScheduler,\n asyncScheduler,\n auditTime,\n combineLatest,\n defer,\n finalize,\n fromEvent,\n map,\n mapTo,\n merge,\n startWith,\n subscribeOn,\n takeLast,\n takeUntil,\n tap\n} from \"rxjs\"\n\nimport {\n getElement,\n getElementOffset,\n getElementSize,\n getElements,\n watchElementSize\n} from \"~/browser\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Content tabs\n */\nexport interface ContentTabs {\n active: HTMLLabelElement /* Active tab label */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch content tabs\n *\n * @param el - Content tabs element\n *\n * @returns Content tabs observable\n */\nexport function watchContentTabs(\n el: HTMLElement\n): Observable {\n const inputs = getElements(\":scope > input\", el)\n const active = inputs.find(input => input.checked) || inputs[0]\n return merge(...inputs.map(input => fromEvent(input, \"change\")\n .pipe(\n mapTo({\n active: getElement(`label[for=${input.id}]`)\n })\n )\n ))\n .pipe(\n startWith({\n active: getElement(`label[for=${active.id}]`)\n } as ContentTabs)\n )\n}\n\n/**\n * Mount content tabs\n *\n * This function scrolls the active tab into view. While this functionality is\n * provided by browsers as part of `scrollInfoView`, browsers will always also\n * scroll the vertical axis, which we do not want. 
Thus, we decided to provide\n * this functionality ourselves.\n *\n * @param el - Content tabs element\n *\n * @returns Content tabs component observable\n */\nexport function mountContentTabs(\n el: HTMLElement\n): Observable> {\n const container = getElement(\".tabbed-labels\", el)\n return defer(() => {\n const push$ = new Subject()\n combineLatest([push$, watchElementSize(el)])\n .pipe(\n auditTime(1, animationFrameScheduler),\n takeUntil(push$.pipe(takeLast(1)))\n )\n .subscribe({\n\n /* Handle emission */\n next([{ active }]) {\n const offset = getElementOffset(active)\n const { width } = getElementSize(active)\n\n /* Set tab indicator offset and width */\n el.style.setProperty(\"--md-indicator-x\", `${offset.x}px`)\n el.style.setProperty(\"--md-indicator-width\", `${width}px`)\n\n /* Smoothly scroll container */\n container.scrollTo({\n behavior: \"smooth\",\n left: offset.x\n })\n },\n\n /* Handle complete */\n complete() {\n el.style.removeProperty(\"--md-indicator-x\")\n el.style.removeProperty(\"--md-indicator-width\")\n }\n })\n\n /* Create and return component */\n return watchContentTabs(el)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n .pipe(\n subscribeOn(asyncScheduler)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, merge } from \"rxjs\"\n\nimport { getElements } from \"~/browser\"\n\nimport { Component } from \"../../_\"\nimport { Annotation } from \"../annotation\"\nimport {\n CodeBlock,\n Mermaid,\n mountCodeBlock,\n mountMermaid\n} from \"../code\"\nimport {\n Details,\n mountDetails\n} from \"../details\"\nimport {\n DataTable,\n mountDataTable\n} from \"../table\"\nimport {\n ContentTabs,\n mountContentTabs\n} from \"../tabs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Content\n */\nexport type Content =\n | Annotation\n | ContentTabs\n | CodeBlock\n | Mermaid\n | DataTable\n | Details\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount content\n *\n * This function mounts all components that are found in the content of the\n * actual article, including code blocks, data tables and details.\n *\n * @param el - Content element\n * @param options - Options\n *\n * @returns Content component observable\n */\nexport function mountContent(\n el: HTMLElement, { target$, print$ }: MountOptions\n): Observable> {\n return merge(\n\n /* Code blocks */\n ...getElements(\"pre:not(.mermaid) > code\", el)\n .map(child => mountCodeBlock(child, { print$ })),\n\n /* Mermaid diagrams */\n ...getElements(\"pre.mermaid\", el)\n .map(child => mountMermaid(child)),\n\n /* Data tables */\n ...getElements(\"table:not([class])\", el)\n .map(child => mountDataTable(child)),\n\n /* Details */\n ...getElements(\"details\", el)\n .map(child => mountDetails(child, { target$, print$ })),\n\n /* Content tabs */\n ...getElements(\"[data-tabs]\", el)\n .map(child => mountContentTabs(child))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n defer,\n delay,\n finalize,\n map,\n merge,\n of,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport { getElement } from \"~/browser\"\n\nimport { Component } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Dialog\n */\nexport interface Dialog {\n message: string /* Dialog message */\n active: boolean /* Dialog is active */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n alert$: Subject /* Alert subject */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n alert$: Subject /* Alert subject */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch dialog\n *\n * @param _el - Dialog element\n * @param options - Options\n *\n * @returns Dialog observable\n */\nexport function watchDialog(\n _el: HTMLElement, { alert$ }: WatchOptions\n): Observable {\n return alert$\n .pipe(\n switchMap(message => merge(\n of(true),\n of(false).pipe(delay(2000))\n )\n .pipe(\n map(active => ({ message, active }))\n )\n )\n )\n}\n\n/**\n * Mount dialog\n *\n * This function reveals the dialog in the right corner when a new alert is\n * emitted through the subject that is passed as part of the options.\n *\n * @param el - Dialog element\n * @param options - Options\n *\n * @returns Dialog component observable\n */\nexport function mountDialog(\n el: HTMLElement, options: MountOptions\n): Observable> {\n const inner = getElement(\".md-typeset\", el)\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(({ message, active }) => {\n inner.textContent = message\n if (active)\n el.setAttribute(\"data-md-state\", \"open\")\n else\n el.removeAttribute(\"data-md-state\")\n })\n\n /* Create and return component */\n return watchDialog(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n combineLatest,\n combineLatestWith,\n defer,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n map,\n of,\n shareReplay,\n startWith,\n switchMap,\n takeLast,\n takeUntil\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n watchElementSize,\n watchToggle\n} from \"~/browser\"\n\nimport { Component } from \"../../_\"\nimport { Main } from \"../../main\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Header\n */\nexport interface Header {\n height: number /* Header visible height */\n sticky: boolean /* Header stickyness */\n hidden: boolean /* Header is hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n main$: Observable
    /* Main area observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Compute whether the header is hidden\n *\n * If the user scrolls past a certain threshold, the header can be hidden when\n * scrolling down, and shown when scrolling up.\n *\n * @param options - Options\n *\n * @returns Toggle observable\n */\nfunction isHidden({ viewport$ }: WatchOptions): Observable {\n if (!feature(\"header.autohide\"))\n return of(false)\n\n /* Compute direction and turning point */\n const direction$ = viewport$\n .pipe(\n map(({ offset: { y } }) => y),\n bufferCount(2, 1),\n map(([a, b]) => [a < b, b] as const),\n distinctUntilKeyChanged(0)\n )\n\n /* Compute whether header should be hidden */\n const hidden$ = combineLatest([viewport$, direction$])\n .pipe(\n filter(([{ offset }, [, y]]) => Math.abs(y - offset.y) > 100),\n map(([, [direction]]) => direction),\n distinctUntilChanged()\n )\n\n /* Compute threshold for hiding */\n const search$ = watchToggle(\"search\")\n return combineLatest([viewport$, search$])\n .pipe(\n map(([{ offset }, search]) => offset.y > 400 && !search),\n distinctUntilChanged(),\n switchMap(active => active ? hidden$ : of(false)),\n startWith(false)\n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch header\n *\n * @param el - Header element\n * @param options - Options\n *\n * @returns Header observable\n */\nexport function watchHeader(\n el: HTMLElement, options: WatchOptions\n): Observable
    {\n return defer(() => {\n const styles = getComputedStyle(el)\n return of(\n styles.position === \"sticky\" ||\n styles.position === \"-webkit-sticky\"\n )\n })\n .pipe(\n combineLatestWith(watchElementSize(el), isHidden(options)),\n map(([sticky, { height }, hidden]) => ({\n height: sticky ? height : 0,\n sticky,\n hidden\n })),\n distinctUntilChanged((a, b) => (\n a.sticky === b.sticky &&\n a.height === b.height &&\n a.hidden === b.hidden\n )),\n shareReplay(1)\n )\n}\n\n/**\n * Mount header\n *\n * This function manages the different states of the header, i.e. whether it's\n * hidden or rendered with a shadow. This depends heavily on the main area.\n *\n * @param el - Header element\n * @param options - Options\n *\n * @returns Header component observable\n */\nexport function mountHeader(\n el: HTMLElement, { header$, main$ }: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject
    ()\n push$\n .pipe(\n distinctUntilKeyChanged(\"active\"),\n combineLatestWith(header$)\n )\n .subscribe(([{ active }, { hidden }]) => {\n if (active)\n el.setAttribute(\"data-md-state\", hidden ? \"hidden\" : \"shadow\")\n else\n el.removeAttribute(\"data-md-state\")\n })\n\n /* Link to main area */\n main$.subscribe(push$)\n\n /* Create and return component */\n return header$\n .pipe(\n takeUntil(push$.pipe(takeLast(1))),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n Subject,\n defer,\n distinctUntilKeyChanged,\n finalize,\n map,\n tap\n} from \"rxjs\"\n\nimport {\n Viewport,\n getElementSize,\n getOptionalElement,\n watchViewportAt\n} from \"~/browser\"\n\nimport { Component } from \"../../_\"\nimport { Header } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Header\n */\nexport interface HeaderTitle {\n active: boolean /* Header title is active */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch header title\n *\n * @param el - Heading element\n * @param options - Options\n *\n * @returns Header title observable\n */\nexport function watchHeaderTitle(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n return watchViewportAt(el, { viewport$, header$ })\n .pipe(\n map(({ offset: { y } }) => {\n const { height } = getElementSize(el)\n return {\n active: y >= height\n }\n }),\n distinctUntilKeyChanged(\"active\")\n )\n}\n\n/**\n * Mount header title\n *\n * This function swaps the header title from the site title to the title of the\n * current page when the user scrolls past the first headline.\n *\n * @param el - Header title element\n * @param options - Options\n *\n * @returns Header title component observable\n */\nexport function mountHeaderTitle(\n el: HTMLElement, options: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(({ active }) => {\n if (active)\n el.setAttribute(\"data-md-state\", \"active\")\n else\n el.removeAttribute(\"data-md-state\")\n })\n\n /* Obtain headline, if any */\n const heading = getOptionalElement(\"article h1\")\n if (typeof heading === \"undefined\")\n return EMPTY\n\n /* Create and return component */\n return watchHeaderTitle(heading, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n map,\n switchMap\n} from \"rxjs\"\n\nimport {\n Viewport,\n watchElementSize\n} from \"~/browser\"\n\nimport { Header } from \"../header\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Main area\n */\nexport interface Main {\n offset: number /* Main area top offset */\n height: number /* Main area visible height */\n active: boolean /* Main area is active */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch main area\n *\n * This function returns an observable that computes the visual parameters of\n * the main area which depends on the viewport vertical offset and height, as\n * well as the height of the header element, if the header is fixed.\n *\n * @param el - Main area element\n * @param options - Options\n *\n * @returns Main area observable\n */\nexport function watchMain(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable
    {\n\n /* Compute necessary adjustment for header */\n const adjust$ = header$\n .pipe(\n map(({ height }) => height),\n distinctUntilChanged()\n )\n\n /* Compute the main area's top and bottom borders */\n const border$ = adjust$\n .pipe(\n switchMap(() => watchElementSize(el)\n .pipe(\n map(({ height }) => ({\n top: el.offsetTop,\n bottom: el.offsetTop + height\n })),\n distinctUntilKeyChanged(\"bottom\")\n )\n )\n )\n\n /* Compute the main area's offset, visible height and if we scrolled past */\n return combineLatest([adjust$, border$, viewport$])\n .pipe(\n map(([header, { top, bottom }, { offset: { y }, size: { height } }]) => {\n height = Math.max(0, height\n - Math.max(0, top - y, header)\n - Math.max(0, height + y - bottom)\n )\n return {\n offset: top - header,\n height,\n active: top - header <= y\n }\n }),\n distinctUntilChanged((a, b) => (\n a.offset === b.offset &&\n a.height === b.height &&\n a.active === b.active\n ))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n defer,\n finalize,\n fromEvent,\n map,\n mapTo,\n mergeMap,\n of,\n shareReplay,\n startWith,\n tap\n} from \"rxjs\"\n\nimport { getElements } from \"~/browser\"\n\nimport { Component } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Palette colors\n */\nexport interface PaletteColor {\n scheme?: string /* Color scheme */\n primary?: string /* Primary color */\n accent?: string /* Accent color */\n}\n\n/**\n * Palette\n */\nexport interface Palette {\n index: number /* Palette index */\n color: PaletteColor /* Palette colors */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch color palette\n *\n * @param inputs - Color palette element\n *\n * @returns Color palette observable\n */\nexport function watchPalette(\n inputs: HTMLInputElement[]\n): Observable {\n const current = __md_get(\"__palette\") || {\n index: inputs.findIndex(input => matchMedia(\n input.getAttribute(\"data-md-color-media\")!\n ).matches)\n }\n\n /* Emit changes in color palette */\n return of(...inputs)\n .pipe(\n mergeMap(input => fromEvent(input, \"change\")\n .pipe(\n mapTo(input)\n )\n ),\n startWith(inputs[Math.max(0, current.index)]),\n map(input => ({\n index: inputs.indexOf(input),\n color: {\n scheme: input.getAttribute(\"data-md-color-scheme\"),\n primary: input.getAttribute(\"data-md-color-primary\"),\n accent: input.getAttribute(\"data-md-color-accent\")\n }\n } as Palette)),\n shareReplay(1)\n )\n}\n\n/**\n * Mount color palette\n *\n * @param el - Color palette element\n *\n * @returns Color palette component observable\n */\nexport function mountPalette(\n el: HTMLElement\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(palette => {\n\n /* Set color palette */\n for (const [key, value] of Object.entries(palette.color))\n document.body.setAttribute(`data-md-color-${key}`, value)\n\n /* Toggle visibility */\n for (let index = 0; index < inputs.length; index++) {\n const label = inputs[index].nextElementSibling\n if (label instanceof HTMLElement)\n label.hidden = palette.index !== index\n }\n\n /* Persist preference in local storage */\n __md_set(\"__palette\", palette)\n })\n\n /* Create and return component */\n const inputs = getElements(\"input\", el)\n return watchPalette(inputs)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this 
permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport ClipboardJS from \"clipboard\"\nimport {\n Observable,\n Subject,\n mapTo,\n tap\n} from \"rxjs\"\n\nimport { translation } from \"~/_\"\nimport { getElement } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n alert$: Subject /* Alert subject */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Extract text to copy\n *\n * @param el - HTML element\n *\n * @returns Extracted text\n */\nfunction extract(el: HTMLElement): string {\n el.setAttribute(\"data-md-copying\", \"\")\n const text = el.innerText\n el.removeAttribute(\"data-md-copying\")\n return text\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up Clipboard.js integration\n *\n * @param options - Options\n */\nexport function setupClipboardJS(\n { alert$ }: SetupOptions\n): void {\n if (ClipboardJS.isSupported()) {\n new Observable(subscriber => {\n new ClipboardJS(\"[data-clipboard-target], [data-clipboard-text]\", {\n text: el => (\n el.getAttribute(\"data-clipboard-text\")! ||\n extract(getElement(\n el.getAttribute(\"data-clipboard-target\")!\n ))\n )\n })\n .on(\"success\", ev => subscriber.next(ev))\n })\n .pipe(\n tap(ev => {\n const trigger = ev.trigger as HTMLElement\n trigger.focus()\n }),\n mapTo(translation(\"clipboard.copied\"))\n )\n .subscribe(alert$)\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n defaultIfEmpty,\n map,\n of,\n tap\n} from \"rxjs\"\n\nimport { configuration } from \"~/_\"\nimport { getElements, requestXML } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Sitemap, i.e. a list of URLs\n */\nexport type Sitemap = string[]\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Preprocess a list of URLs\n *\n * This function replaces the `site_url` in the sitemap with the actual base\n * URL, to allow instant loading to work in occasions like Netlify previews.\n *\n * @param urls - URLs\n *\n * @returns URL path parts\n */\nfunction preprocess(urls: Sitemap): Sitemap {\n if (urls.length < 2)\n return [\"\"]\n\n /* Take the first two URLs and remove everything after the last slash */\n const [root, next] = [...urls]\n .sort((a, b) => a.length - b.length)\n .map(url => url.replace(/[^/]+$/, \"\"))\n\n /* Compute common prefix */\n let index = 0\n if (root === next)\n index = root.length\n else\n while (root.charCodeAt(index) === next.charCodeAt(index))\n index++\n\n /* Remove common prefix and return in original order */\n return urls.map(url => url.replace(root.slice(0, index), \"\"))\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch the sitemap for the given base URL\n *\n * @param base - Base URL\n *\n * @returns Sitemap observable\n */\nexport function fetchSitemap(base?: URL): Observable {\n const cached = __md_get(\"__sitemap\", sessionStorage, base)\n if (cached) {\n return of(cached)\n } else {\n const config = configuration()\n return requestXML(new URL(\"sitemap.xml\", base || config.base))\n .pipe(\n map(sitemap => preprocess(getElements(\"loc\", sitemap)\n .map(node => node.textContent!)\n )),\n defaultIfEmpty([]),\n tap(sitemap => __md_set(\"__sitemap\", sitemap, sessionStorage, base))\n )\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n bufferCount,\n catchError,\n concatMap,\n debounceTime,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n fromEvent,\n map,\n merge,\n of,\n sample,\n share,\n skip,\n skipUntil,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"~/_\"\nimport {\n Viewport,\n ViewportOffset,\n getElements,\n getOptionalElement,\n request,\n setLocation,\n setLocationHash\n} from \"~/browser\"\nimport { getComponentElement } from \"~/components\"\nimport { h } from \"~/utilities\"\n\nimport { fetchSitemap } from \"../sitemap\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * History state\n */\nexport interface HistoryState {\n url: URL /* State URL */\n offset?: ViewportOffset /* State viewport offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n document$: Subject /* Document subject */\n location$: Subject /* Location subject */\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up instant loading\n *\n * When fetching, theoretically, we could use `responseType: \"document\"`, but\n * since all MkDocs links are relative, we need to make sure that the current\n * location matches the document we just loaded. Otherwise any relative links\n * in the document could use the old location.\n *\n * This is the reason why we need to synchronize history events and the process\n * of fetching the document for navigation changes (except `popstate` events):\n *\n * 1. Fetch document via `XMLHTTPRequest`\n * 2. Set new location via `history.pushState`\n * 3. Parse and emit fetched document\n *\n * For `popstate` events, we must not use `history.pushState`, or the forward\n * history will be irreversibly overwritten. 
In case the request fails, the\n * location change is dispatched regularly.\n *\n * @param options - Options\n */\nexport function setupInstantLoading(\n { document$, location$, viewport$ }: SetupOptions\n): void {\n const config = configuration()\n if (location.protocol === \"file:\")\n return\n\n /* Disable automatic scroll restoration */\n if (\"scrollRestoration\" in history) {\n history.scrollRestoration = \"manual\"\n\n /* Hack: ensure that reloads restore viewport offset */\n fromEvent(window, \"beforeunload\")\n .subscribe(() => {\n history.scrollRestoration = \"auto\"\n })\n }\n\n /* Hack: ensure absolute favicon link to omit 404s when switching */\n const favicon = getOptionalElement(\"link[rel=icon]\")\n if (typeof favicon !== \"undefined\")\n favicon.href = favicon.href\n\n /* Intercept internal navigation */\n const push$ = fetchSitemap()\n .pipe(\n map(paths => paths.map(path => `${new URL(path, config.base)}`)),\n switchMap(urls => fromEvent(document.body, \"click\")\n .pipe(\n filter(ev => !ev.metaKey && !ev.ctrlKey),\n switchMap(ev => {\n if (ev.target instanceof Element) {\n const el = ev.target.closest(\"a\")\n if (el && !el.target) {\n const url = new URL(el.href)\n\n /* Canonicalize URL */\n url.search = \"\"\n url.hash = \"\"\n\n /* Check if URL should be intercepted */\n if (\n url.pathname !== location.pathname &&\n urls.includes(url.toString())\n ) {\n ev.preventDefault()\n return of({\n url: new URL(el.href)\n })\n }\n }\n }\n return NEVER\n })\n )\n ),\n share()\n )\n\n /* Intercept history back and forward */\n const pop$ = fromEvent(window, \"popstate\")\n .pipe(\n filter(ev => ev.state !== null),\n map(ev => ({\n url: new URL(location.href),\n offset: ev.state\n })),\n share()\n )\n\n /* Emit location change */\n merge(push$, pop$)\n .pipe(\n distinctUntilChanged((a, b) => a.url.href === b.url.href),\n map(({ url }) => url)\n )\n .subscribe(location$)\n\n /* Fetch document via `XMLHTTPRequest` */\n const response$ = location$\n .pipe(\n distinctUntilKeyChanged(\"pathname\"),\n switchMap(url => request(url.href)\n .pipe(\n catchError(() => {\n setLocation(url)\n return NEVER\n })\n )\n ),\n share()\n )\n\n /* Set new location via `history.pushState` */\n push$\n .pipe(\n sample(response$)\n )\n .subscribe(({ url }) => {\n history.pushState({}, \"\", `${url}`)\n })\n\n /* Parse and emit fetched document */\n const dom = new DOMParser()\n response$\n .pipe(\n switchMap(res => res.text()),\n map(res => dom.parseFromString(res, \"text/html\"))\n )\n .subscribe(document$)\n\n /* Replace meta tags and components */\n document$\n .pipe(\n skip(1)\n )\n .subscribe(replacement => {\n for (const selector of [\n\n /* Meta tags */\n \"title\",\n \"link[rel=canonical]\",\n \"meta[name=author]\",\n \"meta[name=description]\",\n\n /* Components */\n \"[data-md-component=announce]\",\n \"[data-md-component=container]\",\n \"[data-md-component=header-topic]\",\n \"[data-md-component=outdated]\",\n \"[data-md-component=logo]\",\n \"[data-md-component=skip]\",\n ...feature(\"navigation.tabs.sticky\")\n ? 
[\"[data-md-component=tabs]\"]\n : []\n ]) {\n const source = getOptionalElement(selector)\n const target = getOptionalElement(selector, replacement)\n if (\n typeof source !== \"undefined\" &&\n typeof target !== \"undefined\"\n ) {\n source.replaceWith(target)\n }\n }\n })\n\n /* Re-evaluate scripts */\n document$\n .pipe(\n skip(1),\n map(() => getComponentElement(\"container\")),\n switchMap(el => getElements(\"script\", el)),\n concatMap(el => {\n const script = h(\"script\")\n if (el.src) {\n for (const name of el.getAttributeNames())\n script.setAttribute(name, el.getAttribute(name)!)\n el.replaceWith(script)\n\n /* Complete when script is loaded */\n return new Observable(observer => {\n script.onload = () => observer.complete()\n })\n\n /* Complete immediately */\n } else {\n script.textContent = el.textContent\n el.replaceWith(script)\n return EMPTY\n }\n })\n )\n .subscribe()\n\n /* Emit history state change */\n merge(push$, pop$)\n .pipe(\n sample(document$)\n )\n .subscribe(({ url, offset }) => {\n if (url.hash && !offset) {\n setLocationHash(url.hash)\n } else {\n window.scrollTo(0, offset?.y || 0)\n }\n })\n\n /* Debounce update of viewport offset */\n viewport$\n .pipe(\n skipUntil(push$),\n debounceTime(250),\n distinctUntilKeyChanged(\"offset\")\n )\n .subscribe(({ offset }) => {\n history.replaceState(offset, \"\")\n })\n\n /* Set viewport offset from history */\n merge(push$, pop$)\n .pipe(\n bufferCount(2, 1),\n filter(([a, b]) => a.url.pathname === b.url.pathname),\n map(([, state]) => state)\n )\n .subscribe(({ offset }) => {\n window.scrollTo(0, offset?.y || 0)\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexDocument } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search document\n */\nexport interface SearchDocument extends SearchIndexDocument {\n parent?: SearchIndexDocument /* Parent article */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search document mapping\n */\nexport type SearchDocumentMap = Map\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search document mapping\n *\n * @param docs - Search index documents\n *\n * @returns Search document map\n */\nexport function setupSearchDocumentMap(\n docs: SearchIndexDocument[]\n): SearchDocumentMap {\n const documents = new Map()\n const parents = new Set()\n for (const doc of docs) {\n const [path, hash] = doc.location.split(\"#\")\n\n /* Extract location, title and tags */\n const location = doc.location\n const title = doc.title\n const tags = doc.tags\n\n /* Escape and cleanup text */\n const text = escapeHTML(doc.text)\n .replace(/\\s+(?=[,.:;!?])/g, \"\")\n .replace(/\\s+/g, \" \")\n\n /* Handle section */\n if (hash) {\n const parent = documents.get(path)!\n\n /* Ignore first section, override article */\n if (!parents.has(parent)) {\n parent.title = doc.title\n parent.text = text\n\n /* Remember that we processed the article */\n parents.add(parent)\n\n /* Add subsequent section */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n parent\n })\n }\n\n /* Add article */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n ...tags && { tags }\n })\n }\n }\n return documents\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexConfig } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlight function\n *\n * @param value - Value\n *\n * @returns Highlighted value\n */\nexport type SearchHighlightFn = (value: string) => string\n\n/**\n * Search highlight factory function\n *\n * @param query - Query value\n *\n * @returns Search highlight function\n */\nexport type SearchHighlightFactoryFn = (query: string) => SearchHighlightFn\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search highlighter\n *\n * @param config - Search index configuration\n * @param escape - Whether to escape HTML\n *\n * @returns Search highlight factory function\n */\nexport function setupSearchHighlighter(\n config: SearchIndexConfig, escape: boolean\n): SearchHighlightFactoryFn {\n const separator = new RegExp(config.separator, \"img\")\n const highlight = (_: unknown, data: string, term: string) => {\n return `${data}${term}`\n }\n\n /* Return factory function */\n return (query: string) => {\n query = query\n .replace(/[\\s*+\\-:~^]+/g, \" \")\n .trim()\n\n /* Create search term match expression */\n const match = new RegExp(`(^|${config.separator})(${\n query\n .replace(/[|\\\\{}()[\\]^$+*?.-]/g, \"\\\\$&\")\n .replace(separator, \"|\")\n })`, \"img\")\n\n /* Highlight string value */\n return value => (\n escape\n ? escapeHTML(value)\n : value\n )\n .replace(match, highlight)\n .replace(/<\\/mark>(\\s+)]*>/img, \"$1\")\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search transformation function\n *\n * @param value - Query value\n *\n * @returns Transformed query value\n */\nexport type SearchTransformFn = (value: string) => string\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Default transformation function\n *\n * 1. Search for terms in quotation marks and prepend a `+` modifier to denote\n * that the resulting document must contain all terms, converting the query\n * to an `AND` query (as opposed to the default `OR` behavior). While users\n * may expect terms enclosed in quotation marks to map to span queries, i.e.\n * for which order is important, Lunr.js doesn't support them, so the best\n * we can do is to convert the terms to an `AND` query.\n *\n * 2. Replace control characters which are not located at the beginning of the\n * query or preceded by white space, or are not followed by a non-whitespace\n * character or are at the end of the query string. Furthermore, filter\n * unmatched quotation marks.\n *\n * 3. Trim excess whitespace from left and right.\n *\n * @param query - Query value\n *\n * @returns Transformed query value\n */\nexport function defaultTransform(query: string): string {\n return query\n .split(/\"([^\"]+)\"/g) /* => 1 */\n .map((terms, index) => index & 1\n ? terms.replace(/^\\b|^(?![^\\x00-\\x7F]|$)|\\s+/g, \" +\")\n : terms\n )\n .join(\"\")\n .replace(/\"|(?:^|\\s+)[*+\\-:^~]+(?=\\s+|$)/g, \"\") /* => 2 */\n .trim() /* => 3 */\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SearchIndex, SearchResult } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search message type\n */\nexport const enum SearchMessageType {\n SETUP, /* Search index setup */\n READY, /* Search index ready */\n QUERY, /* Search query */\n RESULT /* Search results */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Message containing the data necessary to setup the search index\n */\nexport interface SearchSetupMessage {\n type: SearchMessageType.SETUP /* Message type */\n data: SearchIndex /* Message data */\n}\n\n/**\n * Message indicating the search index is ready\n */\nexport interface SearchReadyMessage {\n type: SearchMessageType.READY /* Message type */\n}\n\n/**\n * Message containing a search query\n */\nexport interface SearchQueryMessage {\n type: SearchMessageType.QUERY /* Message type */\n data: string /* Message data */\n}\n\n/**\n * Message containing results for a search query\n */\nexport interface SearchResultMessage {\n type: SearchMessageType.RESULT /* Message type */\n data: SearchResult /* Message data */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Message exchanged with the search worker\n */\nexport type SearchMessage =\n | SearchSetupMessage\n | SearchReadyMessage\n | SearchQueryMessage\n | SearchResultMessage\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Type guard for search setup messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchSetupMessage(\n message: SearchMessage\n): message is SearchSetupMessage {\n return message.type === SearchMessageType.SETUP\n}\n\n/**\n * Type guard for search ready messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchReadyMessage(\n message: SearchMessage\n): message is SearchReadyMessage {\n return message.type === SearchMessageType.READY\n}\n\n/**\n * Type guard for search query messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchQueryMessage(\n message: SearchMessage\n): message is SearchQueryMessage {\n return message.type === SearchMessageType.QUERY\n}\n\n/**\n * Type guard for search result messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchResultMessage(\n message: SearchMessage\n): message is SearchResultMessage {\n return message.type === SearchMessageType.RESULT\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the 
Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n ObservableInput,\n Subject,\n from,\n map,\n share\n} from \"rxjs\"\n\nimport { configuration, feature, translation } from \"~/_\"\nimport { WorkerHandler, watchWorker } from \"~/browser\"\n\nimport { SearchIndex } from \"../../_\"\nimport {\n SearchOptions,\n SearchPipeline\n} from \"../../options\"\nimport {\n SearchMessage,\n SearchMessageType,\n SearchSetupMessage,\n isSearchResultMessage\n} from \"../message\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search worker\n */\nexport type SearchWorker = WorkerHandler\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up search index\n *\n * @param data - Search index\n *\n * @returns Search index\n */\nfunction setupSearchIndex({ config, docs }: SearchIndex): SearchIndex {\n\n /* Override default language with value from translation */\n if (config.lang.length === 1 && config.lang[0] === \"en\")\n config.lang = [\n translation(\"search.config.lang\")\n ]\n\n /* Override default separator with value from translation */\n if (config.separator === \"[\\\\s\\\\-]+\")\n config.separator = translation(\"search.config.separator\")\n\n /* Set pipeline from translation */\n const pipeline = translation(\"search.config.pipeline\")\n .split(/\\s*,\\s*/)\n .filter(Boolean) as SearchPipeline\n\n /* Determine search options */\n const options: SearchOptions = {\n pipeline,\n suggestions: feature(\"search.suggest\")\n }\n\n /* Return search index after defaulting */\n return { config, docs, options }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up search worker\n *\n * This function creates a web worker to set up and query the search index,\n * which is done using Lunr.js. 
The index must be passed as an observable to\n * enable hacks like _localsearch_ via search index embedding as JSON.\n *\n * @param url - Worker URL\n * @param index - Search index observable input\n *\n * @returns Search worker\n */\nexport function setupSearchWorker(\n url: string, index: ObservableInput\n): SearchWorker {\n const config = configuration()\n const worker = new Worker(url)\n\n /* Create communication channels and resolve relative links */\n const tx$ = new Subject()\n const rx$ = watchWorker(worker, { tx$ })\n .pipe(\n map(message => {\n if (isSearchResultMessage(message)) {\n for (const result of message.data.items)\n for (const document of result)\n document.location = `${new URL(document.location, config.base)}`\n }\n return message\n }),\n share()\n )\n\n /* Set up search index */\n from(index)\n .pipe(\n map(data => ({\n type: SearchMessageType.SETUP,\n data: setupSearchIndex(data)\n } as SearchSetupMessage))\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Return search worker */\n return { tx$, rx$ }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Subject,\n combineLatest,\n filter,\n fromEvent,\n map,\n of,\n switchMap,\n switchMapTo\n} from \"rxjs\"\n\nimport { configuration } from \"~/_\"\nimport {\n getElement,\n getLocation,\n requestJSON,\n setLocation\n} from \"~/browser\"\nimport { getComponentElements } from \"~/components\"\nimport {\n Version,\n renderVersionSelector\n} from \"~/templates\"\n\nimport { fetchSitemap } from \"../sitemap\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n document$: Subject /* Document subject */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up version selector\n *\n * @param options - Options\n */\nexport function setupVersionSelector(\n { document$ }: SetupOptions\n): void {\n const config = configuration()\n const versions$ = requestJSON(\n new URL(\"../versions.json\", config.base)\n )\n\n /* Determine current version */\n const current$ = versions$\n .pipe(\n map(versions => {\n const [, current] = config.base.match(/([^/]+)\\/?$/)!\n return versions.find(({ version, aliases }) => (\n version === current || aliases.includes(current)\n )) || versions[0]\n })\n )\n\n /* Intercept inter-version navigation */\n combineLatest([versions$, current$])\n .pipe(\n map(([versions, current]) => new Map(versions\n .filter(version => version !== current)\n .map(version => [\n `${new URL(`../${version.version}/`, config.base)}`,\n version\n ])\n )),\n switchMap(urls => fromEvent(document.body, \"click\")\n .pipe(\n filter(ev => !ev.metaKey && !ev.ctrlKey),\n switchMap(ev => {\n if (ev.target instanceof Element) {\n const el = ev.target.closest(\"a\")\n if (el && !el.target && urls.has(el.href)) {\n ev.preventDefault()\n return of(el.href)\n }\n }\n return EMPTY\n }),\n switchMap(url => {\n const { version } = urls.get(url)!\n return fetchSitemap(new URL(url))\n .pipe(\n map(sitemap => {\n const location = getLocation()\n const path = location.href.replace(config.base, \"\")\n return sitemap.includes(path)\n ? 
new URL(`../${version}/${path}`, config.base)\n : new URL(url)\n })\n )\n })\n )\n )\n )\n .subscribe(url => setLocation(url))\n\n /* Render version selector and warning */\n combineLatest([versions$, current$])\n .subscribe(([versions, current]) => {\n const topic = getElement(\".md-header__topic\")\n topic.appendChild(renderVersionSelector(versions, current))\n })\n\n /* Integrate outdated version banner with instant loading */\n document$.pipe(switchMapTo(current$))\n .subscribe(current => {\n\n /* Check if version state was already determined */\n let outdated = __md_get(\"__outdated\", sessionStorage)\n if (outdated === null) {\n const latest = config.version?.default || \"latest\"\n outdated = !current.aliases.includes(latest)\n\n /* Persist version state in session storage */\n __md_set(\"__outdated\", outdated, sessionStorage)\n }\n\n /* Unhide outdated version banner */\n if (outdated)\n for (const warning of getComponentElements(\"outdated\"))\n warning.hidden = false\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n combineLatest,\n delay,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n finalize,\n fromEvent,\n map,\n merge,\n shareReplay,\n startWith,\n take,\n takeLast,\n takeUntil,\n tap\n} from \"rxjs\"\n\nimport { translation } from \"~/_\"\nimport {\n getLocation,\n setToggle,\n watchElementFocus,\n watchToggle\n} from \"~/browser\"\nimport {\n SearchMessageType,\n SearchQueryMessage,\n SearchWorker,\n defaultTransform,\n isSearchReadyMessage\n} from \"~/integrations\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search query\n */\nexport interface SearchQuery {\n value: string /* Query value */\n focus: boolean /* Query focus */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch search query\n *\n * Note that the focus event which triggers re-reading the current query value\n * is delayed by `1ms` so the input's empty state is allowed to propagate.\n *\n * @param el - Search query element\n * @param worker - Search worker\n *\n * @returns Search query observable\n */\nexport function watchSearchQuery(\n el: HTMLInputElement, { rx$ }: SearchWorker\n): Observable {\n const fn = __search?.transform || defaultTransform\n\n /* Immediately show search dialog */\n const { searchParams } = getLocation()\n if (searchParams.has(\"q\"))\n setToggle(\"search\", true)\n\n /* Intercept query parameter (deep link) */\n const param$ = rx$\n .pipe(\n filter(isSearchReadyMessage),\n take(1),\n map(() => searchParams.get(\"q\") || \"\")\n )\n\n /* Remove query parameter when search is closed */\n watchToggle(\"search\")\n .pipe(\n filter(active => !active),\n take(1)\n )\n .subscribe(() => {\n const url = new URL(location.href)\n url.searchParams.delete(\"q\")\n history.replaceState({}, \"\", `${url}`)\n })\n\n /* Set query from parameter */\n param$.subscribe(value => { // TODO: not ideal - find a better way\n if (value)\n el.value = value\n })\n\n /* Intercept focus and input events */\n const focus$ = watchElementFocus(el)\n const value$ = merge(\n fromEvent(el, \"keyup\"),\n fromEvent(el, \"focus\").pipe(delay(1)),\n param$\n )\n .pipe(\n map(() => fn(el.value)),\n startWith(\"\"),\n distinctUntilChanged(),\n )\n\n /* Combine into single observable */\n return combineLatest([value$, focus$])\n .pipe(\n map(([value, focus]) => ({ value, focus })),\n shareReplay(1)\n )\n}\n\n/**\n * Mount search query\n *\n * @param el - Search query element\n * @param worker - Search worker\n *\n * @returns Search query component observable\n */\nexport function mountSearchQuery(\n el: HTMLInputElement, { tx$, rx$ }: SearchWorker\n): Observable> {\n const push$ = new Subject()\n\n /* Handle value changes */\n push$\n .pipe(\n distinctUntilKeyChanged(\"value\"),\n map(({ value }): SearchQueryMessage => ({\n type: SearchMessageType.QUERY,\n data: value\n }))\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Handle focus changes */\n push$\n .pipe(\n distinctUntilKeyChanged(\"focus\")\n 
)\n .subscribe(({ focus }) => {\n if (focus) {\n setToggle(\"search\", focus)\n el.placeholder = \"\"\n } else {\n el.placeholder = translation(\"search.placeholder\")\n }\n })\n\n /* Handle reset */\n fromEvent(el.form!, \"reset\")\n .pipe(\n takeUntil(push$.pipe(takeLast(1)))\n )\n .subscribe(() => el.focus())\n\n /* Create and return component */\n return watchSearchQuery(el, { tx$, rx$ })\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n filter,\n finalize,\n map,\n merge,\n of,\n skipUntil,\n switchMap,\n take,\n tap,\n withLatestFrom,\n zipWith\n} from \"rxjs\"\n\nimport { translation } from \"~/_\"\nimport {\n getElement,\n watchElementBoundary\n} from \"~/browser\"\nimport {\n SearchResult,\n SearchWorker,\n isSearchReadyMessage,\n isSearchResultMessage\n} from \"~/integrations\"\nimport { renderSearchResultItem } from \"~/templates\"\nimport { round } from \"~/utilities\"\n\nimport { Component } from \"../../_\"\nimport { SearchQuery } from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n query$: Observable /* Search query observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search result list\n *\n * This function performs a lazy rendering of the search results, depending on\n * the vertical offset of the search result container.\n *\n * @param el - Search result list element\n * @param worker - Search worker\n * @param options - Options\n *\n * @returns Search result list component observable\n */\nexport function mountSearchResult(\n el: HTMLElement, { rx$ }: SearchWorker, { query$ }: MountOptions\n): Observable> {\n const push$ = new Subject()\n const boundary$ = watchElementBoundary(el.parentElement!)\n .pipe(\n filter(Boolean)\n )\n\n /* Retrieve nested components */\n const meta = getElement(\":scope > :first-child\", el)\n const list = getElement(\":scope > :last-child\", el)\n\n /* Wait until search is ready */\n const ready$ = rx$\n .pipe(\n 
filter(isSearchReadyMessage),\n take(1)\n )\n\n /* Update search result metadata */\n push$\n .pipe(\n withLatestFrom(query$),\n skipUntil(ready$)\n )\n .subscribe(([{ items }, { value }]) => {\n if (value) {\n switch (items.length) {\n\n /* No results */\n case 0:\n meta.textContent = translation(\"search.result.none\")\n break\n\n /* One result */\n case 1:\n meta.textContent = translation(\"search.result.one\")\n break\n\n /* Multiple result */\n default:\n meta.textContent = translation(\n \"search.result.other\",\n round(items.length)\n )\n }\n } else {\n meta.textContent = translation(\"search.result.placeholder\")\n }\n })\n\n /* Update search result list */\n push$\n .pipe(\n tap(() => list.innerHTML = \"\"),\n switchMap(({ items }) => merge(\n of(...items.slice(0, 10)),\n of(...items.slice(10))\n .pipe(\n bufferCount(4),\n zipWith(boundary$),\n switchMap(([chunk]) => chunk)\n )\n ))\n )\n .subscribe(result => list.appendChild(\n renderSearchResultItem(result)\n ))\n\n /* Filter search result message */\n const result$ = rx$\n .pipe(\n filter(isSearchResultMessage),\n map(({ data }) => data)\n )\n\n /* Create and return component */\n return result$\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n finalize,\n fromEvent,\n map,\n tap\n} from \"rxjs\"\n\nimport { getLocation } from \"~/browser\"\n\nimport { Component } from \"../../_\"\nimport { SearchQuery } from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search sharing\n */\nexport interface SearchShare {\n url: URL /* Deep link for sharing */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n query$: Observable /* Search query observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n query$: Observable /* Search query observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search sharing\n *\n * @param _el - Search sharing element\n * @param options - Options\n *\n * @returns Search sharing observable\n */\nexport function watchSearchShare(\n _el: HTMLElement, { query$ }: WatchOptions\n): Observable {\n return query$\n .pipe(\n map(({ value }) => {\n const url = getLocation()\n url.hash = \"\"\n url.searchParams.delete(\"h\")\n url.searchParams.set(\"q\", value)\n return { url }\n })\n )\n}\n\n/**\n * Mount search sharing\n *\n * @param el - Search sharing element\n * @param options - Options\n *\n * @returns Search sharing component observable\n */\nexport function mountSearchShare(\n el: HTMLAnchorElement, options: MountOptions\n): Observable> {\n const push$ = new Subject()\n push$.subscribe(({ url }) => {\n el.setAttribute(\"data-clipboard-text\", el.href)\n el.href = `${url}`\n })\n\n /* Prevent following of link */\n fromEvent(el, \"click\")\n .subscribe(ev => ev.preventDefault())\n\n /* Create and return component */\n return watchSearchShare(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n asyncScheduler,\n combineLatestWith,\n distinctUntilChanged,\n filter,\n finalize,\n fromEvent,\n map,\n merge,\n observeOn,\n tap\n} from \"rxjs\"\n\nimport { Keyboard } from \"~/browser\"\nimport {\n SearchResult,\n SearchWorker,\n isSearchResultMessage\n} from \"~/integrations\"\n\nimport { Component, getComponentElement } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search suggestions\n */\nexport interface SearchSuggest {}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n keyboard$: Observable /* Keyboard observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search suggestions\n *\n * This function will perform a lazy rendering of the search results, depending\n * on the vertical offset of the search result container.\n *\n * @param el - Search result list element\n * @param worker - Search worker\n * @param options - Options\n *\n * @returns Search result list component observable\n */\nexport function mountSearchSuggest(\n el: HTMLElement, { rx$ }: SearchWorker, { keyboard$ }: MountOptions\n): Observable> {\n const push$ = new Subject()\n\n /* Retrieve query component and track all changes */\n const query = getComponentElement(\"search-query\")\n const query$ = merge(\n fromEvent(query, \"keydown\"),\n fromEvent(query, \"focus\")\n )\n .pipe(\n observeOn(asyncScheduler),\n map(() => query.value),\n distinctUntilChanged(),\n )\n\n /* Update search suggestions */\n push$\n .pipe(\n combineLatestWith(query$),\n map(([{ suggestions }, value]) => {\n const words = value.split(/([\\s-]+)/)\n if (suggestions?.length && words[words.length - 1]) {\n const last = suggestions[suggestions.length - 1]\n if (last.startsWith(words[words.length - 1]))\n words[words.length - 1] = last\n } else {\n words.length = 0\n }\n return words\n })\n )\n .subscribe(words => el.innerHTML = words\n .join(\"\")\n .replace(/\\s/g, \" \")\n )\n\n /* Set up search keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"search\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Right arrow: accept current suggestion */\n case \"ArrowRight\":\n if (\n el.innerText.length &&\n query.selectionStart === query.value.length\n )\n query.value = el.innerText\n break\n }\n })\n\n /* Filter search result message */\n const result$ = rx$\n .pipe(\n filter(isSearchResultMessage),\n map(({ data }) => data)\n )\n\n /* Create and return component */\n return result$\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(() => ({ ref: el }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal 
in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n NEVER,\n Observable,\n ObservableInput,\n filter,\n merge,\n mergeWith,\n sample,\n take\n} from \"rxjs\"\n\nimport { configuration } from \"~/_\"\nimport {\n Keyboard,\n getActiveElement,\n getElements,\n setToggle\n} from \"~/browser\"\nimport {\n SearchIndex,\n SearchResult,\n isSearchQueryMessage,\n isSearchReadyMessage,\n setupSearchWorker\n} from \"~/integrations\"\n\nimport {\n Component,\n getComponentElement,\n getComponentElements\n} from \"../../_\"\nimport {\n SearchQuery,\n mountSearchQuery\n} from \"../query\"\nimport { mountSearchResult } from \"../result\"\nimport {\n SearchShare,\n mountSearchShare\n} from \"../share\"\nimport {\n SearchSuggest,\n mountSearchSuggest\n} from \"../suggest\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search\n */\nexport type Search =\n | SearchQuery\n | SearchResult\n | SearchShare\n | SearchSuggest\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n index$: ObservableInput /* Search index observable */\n keyboard$: Observable /* Keyboard observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search\n *\n * This function sets up the search functionality, including the underlying\n * web worker and all keyboard bindings.\n *\n * @param el - Search element\n * @param options - Options\n *\n * @returns Search component observable\n */\nexport function mountSearch(\n el: HTMLElement, { index$, keyboard$ }: MountOptions\n): Observable> {\n const config = configuration()\n try {\n const url = __search?.worker || config.search\n const worker = setupSearchWorker(url, index$)\n\n /* Retrieve query and result components */\n const query = getComponentElement(\"search-query\", el)\n const result = getComponentElement(\"search-result\", el)\n\n /* Re-emit query when search is ready */\n const { tx$, rx$ } = worker\n tx$\n .pipe(\n filter(isSearchQueryMessage),\n sample(rx$.pipe(filter(isSearchReadyMessage))),\n take(1)\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Set up search keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"search\")\n )\n .subscribe(key => {\n const active = getActiveElement()\n switch (key.type) {\n\n /* 
Enter: go to first (best) result */\n case \"Enter\":\n if (active === query) {\n const anchors = new Map()\n for (const anchor of getElements(\n \":first-child [href]\", result\n )) {\n const article = anchor.firstElementChild!\n anchors.set(anchor, parseFloat(\n article.getAttribute(\"data-md-score\")!\n ))\n }\n\n /* Go to result with highest score, if any */\n if (anchors.size) {\n const [[best]] = [...anchors].sort(([, a], [, b]) => b - a)\n best.click()\n }\n\n /* Otherwise omit form submission */\n key.claim()\n }\n break\n\n /* Escape or Tab: close search */\n case \"Escape\":\n case \"Tab\":\n setToggle(\"search\", false)\n query.blur()\n break\n\n /* Vertical arrows: select previous or next search result */\n case \"ArrowUp\":\n case \"ArrowDown\":\n if (typeof active === \"undefined\") {\n query.focus()\n } else {\n const els = [query, ...getElements(\n \":not(details) > [href], summary, details[open] [href]\",\n result\n )]\n const i = Math.max(0, (\n Math.max(0, els.indexOf(active)) + els.length + (\n key.type === \"ArrowUp\" ? -1 : +1\n )\n ) % els.length)\n els[i].focus()\n }\n\n /* Prevent scrolling of page */\n key.claim()\n break\n\n /* All other keys: hand to search query */\n default:\n if (query !== getActiveElement())\n query.focus()\n }\n })\n\n /* Set up global keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\"),\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Open search and select query */\n case \"f\":\n case \"s\":\n case \"/\":\n query.focus()\n query.select()\n\n /* Prevent scrolling of page */\n key.claim()\n break\n }\n })\n\n /* Create and return component */\n const query$ = mountSearchQuery(query, worker)\n const result$ = mountSearchResult(result, worker, { query$ })\n return merge(query$, result$)\n .pipe(\n mergeWith(\n\n /* Search sharing */\n ...getComponentElements(\"search-share\", el)\n .map(child => mountSearchShare(child, { query$ })),\n\n /* Search suggestions */\n ...getComponentElements(\"search-suggest\", el)\n .map(child => mountSearchSuggest(child, worker, { keyboard$ }))\n )\n )\n\n /* Gracefully handle broken search */\n } catch (err) {\n el.hidden = true\n return NEVER\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n ObservableInput,\n combineLatest,\n filter,\n map,\n startWith\n} from \"rxjs\"\n\nimport { getLocation } from \"~/browser\"\nimport {\n SearchIndex,\n setupSearchHighlighter\n} from \"~/integrations\"\nimport { h } from \"~/utilities\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlighting\n */\nexport interface SearchHighlight {\n nodes: Map /* Map of replacements */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n index$: ObservableInput /* Search index observable */\n location$: Observable /* Location observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search highlighting\n *\n * @param el - Content element\n * @param options - Options\n *\n * @returns Search highlighting component observable\n */\nexport function mountSearchHiglight(\n el: HTMLElement, { index$, location$ }: MountOptions\n): Observable> {\n return combineLatest([\n index$,\n location$\n .pipe(\n startWith(getLocation()),\n filter(url => !!url.searchParams.get(\"h\"))\n )\n ])\n .pipe(\n map(([index, url]) => setupSearchHighlighter(index.config, true)(\n url.searchParams.get(\"h\")!\n )),\n map(fn => {\n const nodes = new Map()\n\n /* Traverse text nodes and collect matches */\n const it = document.createNodeIterator(el, NodeFilter.SHOW_TEXT)\n for (let node = it.nextNode(); node; node = it.nextNode()) {\n if (node.parentElement?.offsetHeight) {\n const original = node.textContent!\n const replaced = fn(original)\n if (replaced.length > original.length)\n nodes.set(node as ChildNode, replaced)\n }\n }\n\n /* Replace original nodes with matches */\n for (const [node, text] of nodes) {\n const { childNodes } = h(\"span\", null, text)\n node.replaceWith(...Array.from(childNodes))\n }\n\n /* Return component */\n return { ref: el, nodes }\n })\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n animationFrameScheduler,\n auditTime,\n combineLatest,\n defer,\n distinctUntilChanged,\n finalize,\n map,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport {\n Viewport,\n getElement,\n getElementOffset\n} from \"~/browser\"\n\nimport { Component } from \"../_\"\nimport { Header } from \"../header\"\nimport { Main } from \"../main\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Sidebar\n */\nexport interface Sidebar {\n height: number /* Sidebar height */\n locked: boolean /* Sidebar is locked */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n main$: Observable
    /* Main area observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable<Viewport> /* Viewport observable */\n header$: Observable<Header>
    /* Header observable */\n main$: Observable<Main>
    /* Main area observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch sidebar\n *\n * This function returns an observable that computes the visual parameters of\n * the sidebar which depends on the vertical viewport offset, as well as the\n * height of the main area. When the page is scrolled beyond the header, the\n * sidebar is locked and fills the remaining space.\n *\n * @param el - Sidebar element\n * @param options - Options\n *\n * @returns Sidebar observable\n */\nexport function watchSidebar(\n el: HTMLElement, { viewport$, main$ }: WatchOptions\n): Observable {\n const parent = el.parentElement!\n const adjust =\n parent.offsetTop -\n parent.parentElement!.offsetTop\n\n /* Compute the sidebar's available height and if it should be locked */\n return combineLatest([main$, viewport$])\n .pipe(\n map(([{ offset, height }, { offset: { y } }]) => {\n height = height\n + Math.min(adjust, Math.max(0, y - offset))\n - adjust\n return {\n height,\n locked: y >= offset + adjust\n }\n }),\n distinctUntilChanged((a, b) => (\n a.height === b.height &&\n a.locked === b.locked\n ))\n )\n}\n\n/**\n * Mount sidebar\n *\n * This function doesn't set the height of the actual sidebar, but of its first\n * child \u2013 the `.md-sidebar__scrollwrap` element in order to mitigiate jittery\n * sidebars when the footer is scrolled into view. At some point we switched\n * from `absolute` / `fixed` positioning to `sticky` positioning, significantly\n * reducing jitter in some browsers (respectively Firefox and Safari) when\n * scrolling from the top. However, top-aligned sticky positioning means that\n * the sidebar snaps to the bottom when the end of the container is reached.\n * This is what leads to the mentioned jitter, as the sidebar's height may be\n * updated too slowly.\n *\n * This behaviour can be mitigiated by setting the height of the sidebar to `0`\n * while preserving the padding, and the height on its first element.\n *\n * @param el - Sidebar element\n * @param options - Options\n *\n * @returns Sidebar component observable\n */\nexport function mountSidebar(\n el: HTMLElement, { header$, ...options }: MountOptions\n): Observable> {\n const inner = getElement(\".md-sidebar__scrollwrap\", el)\n const { y } = getElementOffset(inner)\n return defer(() => {\n const push$ = new Subject()\n push$\n .pipe(\n auditTime(0, animationFrameScheduler),\n withLatestFrom(header$)\n )\n .subscribe({\n\n /* Handle emission */\n next([{ height }, { height: offset }]) {\n inner.style.height = `${height - 2 * y}px`\n el.style.top = `${offset}px`\n },\n\n /* Handle complete */\n complete() {\n inner.style.height = \"\"\n el.style.top = \"\"\n }\n })\n\n /* Create and return component */\n return watchSidebar(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n 
*\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Repo, User } from \"github-types\"\nimport {\n Observable,\n defaultIfEmpty,\n map,\n zip\n} from \"rxjs\"\n\nimport { requestJSON } from \"~/browser\"\n\nimport { SourceFacts } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * GitHub release (partial)\n */\ninterface Release {\n tag_name: string /* Tag name */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch GitHub repository facts\n *\n * @param user - GitHub user or organization\n * @param repo - GitHub repository\n *\n * @returns Repository facts observable\n */\nexport function fetchSourceFactsFromGitHub(\n user: string, repo?: string\n): Observable {\n if (typeof repo !== \"undefined\") {\n const url = `https://api.github.com/repos/${user}/${repo}`\n return zip(\n\n /* Fetch version */\n requestJSON(`${url}/releases/latest`)\n .pipe(\n map(release => ({\n version: release.tag_name\n })),\n defaultIfEmpty({})\n ),\n\n /* Fetch stars and forks */\n requestJSON(url)\n .pipe(\n map(info => ({\n stars: info.stargazers_count,\n forks: info.forks_count\n })),\n defaultIfEmpty({})\n )\n )\n .pipe(\n map(([release, info]) => ({ ...release, ...info }))\n )\n\n /* User or organization */\n } else {\n const url = `https://api.github.com/users/${user}`\n return requestJSON(url)\n .pipe(\n map(info => ({\n repositories: info.public_repos\n })),\n defaultIfEmpty({})\n )\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { ProjectSchema } from \"gitlab\"\nimport {\n Observable,\n defaultIfEmpty,\n map\n} from \"rxjs\"\n\nimport { requestJSON } from \"~/browser\"\n\nimport { SourceFacts } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch GitLab repository facts\n *\n * @param base - GitLab base\n * @param project - GitLab project\n *\n * @returns Repository facts observable\n */\nexport function fetchSourceFactsFromGitLab(\n base: string, project: string\n): Observable {\n const url = `https://${base}/api/v4/projects/${encodeURIComponent(project)}`\n return requestJSON(url)\n .pipe(\n map(({ star_count, forks_count }) => ({\n stars: star_count,\n forks: forks_count\n })),\n defaultIfEmpty({})\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { EMPTY, Observable } from \"rxjs\"\n\nimport { fetchSourceFactsFromGitHub } from \"../github\"\nimport { fetchSourceFactsFromGitLab } from \"../gitlab\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Repository facts for repositories\n */\nexport interface RepositoryFacts {\n stars?: number /* Number of stars */\n forks?: number /* Number of forks */\n version?: string /* Latest version */\n}\n\n/**\n * Repository facts for organizations\n */\nexport interface OrganizationFacts {\n repositories?: number /* Number of repositories */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Repository facts\n */\nexport type SourceFacts =\n | RepositoryFacts\n | OrganizationFacts\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch repository facts\n *\n * @param url - Repository URL\n *\n * @returns Repository facts observable\n */\nexport function fetchSourceFacts(\n url: string\n): Observable {\n const [type] = url.match(/(git(?:hub|lab))/i) || []\n switch (type.toLowerCase()) {\n\n /* GitHub repository */\n case \"github\":\n const [, user, repo] = url.match(/^.+github\\.com\\/([^/]+)\\/?([^/]+)?/i)!\n return fetchSourceFactsFromGitHub(user, repo)\n\n /* GitLab repository */\n case \"gitlab\":\n const [, base, slug] = url.match(/^.+?([^/]*gitlab[^/]+)\\/(.+?)\\/?$/i)!\n return fetchSourceFactsFromGitLab(base, slug)\n\n /* Everything else */\n default:\n return EMPTY\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n Subject,\n catchError,\n defer,\n filter,\n finalize,\n map,\n of,\n shareReplay,\n tap\n} from \"rxjs\"\n\nimport { getElement } from \"~/browser\"\nimport { renderSourceFacts } from \"~/templates\"\n\nimport { Component } from \"../../_\"\nimport {\n SourceFacts,\n fetchSourceFacts\n} from \"../facts\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Repository information\n */\nexport interface Source {\n facts: SourceFacts /* Repository facts */\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Repository information observable\n */\nlet fetch$: Observable\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch repository information\n *\n * This function tries to read the repository facts from session storage, and\n * if unsuccessful, fetches them from the underlying provider.\n *\n * @param el - Repository information element\n *\n * @returns Repository information observable\n */\nexport function watchSource(\n el: HTMLAnchorElement\n): Observable {\n return fetch$ ||= defer(() => {\n const cached = __md_get(\"__source\", sessionStorage)\n if (cached)\n return of(cached)\n else\n return fetchSourceFacts(el.href)\n .pipe(\n tap(facts => __md_set(\"__source\", facts, sessionStorage))\n )\n })\n .pipe(\n catchError(() => EMPTY),\n filter(facts => Object.keys(facts).length > 0),\n map(facts => ({ facts })),\n shareReplay(1)\n )\n}\n\n/**\n * Mount repository information\n *\n * @param el - Repository information element\n *\n * @returns Repository information component observable\n */\nexport function mountSource(\n el: HTMLAnchorElement\n): Observable> {\n const inner = getElement(\":scope > :last-child\", el)\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(({ facts }) => {\n inner.appendChild(renderSourceFacts(facts))\n inner.setAttribute(\"data-md-state\", \"done\")\n })\n\n /* Create and return component */\n return watchSource(el)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 
MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n defer,\n distinctUntilKeyChanged,\n finalize,\n map,\n of,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n watchElementSize,\n watchViewportAt\n} from \"~/browser\"\n\nimport { Component } from \"../_\"\nimport { Header } from \"../header\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Navigation tabs\n */\nexport interface Tabs {\n hidden: boolean /* Navigation tabs are hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
<Header>    /* Header observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable<Viewport> /* Viewport observable */\n header$: Observable<Header>
    /* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch navigation tabs\n *\n * @param el - Navigation tabs element\n * @param options - Options\n *\n * @returns Navigation tabs observable\n */\nexport function watchTabs(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n return watchElementSize(document.body)\n .pipe(\n switchMap(() => watchViewportAt(el, { header$, viewport$ })),\n map(({ offset: { y } }) => {\n return {\n hidden: y >= 10\n }\n }),\n distinctUntilKeyChanged(\"hidden\")\n )\n}\n\n/**\n * Mount navigation tabs\n *\n * This function hides the navigation tabs when scrolling past the threshold\n * and makes them reappear in a nice CSS animation when scrolling back up.\n *\n * @param el - Navigation tabs element\n * @param options - Options\n *\n * @returns Navigation tabs component observable\n */\nexport function mountTabs(\n el: HTMLElement, options: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe({\n\n /* Handle emission */\n next({ hidden }) {\n if (hidden)\n el.setAttribute(\"data-md-state\", \"hidden\")\n else\n el.removeAttribute(\"data-md-state\")\n },\n\n /* Handle complete */\n complete() {\n el.removeAttribute(\"data-md-state\")\n }\n })\n\n /* Create and return component */\n return (\n feature(\"navigation.tabs.sticky\")\n ? of({ hidden: false })\n : watchTabs(el, options)\n )\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n combineLatestWith,\n debounceTime,\n defer,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n finalize,\n map,\n of,\n repeat,\n scan,\n share,\n skip,\n startWith,\n switchMap,\n takeLast,\n takeUntil,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n getElement,\n getElements,\n getLocation,\n getOptionalElement,\n watchElementSize\n} from \"~/browser\"\n\nimport {\n Component,\n getComponentElement\n} from \"../_\"\nimport { Header } from \"../header\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Table of contents\n */\nexport interface TableOfContents {\n prev: HTMLAnchorElement[][] /* Anchors (previous) */\n next: HTMLAnchorElement[][] /* Anchors (next) */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
<Header>    /* Header observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable<Viewport> /* Viewport observable */\n header$: Observable<Header>
    /* Header observable */\n target$: Observable /* Location target observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch table of contents\n *\n * This is effectively a scroll spy implementation which will account for the\n * fixed header and automatically re-calculate anchor offsets when the viewport\n * is resized. The returned observable will only emit if the table of contents\n * needs to be repainted.\n *\n * This implementation tracks an anchor element's entire path starting from its\n * level up to the top-most anchor element, e.g. `[h3, h2, h1]`. Although the\n * Material theme currently doesn't make use of this information, it enables\n * the styling of the entire hierarchy through customization.\n *\n * Note that the current anchor is the last item of the `prev` anchor list.\n *\n * @param el - Table of contents element\n * @param options - Options\n *\n * @returns Table of contents observable\n */\nexport function watchTableOfContents(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n const table = new Map()\n\n /* Compute anchor-to-target mapping */\n const anchors = getElements(\"[href^=\\\\#]\", el)\n for (const anchor of anchors) {\n const id = decodeURIComponent(anchor.hash.substring(1))\n const target = getOptionalElement(`[id=\"${id}\"]`)\n if (typeof target !== \"undefined\")\n table.set(anchor, target)\n }\n\n /* Compute necessary adjustment for header */\n const adjust$ = header$\n .pipe(\n distinctUntilKeyChanged(\"height\"),\n map(({ height }) => {\n const main = getComponentElement(\"main\")\n const grid = getElement(\":scope > :first-child\", main)\n return height + 0.8 * (\n grid.offsetTop -\n main.offsetTop\n )\n }),\n share()\n )\n\n /* Compute partition of previous and next anchors */\n const partition$ = watchElementSize(document.body)\n .pipe(\n distinctUntilKeyChanged(\"height\"),\n\n /* Build index to map anchor paths to vertical offsets */\n switchMap(body => defer(() => {\n let path: HTMLAnchorElement[] = []\n return of([...table].reduce((index, [anchor, target]) => {\n while (path.length) {\n const last = table.get(path[path.length - 1])!\n if (last.tagName >= target.tagName) {\n path.pop()\n } else {\n break\n }\n }\n\n /* If the current anchor is hidden, continue with its parent */\n let offset = target.offsetTop\n while (!offset && target.parentElement) {\n target = target.parentElement\n offset = target.offsetTop\n }\n\n /* Map reversed anchor path to vertical offset */\n return index.set(\n [...path = [...path, anchor]].reverse(),\n offset\n )\n }, new Map()))\n })\n .pipe(\n\n /* Sort index by vertical offset (see https://bit.ly/30z6QSO) */\n map(index => new Map([...index].sort(([, a], [, b]) => a - b))),\n combineLatestWith(adjust$),\n\n /* Re-compute partition when viewport offset changes */\n switchMap(([index, adjust]) => viewport$\n .pipe(\n scan(([prev, next], { offset: { y }, size }) => {\n const last = y + size.height >= Math.floor(body.height)\n\n /* Look forward */\n while (next.length) {\n const [, offset] = next[0]\n if (offset - adjust < y || last) {\n prev = [...prev, next.shift()!]\n } else {\n break\n }\n }\n\n /* Look backward */\n while (prev.length) {\n const [, offset] = prev[prev.length - 1]\n if (offset - adjust >= y && !last) {\n next = [prev.pop()!, ...next]\n } else {\n break\n }\n }\n\n /* Return partition */\n return [prev, next]\n }, [[], 
[...index]]),\n distinctUntilChanged((a, b) => (\n a[0] === b[0] &&\n a[1] === b[1]\n ))\n )\n )\n )\n )\n )\n\n /* Compute and return anchor list migrations */\n return partition$\n .pipe(\n map(([prev, next]) => ({\n prev: prev.map(([path]) => path),\n next: next.map(([path]) => path)\n })),\n\n /* Extract anchor list migrations */\n startWith({ prev: [], next: [] }),\n bufferCount(2, 1),\n map(([a, b]) => {\n\n /* Moving down */\n if (a.prev.length < b.prev.length) {\n return {\n prev: b.prev.slice(Math.max(0, a.prev.length - 1), b.prev.length),\n next: []\n }\n\n /* Moving up */\n } else {\n return {\n prev: b.prev.slice(-1),\n next: b.next.slice(0, b.next.length - a.next.length)\n }\n }\n })\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Mount table of contents\n *\n * @param el - Table of contents element\n * @param options - Options\n *\n * @returns Table of contents component observable\n */\nexport function mountTableOfContents(\n el: HTMLElement, { viewport$, header$, target$ }: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(({ prev, next }) => {\n\n /* Look forward */\n for (const [anchor] of next) {\n anchor.removeAttribute(\"data-md-state\")\n anchor.classList.remove(\n \"md-nav__link--active\"\n )\n }\n\n /* Look backward */\n for (const [index, [anchor]] of prev.entries()) {\n anchor.setAttribute(\"data-md-state\", \"blur\")\n anchor.classList.toggle(\n \"md-nav__link--active\",\n index === prev.length - 1\n )\n }\n })\n\n /* Set up anchor tracking, if enabled */\n if (feature(\"navigation.tracking\"))\n viewport$\n .pipe(\n takeUntil(push$.pipe(takeLast(1))),\n distinctUntilKeyChanged(\"offset\"),\n debounceTime(250),\n skip(1),\n takeUntil(target$.pipe(skip(1))),\n repeat({ delay: 250 }),\n withLatestFrom(push$)\n )\n .subscribe(([, { prev }]) => {\n const url = getLocation()\n\n /* Set hash fragment to active anchor */\n const anchor = prev[prev.length - 1]\n if (anchor && anchor.length) {\n const [active] = anchor\n const { hash } = new URL(active.href)\n if (url.hash !== hash) {\n url.hash = hash\n history.replaceState({}, \"\", `${url}`)\n }\n\n /* Reset anchor when at the top */\n } else {\n url.hash = \"\"\n history.replaceState({}, \"\", `${url}`)\n }\n })\n\n /* Create and return component */\n return watchTableOfContents(el, { viewport$, header$ })\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n combineLatest,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n endWith,\n finalize,\n map,\n repeat,\n skip,\n takeLast,\n takeUntil,\n tap\n} from \"rxjs\"\n\nimport { Viewport } from \"~/browser\"\n\nimport { Component } from \"../_\"\nimport { Header } from \"../header\"\nimport { Main } from \"../main\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Back-to-top button\n */\nexport interface BackToTop {\n hidden: boolean /* Back-to-top button is hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n main$: Observable
<Main>    /* Main area observable */\n target$: Observable<HTMLElement> /* Location target observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable<Viewport> /* Viewport observable */\n header$: Observable<Header>
    /* Header observable */\n main$: Observable<Main>
    /* Main area observable */\n target$: Observable /* Location target observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch back-to-top\n *\n * @param _el - Back-to-top element\n * @param options - Options\n *\n * @returns Back-to-top observable\n */\nexport function watchBackToTop(\n _el: HTMLElement, { viewport$, main$, target$ }: WatchOptions\n): Observable {\n\n /* Compute direction */\n const direction$ = viewport$\n .pipe(\n map(({ offset: { y } }) => y),\n bufferCount(2, 1),\n map(([a, b]) => a > b && b > 0),\n distinctUntilChanged()\n )\n\n /* Compute whether main area is active */\n const active$ = main$\n .pipe(\n map(({ active }) => active)\n )\n\n /* Compute threshold for hiding */\n return combineLatest([active$, direction$])\n .pipe(\n map(([active, direction]) => !(active && direction)),\n distinctUntilChanged(),\n takeUntil(target$.pipe(skip(1))),\n endWith(true),\n repeat({ delay: 250 }),\n map(hidden => ({ hidden }))\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Mount back-to-top\n *\n * @param el - Back-to-top element\n * @param options - Options\n *\n * @returns Back-to-top component observable\n */\nexport function mountBackToTop(\n el: HTMLElement, { viewport$, header$, main$, target$ }: MountOptions\n): Observable> {\n const push$ = new Subject()\n push$.subscribe({\n\n /* Handle emission */\n next({ hidden }) {\n if (hidden) {\n el.setAttribute(\"data-md-state\", \"hidden\")\n el.setAttribute(\"tabindex\", \"-1\")\n el.blur()\n } else {\n el.removeAttribute(\"data-md-state\")\n el.removeAttribute(\"tabindex\")\n }\n },\n\n /* Handle complete */\n complete() {\n el.style.top = \"\"\n el.setAttribute(\"data-md-state\", \"hidden\")\n el.removeAttribute(\"tabindex\")\n }\n })\n\n /* Watch header height */\n header$\n .pipe(\n takeUntil(push$.pipe(endWith(0), takeLast(1))),\n distinctUntilKeyChanged(\"height\")\n )\n .subscribe(({ height }) => {\n el.style.top = `${height + 16}px`\n })\n\n /* Create and return component */\n return watchBackToTop(el, { viewport$, main$, target$ })\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n mapTo,\n mergeMap,\n switchMap,\n takeWhile,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { getElements } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n tablet$: Observable /* Media tablet observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch indeterminate checkboxes\n *\n * This function replaces the indeterminate \"pseudo state\" with the actual\n * indeterminate state, which is used to keep navigation always expanded.\n *\n * @param options - Options\n */\nexport function patchIndeterminate(\n { document$, tablet$ }: PatchOptions\n): void {\n document$\n .pipe(\n switchMap(() => getElements(\n \"[data-md-state=indeterminate]\"\n )),\n tap(el => {\n el.indeterminate = true\n el.checked = false\n }),\n mergeMap(el => fromEvent(el, \"change\")\n .pipe(\n takeWhile(() => el.hasAttribute(\"data-md-state\")),\n mapTo(el)\n )\n ),\n withLatestFrom(tablet$)\n )\n .subscribe(([el, tablet]) => {\n el.removeAttribute(\"data-md-state\")\n if (tablet)\n el.checked = false\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n filter,\n fromEvent,\n mapTo,\n mergeMap,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport { getElements } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Check whether the given device is an Apple device\n *\n * @returns Test result\n */\nfunction isAppleDevice(): boolean {\n return /(iPad|iPhone|iPod)/.test(navigator.userAgent)\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch all elements with `data-md-scrollfix` attributes\n *\n * This is a year-old patch which ensures that overflow scrolling works at the\n * top and bottom of containers on iOS by ensuring a `1px` scroll offset upon\n * the start of a touch event.\n *\n * @see https://bit.ly/2SCtAOO - Original source\n *\n * @param options - Options\n */\nexport function patchScrollfix(\n { document$ }: PatchOptions\n): void {\n document$\n .pipe(\n switchMap(() => getElements(\"[data-md-scrollfix]\")),\n tap(el => el.removeAttribute(\"data-md-scrollfix\")),\n filter(isAppleDevice),\n mergeMap(el => fromEvent(el, \"touchstart\")\n .pipe(\n mapTo(el)\n )\n )\n )\n .subscribe(el => {\n const top = el.scrollTop\n\n /* We're at the top of the container */\n if (top === 0) {\n el.scrollTop = 1\n\n /* We're at the bottom of the container */\n } else if (top + el.offsetHeight === el.scrollHeight) {\n el.scrollTop = top - 1\n }\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n delay,\n map,\n of,\n switchMap,\n withLatestFrom\n} from \"rxjs\"\n\nimport {\n Viewport,\n watchToggle\n} from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n viewport$: Observable /* Viewport observable */\n tablet$: Observable /* Media tablet observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch the document body to lock when search is open\n *\n * For mobile and tablet viewports, the search is rendered full screen, which\n * leads to scroll leaking when at the top or bottom of the search result. This\n * function locks the body when the search is in full screen mode, and restores\n * the scroll position when leaving.\n *\n * @param options - Options\n */\nexport function patchScrolllock(\n { viewport$, tablet$ }: PatchOptions\n): void {\n combineLatest([watchToggle(\"search\"), tablet$])\n .pipe(\n map(([active, tablet]) => active && !tablet),\n switchMap(active => of(active)\n .pipe(\n delay(active ? 400 : 100)\n )\n ),\n withLatestFrom(viewport$)\n )\n .subscribe(([active, { offset: { y }}]) => {\n if (active) {\n document.body.setAttribute(\"data-md-state\", \"lock\")\n document.body.style.top = `-${y}px`\n } else {\n const value = -1 * parseInt(document.body.style.top, 10)\n document.body.removeAttribute(\"data-md-state\")\n document.body.style.top = \"\"\n if (value)\n window.scrollTo(0, value)\n }\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Polyfills\n * ------------------------------------------------------------------------- */\n\n/* Polyfill `Object.entries` */\nif (!Object.entries)\n Object.entries = function (obj: object) {\n const data: [string, string][] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push([key, obj[key]])\n\n /* Return entries */\n return data\n }\n\n/* Polyfill `Object.values` */\nif (!Object.values)\n Object.values = function (obj: object) {\n const data: string[] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push(obj[key])\n\n /* Return values */\n return data\n }\n\n/* ------------------------------------------------------------------------- */\n\n/* Polyfills for `Element` */\nif (typeof Element !== \"undefined\") {\n\n /* Polyfill `Element.scrollTo` */\n if (!Element.prototype.scrollTo)\n Element.prototype.scrollTo = function (\n x?: ScrollToOptions | number, y?: number\n ): void {\n if (typeof x === \"object\") {\n this.scrollLeft = x.left!\n this.scrollTop = x.top!\n } else {\n this.scrollLeft = x!\n this.scrollTop = y!\n }\n }\n\n /* Polyfill `Element.replaceWith` */\n if (!Element.prototype.replaceWith)\n Element.prototype.replaceWith = function (\n ...nodes: Array\n ): void {\n const parent = this.parentNode\n if (parent) {\n if (nodes.length === 0)\n parent.removeChild(this)\n\n /* Replace children and create text nodes */\n for (let i = nodes.length - 1; i >= 0; i--) {\n let node = nodes[i]\n if (typeof node !== \"object\")\n node = document.createTextNode(node)\n else if (node.parentNode)\n node.parentNode.removeChild(node)\n\n /* Replace child or insert before previous sibling */\n if (!i)\n parent.replaceChild(node, this)\n else\n parent.insertBefore(this.previousSibling!, node)\n }\n }\n }\n}\n"], + "mappings": 
"g+BAAA,oBAAC,UAAU,EAAQ,EAAS,CAC1B,MAAO,KAAY,UAAY,MAAO,KAAW,YAAc,EAAQ,EACvE,MAAO,SAAW,YAAc,OAAO,IAAM,OAAO,CAAO,EAC1D,EAAQ,CACX,GAAE,GAAO,UAAY,CAAE,aASrB,WAAmC,EAAO,CACxC,GAAI,GAAmB,GACnB,EAA0B,GAC1B,EAAiC,KAEjC,EAAsB,CACxB,KAAM,GACN,OAAQ,GACR,IAAK,GACL,IAAK,GACL,MAAO,GACP,SAAU,GACV,OAAQ,GACR,KAAM,GACN,MAAO,GACP,KAAM,GACN,KAAM,GACN,SAAU,GACV,iBAAkB,EACpB,EAOA,WAA4B,EAAI,CAC9B,MACE,MACA,IAAO,UACP,EAAG,WAAa,QAChB,EAAG,WAAa,QAChB,aAAe,IACf,YAAc,GAAG,UAKrB,CASA,WAAuC,EAAI,CACzC,GAAI,IAAO,EAAG,KACV,GAAU,EAAG,QAUjB,MARI,QAAY,SAAW,EAAoB,KAAS,CAAC,EAAG,UAIxD,KAAY,YAAc,CAAC,EAAG,UAI9B,EAAG,kBAKT,CAOA,WAA8B,EAAI,CAChC,AAAI,EAAG,UAAU,SAAS,eAAe,GAGzC,GAAG,UAAU,IAAI,eAAe,EAChC,EAAG,aAAa,2BAA4B,EAAE,EAChD,CAOA,WAAiC,EAAI,CACnC,AAAI,CAAC,EAAG,aAAa,0BAA0B,GAG/C,GAAG,UAAU,OAAO,eAAe,EACnC,EAAG,gBAAgB,0BAA0B,EAC/C,CAUA,WAAmB,EAAG,CACpB,AAAI,EAAE,SAAW,EAAE,QAAU,EAAE,SAI3B,GAAmB,EAAM,aAAa,GACxC,EAAqB,EAAM,aAAa,EAG1C,EAAmB,GACrB,CAUA,WAAuB,EAAG,CACxB,EAAmB,EACrB,CASA,WAAiB,EAAG,CAElB,AAAI,CAAC,EAAmB,EAAE,MAAM,GAI5B,IAAoB,EAA8B,EAAE,MAAM,IAC5D,EAAqB,EAAE,MAAM,CAEjC,CAMA,WAAgB,EAAG,CACjB,AAAI,CAAC,EAAmB,EAAE,MAAM,GAK9B,GAAE,OAAO,UAAU,SAAS,eAAe,GAC3C,EAAE,OAAO,aAAa,0BAA0B,IAMhD,GAA0B,GAC1B,OAAO,aAAa,CAA8B,EAClD,EAAiC,OAAO,WAAW,UAAW,CAC5D,EAA0B,EAC5B,EAAG,GAAG,EACN,EAAwB,EAAE,MAAM,EAEpC,CAOA,WAA4B,EAAG,CAC7B,AAAI,SAAS,kBAAoB,UAK3B,IACF,GAAmB,IAErB,EAA+B,EAEnC,CAQA,YAA0C,CACxC,SAAS,iBAAiB,YAAa,CAAoB,EAC3D,SAAS,iBAAiB,YAAa,CAAoB,EAC3D,SAAS,iBAAiB,UAAW,CAAoB,EACzD,SAAS,iBAAiB,cAAe,CAAoB,EAC7D,SAAS,iBAAiB,cAAe,CAAoB,EAC7D,SAAS,iBAAiB,YAAa,CAAoB,EAC3D,SAAS,iBAAiB,YAAa,CAAoB,EAC3D,SAAS,iBAAiB,aAAc,CAAoB,EAC5D,SAAS,iBAAiB,WAAY,CAAoB,CAC5D,CAEA,YAA6C,CAC3C,SAAS,oBAAoB,YAAa,CAAoB,EAC9D,SAAS,oBAAoB,YAAa,CAAoB,EAC9D,SAAS,oBAAoB,UAAW,CAAoB,EAC5D,SAAS,oBAAoB,cAAe,CAAoB,EAChE,SAAS,oBAAoB,cAAe,CAAoB,EAChE,SAAS,oBAAoB,YAAa,CAAoB,EAC9D,SAAS,oBAAoB,YAAa,CAAoB,EAC9D,SAAS,oBAAoB,aAAc,CAAoB,EAC/D,SAAS,oBAAoB,WAAY,CAAoB,CAC/D,CASA,WAA8B,EAAG,CAG/B,AAAI,EAAE,OAAO,UAAY,EAAE,OAAO,SAAS,YAAY,IAAM,QAI7D,GAAmB,GACnB,EAAkC,EACpC,CAKA,SAAS,iBAAiB,UAAW,EAAW,EAAI,EACpD,SAAS,iBAAiB,YAAa,EAAe,EAAI,EAC1D,SAAS,iBAAiB,cAAe,EAAe,EAAI,EAC5D,SAAS,iBAAiB,aAAc,EAAe,EAAI,EAC3D,SAAS,iBAAiB,mBAAoB,EAAoB,EAAI,EAEtE,EAA+B,EAM/B,EAAM,iBAAiB,QAAS,EAAS,EAAI,EAC7C,EAAM,iBAAiB,OAAQ,EAAQ,EAAI,EAO3C,AAAI,EAAM,WAAa,KAAK,wBAA0B,EAAM,KAI1D,EAAM,KAAK,aAAa,wBAAyB,EAAE,EAC1C,EAAM,WAAa,KAAK,eACjC,UAAS,gBAAgB,UAAU,IAAI,kBAAkB,EACzD,SAAS,gBAAgB,aAAa,wBAAyB,EAAE,EAErE,CAKA,GAAI,MAAO,SAAW,aAAe,MAAO,WAAa,YAAa,CAIpE,OAAO,0BAA4B,EAInC,GAAI,GAEJ,GAAI,CACF,EAAQ,GAAI,aAAY,8BAA8B,CACxD,OAAS,EAAP,CAEA,EAAQ,SAAS,YAAY,aAAa,EAC1C,EAAM,gBAAgB,+BAAgC,GAAO,GAAO,CAAC,CAAC,CACxE,CAEA,OAAO,cAAc,CAAK,CAC5B,CAEA,AAAI,MAAO,WAAa,aAGtB,EAA0B,QAAQ,CAGtC,CAAE,ICvTF,eAAC,UAAS,EAAQ,CAOhB,GAAI,GAA6B,UAAW,CAC1C,GAAI,CACF,MAAO,CAAC,CAAC,OAAO,QAClB,OAAS,EAAP,CACA,MAAO,EACT,CACF,EAGI,EAAoB,EAA2B,EAE/C,EAAiB,SAAS,EAAO,CACnC,GAAI,GAAW,CACb,KAAM,UAAW,CACf,GAAI,GAAQ,EAAM,MAAM,EACxB,MAAO,CAAE,KAAM,IAAU,OAAQ,MAAO,CAAM,CAChD,CACF,EAEA,MAAI,IACF,GAAS,OAAO,UAAY,UAAW,CACrC,MAAO,EACT,GAGK,CACT,EAMI,EAAiB,SAAS,EAAO,CACnC,MAAO,oBAAmB,CAAK,EAAE,QAAQ,OAAQ,GAAG,CACtD,EAEI,EAAmB,SAAS,EAAO,CACrC,MAAO,oBAAmB,OAAO,CAAK,EAAE,QAAQ,MAAO,GAAG,CAAC,CAC7D,EAEI,EAA0B,UAAW,CAEvC,GAAI,GAAkB,SAAS,EAAc,CAC3C,OAAO,eAAe,KAAM,WAAY,CAAE,SAAU,GAAM,MAAO,CAAC,CAAE,CAAC,EACrE,GAAI,GAAqB,MAAO,GAEhC,GAAI,IAAuB,YAEpB,GAAI,IAAuB,SAChC,AAAI,IAAiB,IACnB,KAAK,YAAY,CAAY,UAEtB,YAAwB,GAAiB,CAClD,GAAI,GAAQ,KACZ,EAAa,QAAQ,SAAS,EAAO,EAAM,CACzC,EAAM,OAAO,EAAM,CAAK,CAC1B,CAAC,CACH,SAAY,IAAiB,MAAU,IAAuB,SAC5D,GAAI,OAAO,UAAU,SAAS,KAAK,CAAY,IAAM,iBACnD,OAAS,GAAI,EAAG,
EAAI,EAAa,OAAQ,IAAK,CAC5C,GAAI,GAAQ,EAAa,GACzB,GAAK,OAAO,UAAU,SAAS,KAAK,CAAK,IAAM,kBAAsB,EAAM,SAAW,EACpF,KAAK,OAAO,EAAM,GAAI,EAAM,EAAE,MAE9B,MAAM,IAAI,WAAU,4CAA8C,EAAI,6BAA8B,CAExG,KAEA,QAAS,KAAO,GACd,AAAI,EAAa,eAAe,CAAG,GACjC,KAAK,OAAO,EAAK,EAAa,EAAI,MAKxC,MAAM,IAAI,WAAU,8CAA+C,CAEvE,EAEI,EAAQ,EAAgB,UAE5B,EAAM,OAAS,SAAS,EAAM,EAAO,CACnC,AAAI,IAAQ,MAAK,SACf,KAAK,SAAS,GAAM,KAAK,OAAO,CAAK,CAAC,EAEtC,KAAK,SAAS,GAAQ,CAAC,OAAO,CAAK,CAAC,CAExC,EAEA,EAAM,OAAS,SAAS,EAAM,CAC5B,MAAO,MAAK,SAAS,EACvB,EAEA,EAAM,IAAM,SAAS,EAAM,CACzB,MAAQ,KAAQ,MAAK,SAAY,KAAK,SAAS,GAAM,GAAK,IAC5D,EAEA,EAAM,OAAS,SAAS,EAAM,CAC5B,MAAQ,KAAQ,MAAK,SAAY,KAAK,SAAS,GAAM,MAAM,CAAC,EAAI,CAAC,CACnE,EAEA,EAAM,IAAM,SAAS,EAAM,CACzB,MAAQ,KAAQ,MAAK,QACvB,EAEA,EAAM,IAAM,SAAS,EAAM,EAAO,CAChC,KAAK,SAAS,GAAQ,CAAC,OAAO,CAAK,CAAC,CACtC,EAEA,EAAM,QAAU,SAAS,EAAU,EAAS,CAC1C,GAAI,GACJ,OAAS,KAAQ,MAAK,SACpB,GAAI,KAAK,SAAS,eAAe,CAAI,EAAG,CACtC,EAAU,KAAK,SAAS,GACxB,OAAS,GAAI,EAAG,EAAI,EAAQ,OAAQ,IAClC,EAAS,KAAK,EAAS,EAAQ,GAAI,EAAM,IAAI,CAEjD,CAEJ,EAEA,EAAM,KAAO,UAAW,CACtB,GAAI,GAAQ,CAAC,EACb,YAAK,QAAQ,SAAS,EAAO,EAAM,CACjC,EAAM,KAAK,CAAI,CACjB,CAAC,EACM,EAAe,CAAK,CAC7B,EAEA,EAAM,OAAS,UAAW,CACxB,GAAI,GAAQ,CAAC,EACb,YAAK,QAAQ,SAAS,EAAO,CAC3B,EAAM,KAAK,CAAK,CAClB,CAAC,EACM,EAAe,CAAK,CAC7B,EAEA,EAAM,QAAU,UAAW,CACzB,GAAI,GAAQ,CAAC,EACb,YAAK,QAAQ,SAAS,EAAO,EAAM,CACjC,EAAM,KAAK,CAAC,EAAM,CAAK,CAAC,CAC1B,CAAC,EACM,EAAe,CAAK,CAC7B,EAEI,GACF,GAAM,OAAO,UAAY,EAAM,SAGjC,EAAM,SAAW,UAAW,CAC1B,GAAI,GAAc,CAAC,EACnB,YAAK,QAAQ,SAAS,EAAO,EAAM,CACjC,EAAY,KAAK,EAAe,CAAI,EAAI,IAAM,EAAe,CAAK,CAAC,CACrE,CAAC,EACM,EAAY,KAAK,GAAG,CAC7B,EAGA,EAAO,gBAAkB,CAC3B,EAEI,EAAkC,UAAW,CAC/C,GAAI,CACF,GAAI,GAAkB,EAAO,gBAE7B,MACG,IAAI,GAAgB,MAAM,EAAE,SAAS,IAAM,OAC3C,MAAO,GAAgB,UAAU,KAAQ,YACzC,MAAO,GAAgB,UAAU,SAAY,UAElD,OAAS,EAAP,CACA,MAAO,EACT,CACF,EAEA,AAAK,EAAgC,GACnC,EAAwB,EAG1B,GAAI,GAAQ,EAAO,gBAAgB,UAEnC,AAAI,MAAO,GAAM,MAAS,YACxB,GAAM,KAAO,UAAW,CACtB,GAAI,GAAQ,KACR,EAAQ,CAAC,EACb,KAAK,QAAQ,SAAS,EAAO,EAAM,CACjC,EAAM,KAAK,CAAC,EAAM,CAAK,CAAC,EACnB,EAAM,UACT,EAAM,OAAO,CAAI,CAErB,CAAC,EACD,EAAM,KAAK,SAAS,EAAG,EAAG,CACxB,MAAI,GAAE,GAAK,EAAE,GACJ,GACE,EAAE,GAAK,EAAE,GACX,EAEA,CAEX,CAAC,EACG,EAAM,UACR,GAAM,SAAW,CAAC,GAEpB,OAAS,GAAI,EAAG,EAAI,EAAM,OAAQ,IAChC,KAAK,OAAO,EAAM,GAAG,GAAI,EAAM,GAAG,EAAE,CAExC,GAGE,MAAO,GAAM,aAAgB,YAC/B,OAAO,eAAe,EAAO,cAAe,CAC1C,WAAY,GACZ,aAAc,GACd,SAAU,GACV,MAAO,SAAS,EAAc,CAC5B,GAAI,KAAK,SACP,KAAK,SAAW,CAAC,MACZ,CACL,GAAI,GAAO,CAAC,EACZ,KAAK,QAAQ,SAAS,EAAO,EAAM,CACjC,EAAK,KAAK,CAAI,CAChB,CAAC,EACD,OAAS,GAAI,EAAG,EAAI,EAAK,OAAQ,IAC/B,KAAK,OAAO,EAAK,EAAE,CAEvB,CAEA,EAAe,EAAa,QAAQ,MAAO,EAAE,EAG7C,OAFI,GAAa,EAAa,MAAM,GAAG,EACnC,EACK,EAAI,EAAG,EAAI,EAAW,OAAQ,IACrC,EAAY,EAAW,GAAG,MAAM,GAAG,EACnC,KAAK,OACH,EAAiB,EAAU,EAAE,EAC5B,EAAU,OAAS,EAAK,EAAiB,EAAU,EAAE,EAAI,EAC5D,CAEJ,CACF,CAAC,CAKL,GACG,MAAO,SAAW,YAAe,OAC5B,MAAO,SAAW,YAAe,OACjC,MAAO,OAAS,YAAe,KAAO,EAC9C,EAEA,AAAC,UAAS,EAAQ,CAOhB,GAAI,GAAwB,UAAW,CACrC,GAAI,CACF,GAAI,GAAI,GAAI,GAAO,IAAI,IAAK,UAAU,EACtC,SAAE,SAAW,MACL,EAAE,OAAS,kBAAqB,EAAE,YAC5C,OAAS,EAAP,CACA,MAAO,EACT,CACF,EAGI,EAAc,UAAW,CAC3B,GAAI,GAAO,EAAO,IAEd,EAAM,SAAS,EAAK,EAAM,CAC5B,AAAI,MAAO,IAAQ,UAAU,GAAM,OAAO,CAAG,GACzC,GAAQ,MAAO,IAAS,UAAU,GAAO,OAAO,CAAI,GAGxD,GAAI,GAAM,SAAU,EACpB,GAAI,GAAS,GAAO,WAAa,QAAU,IAAS,EAAO,SAAS,MAAO,CACzE,EAAO,EAAK,YAAY,EACxB,EAAM,SAAS,eAAe,mBAAmB,EAAE,EACnD,EAAc,EAAI,cAAc,MAAM,EACtC,EAAY,KAAO,EACnB,EAAI,KAAK,YAAY,CAAW,EAChC,GAAI,CACF,GAAI,EAAY,KAAK,QAAQ,CAAI,IAAM,EAAG,KAAM,IAAI,OAAM,EAAY,IAAI,CAC5E,OAAS,EAAP,CACA,KAAM,IAAI,OAAM,0BAA4B,EAAO,WAAa,CAAG,CACrE,CACF,CAEA,GAAI,GAAgB,EAAI,cAAc,GAAG,EACzC,EAAc,KAAO,EACjB,GACF,GAAI,KAAK,YAAY,CAAa
,EAClC,EAAc,KAAO,EAAc,MAGrC,GAAI,GAAe,EAAI,cAAc,OAAO,EAI5C,GAHA,EAAa,KAAO,MACpB,EAAa,MAAQ,EAEjB,EAAc,WAAa,KAAO,CAAC,IAAI,KAAK,EAAc,IAAI,GAAM,CAAC,EAAa,cAAc,GAAK,CAAC,EACxG,KAAM,IAAI,WAAU,aAAa,EAGnC,OAAO,eAAe,KAAM,iBAAkB,CAC5C,MAAO,CACT,CAAC,EAID,GAAI,GAAe,GAAI,GAAO,gBAAgB,KAAK,MAAM,EACrD,EAAqB,GACrB,EAA2B,GAC3B,EAAQ,KACZ,CAAC,SAAU,SAAU,KAAK,EAAE,QAAQ,SAAS,EAAY,CACvD,GAAI,IAAS,EAAa,GAC1B,EAAa,GAAc,UAAW,CACpC,GAAO,MAAM,EAAc,SAAS,EAChC,GACF,GAA2B,GAC3B,EAAM,OAAS,EAAa,SAAS,EACrC,EAA2B,GAE/B,CACF,CAAC,EAED,OAAO,eAAe,KAAM,eAAgB,CAC1C,MAAO,EACP,WAAY,EACd,CAAC,EAED,GAAI,GAAS,OACb,OAAO,eAAe,KAAM,sBAAuB,CACjD,WAAY,GACZ,aAAc,GACd,SAAU,GACV,MAAO,UAAW,CAChB,AAAI,KAAK,SAAW,GAClB,GAAS,KAAK,OACV,GACF,GAAqB,GACrB,KAAK,aAAa,YAAY,KAAK,MAAM,EACzC,EAAqB,IAG3B,CACF,CAAC,CACH,EAEI,EAAQ,EAAI,UAEZ,EAA6B,SAAS,EAAe,CACvD,OAAO,eAAe,EAAO,EAAe,CAC1C,IAAK,UAAW,CACd,MAAO,MAAK,eAAe,EAC7B,EACA,IAAK,SAAS,EAAO,CACnB,KAAK,eAAe,GAAiB,CACvC,EACA,WAAY,EACd,CAAC,CACH,EAEA,CAAC,OAAQ,OAAQ,WAAY,OAAQ,UAAU,EAC5C,QAAQ,SAAS,EAAe,CAC/B,EAA2B,CAAa,CAC1C,CAAC,EAEH,OAAO,eAAe,EAAO,SAAU,CACrC,IAAK,UAAW,CACd,MAAO,MAAK,eAAe,MAC7B,EACA,IAAK,SAAS,EAAO,CACnB,KAAK,eAAe,OAAY,EAChC,KAAK,oBAAoB,CAC3B,EACA,WAAY,EACd,CAAC,EAED,OAAO,iBAAiB,EAAO,CAE7B,SAAY,CACV,IAAK,UAAW,CACd,GAAI,GAAQ,KACZ,MAAO,WAAW,CAChB,MAAO,GAAM,IACf,CACF,CACF,EAEA,KAAQ,CACN,IAAK,UAAW,CACd,MAAO,MAAK,eAAe,KAAK,QAAQ,MAAO,EAAE,CACnD,EACA,IAAK,SAAS,EAAO,CACnB,KAAK,eAAe,KAAO,EAC3B,KAAK,oBAAoB,CAC3B,EACA,WAAY,EACd,EAEA,SAAY,CACV,IAAK,UAAW,CACd,MAAO,MAAK,eAAe,SAAS,QAAQ,SAAU,GAAG,CAC3D,EACA,IAAK,SAAS,EAAO,CACnB,KAAK,eAAe,SAAW,CACjC,EACA,WAAY,EACd,EAEA,OAAU,CACR,IAAK,UAAW,CAEd,GAAI,GAAe,CAAE,QAAS,GAAI,SAAU,IAAK,OAAQ,EAAG,EAAE,KAAK,eAAe,UAI9E,EAAkB,KAAK,eAAe,MAAQ,GAChD,KAAK,eAAe,OAAS,GAE/B,MAAO,MAAK,eAAe,SACzB,KACA,KAAK,eAAe,SACnB,GAAmB,IAAM,KAAK,eAAe,KAAQ,GAC1D,EACA,WAAY,EACd,EAEA,SAAY,CACV,IAAK,UAAW,CACd,MAAO,EACT,EACA,IAAK,SAAS,EAAO,CACrB,EACA,WAAY,EACd,EAEA,SAAY,CACV,IAAK,UAAW,CACd,MAAO,EACT,EACA,IAAK,SAAS,EAAO,CACrB,EACA,WAAY,EACd,CACF,CAAC,EAED,EAAI,gBAAkB,SAAS,EAAM,CACnC,MAAO,GAAK,gBAAgB,MAAM,EAAM,SAAS,CACnD,EAEA,EAAI,gBAAkB,SAAS,EAAK,CAClC,MAAO,GAAK,gBAAgB,MAAM,EAAM,SAAS,CACnD,EAEA,EAAO,IAAM,CAEf,EAMA,GAJK,EAAsB,GACzB,EAAY,EAGT,EAAO,WAAa,QAAW,CAAE,WAAY,GAAO,UAAW,CAClE,GAAI,GAAY,UAAW,CACzB,MAAO,GAAO,SAAS,SAAW,KAAO,EAAO,SAAS,SAAY,GAAO,SAAS,KAAQ,IAAM,EAAO,SAAS,KAAQ,GAC7H,EAEA,GAAI,CACF,OAAO,eAAe,EAAO,SAAU,SAAU,CAC/C,IAAK,EACL,WAAY,EACd,CAAC,CACH,OAAS,EAAP,CACA,YAAY,UAAW,CACrB,EAAO,SAAS,OAAS,EAAU,CACrC,EAAG,GAAG,CACR,CACF,CAEF,GACG,MAAO,SAAW,YAAe,OAC5B,MAAO,SAAW,YAAe,OACjC,MAAO,OAAS,YAAe,KAAO,EAC9C,IC5eA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,gFAeA,GAAI,IACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACA,GACJ,AAAC,UAAU,EAAS,CAChB,GAAI,GAAO,MAAO,SAAW,SAAW,OAAS,MAAO,OAAS,SAAW,KAAO,MAAO,OAAS,SAAW,KAAO,CAAC,EACtH,AAAI,MAAO,SAAW,YAAc,OAAO,IACvC,OAAO,QAAS,CAAC,SAAS,EAAG,SAAU,EAAS,CAAE,EAAQ,EAAe,EAAM,EAAe,CAAO,CAAC,CAAC,CAAG,CAAC,EAE1G,AAAI,MAAO,KAAW,UAAY,MAAO,IAAO,SAAY,SAC7D,EAAQ,EAAe,EAAM,EAAe,GAAO,OAAO,CAAC,CAAC,EAG5D,EAAQ,EAAe,CAAI,CAAC,EAEhC,WAAwB,EAAS,EAAU,CACvC,MAAI,KAAY,GACZ,CAAI,MAAO,QAAO,QAAW,WACzB,OAAO,eAAe,EAAS,aAAc,CAAE,MAAO,EAAK,CAAC,EAG5D,EAAQ,WAAa,IAGtB,SAAU,EAAI,EAAG,CAAE,MAAO,GAAQ,GAAM,EAAW,EAAS,EAAI,CAAC,EAAI,CAAG,CACnF,CACJ,GACC,SAAU,EAAU,CACjB,GAAI,GAAgB,OAAO,gBACtB,CAAE,UAAW,CAAC,CAAE,WAAa,QAAS,SAAU,EAAG,EAAG,CAAE,EAAE,UAAY,CAAG,GAC1E,SAAU,EAAG,EAAG,CAAE,OAAS,KAAK,GAAG,AAAI,OAAO,UAAU,eAAe,KAAK,EAAG,CAAC,GAAG,GAAE,GAAK,EAAE,GAAI,EAEpG,GAAY,SAAU,EAAG,EAAG,CACxB,GAAI,MAAO,IAAM,YAAc,IAAM,KACjC
,KAAM,IAAI,WAAU,uBAAyB,OAAO,CAAC,EAAI,+BAA+B,EAC5F,EAAc,EAAG,CAAC,EAClB,YAAc,CAAE,KAAK,YAAc,CAAG,CACtC,EAAE,UAAY,IAAM,KAAO,OAAO,OAAO,CAAC,EAAK,GAAG,UAAY,EAAE,UAAW,GAAI,GACnF,EAEA,GAAW,OAAO,QAAU,SAAU,EAAG,CACrC,OAAS,GAAG,EAAI,EAAG,EAAI,UAAU,OAAQ,EAAI,EAAG,IAAK,CACjD,EAAI,UAAU,GACd,OAAS,KAAK,GAAG,AAAI,OAAO,UAAU,eAAe,KAAK,EAAG,CAAC,GAAG,GAAE,GAAK,EAAE,GAC9E,CACA,MAAO,EACX,EAEA,GAAS,SAAU,EAAG,EAAG,CACrB,GAAI,GAAI,CAAC,EACT,OAAS,KAAK,GAAG,AAAI,OAAO,UAAU,eAAe,KAAK,EAAG,CAAC,GAAK,EAAE,QAAQ,CAAC,EAAI,GAC9E,GAAE,GAAK,EAAE,IACb,GAAI,GAAK,MAAQ,MAAO,QAAO,uBAA0B,WACrD,OAAS,GAAI,EAAG,EAAI,OAAO,sBAAsB,CAAC,EAAG,EAAI,EAAE,OAAQ,IAC/D,AAAI,EAAE,QAAQ,EAAE,EAAE,EAAI,GAAK,OAAO,UAAU,qBAAqB,KAAK,EAAG,EAAE,EAAE,GACzE,GAAE,EAAE,IAAM,EAAE,EAAE,KAE1B,MAAO,EACX,EAEA,GAAa,SAAU,EAAY,EAAQ,EAAK,EAAM,CAClD,GAAI,GAAI,UAAU,OAAQ,EAAI,EAAI,EAAI,EAAS,IAAS,KAAO,EAAO,OAAO,yBAAyB,EAAQ,CAAG,EAAI,EAAM,EAC3H,GAAI,MAAO,UAAY,UAAY,MAAO,SAAQ,UAAa,WAAY,EAAI,QAAQ,SAAS,EAAY,EAAQ,EAAK,CAAI,MACxH,QAAS,GAAI,EAAW,OAAS,EAAG,GAAK,EAAG,IAAK,AAAI,GAAI,EAAW,KAAI,GAAK,GAAI,EAAI,EAAE,CAAC,EAAI,EAAI,EAAI,EAAE,EAAQ,EAAK,CAAC,EAAI,EAAE,EAAQ,CAAG,IAAM,GAChJ,MAAO,GAAI,GAAK,GAAK,OAAO,eAAe,EAAQ,EAAK,CAAC,EAAG,CAChE,EAEA,GAAU,SAAU,EAAY,EAAW,CACvC,MAAO,UAAU,EAAQ,EAAK,CAAE,EAAU,EAAQ,EAAK,CAAU,CAAG,CACxE,EAEA,GAAa,SAAU,EAAa,EAAe,CAC/C,GAAI,MAAO,UAAY,UAAY,MAAO,SAAQ,UAAa,WAAY,MAAO,SAAQ,SAAS,EAAa,CAAa,CACjI,EAEA,GAAY,SAAU,EAAS,EAAY,EAAG,EAAW,CACrD,WAAe,EAAO,CAAE,MAAO,aAAiB,GAAI,EAAQ,GAAI,GAAE,SAAU,EAAS,CAAE,EAAQ,CAAK,CAAG,CAAC,CAAG,CAC3G,MAAO,IAAK,IAAM,GAAI,UAAU,SAAU,EAAS,EAAQ,CACvD,WAAmB,EAAO,CAAE,GAAI,CAAE,EAAK,EAAU,KAAK,CAAK,CAAC,CAAG,OAAS,EAAP,CAAY,EAAO,CAAC,CAAG,CAAE,CAC1F,WAAkB,EAAO,CAAE,GAAI,CAAE,EAAK,EAAU,MAAS,CAAK,CAAC,CAAG,OAAS,EAAP,CAAY,EAAO,CAAC,CAAG,CAAE,CAC7F,WAAc,EAAQ,CAAE,EAAO,KAAO,EAAQ,EAAO,KAAK,EAAI,EAAM,EAAO,KAAK,EAAE,KAAK,EAAW,CAAQ,CAAG,CAC7G,EAAM,GAAY,EAAU,MAAM,EAAS,GAAc,CAAC,CAAC,GAAG,KAAK,CAAC,CACxE,CAAC,CACL,EAEA,GAAc,SAAU,EAAS,EAAM,CACnC,GAAI,GAAI,CAAE,MAAO,EAAG,KAAM,UAAW,CAAE,GAAI,EAAE,GAAK,EAAG,KAAM,GAAE,GAAI,MAAO,GAAE,EAAI,EAAG,KAAM,CAAC,EAAG,IAAK,CAAC,CAAE,EAAG,EAAG,EAAG,EAAG,EAC/G,MAAO,GAAI,CAAE,KAAM,EAAK,CAAC,EAAG,MAAS,EAAK,CAAC,EAAG,OAAU,EAAK,CAAC,CAAE,EAAG,MAAO,SAAW,YAAe,GAAE,OAAO,UAAY,UAAW,CAAE,MAAO,KAAM,GAAI,EACvJ,WAAc,EAAG,CAAE,MAAO,UAAU,EAAG,CAAE,MAAO,GAAK,CAAC,EAAG,CAAC,CAAC,CAAG,CAAG,CACjE,WAAc,EAAI,CACd,GAAI,EAAG,KAAM,IAAI,WAAU,iCAAiC,EAC5D,KAAO,GAAG,GAAI,CACV,GAAI,EAAI,EAAG,GAAM,GAAI,EAAG,GAAK,EAAI,EAAE,OAAY,EAAG,GAAK,EAAE,OAAc,IAAI,EAAE,SAAc,EAAE,KAAK,CAAC,EAAG,GAAK,EAAE,OAAS,CAAE,GAAI,EAAE,KAAK,EAAG,EAAG,EAAE,GAAG,KAAM,MAAO,GAE3J,OADI,EAAI,EAAG,GAAG,GAAK,CAAC,EAAG,GAAK,EAAG,EAAE,KAAK,GAC9B,EAAG,QACF,OAAQ,GAAG,EAAI,EAAI,UACnB,GAAG,SAAE,QAAgB,CAAE,MAAO,EAAG,GAAI,KAAM,EAAM,MACjD,GAAG,EAAE,QAAS,EAAI,EAAG,GAAI,EAAK,CAAC,CAAC,EAAG,aACnC,GAAG,EAAK,EAAE,IAAI,IAAI,EAAG,EAAE,KAAK,IAAI,EAAG,iBAEpC,GAAM,EAAI,EAAE,KAAM,IAAI,EAAE,OAAS,GAAK,EAAE,EAAE,OAAS,KAAQ,GAAG,KAAO,GAAK,EAAG,KAAO,GAAI,CAAE,EAAI,EAAG,QAAU,CAC3G,GAAI,EAAG,KAAO,GAAM,EAAC,GAAM,EAAG,GAAK,EAAE,IAAM,EAAG,GAAK,EAAE,IAAM,CAAE,EAAE,MAAQ,EAAG,GAAI,KAAO,CACrF,GAAI,EAAG,KAAO,GAAK,EAAE,MAAQ,EAAE,GAAI,CAAE,EAAE,MAAQ,EAAE,GAAI,EAAI,EAAI,KAAO,CACpE,GAAI,GAAK,EAAE,MAAQ,EAAE,GAAI,CAAE,EAAE,MAAQ,EAAE,GAAI,EAAE,IAAI,KAAK,CAAE,EAAG,KAAO,CAClE,AAAI,EAAE,IAAI,EAAE,IAAI,IAAI,EACpB,EAAE,KAAK,IAAI,EAAG,SAEtB,EAAK,EAAK,KAAK,EAAS,CAAC,CAC7B,OAAS,EAAP,CAAY,EAAK,CAAC,EAAG,CAAC,EAAG,EAAI,CAAG,QAAE,CAAU,EAAI,EAAI,CAAG,CACzD,GAAI,EAAG,GAAK,EAAG,KAAM,GAAG,GAAI,MAAO,CAAE,MAAO,EAAG,GAAK,EAAG,GAAK,OAAQ,KAAM,EAAK,CACnF,CACJ,EAEA,GAAe,SAAS,EAAG,EAAG,CAC1B,OAAS,KAAK,GAAG,AAAI,IAAM,WAAa,CAAC,OAAO,UAAU,eAAe,
KAAK,EAAG,CAAC,GAAG,GAAgB,EAAG,EAAG,CAAC,CAChH,EAEA,GAAkB,OAAO,OAAU,SAAS,EAAG,EAAG,EAAG,EAAI,CACrD,AAAI,IAAO,QAAW,GAAK,GAC3B,OAAO,eAAe,EAAG,EAAI,CAAE,WAAY,GAAM,IAAK,UAAW,CAAE,MAAO,GAAE,EAAI,CAAE,CAAC,CACvF,EAAM,SAAS,EAAG,EAAG,EAAG,EAAI,CACxB,AAAI,IAAO,QAAW,GAAK,GAC3B,EAAE,GAAM,EAAE,EACd,EAEA,GAAW,SAAU,EAAG,CACpB,GAAI,GAAI,MAAO,SAAW,YAAc,OAAO,SAAU,EAAI,GAAK,EAAE,GAAI,EAAI,EAC5E,GAAI,EAAG,MAAO,GAAE,KAAK,CAAC,EACtB,GAAI,GAAK,MAAO,GAAE,QAAW,SAAU,MAAO,CAC1C,KAAM,UAAY,CACd,MAAI,IAAK,GAAK,EAAE,QAAQ,GAAI,QACrB,CAAE,MAAO,GAAK,EAAE,KAAM,KAAM,CAAC,CAAE,CAC1C,CACJ,EACA,KAAM,IAAI,WAAU,EAAI,0BAA4B,iCAAiC,CACzF,EAEA,GAAS,SAAU,EAAG,EAAG,CACrB,GAAI,GAAI,MAAO,SAAW,YAAc,EAAE,OAAO,UACjD,GAAI,CAAC,EAAG,MAAO,GACf,GAAI,GAAI,EAAE,KAAK,CAAC,EAAG,EAAG,EAAK,CAAC,EAAG,EAC/B,GAAI,CACA,KAAQ,KAAM,QAAU,KAAM,IAAM,CAAE,GAAI,EAAE,KAAK,GAAG,MAAM,EAAG,KAAK,EAAE,KAAK,CAC7E,OACO,EAAP,CAAgB,EAAI,CAAE,MAAO,CAAM,CAAG,QACtC,CACI,GAAI,CACA,AAAI,GAAK,CAAC,EAAE,MAAS,GAAI,EAAE,SAAY,EAAE,KAAK,CAAC,CACnD,QACA,CAAU,GAAI,EAAG,KAAM,GAAE,KAAO,CACpC,CACA,MAAO,EACX,EAGA,GAAW,UAAY,CACnB,OAAS,GAAK,CAAC,EAAG,EAAI,EAAG,EAAI,UAAU,OAAQ,IAC3C,EAAK,EAAG,OAAO,GAAO,UAAU,EAAE,CAAC,EACvC,MAAO,EACX,EAGA,GAAiB,UAAY,CACzB,OAAS,GAAI,EAAG,EAAI,EAAG,EAAK,UAAU,OAAQ,EAAI,EAAI,IAAK,GAAK,UAAU,GAAG,OAC7E,OAAS,GAAI,MAAM,CAAC,EAAG,EAAI,EAAG,EAAI,EAAG,EAAI,EAAI,IACzC,OAAS,GAAI,UAAU,GAAI,EAAI,EAAG,EAAK,EAAE,OAAQ,EAAI,EAAI,IAAK,IAC1D,EAAE,GAAK,EAAE,GACjB,MAAO,EACX,EAEA,GAAgB,SAAU,EAAI,EAAM,EAAM,CACtC,GAAI,GAAQ,UAAU,SAAW,EAAG,OAAS,GAAI,EAAG,EAAI,EAAK,OAAQ,EAAI,EAAI,EAAG,IAC5E,AAAI,IAAM,CAAE,KAAK,MACR,IAAI,GAAK,MAAM,UAAU,MAAM,KAAK,EAAM,EAAG,CAAC,GACnD,EAAG,GAAK,EAAK,IAGrB,MAAO,GAAG,OAAO,GAAM,MAAM,UAAU,MAAM,KAAK,CAAI,CAAC,CAC3D,EAEA,GAAU,SAAU,EAAG,CACnB,MAAO,gBAAgB,IAAW,MAAK,EAAI,EAAG,MAAQ,GAAI,IAAQ,CAAC,CACvE,EAEA,GAAmB,SAAU,EAAS,EAAY,EAAW,CACzD,GAAI,CAAC,OAAO,cAAe,KAAM,IAAI,WAAU,sCAAsC,EACrF,GAAI,GAAI,EAAU,MAAM,EAAS,GAAc,CAAC,CAAC,EAAG,EAAG,EAAI,CAAC,EAC5D,MAAO,GAAI,CAAC,EAAG,EAAK,MAAM,EAAG,EAAK,OAAO,EAAG,EAAK,QAAQ,EAAG,EAAE,OAAO,eAAiB,UAAY,CAAE,MAAO,KAAM,EAAG,EACpH,WAAc,EAAG,CAAE,AAAI,EAAE,IAAI,GAAE,GAAK,SAAU,EAAG,CAAE,MAAO,IAAI,SAAQ,SAAU,EAAG,EAAG,CAAE,EAAE,KAAK,CAAC,EAAG,EAAG,EAAG,CAAC,CAAC,EAAI,GAAK,EAAO,EAAG,CAAC,CAAG,CAAC,CAAG,EAAG,CACzI,WAAgB,EAAG,EAAG,CAAE,GAAI,CAAE,EAAK,EAAE,GAAG,CAAC,CAAC,CAAG,OAAS,EAAP,CAAY,EAAO,EAAE,GAAG,GAAI,CAAC,CAAG,CAAE,CACjF,WAAc,EAAG,CAAE,EAAE,gBAAiB,IAAU,QAAQ,QAAQ,EAAE,MAAM,CAAC,EAAE,KAAK,EAAS,CAAM,EAAI,EAAO,EAAE,GAAG,GAAI,CAAC,CAAI,CACxH,WAAiB,EAAO,CAAE,EAAO,OAAQ,CAAK,CAAG,CACjD,WAAgB,EAAO,CAAE,EAAO,QAAS,CAAK,CAAG,CACjD,WAAgB,EAAG,EAAG,CAAE,AAAI,EAAE,CAAC,EAAG,EAAE,MAAM,EAAG,EAAE,QAAQ,EAAO,EAAE,GAAG,GAAI,EAAE,GAAG,EAAE,CAAG,CACrF,EAEA,GAAmB,SAAU,EAAG,CAC5B,GAAI,GAAG,EACP,MAAO,GAAI,CAAC,EAAG,EAAK,MAAM,EAAG,EAAK,QAAS,SAAU,EAAG,CAAE,KAAM,EAAG,CAAC,EAAG,EAAK,QAAQ,EAAG,EAAE,OAAO,UAAY,UAAY,CAAE,MAAO,KAAM,EAAG,EAC1I,WAAc,EAAG,EAAG,CAAE,EAAE,GAAK,EAAE,GAAK,SAAU,EAAG,CAAE,MAAQ,GAAI,CAAC,GAAK,CAAE,MAAO,GAAQ,EAAE,GAAG,CAAC,CAAC,EAAG,KAAM,IAAM,QAAS,EAAI,EAAI,EAAE,CAAC,EAAI,CAAG,EAAI,CAAG,CAClJ,EAEA,GAAgB,SAAU,EAAG,CACzB,GAAI,CAAC,OAAO,cAAe,KAAM,IAAI,WAAU,sCAAsC,EACrF,GAAI,GAAI,EAAE,OAAO,eAAgB,EACjC,MAAO,GAAI,EAAE,KAAK,CAAC,EAAK,GAAI,MAAO,KAAa,WAAa,GAAS,CAAC,EAAI,EAAE,OAAO,UAAU,EAAG,EAAI,CAAC,EAAG,EAAK,MAAM,EAAG,EAAK,OAAO,EAAG,EAAK,QAAQ,EAAG,EAAE,OAAO,eAAiB,UAAY,CAAE,MAAO,KAAM,EAAG,GAC9M,WAAc,EAAG,CAAE,EAAE,GAAK,EAAE,IAAM,SAAU,EAAG,CAAE,MAAO,IAAI,SAAQ,SAAU,EAAS,EAAQ,CAAE,EAAI,EAAE,GAAG,CAAC,EAAG,EAAO,EAAS,EAAQ,EAAE,KAAM,EAAE,KAAK,CAAG,CAAC,CAAG,CAAG,CAC/J,WAAgB,EAAS,EAAQ,EAAG,EAAG,CAAE,QAAQ,QAAQ,CAAC,EAAE,KAAK,SAAS,EAAG,CAAE,EAAQ,CAAE,MAAO,EAAG,KAAM
,CAAE,CAAC,CAAG,EAAG,CAAM,CAAG,CAC/H,EAEA,GAAuB,SAAU,EAAQ,EAAK,CAC1C,MAAI,QAAO,eAAkB,OAAO,eAAe,EAAQ,MAAO,CAAE,MAAO,CAAI,CAAC,EAAY,EAAO,IAAM,EAClG,CACX,EAEA,GAAI,GAAqB,OAAO,OAAU,SAAS,EAAG,EAAG,CACrD,OAAO,eAAe,EAAG,UAAW,CAAE,WAAY,GAAM,MAAO,CAAE,CAAC,CACtE,EAAK,SAAS,EAAG,EAAG,CAChB,EAAE,QAAa,CACnB,EAEA,GAAe,SAAU,EAAK,CAC1B,GAAI,GAAO,EAAI,WAAY,MAAO,GAClC,GAAI,GAAS,CAAC,EACd,GAAI,GAAO,KAAM,OAAS,KAAK,GAAK,AAAI,IAAM,WAAa,OAAO,UAAU,eAAe,KAAK,EAAK,CAAC,GAAG,GAAgB,EAAQ,EAAK,CAAC,EACvI,SAAmB,EAAQ,CAAG,EACvB,CACX,EAEA,GAAkB,SAAU,EAAK,CAC7B,MAAQ,IAAO,EAAI,WAAc,EAAM,CAAE,QAAW,CAAI,CAC5D,EAEA,GAAyB,SAAU,EAAU,EAAO,EAAM,EAAG,CACzD,GAAI,IAAS,KAAO,CAAC,EAAG,KAAM,IAAI,WAAU,+CAA+C,EAC3F,GAAI,MAAO,IAAU,WAAa,IAAa,GAAS,CAAC,EAAI,CAAC,EAAM,IAAI,CAAQ,EAAG,KAAM,IAAI,WAAU,0EAA0E,EACjL,MAAO,KAAS,IAAM,EAAI,IAAS,IAAM,EAAE,KAAK,CAAQ,EAAI,EAAI,EAAE,MAAQ,EAAM,IAAI,CAAQ,CAChG,EAEA,GAAyB,SAAU,EAAU,EAAO,EAAO,EAAM,EAAG,CAChE,GAAI,IAAS,IAAK,KAAM,IAAI,WAAU,gCAAgC,EACtE,GAAI,IAAS,KAAO,CAAC,EAAG,KAAM,IAAI,WAAU,+CAA+C,EAC3F,GAAI,MAAO,IAAU,WAAa,IAAa,GAAS,CAAC,EAAI,CAAC,EAAM,IAAI,CAAQ,EAAG,KAAM,IAAI,WAAU,yEAAyE,EAChL,MAAQ,KAAS,IAAM,EAAE,KAAK,EAAU,CAAK,EAAI,EAAI,EAAE,MAAQ,EAAQ,EAAM,IAAI,EAAU,CAAK,EAAI,CACxG,EAEA,EAAS,YAAa,EAAS,EAC/B,EAAS,WAAY,EAAQ,EAC7B,EAAS,SAAU,EAAM,EACzB,EAAS,aAAc,EAAU,EACjC,EAAS,UAAW,EAAO,EAC3B,EAAS,aAAc,EAAU,EACjC,EAAS,YAAa,EAAS,EAC/B,EAAS,cAAe,EAAW,EACnC,EAAS,eAAgB,EAAY,EACrC,EAAS,kBAAmB,EAAe,EAC3C,EAAS,WAAY,EAAQ,EAC7B,EAAS,SAAU,EAAM,EACzB,EAAS,WAAY,EAAQ,EAC7B,EAAS,iBAAkB,EAAc,EACzC,EAAS,gBAAiB,EAAa,EACvC,EAAS,UAAW,EAAO,EAC3B,EAAS,mBAAoB,EAAgB,EAC7C,EAAS,mBAAoB,EAAgB,EAC7C,EAAS,gBAAiB,EAAa,EACvC,EAAS,uBAAwB,EAAoB,EACrD,EAAS,eAAgB,EAAY,EACrC,EAAS,kBAAmB,EAAe,EAC3C,EAAS,yBAA0B,EAAsB,EACzD,EAAS,yBAA0B,EAAsB,CAC7D,CAAC,ICjTD;AAAA;AAAA;AAAA;AAAA;AAAA,GAMA,AAAC,UAA0C,EAAM,EAAS,CACzD,AAAG,MAAO,KAAY,UAAY,MAAO,KAAW,SACnD,GAAO,QAAU,EAAQ,EACrB,AAAG,MAAO,SAAW,YAAc,OAAO,IAC9C,OAAO,CAAC,EAAG,CAAO,EACd,AAAG,MAAO,KAAY,SAC1B,GAAQ,YAAiB,EAAQ,EAEjC,EAAK,YAAiB,EAAQ,CAChC,GAAG,GAAM,UAAW,CACpB,MAAiB,WAAW,CAClB,GAAI,GAAuB,CAE/B,IACC,SAAS,EAAyB,EAAqB,EAAqB,CAEnF,aAGA,EAAoB,EAAE,EAAqB,CACzC,QAAW,UAAW,CAAE,MAAqB,GAAW,CAC1D,CAAC,EAGD,GAAI,GAAe,EAAoB,GAAG,EACtC,EAAoC,EAAoB,EAAE,CAAY,EAEtE,EAAS,EAAoB,GAAG,EAChC,EAA8B,EAAoB,EAAE,CAAM,EAE1D,EAAa,EAAoB,GAAG,EACpC,EAA8B,EAAoB,EAAE,CAAU,EAOlE,WAAiB,EAAM,CACrB,GAAI,CACF,MAAO,UAAS,YAAY,CAAI,CAClC,OAAS,EAAP,CACA,MAAO,EACT,CACF,CAUA,GAAI,GAAqB,SAA4B,EAAQ,CAC3D,GAAI,GAAe,EAAe,EAAE,CAAM,EAC1C,SAAQ,KAAK,EACN,CACT,EAEiC,EAAe,EAOhD,WAA2B,EAAO,CAChC,GAAI,GAAQ,SAAS,gBAAgB,aAAa,KAAK,IAAM,MACzD,EAAc,SAAS,cAAc,UAAU,EAEnD,EAAY,MAAM,SAAW,OAE7B,EAAY,MAAM,OAAS,IAC3B,EAAY,MAAM,QAAU,IAC5B,EAAY,MAAM,OAAS,IAE3B,EAAY,MAAM,SAAW,WAC7B,EAAY,MAAM,EAAQ,QAAU,QAAU,UAE9C,GAAI,GAAY,OAAO,aAAe,SAAS,gBAAgB,UAC/D,SAAY,MAAM,IAAM,GAAG,OAAO,EAAW,IAAI,EACjD,EAAY,aAAa,WAAY,EAAE,EACvC,EAAY,MAAQ,EACb,CACT,CAYA,GAAI,GAAsB,SAA6B,EAAQ,CAC7D,GAAI,GAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAChF,UAAW,SAAS,IACtB,EACI,EAAe,GAEnB,GAAI,MAAO,IAAW,SAAU,CAC9B,GAAI,GAAc,EAAkB,CAAM,EAC1C,EAAQ,UAAU,YAAY,CAAW,EACzC,EAAe,EAAe,EAAE,CAAW,EAC3C,EAAQ,MAAM,EACd,EAAY,OAAO,CACrB,KACE,GAAe,EAAe,EAAE,CAAM,EACtC,EAAQ,MAAM,EAGhB,MAAO,EACT,EAEiC,EAAgB,EAEjD,WAAiB,EAAK,CAA6B,MAAI,OAAO,SAAW,YAAc,MAAO,QAAO,UAAa,SAAY,EAAU,SAAiB,EAAK,CAAE,MAAO,OAAO,EAAK,EAAY,EAAU,SAAiB,EAAK,CAAE,MAAO,IAAO,MAAO,SAAW,YAAc,EAAI,cAAgB,QAAU,IAAQ,OAAO,UAAY,SAAW,MAAO,EAAK,EAAY,EAAQ,CAAG,CAAG,CAUzX,GAAI,GAAyB,UAAkC,CAC7D,GAAI,GAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAAC,EAE/E,EAAkB,EAAQ,OAC1B,EAAS,IAAoB,OAAS,OAAS,EAC/C,EAAY,EAAQ,UACpB,EAAS,EAAQ,OACjB,GAAO,EAAQ,KAEnB,GAAI,IAAW,QAAU,IAAW,MAC
lC,KAAM,IAAI,OAAM,oDAAoD,EAItE,GAAI,IAAW,OACb,GAAI,GAAU,EAAQ,CAAM,IAAM,UAAY,EAAO,WAAa,EAAG,CACnE,GAAI,IAAW,QAAU,EAAO,aAAa,UAAU,EACrD,KAAM,IAAI,OAAM,mFAAmF,EAGrG,GAAI,IAAW,OAAU,GAAO,aAAa,UAAU,GAAK,EAAO,aAAa,UAAU,GACxF,KAAM,IAAI,OAAM,uGAAwG,CAE5H,KACE,MAAM,IAAI,OAAM,6CAA6C,EAKjE,GAAI,GACF,MAAO,GAAa,GAAM,CACxB,UAAW,CACb,CAAC,EAIH,GAAI,EACF,MAAO,KAAW,MAAQ,EAAY,CAAM,EAAI,EAAa,EAAQ,CACnE,UAAW,CACb,CAAC,CAEL,EAEiC,GAAmB,EAEpD,YAA0B,EAAK,CAA6B,MAAI,OAAO,SAAW,YAAc,MAAO,QAAO,UAAa,SAAY,GAAmB,SAAiB,EAAK,CAAE,MAAO,OAAO,EAAK,EAAY,GAAmB,SAAiB,EAAK,CAAE,MAAO,IAAO,MAAO,SAAW,YAAc,EAAI,cAAgB,QAAU,IAAQ,OAAO,UAAY,SAAW,MAAO,EAAK,EAAY,GAAiB,CAAG,CAAG,CAE7Z,YAAyB,EAAU,EAAa,CAAE,GAAI,CAAE,aAAoB,IAAgB,KAAM,IAAI,WAAU,mCAAmC,CAAK,CAExJ,YAA2B,EAAQ,EAAO,CAAE,OAAS,GAAI,EAAG,EAAI,EAAM,OAAQ,IAAK,CAAE,GAAI,GAAa,EAAM,GAAI,EAAW,WAAa,EAAW,YAAc,GAAO,EAAW,aAAe,GAAU,SAAW,IAAY,GAAW,SAAW,IAAM,OAAO,eAAe,EAAQ,EAAW,IAAK,CAAU,CAAG,CAAE,CAE5T,YAAsB,EAAa,EAAY,EAAa,CAAE,MAAI,IAAY,GAAkB,EAAY,UAAW,CAAU,EAAO,GAAa,GAAkB,EAAa,CAAW,EAAU,CAAa,CAEtN,YAAmB,EAAU,EAAY,CAAE,GAAI,MAAO,IAAe,YAAc,IAAe,KAAQ,KAAM,IAAI,WAAU,oDAAoD,EAAK,EAAS,UAAY,OAAO,OAAO,GAAc,EAAW,UAAW,CAAE,YAAa,CAAE,MAAO,EAAU,SAAU,GAAM,aAAc,EAAK,CAAE,CAAC,EAAO,GAAY,GAAgB,EAAU,CAAU,CAAG,CAEhY,YAAyB,EAAG,EAAG,CAAE,UAAkB,OAAO,gBAAkB,SAAyB,EAAG,EAAG,CAAE,SAAE,UAAY,EAAU,CAAG,EAAU,GAAgB,EAAG,CAAC,CAAG,CAEzK,YAAsB,EAAS,CAAE,GAAI,GAA4B,GAA0B,EAAG,MAAO,WAAgC,CAAE,GAAI,GAAQ,GAAgB,CAAO,EAAG,EAAQ,GAAI,EAA2B,CAAE,GAAI,GAAY,GAAgB,IAAI,EAAE,YAAa,EAAS,QAAQ,UAAU,EAAO,UAAW,CAAS,CAAG,KAAS,GAAS,EAAM,MAAM,KAAM,SAAS,EAAK,MAAO,IAA2B,KAAM,CAAM,CAAG,CAAG,CAExa,YAAoC,EAAM,EAAM,CAAE,MAAI,IAAS,IAAiB,CAAI,IAAM,UAAY,MAAO,IAAS,YAAsB,EAAe,GAAuB,CAAI,CAAG,CAEzL,YAAgC,EAAM,CAAE,GAAI,IAAS,OAAU,KAAM,IAAI,gBAAe,2DAA2D,EAAK,MAAO,EAAM,CAErK,aAAqC,CAA0E,GAApE,MAAO,UAAY,aAAe,CAAC,QAAQ,WAA6B,QAAQ,UAAU,KAAM,MAAO,GAAO,GAAI,MAAO,QAAU,WAAY,MAAO,GAAM,GAAI,CAAE,YAAK,UAAU,SAAS,KAAK,QAAQ,UAAU,KAAM,CAAC,EAAG,UAAY,CAAC,CAAC,CAAC,EAAU,EAAM,OAAS,EAAP,CAAY,MAAO,EAAO,CAAE,CAEnU,YAAyB,EAAG,CAAE,UAAkB,OAAO,eAAiB,OAAO,eAAiB,SAAyB,EAAG,CAAE,MAAO,GAAE,WAAa,OAAO,eAAe,CAAC,CAAG,EAAU,GAAgB,CAAC,CAAG,CAa5M,YAA2B,EAAQ,EAAS,CAC1C,GAAI,GAAY,kBAAkB,OAAO,CAAM,EAE/C,GAAI,EAAC,EAAQ,aAAa,CAAS,EAInC,MAAO,GAAQ,aAAa,CAAS,CACvC,CAOA,GAAI,IAAyB,SAAU,EAAU,CAC/C,GAAU,EAAW,CAAQ,EAE7B,GAAI,GAAS,GAAa,CAAS,EAMnC,WAAmB,EAAS,EAAS,CACnC,GAAI,GAEJ,UAAgB,KAAM,CAAS,EAE/B,EAAQ,EAAO,KAAK,IAAI,EAExB,EAAM,eAAe,CAAO,EAE5B,EAAM,YAAY,CAAO,EAElB,CACT,CAQA,UAAa,EAAW,CAAC,CACvB,IAAK,iBACL,MAAO,UAA0B,CAC/B,GAAI,GAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAAC,EACnF,KAAK,OAAS,MAAO,GAAQ,QAAW,WAAa,EAAQ,OAAS,KAAK,cAC3E,KAAK,OAAS,MAAO,GAAQ,QAAW,WAAa,EAAQ,OAAS,KAAK,cAC3E,KAAK,KAAO,MAAO,GAAQ,MAAS,WAAa,EAAQ,KAAO,KAAK,YACrE,KAAK,UAAY,GAAiB,EAAQ,SAAS,IAAM,SAAW,EAAQ,UAAY,SAAS,IACnG,CAMF,EAAG,CACD,IAAK,cACL,MAAO,SAAqB,EAAS,CACnC,GAAI,GAAS,KAEb,KAAK,SAAW,EAAe,EAAE,EAAS,QAAS,SAAU,GAAG,CAC9D,MAAO,GAAO,QAAQ,EAAC,CACzB,CAAC,CACH,CAMF,EAAG,CACD,IAAK,UACL,MAAO,SAAiB,EAAG,CACzB,GAAI,GAAU,EAAE,gBAAkB,EAAE,cAChC,GAAS,KAAK,OAAO,CAAO,GAAK,OACjC,GAAO,GAAgB,CACzB,OAAQ,GACR,UAAW,KAAK,UAChB,OAAQ,KAAK,OAAO,CAAO,EAC3B,KAAM,KAAK,KAAK,CAAO,CACzB,CAAC,EAED,KAAK,KAAK,GAAO,UAAY,QAAS,CACpC,OAAQ,GACR,KAAM,GACN,QAAS,EACT,eAAgB,UAA0B,CACxC,AAAI,GACF,EAAQ,MAAM,EAGhB,SAAS,cAAc,KAAK,EAC5B,OAAO,aAAa,EAAE,gBAAgB,CACxC,CACF,CAAC,CACH,CAMF,EAAG,CACD,IAAK,gBACL,MAAO,SAAuB,EAAS,CACrC,MAAO,IAAkB,SAAU,CAAO,CAC5C,CAMF,EAAG,CACD,IAAK,gBACL,MAAO,SAAuB,EAAS,CACrC,GAAI,GAAW,GAAkB,SAAU,CAAO,EAElD,GAAI,EACF,MAAO,UAAS,cAAc,CAAQ,CAE1C,CAQF,EAAG,CACD,IAAK,cAML,MAAO,SAAqB,EAAS,CACnC,MAAO,IAAkB,OAAQ,CAAO,CAC1C,CAKF,EAAG,CACD,IAAK,UACL,MAAO,UAAmB,
CACxB,KAAK,SAAS,QAAQ,CACxB,CACF,CAAC,EAAG,CAAC,CACH,IAAK,OACL,MAAO,SAAc,EAAQ,CAC3B,GAAI,GAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAChF,UAAW,SAAS,IACtB,EACA,MAAO,GAAa,EAAQ,CAAO,CACrC,CAOF,EAAG,CACD,IAAK,MACL,MAAO,SAAa,EAAQ,CAC1B,MAAO,GAAY,CAAM,CAC3B,CAOF,EAAG,CACD,IAAK,cACL,MAAO,UAAuB,CAC5B,GAAI,GAAS,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAAC,OAAQ,KAAK,EAC3F,EAAU,MAAO,IAAW,SAAW,CAAC,CAAM,EAAI,EAClD,GAAU,CAAC,CAAC,SAAS,sBACzB,SAAQ,QAAQ,SAAU,GAAQ,CAChC,GAAU,IAAW,CAAC,CAAC,SAAS,sBAAsB,EAAM,CAC9D,CAAC,EACM,EACT,CACF,CAAC,CAAC,EAEK,CACT,EAAG,EAAqB,CAAE,EAEO,GAAa,EAExC,EAEA,IACC,SAAS,EAAQ,CAExB,GAAI,GAAqB,EAKzB,GAAI,MAAO,UAAY,aAAe,CAAC,QAAQ,UAAU,QAAS,CAC9D,GAAI,GAAQ,QAAQ,UAEpB,EAAM,QAAU,EAAM,iBACN,EAAM,oBACN,EAAM,mBACN,EAAM,kBACN,EAAM,qBAC1B,CASA,WAAkB,EAAS,EAAU,CACjC,KAAO,GAAW,EAAQ,WAAa,GAAoB,CACvD,GAAI,MAAO,GAAQ,SAAY,YAC3B,EAAQ,QAAQ,CAAQ,EAC1B,MAAO,GAET,EAAU,EAAQ,UACtB,CACJ,CAEA,EAAO,QAAU,CAGX,EAEA,IACC,SAAS,EAAQ,EAA0B,EAAqB,CAEvE,GAAI,GAAU,EAAoB,GAAG,EAYrC,WAAmB,EAAS,EAAU,EAAM,EAAU,EAAY,CAC9D,GAAI,GAAa,EAAS,MAAM,KAAM,SAAS,EAE/C,SAAQ,iBAAiB,EAAM,EAAY,CAAU,EAE9C,CACH,QAAS,UAAW,CAChB,EAAQ,oBAAoB,EAAM,EAAY,CAAU,CAC5D,CACJ,CACJ,CAYA,WAAkB,EAAU,EAAU,EAAM,EAAU,EAAY,CAE9D,MAAI,OAAO,GAAS,kBAAqB,WAC9B,EAAU,MAAM,KAAM,SAAS,EAItC,MAAO,IAAS,WAGT,EAAU,KAAK,KAAM,QAAQ,EAAE,MAAM,KAAM,SAAS,EAI3D,OAAO,IAAa,UACpB,GAAW,SAAS,iBAAiB,CAAQ,GAI1C,MAAM,UAAU,IAAI,KAAK,EAAU,SAAU,EAAS,CACzD,MAAO,GAAU,EAAS,EAAU,EAAM,EAAU,CAAU,CAClE,CAAC,EACL,CAWA,WAAkB,EAAS,EAAU,EAAM,EAAU,CACjD,MAAO,UAAS,EAAG,CACf,EAAE,eAAiB,EAAQ,EAAE,OAAQ,CAAQ,EAEzC,EAAE,gBACF,EAAS,KAAK,EAAS,CAAC,CAEhC,CACJ,CAEA,EAAO,QAAU,CAGX,EAEA,IACC,SAAS,EAAyB,EAAS,CAQlD,EAAQ,KAAO,SAAS,EAAO,CAC3B,MAAO,KAAU,QACV,YAAiB,cACjB,EAAM,WAAa,CAC9B,EAQA,EAAQ,SAAW,SAAS,EAAO,CAC/B,GAAI,GAAO,OAAO,UAAU,SAAS,KAAK,CAAK,EAE/C,MAAO,KAAU,QACT,KAAS,qBAAuB,IAAS,4BACzC,UAAY,IACZ,GAAM,SAAW,GAAK,EAAQ,KAAK,EAAM,EAAE,EACvD,EAQA,EAAQ,OAAS,SAAS,EAAO,CAC7B,MAAO,OAAO,IAAU,UACjB,YAAiB,OAC5B,EAQA,EAAQ,GAAK,SAAS,EAAO,CACzB,GAAI,GAAO,OAAO,UAAU,SAAS,KAAK,CAAK,EAE/C,MAAO,KAAS,mBACpB,CAGM,EAEA,IACC,SAAS,EAAQ,EAA0B,EAAqB,CAEvE,GAAI,GAAK,EAAoB,GAAG,EAC5B,EAAW,EAAoB,GAAG,EAWtC,WAAgB,EAAQ,EAAM,EAAU,CACpC,GAAI,CAAC,GAAU,CAAC,GAAQ,CAAC,EACrB,KAAM,IAAI,OAAM,4BAA4B,EAGhD,GAAI,CAAC,EAAG,OAAO,CAAI,EACf,KAAM,IAAI,WAAU,kCAAkC,EAG1D,GAAI,CAAC,EAAG,GAAG,CAAQ,EACf,KAAM,IAAI,WAAU,mCAAmC,EAG3D,GAAI,EAAG,KAAK,CAAM,EACd,MAAO,GAAW,EAAQ,EAAM,CAAQ,EAEvC,GAAI,EAAG,SAAS,CAAM,EACvB,MAAO,GAAe,EAAQ,EAAM,CAAQ,EAE3C,GAAI,EAAG,OAAO,CAAM,EACrB,MAAO,GAAe,EAAQ,EAAM,CAAQ,EAG5C,KAAM,IAAI,WAAU,2EAA2E,CAEvG,CAWA,WAAoB,EAAM,EAAM,EAAU,CACtC,SAAK,iBAAiB,EAAM,CAAQ,EAE7B,CACH,QAAS,UAAW,CAChB,EAAK,oBAAoB,EAAM,CAAQ,CAC3C,CACJ,CACJ,CAWA,WAAwB,EAAU,EAAM,EAAU,CAC9C,aAAM,UAAU,QAAQ,KAAK,EAAU,SAAS,EAAM,CAClD,EAAK,iBAAiB,EAAM,CAAQ,CACxC,CAAC,EAEM,CACH,QAAS,UAAW,CAChB,MAAM,UAAU,QAAQ,KAAK,EAAU,SAAS,EAAM,CAClD,EAAK,oBAAoB,EAAM,CAAQ,CAC3C,CAAC,CACL,CACJ,CACJ,CAWA,WAAwB,EAAU,EAAM,EAAU,CAC9C,MAAO,GAAS,SAAS,KAAM,EAAU,EAAM,CAAQ,CAC3D,CAEA,EAAO,QAAU,CAGX,EAEA,IACC,SAAS,EAAQ,CAExB,WAAgB,EAAS,CACrB,GAAI,GAEJ,GAAI,EAAQ,WAAa,SACrB,EAAQ,MAAM,EAEd,EAAe,EAAQ,cAElB,EAAQ,WAAa,SAAW,EAAQ,WAAa,WAAY,CACtE,GAAI,GAAa,EAAQ,aAAa,UAAU,EAEhD,AAAK,GACD,EAAQ,aAAa,WAAY,EAAE,EAGvC,EAAQ,OAAO,EACf,EAAQ,kBAAkB,EAAG,EAAQ,MAAM,MAAM,EAE5C,GACD,EAAQ,gBAAgB,UAAU,EAGtC,EAAe,EAAQ,KAC3B,KACK,CACD,AAAI,EAAQ,aAAa,iBAAiB,GACtC,EAAQ,MAAM,EAGlB,GAAI,GAAY,OAAO,aAAa,EAChC,EAAQ,SAAS,YAAY,EAEjC,EAAM,mBAAmB,CAAO,EAChC,EAAU,gBAAgB,EAC1B,EAAU,SAAS,CAAK,EAExB,EAAe,EAAU,SAAS,CACtC,CAEA,MAAO,EACX,CAEA,EAAO,QAAU,CAGX,EAEA,IACC,SAAS,EAAQ,CAExB,YAAc,CAGd,CAEA,EAAE,UAAY,CACZ,GAAI,SAAU,EAAM,EAAU,EAAK,CACjC,GAAI,G
AAI,KAAK,GAAM,MAAK,EAAI,CAAC,GAE7B,MAAC,GAAE,IAAU,GAAE,GAAQ,CAAC,IAAI,KAAK,CAC/B,GAAI,EACJ,IAAK,CACP,CAAC,EAEM,IACT,EAEA,KAAM,SAAU,EAAM,EAAU,EAAK,CACnC,GAAI,GAAO,KACX,YAAqB,CACnB,EAAK,IAAI,EAAM,CAAQ,EACvB,EAAS,MAAM,EAAK,SAAS,CAC/B,CAEA,SAAS,EAAI,EACN,KAAK,GAAG,EAAM,EAAU,CAAG,CACpC,EAEA,KAAM,SAAU,EAAM,CACpB,GAAI,GAAO,CAAC,EAAE,MAAM,KAAK,UAAW,CAAC,EACjC,EAAW,OAAK,GAAM,MAAK,EAAI,CAAC,IAAI,IAAS,CAAC,GAAG,MAAM,EACvD,EAAI,EACJ,EAAM,EAAO,OAEjB,IAAK,EAAG,EAAI,EAAK,IACf,EAAO,GAAG,GAAG,MAAM,EAAO,GAAG,IAAK,CAAI,EAGxC,MAAO,KACT,EAEA,IAAK,SAAU,EAAM,EAAU,CAC7B,GAAI,GAAI,KAAK,GAAM,MAAK,EAAI,CAAC,GACzB,EAAO,EAAE,GACT,EAAa,CAAC,EAElB,GAAI,GAAQ,EACV,OAAS,GAAI,EAAG,EAAM,EAAK,OAAQ,EAAI,EAAK,IAC1C,AAAI,EAAK,GAAG,KAAO,GAAY,EAAK,GAAG,GAAG,IAAM,GAC9C,EAAW,KAAK,EAAK,EAAE,EAQ7B,MAAC,GAAW,OACR,EAAE,GAAQ,EACV,MAAO,GAAE,GAEN,IACT,CACF,EAEA,EAAO,QAAU,EACjB,EAAO,QAAQ,YAAc,CAGvB,CAEI,EAGI,EAA2B,CAAC,EAGhC,WAA6B,EAAU,CAEtC,GAAG,EAAyB,GAC3B,MAAO,GAAyB,GAAU,QAG3C,GAAI,GAAS,EAAyB,GAAY,CAGjD,QAAS,CAAC,CACX,EAGA,SAAoB,GAAU,EAAQ,EAAO,QAAS,CAAmB,EAGlE,EAAO,OACf,CAIA,MAAC,WAAW,CAEX,EAAoB,EAAI,SAAS,EAAQ,CACxC,GAAI,GAAS,GAAU,EAAO,WAC7B,UAAW,CAAE,MAAO,GAAO,OAAY,EACvC,UAAW,CAAE,MAAO,EAAQ,EAC7B,SAAoB,EAAE,EAAQ,CAAE,EAAG,CAAO,CAAC,EACpC,CACR,CACD,EAAE,EAGD,UAAW,CAEX,EAAoB,EAAI,SAAS,EAAS,EAAY,CACrD,OAAQ,KAAO,GACd,AAAG,EAAoB,EAAE,EAAY,CAAG,GAAK,CAAC,EAAoB,EAAE,EAAS,CAAG,GAC/E,OAAO,eAAe,EAAS,EAAK,CAAE,WAAY,GAAM,IAAK,EAAW,EAAK,CAAC,CAGjF,CACD,EAAE,EAGD,UAAW,CACX,EAAoB,EAAI,SAAS,EAAK,EAAM,CAAE,MAAO,QAAO,UAAU,eAAe,KAAK,EAAK,CAAI,CAAG,CACvG,EAAE,EAMK,EAAoB,GAAG,CAC/B,EAAG,EACX,OACD,CAAC,IC32BD;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,GAeA,GAAI,IAAkB,UAOtB,GAAO,QAAU,GAUjB,YAAoB,EAAQ,CAC1B,GAAI,GAAM,GAAK,EACX,EAAQ,GAAgB,KAAK,CAAG,EAEpC,GAAI,CAAC,EACH,MAAO,GAGT,GAAI,GACA,EAAO,GACP,EAAQ,EACR,EAAY,EAEhB,IAAK,EAAQ,EAAM,MAAO,EAAQ,EAAI,OAAQ,IAAS,CACrD,OAAQ,EAAI,WAAW,CAAK,OACrB,IACH,EAAS,SACT,UACG,IACH,EAAS,QACT,UACG,IACH,EAAS,QACT,UACG,IACH,EAAS,OACT,UACG,IACH,EAAS,OACT,cAEA,SAGJ,AAAI,IAAc,GAChB,IAAQ,EAAI,UAAU,EAAW,CAAK,GAGxC,EAAY,EAAQ,EACpB,GAAQ,CACV,CAEA,MAAO,KAAc,EACjB,EAAO,EAAI,UAAU,EAAW,CAAK,EACrC,CACN,IC7EA,MAAM,UAAU,MAAM,OAAO,eAAe,MAAM,UAAU,OAAO,CAAC,aAAa,GAAG,MAAM,YAAY,CAAC,GAAI,GAAE,MAAM,UAAU,EAAE,EAAE,EAAE,OAAO,UAAU,EAAE,EAAE,MAAO,GAAE,MAAM,UAAU,OAAO,KAAK,KAAK,SAAS,EAAE,EAAE,CAAC,MAAO,OAAM,QAAQ,CAAC,EAAE,EAAE,KAAK,MAAM,EAAE,EAAE,KAAK,EAAE,EAAE,CAAC,CAAC,EAAE,EAAE,KAAK,CAAC,EAAE,CAAC,EAAE,CAAC,CAAC,EAAE,MAAM,UAAU,MAAM,KAAK,IAAI,CAAC,EAAE,SAAS,EAAE,CAAC,EAAE,MAAM,UAAU,SAAS,OAAO,eAAe,MAAM,UAAU,UAAU,CAAC,aAAa,GAAG,MAAM,SAAS,EAAE,CAAC,MAAO,OAAM,UAAU,IAAI,MAAM,KAAK,SAAS,EAAE,KAAK,CAAC,EAAE,SAAS,EAAE,CAAC,ECuBxf,OAAO,SCvBP,KAAK,OAAQ,MAAK,MAAM,SAAS,EAAE,EAAE,CAAC,MAAO,GAAE,GAAG,CAAC,EAAE,GAAI,SAAQ,SAAS,EAAE,EAAE,CAAC,GAAI,GAAE,GAAI,gBAAe,EAAE,CAAC,EAAE,EAAE,CAAC,EAAE,EAAE,CAAC,EAAE,EAAE,UAAU,CAAC,MAAM,CAAC,GAAG,AAAI,GAAE,OAAO,IAAI,IAAjB,EAAoB,WAAW,EAAE,WAAW,OAAO,EAAE,OAAO,IAAI,EAAE,YAAY,KAAK,UAAU,CAAC,MAAO,SAAQ,QAAQ,EAAE,YAAY,CAAC,EAAE,KAAK,UAAU,CAAC,MAAO,SAAQ,QAAQ,EAAE,YAAY,EAAE,KAAK,KAAK,KAAK,CAAC,EAAE,KAAK,UAAU,CAAC,MAAO,SAAQ,QAAQ,GAAI,MAAK,CAAC,EAAE,QAAQ,CAAC,CAAC,CAAC,EAAE,MAAM,EAAE,QAAQ,CAAC,KAAK,UAAU,CAAC,MAAO,EAAC,EAAE,QAAQ,UAAU,CAAC,MAAO,EAAC,EAAE,IAAI,SAAS,EAAE,CAAC,MAAO,GAAE,EAAE,YAAY,EAAE,EAAE,IAAI,SAAS,EAAE,CAAC,MAAO,GAAE,YAAY,GAAI,EAAC,CAAC,CAAC,CAAC,EAAE,OAAQ,KAAK,GAAE,KAAK,EAAE,QAAQ,MAAM,EAAE,EAAE,EAAE,EAAE,OAAO,UAAU,CAAC,EAAE,sBAAsB,EAAE,QAAQ,+BAA+B,SAAS,EAAE,EAAE,EAAE,CAAC,EAAE,KAAK,EAAE,EAAE,YAAY,CAAC,EAAE,EAAE,KAAK,CAAC,EAAE,CAAC,CAAC,EAAE,EAAE,GAAG,EAAE,GAAG,EAAE,GAAG,IAAI,EAAE,CAAC,CAAC,EAAE,EAAE,EAAE,CAAC,CAAC,EAAE,EAAE,QAAQ,EAAE,EAAE,gB
AAgB,AAAW,EAAE,aAAb,UAAyB,EAAE,QAAQ,EAAE,iBAAiB,EAAE,EAAE,QAAQ,EAAE,EAAE,EAAE,KAAK,EAAE,MAAM,IAAI,CAAC,CAAC,CAAC,GDyBj5B,OAAO,SEzBP,OAAkB,WACZ,CACF,aACA,YACA,UACA,cACA,WACA,cACA,aACA,eACA,gBACA,mBACA,YACA,SACA,YACA,kBACA,gBACA,WACA,oBACA,oBACA,iBACA,wBACA,gBACA,mBACA,0BACA,2BACA,WCtBE,WAAqB,EAAU,CACnC,MAAO,OAAO,IAAU,UAC1B,CCGM,YAA8B,EAAgC,CAClE,GAAM,GAAS,SAAC,EAAa,CAC3B,MAAM,KAAK,CAAQ,EACnB,EAAS,MAAQ,GAAI,OAAK,EAAG,KAC/B,EAEM,EAAW,EAAW,CAAM,EAClC,SAAS,UAAY,OAAO,OAAO,MAAM,SAAS,EAClD,EAAS,UAAU,YAAc,EAC1B,CACT,CCDO,GAAM,IAA+C,GAC1D,SAAC,EAAM,CACL,MAAA,UAA4C,EAA0B,CACpE,EAAO,IAAI,EACX,KAAK,QAAU,EACR,EAAO,OAAM;EACxB,EAAO,IAAI,SAAC,EAAK,EAAC,CAAK,MAAG,GAAI,EAAC,KAAK,EAAI,SAAQ,CAAzB,CAA6B,EAAE,KAAK;GAAM,EACzD,GACJ,KAAK,KAAO,sBACZ,KAAK,OAAS,CAChB,CARA,CAQC,ECvBC,YAAuB,EAA6B,EAAO,CAC/D,GAAI,EAAK,CACP,GAAM,GAAQ,EAAI,QAAQ,CAAI,EAC9B,GAAK,GAAS,EAAI,OAAO,EAAO,CAAC,EAErC,CCOA,GAAA,IAAA,UAAA,CAyBE,WAAoB,EAA4B,CAA5B,KAAA,gBAAA,EAdb,KAAA,OAAS,GAER,KAAA,WAAmD,KAMnD,KAAA,YAAqD,IAMV,CAQnD,SAAA,UAAA,YAAA,UAAA,aACM,EAEJ,GAAI,CAAC,KAAK,OAAQ,CAChB,KAAK,OAAS,GAGN,GAAA,GAAe,KAAI,WAC3B,GAAI,EAEF,GADA,KAAK,WAAa,KACd,MAAM,QAAQ,CAAU,MAC1B,OAAqB,GAAA,GAAA,CAAU,EAAA,EAAA,EAAA,KAAA,EAAA,CAAA,EAAA,KAAA,EAAA,EAAA,KAAA,EAAE,CAA5B,GAAM,GAAM,EAAA,MACf,EAAO,OAAO,IAAI,wGAGpB,GAAW,OAAO,IAAI,EAIlB,GAAiB,GAAqB,KAAI,gBAClD,GAAI,EAAW,CAAgB,EAC7B,GAAI,CACF,EAAgB,QACT,EAAP,CACA,EAAS,YAAa,IAAsB,EAAE,OAAS,CAAC,CAAC,EAIrD,GAAA,GAAgB,KAAI,YAC5B,GAAI,EAAa,CACf,KAAK,YAAc,SACnB,OAAwB,GAAA,GAAA,CAAW,EAAA,EAAA,EAAA,KAAA,EAAA,CAAA,EAAA,KAAA,EAAA,EAAA,KAAA,EAAE,CAAhC,GAAM,GAAS,EAAA,MAClB,GAAI,CACF,GAAc,CAAS,QAChB,EAAP,CACA,EAAS,GAAM,KAAN,EAAU,CAAA,EACnB,AAAI,YAAe,IACjB,EAAM,EAAA,EAAA,CAAA,EAAA,EAAO,CAAM,CAAA,EAAA,EAAK,EAAI,MAAM,CAAA,EAElC,EAAO,KAAK,CAAG,sGAMvB,GAAI,EACF,KAAM,IAAI,IAAoB,CAAM,EAG1C,EAoBA,EAAA,UAAA,IAAA,SAAI,EAAuB,OAGzB,GAAI,GAAY,IAAa,KAC3B,GAAI,KAAK,OAGP,GAAc,CAAQ,MACjB,CACL,GAAI,YAAoB,GAAc,CAGpC,GAAI,EAAS,QAAU,EAAS,WAAW,IAAI,EAC7C,OAEF,EAAS,WAAW,IAAI,EAE1B,AAAC,MAAK,YAAc,GAAA,KAAK,eAAW,MAAA,IAAA,OAAA,EAAI,CAAA,GAAI,KAAK,CAAQ,EAG/D,EAOQ,EAAA,UAAA,WAAR,SAAmB,EAAoB,CAC7B,GAAA,GAAe,KAAI,WAC3B,MAAO,KAAe,GAAW,MAAM,QAAQ,CAAU,GAAK,EAAW,SAAS,CAAM,CAC1F,EASQ,EAAA,UAAA,WAAR,SAAmB,EAAoB,CAC7B,GAAA,GAAe,KAAI,WAC3B,KAAK,WAAa,MAAM,QAAQ,CAAU,EAAK,GAAW,KAAK,CAAM,EAAG,GAAc,EAAa,CAAC,EAAY,CAAM,EAAI,CAC5H,EAMQ,EAAA,UAAA,cAAR,SAAsB,EAAoB,CAChC,GAAA,GAAe,KAAI,WAC3B,AAAI,IAAe,EACjB,KAAK,WAAa,KACT,MAAM,QAAQ,CAAU,GACjC,GAAU,EAAY,CAAM,CAEhC,EAgBA,EAAA,UAAA,OAAA,SAAO,EAAsC,CACnC,GAAA,GAAgB,KAAI,YAC5B,GAAe,GAAU,EAAa,CAAQ,EAE1C,YAAoB,IACtB,EAAS,cAAc,IAAI,CAE/B,EAlLc,EAAA,MAAS,UAAA,CACrB,GAAM,GAAQ,GAAI,GAClB,SAAM,OAAS,GACR,CACT,EAAE,EA+KJ,GArLA,EAuLO,GAAM,IAAqB,GAAa,MAEzC,YAAyB,EAAU,CACvC,MACE,aAAiB,KAChB,GAAS,UAAY,IAAS,EAAW,EAAM,MAAM,GAAK,EAAW,EAAM,GAAG,GAAK,EAAW,EAAM,WAAW,CAEpH,CAEA,YAAuB,EAAwC,CAC7D,AAAI,EAAW,CAAS,EACtB,EAAS,EAET,EAAU,YAAW,CAEzB,CChNO,GAAM,IAAuB,CAClC,iBAAkB,KAClB,sBAAuB,KACvB,QAAS,OACT,sCAAuC,GACvC,yBAA0B,ICErB,GAAM,IAAmC,CAG9C,WAAA,SAAW,EAAqB,EAAgB,QAAE,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,EAAA,GAAA,UAAA,GACzC,GAAA,GAAY,GAAe,SAClC,MAAI,IAAQ,MAAR,EAAU,WACL,EAAS,WAAU,MAAnB,EAAQ,EAAA,CAAY,EAAS,CAAO,EAAA,EAAK,CAAI,CAAA,CAAA,EAE/C,WAAU,MAAA,OAAA,EAAA,CAAC,EAAS,CAAO,EAAA,EAAK,CAAI,CAAA,CAAA,CAC7C,EACA,aAAY,SAAC,EAAM,CACT,GAAA,GAAa,GAAe,SACpC,MAAQ,KAAQ,KAAA,OAAR,EAAU,eAAgB,cAAc,CAAM,CACxD,EACA,SAAU,QChBN,YAA+B,EAAQ,CAC3C,GAAgB,WAAW,UAAA,CACjB,GAAA,GAAqB,GAAM,iBACnC,GAAI,EAEF,EAAiB,CAAG,MAGpB,MAAM,EAEV,CAAC,CACH,CCtBM,aAAc,CAAK,CCMlB,GAAM,IAAyB,UAAA,CAAM,MAAA,IAAmB,IAAK,OAAW,MAAS,CAA5C,EAAsE,EAO5G,YAA4B,EAAU,CAC1C,MAAO,IAAmB,IAAK,OAAW
,CAAK,CACjD,CAOM,YAA8B,EAAQ,CAC1C,MAAO,IAAmB,IAAK,EAAO,MAAS,CACjD,CAQM,YAA6B,EAAuB,EAAY,EAAU,CAC9E,MAAO,CACL,KAAI,EACJ,MAAK,EACL,MAAK,EAET,CCrCA,GAAI,IAAuD,KASrD,YAAuB,EAAc,CACzC,GAAI,GAAO,sCAAuC,CAChD,GAAM,GAAS,CAAC,GAKhB,GAJI,GACF,IAAU,CAAE,YAAa,GAAO,MAAO,IAAI,GAE7C,EAAE,EACE,EAAQ,CACJ,GAAA,GAAyB,GAAvB,EAAW,EAAA,YAAE,EAAK,EAAA,MAE1B,GADA,GAAU,KACN,EACF,KAAM,QAMV,GAAE,CAEN,CAMM,YAAuB,EAAQ,CACnC,AAAI,GAAO,uCAAyC,IAClD,IAAQ,YAAc,GACtB,GAAQ,MAAQ,EAEpB,CCrBA,GAAA,IAAA,SAAA,EAAA,CAAmC,GAAA,EAAA,CAAA,EA6BjC,WAAY,EAA6C,CAAzD,GAAA,GACE,EAAA,KAAA,IAAA,GAAO,KATC,SAAA,UAAqB,GAU7B,AAAI,EACF,GAAK,YAAc,EAGf,GAAe,CAAW,GAC5B,EAAY,IAAI,CAAI,GAGtB,EAAK,YAAc,IAEvB,CAzBO,SAAA,OAAP,SAAiB,EAAwB,EAA2B,EAAqB,CACvF,MAAO,IAAI,IAAe,EAAM,EAAO,CAAQ,CACjD,EAgCA,EAAA,UAAA,KAAA,SAAK,EAAS,CACZ,AAAI,KAAK,UACP,GAA0B,GAAiB,CAAK,EAAG,IAAI,EAEvD,KAAK,MAAM,CAAM,CAErB,EASA,EAAA,UAAA,MAAA,SAAM,EAAS,CACb,AAAI,KAAK,UACP,GAA0B,GAAkB,CAAG,EAAG,IAAI,EAEtD,MAAK,UAAY,GACjB,KAAK,OAAO,CAAG,EAEnB,EAQA,EAAA,UAAA,SAAA,UAAA,CACE,AAAI,KAAK,UACP,GAA0B,GAAuB,IAAI,EAErD,MAAK,UAAY,GACjB,KAAK,UAAS,EAElB,EAEA,EAAA,UAAA,YAAA,UAAA,CACE,AAAK,KAAK,QACR,MAAK,UAAY,GACjB,EAAA,UAAM,YAAW,KAAA,IAAA,EACjB,KAAK,YAAc,KAEvB,EAEU,EAAA,UAAA,MAAV,SAAgB,EAAQ,CACtB,KAAK,YAAY,KAAK,CAAK,CAC7B,EAEU,EAAA,UAAA,OAAV,SAAiB,EAAQ,CACvB,GAAI,CACF,KAAK,YAAY,MAAM,CAAG,UAE1B,KAAK,YAAW,EAEpB,EAEU,EAAA,UAAA,UAAV,UAAA,CACE,GAAI,CACF,KAAK,YAAY,SAAQ,UAEzB,KAAK,YAAW,EAEpB,EACF,CAAA,EApHmC,EAAY,EA2H/C,GAAM,IAAQ,SAAS,UAAU,KAEjC,YAAkD,EAAQ,EAAY,CACpE,MAAO,IAAM,KAAK,EAAI,CAAO,CAC/B,CAMA,GAAA,IAAA,UAAA,CACE,WAAoB,EAAqC,CAArC,KAAA,gBAAA,CAAwC,CAE5D,SAAA,UAAA,KAAA,SAAK,EAAQ,CACH,GAAA,GAAoB,KAAI,gBAChC,GAAI,EAAgB,KAClB,GAAI,CACF,EAAgB,KAAK,CAAK,QACnB,EAAP,CACA,GAAqB,CAAK,EAGhC,EAEA,EAAA,UAAA,MAAA,SAAM,EAAQ,CACJ,GAAA,GAAoB,KAAI,gBAChC,GAAI,EAAgB,MAClB,GAAI,CACF,EAAgB,MAAM,CAAG,QAClB,EAAP,CACA,GAAqB,CAAK,MAG5B,IAAqB,CAAG,CAE5B,EAEA,EAAA,UAAA,SAAA,UAAA,CACU,GAAA,GAAoB,KAAI,gBAChC,GAAI,EAAgB,SAClB,GAAI,CACF,EAAgB,SAAQ,QACjB,EAAP,CACA,GAAqB,CAAK,EAGhC,EACF,CAAA,EArCA,EAuCA,GAAA,SAAA,EAAA,CAAuC,GAAA,EAAA,CAAA,EACrC,WACE,EACA,EACA,EAA8B,CAHhC,GAAA,GAKE,EAAA,KAAA,IAAA,GAAO,KAEH,EACJ,GAAI,EAAW,CAAc,GAAK,CAAC,EAGjC,EAAkB,CAChB,KAAM,GAAc,KAAd,EAAkB,OACxB,MAAO,GAAK,KAAL,EAAS,OAChB,SAAU,GAAQ,KAAR,EAAY,YAEnB,CAEL,GAAI,GACJ,AAAI,GAAQ,GAAO,yBAIjB,GAAU,OAAO,OAAO,CAAc,EACtC,EAAQ,YAAc,UAAA,CAAM,MAAA,GAAK,YAAW,CAAhB,EAC5B,EAAkB,CAChB,KAAM,EAAe,MAAQ,GAAK,EAAe,KAAM,CAAO,EAC9D,MAAO,EAAe,OAAS,GAAK,EAAe,MAAO,CAAO,EACjE,SAAU,EAAe,UAAY,GAAK,EAAe,SAAU,CAAO,IAI5E,EAAkB,EAMtB,SAAK,YAAc,GAAI,IAAiB,CAAe,GACzD,CACF,MAAA,EAAA,EAzCuC,EAAU,EA2CjD,YAA8B,EAAU,CACtC,AAAI,GAAO,sCACT,GAAa,CAAK,EAIlB,GAAqB,CAAK,CAE9B,CAQA,YAA6B,EAAQ,CACnC,KAAM,EACR,CAOA,YAAmC,EAA2C,EAA2B,CAC/F,GAAA,GAA0B,GAAM,sBACxC,GAAyB,GAAgB,WAAW,UAAA,CAAM,MAAA,GAAsB,EAAc,CAAU,CAA9C,CAA+C,CAC3G,CAOO,GAAM,IAA6D,CACxE,OAAQ,GACR,KAAM,GACN,MAAO,GACP,SAAU,ICjRL,GAAM,IAA+B,UAAA,CAAM,MAAC,OAAO,SAAW,YAAc,OAAO,YAAe,cAAvD,EAAsE,ECyClH,YAAsB,EAAI,CAC9B,MAAO,EACT,CCiCM,aAAc,QAAC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACnB,MAAO,IAAc,CAAG,CAC1B,CAGM,YAA8B,EAA+B,CACjE,MAAI,GAAI,SAAW,EACV,GAGL,EAAI,SAAW,EACV,EAAI,GAGN,SAAe,EAAQ,CAC5B,MAAO,GAAI,OAAO,SAAC,EAAW,EAAuB,CAAK,MAAA,GAAG,CAAI,CAAP,EAAU,CAAY,CAClF,CACF,CC9EA,GAAA,GAAA,UAAA,CAkBE,WAAY,EAA6E,CACvF,AAAI,GACF,MAAK,WAAa,EAEtB,CA4BA,SAAA,UAAA,KAAA,SAAQ,EAAyB,CAC/B,GAAM,GAAa,GAAI,GACvB,SAAW,OAAS,KACpB,EAAW,SAAW,EACf,CACT,EA8IA,EAAA,UAAA,UAAA,SACE,EACA,EACA,EAA8B,CAHhC,GAAA,GAAA,KAKQ,EAAa,GAAa,CAAc,EAAI,EAAiB,GAAI,IAAe,EAAgB,EAAO,CAAQ,EAErH,UAAa,UAAA,CACL,GAAA,GAAuB,EAArB,EAAQ,EAAA,SAAE,EAAM,EAAA,OACxB,EAAW,IAC
T,EAGI,EAAS,KAAK,EAAY,CAAM,EAChC,EAIA,EAAK,WAAW,CAAU,EAG1B,EAAK,cAAc,CAAU,CAAC,CAEtC,CAAC,EAEM,CACT,EAGU,EAAA,UAAA,cAAV,SAAwB,EAAmB,CACzC,GAAI,CACF,MAAO,MAAK,WAAW,CAAI,QACpB,EAAP,CAIA,EAAK,MAAM,CAAG,EAElB,EA6DA,EAAA,UAAA,QAAA,SAAQ,EAA0B,EAAoC,CAAtE,GAAA,GAAA,KACE,SAAc,GAAe,CAAW,EAEjC,GAAI,GAAkB,SAAC,EAAS,EAAM,CAC3C,GAAM,GAAa,GAAI,IAAkB,CACvC,KAAM,SAAC,EAAK,CACV,GAAI,CACF,EAAK,CAAK,QACH,EAAP,CACA,EAAO,CAAG,EACV,EAAW,YAAW,EAE1B,EACA,MAAO,EACP,SAAU,EACX,EACD,EAAK,UAAU,CAAU,CAC3B,CAAC,CACH,EAGU,EAAA,UAAA,WAAV,SAAqB,EAA2B,OAC9C,MAAO,GAAA,KAAK,UAAM,MAAA,IAAA,OAAA,OAAA,EAAE,UAAU,CAAU,CAC1C,EAOA,EAAA,UAAC,IAAD,UAAA,CACE,MAAO,KACT,EA4FA,EAAA,UAAA,KAAA,UAAA,QAAK,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACH,MAAO,IAAc,CAAU,EAAE,IAAI,CACvC,EA6BA,EAAA,UAAA,UAAA,SAAU,EAAoC,CAA9C,GAAA,GAAA,KACE,SAAc,GAAe,CAAW,EAEjC,GAAI,GAAY,SAAC,EAAS,EAAM,CACrC,GAAI,GACJ,EAAK,UACH,SAAC,EAAI,CAAK,MAAC,GAAQ,CAAT,EACV,SAAC,EAAQ,CAAK,MAAA,GAAO,CAAG,CAAV,EACd,UAAA,CAAM,MAAA,GAAQ,CAAK,CAAb,CAAc,CAExB,CAAC,CACH,EA3aO,EAAA,OAAkC,SAAI,EAAwD,CACnG,MAAO,IAAI,GAAc,CAAS,CACpC,EA0aF,GA/cA,EAwdA,YAAwB,EAA+C,OACrE,MAAO,GAAA,GAAW,KAAX,EAAe,GAAO,WAAO,MAAA,IAAA,OAAA,EAAI,OAC1C,CAEA,YAAuB,EAAU,CAC/B,MAAO,IAAS,EAAW,EAAM,IAAI,GAAK,EAAW,EAAM,KAAK,GAAK,EAAW,EAAM,QAAQ,CAChG,CAEA,YAAyB,EAAU,CACjC,MAAQ,IAAS,YAAiB,KAAgB,GAAW,CAAK,GAAK,GAAe,CAAK,CAC7F,CC1eM,YAAkB,EAAW,CACjC,MAAO,GAAW,GAAM,KAAA,OAAN,EAAQ,IAAI,CAChC,CAMM,WACJ,EAAqF,CAErF,MAAO,UAAC,EAAqB,CAC3B,GAAI,GAAQ,CAAM,EAChB,MAAO,GAAO,KAAK,SAA+B,EAA2B,CAC3E,GAAI,CACF,MAAO,GAAK,EAAc,IAAI,QACvB,EAAP,CACA,KAAK,MAAM,CAAG,EAElB,CAAC,EAEH,KAAM,IAAI,WAAU,wCAAwC,CAC9D,CACF,CCjBM,WACJ,EACA,EACA,EACA,EACA,EAAuB,CAEvB,MAAO,IAAI,IAAmB,EAAa,EAAQ,EAAY,EAAS,CAAU,CACpF,CAMA,GAAA,IAAA,SAAA,EAAA,CAA2C,GAAA,EAAA,CAAA,EAiBzC,WACE,EACA,EACA,EACA,EACQ,EACA,EAAiC,CAN3C,GAAA,GAoBE,EAAA,KAAA,KAAM,CAAW,GAAC,KAfV,SAAA,WAAA,EACA,EAAA,kBAAA,EAeR,EAAK,MAAQ,EACT,SAAuC,EAAQ,CAC7C,GAAI,CACF,EAAO,CAAK,QACL,EAAP,CACA,EAAY,MAAM,CAAG,EAEzB,EACA,EAAA,UAAM,MACV,EAAK,OAAS,EACV,SAAuC,EAAQ,CAC7C,GAAI,CACF,EAAQ,CAAG,QACJ,EAAP,CAEA,EAAY,MAAM,CAAG,UAGrB,KAAK,YAAW,EAEpB,EACA,EAAA,UAAM,OACV,EAAK,UAAY,EACb,UAAA,CACE,GAAI,CACF,EAAU,QACH,EAAP,CAEA,EAAY,MAAM,CAAG,UAGrB,KAAK,YAAW,EAEpB,EACA,EAAA,UAAM,WACZ,CAEA,SAAA,UAAA,YAAA,UAAA,OACE,GAAI,CAAC,KAAK,mBAAqB,KAAK,kBAAiB,EAAI,CAC/C,GAAA,GAAW,KAAI,OACvB,EAAA,UAAM,YAAW,KAAA,IAAA,EAEjB,CAAC,GAAU,IAAA,KAAK,cAAU,MAAA,IAAA,QAAA,EAAA,KAAf,IAAI,GAEnB,EACF,CAAA,EAnF2C,EAAU,ECd9C,GAAM,IAAiD,CAG5D,SAAA,SAAS,EAAQ,CACf,GAAI,GAAU,sBACV,EAAkD,qBAC9C,EAAa,GAAsB,SAC3C,AAAI,GACF,GAAU,EAAS,sBACnB,EAAS,EAAS,sBAEpB,GAAM,GAAS,EAAQ,SAAC,EAAS,CAI/B,EAAS,OACT,EAAS,CAAS,CACpB,CAAC,EACD,MAAO,IAAI,IAAa,UAAA,CAAM,MAAA,IAAM,KAAA,OAAN,EAAS,CAAM,CAAf,CAAgB,CAChD,EACA,sBAAqB,UAAA,QAAC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACZ,GAAA,GAAa,GAAsB,SAC3C,MAAQ,KAAQ,KAAA,OAAR,EAAU,wBAAyB,uBAAsB,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAI,CAAA,CAAA,CAC3E,EACA,qBAAoB,UAAA,QAAC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACX,GAAA,GAAa,GAAsB,SAC3C,MAAQ,KAAQ,KAAA,OAAR,EAAU,uBAAwB,sBAAqB,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAI,CAAA,CAAA,CACzE,EACA,SAAU,QCrBL,GAAM,IAAuD,GAClE,SAAC,EAAM,CACL,MAAA,WAAoC,CAClC,EAAO,IAAI,EACX,KAAK,KAAO,0BACZ,KAAK,QAAU,qBACjB,CAJA,CAIC,ECXL,GAAA,GAAA,SAAA,EAAA,CAAgC,GAAA,EAAA,CAAA,EAwB9B,YAAA,CAAA,GAAA,GAEE,EAAA,KAAA,IAAA,GAAO,KAzBT,SAAA,OAAS,GAED,EAAA,iBAAyC,KAGjD,EAAA,UAA2B,CAAA,EAE3B,EAAA,UAAY,GAEZ,EAAA,SAAW,GAEX,EAAA,YAAmB,MAenB,CAGA,SAAA,UAAA,KAAA,SAAQ,EAAwB,CAC9B,GAAM,GAAU,GAAI,IAAiB,KAAM,IAAI,EAC/C,SAAQ,SAAW,EACZ,CACT,EAGU,EAAA,UAAA,eAAV,UAAA,CACE,GAAI,KAAK,OACP,KAAM,I
AAI,GAEd,EAEA,EAAA,UAAA,KAAA,SAAK,EAAQ,CAAb,GAAA,GAAA,KACE,GAAa,UAAA,SAEX,GADA,EAAK,eAAc,EACf,CAAC,EAAK,UAAW,CACnB,AAAK,EAAK,kBACR,GAAK,iBAAmB,MAAM,KAAK,EAAK,SAAS,OAEnD,OAAuB,GAAA,GAAA,EAAK,gBAAgB,EAAA,EAAA,EAAA,KAAA,EAAA,CAAA,EAAA,KAAA,EAAA,EAAA,KAAA,EAAE,CAAzC,GAAM,GAAQ,EAAA,MACjB,EAAS,KAAK,CAAK,qGAGzB,CAAC,CACH,EAEA,EAAA,UAAA,MAAA,SAAM,EAAQ,CAAd,GAAA,GAAA,KACE,GAAa,UAAA,CAEX,GADA,EAAK,eAAc,EACf,CAAC,EAAK,UAAW,CACnB,EAAK,SAAW,EAAK,UAAY,GACjC,EAAK,YAAc,EAEnB,OADQ,GAAc,EAAI,UACnB,EAAU,QACf,EAAU,MAAK,EAAI,MAAM,CAAG,EAGlC,CAAC,CACH,EAEA,EAAA,UAAA,SAAA,UAAA,CAAA,GAAA,GAAA,KACE,GAAa,UAAA,CAEX,GADA,EAAK,eAAc,EACf,CAAC,EAAK,UAAW,CACnB,EAAK,UAAY,GAEjB,OADQ,GAAc,EAAI,UACnB,EAAU,QACf,EAAU,MAAK,EAAI,SAAQ,EAGjC,CAAC,CACH,EAEA,EAAA,UAAA,YAAA,UAAA,CACE,KAAK,UAAY,KAAK,OAAS,GAC/B,KAAK,UAAY,KAAK,iBAAmB,IAC3C,EAEA,OAAA,eAAI,EAAA,UAAA,WAAQ,KAAZ,UAAA,OACE,MAAO,IAAA,KAAK,aAAS,MAAA,IAAA,OAAA,OAAA,EAAE,QAAS,CAClC,kCAGU,EAAA,UAAA,cAAV,SAAwB,EAAyB,CAC/C,YAAK,eAAc,EACZ,EAAA,UAAM,cAAa,KAAA,KAAC,CAAU,CACvC,EAGU,EAAA,UAAA,WAAV,SAAqB,EAAyB,CAC5C,YAAK,eAAc,EACnB,KAAK,wBAAwB,CAAU,EAChC,KAAK,gBAAgB,CAAU,CACxC,EAGU,EAAA,UAAA,gBAAV,SAA0B,EAA2B,CAArD,GAAA,GAAA,KACQ,EAAqC,KAAnC,EAAQ,EAAA,SAAE,EAAS,EAAA,UAAE,EAAS,EAAA,UACtC,MAAI,IAAY,EACP,GAET,MAAK,iBAAmB,KACxB,EAAU,KAAK,CAAU,EAClB,GAAI,IAAa,UAAA,CACtB,EAAK,iBAAmB,KACxB,GAAU,EAAW,CAAU,CACjC,CAAC,EACH,EAGU,EAAA,UAAA,wBAAV,SAAkC,EAA2B,CACrD,GAAA,GAAuC,KAArC,EAAQ,EAAA,SAAE,EAAW,EAAA,YAAE,EAAS,EAAA,UACxC,AAAI,EACF,EAAW,MAAM,CAAW,EACnB,GACT,EAAW,SAAQ,CAEvB,EAQA,EAAA,UAAA,aAAA,UAAA,CACE,GAAM,GAAkB,GAAI,GAC5B,SAAW,OAAS,KACb,CACT,EAxHO,EAAA,OAAkC,SAAI,EAA0B,EAAqB,CAC1F,MAAO,IAAI,IAAoB,EAAa,CAAM,CACpD,EAuHF,GA7IgC,CAAU,EAkJ1C,GAAA,IAAA,SAAA,EAAA,CAAyC,GAAA,EAAA,CAAA,EACvC,WAES,EACP,EAAsB,CAHxB,GAAA,GAKE,EAAA,KAAA,IAAA,GAAO,KAHA,SAAA,YAAA,EAIP,EAAK,OAAS,GAChB,CAEA,SAAA,UAAA,KAAA,SAAK,EAAQ,SACX,AAAA,GAAA,GAAA,KAAK,eAAW,MAAA,IAAA,OAAA,OAAA,EAAE,QAAI,MAAA,IAAA,QAAA,EAAA,KAAA,EAAG,CAAK,CAChC,EAEA,EAAA,UAAA,MAAA,SAAM,EAAQ,SACZ,AAAA,GAAA,GAAA,KAAK,eAAW,MAAA,IAAA,OAAA,OAAA,EAAE,SAAK,MAAA,IAAA,QAAA,EAAA,KAAA,EAAG,CAAG,CAC/B,EAEA,EAAA,UAAA,SAAA,UAAA,SACE,AAAA,GAAA,GAAA,KAAK,eAAW,MAAA,IAAA,OAAA,OAAA,EAAE,YAAQ,MAAA,IAAA,QAAA,EAAA,KAAA,CAAA,CAC5B,EAGU,EAAA,UAAA,WAAV,SAAqB,EAAyB,SAC5C,MAAO,GAAA,GAAA,KAAK,UAAM,MAAA,IAAA,OAAA,OAAA,EAAE,UAAU,CAAU,KAAC,MAAA,IAAA,OAAA,EAAI,EAC/C,EACF,CAAA,EA1ByC,CAAO,EC5JzC,GAAM,IAA+C,CAC1D,IAAG,UAAA,CAGD,MAAQ,IAAsB,UAAY,MAAM,IAAG,CACrD,EACA,SAAU,QCwBZ,GAAA,IAAA,SAAA,EAAA,CAAsC,GAAA,EAAA,CAAA,EAUpC,WACU,EACA,EACA,EAA6D,CAF7D,AAAA,IAAA,QAAA,GAAA,KACA,IAAA,QAAA,GAAA,KACA,IAAA,QAAA,GAAA,IAHV,GAAA,GAKE,EAAA,KAAA,IAAA,GAAO,KAJC,SAAA,YAAA,EACA,EAAA,YAAA,EACA,EAAA,mBAAA,EAZF,EAAA,QAA0B,CAAA,EAC1B,EAAA,oBAAsB,GAc5B,EAAK,oBAAsB,IAAgB,IAC3C,EAAK,YAAc,KAAK,IAAI,EAAG,CAAW,EAC1C,EAAK,YAAc,KAAK,IAAI,EAAG,CAAW,GAC5C,CAEA,SAAA,UAAA,KAAA,SAAK,EAAQ,CACL,GAAA,GAA+E,KAA7E,EAAS,EAAA,UAAE,EAAO,EAAA,QAAE,EAAmB,EAAA,oBAAE,EAAkB,EAAA,mBAAE,EAAW,EAAA,YAChF,AAAK,GACH,GAAQ,KAAK,CAAK,EAClB,CAAC,GAAuB,EAAQ,KAAK,EAAmB,IAAG,EAAK,CAAW,GAE7E,KAAK,YAAW,EAChB,EAAA,UAAM,KAAI,KAAA,KAAC,CAAK,CAClB,EAGU,EAAA,UAAA,WAAV,SAAqB,EAAyB,CAC5C,KAAK,eAAc,EACnB,KAAK,YAAW,EAQhB,OANM,GAAe,KAAK,gBAAgB,CAAU,EAE9C,EAAmC,KAAjC,EAAmB,EAAA,oBAAE,EAAO,EAAA,QAG9B,EAAO,EAAQ,MAAK,EACjB,EAAI,EAAG,EAAI,EAAK,QAAU,CAAC,EAAW,OAAQ,GAAK,EAAsB,EAAI,EACpF,EAAW,KAAK,EAAK,EAAO,EAG9B,YAAK,wBAAwB,CAAU,EAEhC,CACT,EAEQ,EAAA,UAAA,YAAR,UAAA,CACQ,GAAA,GAAoE,KAAlE,EAAW,EAAA,YAAE,EAAkB,EAAA,mBAAE,EAAO,EAAA,QAAE,EAAmB,EAAA,oBAK/D,EAAsB,GAAsB,EAAI,GAAK,EAK3D,GAJA,EAAc,KAAY,EAAqB,EAAQ,QAAU,EAAQ,OAAO,EAAG,EAAQ,OAAS,CAAkB,EAIlH,CAAC,EAAqB,CAKxB,OAJM,GAAM,E
AAmB,IAAG,EAC9B,EAAO,EAGF,EAAI,EAAG,EAAI,EAAQ,QAAW,EAAQ,IAAiB,EAAK,GAAK,EACxE,EAAO,EAET,GAAQ,EAAQ,OAAO,EAAG,EAAO,CAAC,EAEtC,EACF,CAAA,EAzEsC,CAAO,EClB7C,GAAA,IAAA,SAAA,EAAA,CAA+B,GAAA,EAAA,CAAA,EAC7B,WAAY,EAAsB,EAAmD,OACnF,GAAA,KAAA,IAAA,GAAO,IACT,CAWO,SAAA,UAAA,SAAP,SAAgB,EAAW,EAAiB,CAAjB,MAAA,KAAA,QAAA,GAAA,GAClB,IACT,EACF,CAAA,EAjB+B,EAAY,ECJpC,GAAM,IAAqC,CAGhD,YAAA,SAAY,EAAqB,EAAgB,QAAE,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,EAAA,GAAA,UAAA,GAC1C,GAAA,GAAY,GAAgB,SACnC,MAAI,IAAQ,MAAR,EAAU,YACL,EAAS,YAAW,MAApB,EAAQ,EAAA,CAAa,EAAS,CAAO,EAAA,EAAK,CAAI,CAAA,CAAA,EAEhD,YAAW,MAAA,OAAA,EAAA,CAAC,EAAS,CAAO,EAAA,EAAK,CAAI,CAAA,CAAA,CAC9C,EACA,cAAa,SAAC,EAAM,CACV,GAAA,GAAa,GAAgB,SACrC,MAAQ,KAAQ,KAAA,OAAR,EAAU,gBAAiB,eAAe,CAAM,CAC1D,EACA,SAAU,QCrBZ,GAAA,IAAA,SAAA,EAAA,CAAoC,GAAA,EAAA,CAAA,EAOlC,WAAsB,EAAqC,EAAmD,CAA9G,GAAA,GACE,EAAA,KAAA,KAAM,EAAW,CAAI,GAAC,KADF,SAAA,UAAA,EAAqC,EAAA,KAAA,EAFjD,EAAA,QAAmB,IAI7B,CAEO,SAAA,UAAA,SAAP,SAAgB,EAAW,EAAiB,CAC1C,GADyB,IAAA,QAAA,GAAA,GACrB,KAAK,OACP,MAAO,MAIT,KAAK,MAAQ,EAEb,GAAM,GAAK,KAAK,GACV,EAAY,KAAK,UAuBvB,MAAI,IAAM,MACR,MAAK,GAAK,KAAK,eAAe,EAAW,EAAI,CAAK,GAKpD,KAAK,QAAU,GAEf,KAAK,MAAQ,EAEb,KAAK,GAAK,KAAK,IAAM,KAAK,eAAe,EAAW,KAAK,GAAI,CAAK,EAE3D,IACT,EAEU,EAAA,UAAA,eAAV,SAAyB,EAA2B,EAAW,EAAiB,CAAjB,MAAA,KAAA,QAAA,GAAA,GACtD,GAAiB,YAAY,EAAU,MAAM,KAAK,EAAW,IAAI,EAAG,CAAK,CAClF,EAEU,EAAA,UAAA,eAAV,SAAyB,EAA4B,EAAS,EAAwB,CAEpF,GAF4D,IAAA,QAAA,GAAA,GAExD,GAAS,MAAQ,KAAK,QAAU,GAAS,KAAK,UAAY,GAC5D,MAAO,GAIT,GAAiB,cAAc,CAAE,CAEnC,EAMO,EAAA,UAAA,QAAP,SAAe,EAAU,EAAa,CACpC,GAAI,KAAK,OACP,MAAO,IAAI,OAAM,8BAA8B,EAGjD,KAAK,QAAU,GACf,GAAM,GAAQ,KAAK,SAAS,EAAO,CAAK,EACxC,GAAI,EACF,MAAO,GACF,AAAI,KAAK,UAAY,IAAS,KAAK,IAAM,MAc9C,MAAK,GAAK,KAAK,eAAe,KAAK,UAAW,KAAK,GAAI,IAAI,EAE/D,EAEU,EAAA,UAAA,SAAV,SAAmB,EAAU,EAAc,CACzC,GAAI,GAAmB,GACnB,EACJ,GAAI,CACF,KAAK,KAAK,CAAK,QACR,EAAP,CACA,EAAU,GAIV,EAAa,GAAQ,GAAI,OAAM,oCAAoC,EAErE,GAAI,EACF,YAAK,YAAW,EACT,CAEX,EAEA,EAAA,UAAA,YAAA,UAAA,CACE,GAAI,CAAC,KAAK,OAAQ,CACV,GAAA,GAAoB,KAAlB,EAAE,EAAA,GAAE,EAAS,EAAA,UACb,EAAY,EAAS,QAE7B,KAAK,KAAO,KAAK,MAAQ,KAAK,UAAY,KAC1C,KAAK,QAAU,GAEf,GAAU,EAAS,IAAI,EACnB,GAAM,MACR,MAAK,GAAK,KAAK,eAAe,EAAW,EAAI,IAAI,GAGnD,KAAK,MAAQ,KACb,EAAA,UAAM,YAAW,KAAA,IAAA,EAErB,EACF,CAAA,EA3IoC,EAAM,ECiB1C,GAAA,IAAA,UAAA,CAGE,WAAoB,EAAoC,EAAiC,CAAjC,AAAA,IAAA,QAAA,GAAoB,EAAU,KAAlE,KAAA,oBAAA,EAClB,KAAK,IAAM,CACb,CA6BO,SAAA,UAAA,SAAP,SAAmB,EAAqD,EAAmB,EAAS,CAA5B,MAAA,KAAA,QAAA,GAAA,GAC/D,GAAI,MAAK,oBAAuB,KAAM,CAAI,EAAE,SAAS,EAAO,CAAK,CAC1E,EAnCc,EAAA,IAAoB,GAAsB,IAoC1D,GArCA,ECpBA,GAAA,IAAA,SAAA,EAAA,CAAoC,GAAA,EAAA,CAAA,EAkBlC,WAAY,EAAgC,EAAiC,CAAjC,AAAA,IAAA,QAAA,GAAoB,GAAU,KAA1E,GAAA,GACE,EAAA,KAAA,KAAM,EAAiB,CAAG,GAAC,KAlBtB,SAAA,QAAmC,CAAA,EAOnC,EAAA,QAAmB,GAQnB,EAAA,WAAkB,QAIzB,CAEO,SAAA,UAAA,MAAP,SAAa,EAAwB,CAC3B,GAAA,GAAY,KAAI,QAExB,GAAI,KAAK,QAAS,CAChB,EAAQ,KAAK,CAAM,EACnB,OAGF,GAAI,GACJ,KAAK,QAAU,GAEf,EACE,IAAK,EAAQ,EAAO,QAAQ,EAAO,MAAO,EAAO,KAAK,EACpD,YAEM,EAAS,EAAQ,MAAK,GAIhC,GAFA,KAAK,QAAU,GAEX,EAAO,CACT,KAAQ,EAAS,EAAQ,MAAK,GAC5B,EAAO,YAAW,EAEpB,KAAM,GAEV,EACF,CAAA,EAhDoC,EAAS,EC8CtC,GAAM,IAAiB,GAAI,IAAe,EAAW,EAK/C,GAAQ,GClDrB,GAAA,IAAA,SAAA,EAAA,CAA6C,GAAA,EAAA,CAAA,EAC3C,WAAsB,EAA8C,EAAmD,CAAvH,GAAA,GACE,EAAA,KAAA,KAAM,EAAW,CAAI,GAAC,KADF,SAAA,UAAA,EAA8C,EAAA,KAAA,GAEpE,CAEU,SAAA,UAAA,eAAV,SAAyB,EAAoC,EAAU,EAAiB,CAEtF,MAFqE,KAAA,QAAA,GAAA,GAEjE,IAAU,MAAQ,EAAQ,EACrB,EAAA,UAAM,eAAc,KAAA,KAAC,EAAW,EAAI,CAAK,EAGlD,GAAU,QAAQ,KAAK,IAAI,EAIpB,EAAU,YAAe,GAAU,WAAa,GAAuB,sBAAsB,UAAA,CAAM,MAAA,GAAU,MAAM,MAAS,CAAzB,CAA0B,GACtI,EACU,EAAA,UAAA,eAAV,SAAyB,EAAoC,EAAU,EAAiB,CAItF,GAJqE,IAAA,QAAA,GAAA,GAIhE,GAAS,MA
AQ,EAAQ,GAAO,GAAS,MAAQ,KAAK,MAAQ,EACjE,MAAO,GAAA,UAAM,eAAc,KAAA,KAAC,EAAW,EAAI,CAAK,EAKlD,AAAK,EAAU,QAAQ,KAAK,SAAC,EAAM,CAAK,MAAA,GAAO,KAAO,CAAd,CAAgB,GACtD,IAAuB,qBAAqB,CAAE,EAC9C,EAAU,WAAa,OAI3B,EACF,CAAA,EAlC6C,EAAW,ECFxD,GAAA,IAAA,SAAA,EAAA,CAA6C,GAAA,EAAA,CAAA,EAA7C,YAAA,+CAkCA,CAjCS,SAAA,UAAA,MAAP,SAAa,EAAyB,CACpC,KAAK,QAAU,GAUf,GAAM,GAAU,KAAK,WACrB,KAAK,WAAa,OAEV,GAAA,GAAY,KAAI,QACpB,EACJ,EAAS,GAAU,EAAQ,MAAK,EAEhC,EACE,IAAK,EAAQ,EAAO,QAAQ,EAAO,MAAO,EAAO,KAAK,EACpD,YAEM,GAAS,EAAQ,KAAO,EAAO,KAAO,GAAW,EAAQ,MAAK,GAIxE,GAFA,KAAK,QAAU,GAEX,EAAO,CACT,KAAQ,GAAS,EAAQ,KAAO,EAAO,KAAO,GAAW,EAAQ,MAAK,GACpE,EAAO,YAAW,EAEpB,KAAM,GAEV,EACF,CAAA,EAlC6C,EAAc,ECgCpD,GAAM,IAA0B,GAAI,IAAwB,EAAoB,EC8BhF,GAAM,GAAQ,GAAI,GAAkB,SAAC,EAAU,CAAK,MAAA,GAAW,SAAQ,CAAnB,CAAqB,EC9D1E,YAAsB,EAAU,CACpC,MAAO,IAAS,EAAW,EAAM,QAAQ,CAC3C,CCDA,YAAiB,EAAQ,CACvB,MAAO,GAAI,EAAI,OAAS,EAC1B,CAEM,YAA4B,EAAW,CAC3C,MAAO,GAAW,GAAK,CAAI,CAAC,EAAI,EAAK,IAAG,EAAK,MAC/C,CAEM,YAAuB,EAAW,CACtC,MAAO,IAAY,GAAK,CAAI,CAAC,EAAI,EAAK,IAAG,EAAK,MAChD,CAEM,YAAoB,EAAa,EAAoB,CACzD,MAAO,OAAO,IAAK,CAAI,GAAM,SAAW,EAAK,IAAG,EAAM,CACxD,CClBO,GAAM,IAAe,SAAI,EAAM,CAAwB,MAAA,IAAK,MAAO,GAAE,QAAW,UAAY,MAAO,IAAM,UAAlD,ECMxD,YAAoB,EAAU,CAClC,MAAO,GAAW,GAAK,KAAA,OAAL,EAAO,IAAI,CAC/B,CCHM,YAA8B,EAAU,CAC5C,MAAO,GAAW,EAAM,GAAkB,CAC5C,CCLM,YAA6B,EAAQ,CACzC,MAAO,QAAO,eAAiB,EAAW,GAAG,KAAA,OAAH,EAAM,OAAO,cAAc,CACvE,CCAM,YAA2C,EAAU,CAEzD,MAAO,IAAI,WACT,gBACE,KAAU,MAAQ,MAAO,IAAU,SAAW,oBAAsB,IAAI,EAAK,KAAG,0HACwC,CAE9H,CCXM,aAA2B,CAC/B,MAAI,OAAO,SAAW,YAAc,CAAC,OAAO,SACnC,aAGF,OAAO,QAChB,CAEO,GAAM,IAAW,GAAiB,ECJnC,YAAqB,EAAU,CACnC,MAAO,GAAW,GAAK,KAAA,OAAL,EAAQ,GAAgB,CAC5C,CCHM,YAAuD,EAAqC,mGAC1F,EAAS,EAAe,UAAS,2DAGX,MAAA,CAAA,EAAA,GAAM,EAAO,KAAI,CAAE,CAAA,eAArC,GAAkB,EAAA,KAAA,EAAhB,EAAK,EAAA,MAAE,EAAI,EAAA,KACf,iBAAA,CAAA,EAAA,CAAA,SACF,MAAA,CAAA,EAAA,EAAA,KAAA,CAAA,qBAEI,CAAM,CAAA,SAAZ,MAAA,CAAA,EAAA,EAAA,KAAA,CAAA,SAAA,SAAA,KAAA,mCAGF,SAAO,YAAW,6BAIhB,YAAkC,EAAQ,CAG9C,MAAO,GAAW,GAAG,KAAA,OAAH,EAAK,SAAS,CAClC,CCRM,WAAuB,EAAyB,CACpD,GAAI,YAAiB,GACnB,MAAO,GAET,GAAI,GAAS,KAAM,CACjB,GAAI,GAAoB,CAAK,EAC3B,MAAO,IAAsB,CAAK,EAEpC,GAAI,GAAY,CAAK,EACnB,MAAO,IAAc,CAAK,EAE5B,GAAI,GAAU,CAAK,EACjB,MAAO,IAAY,CAAK,EAE1B,GAAI,GAAgB,CAAK,EACvB,MAAO,IAAkB,CAAK,EAEhC,GAAI,GAAW,CAAK,EAClB,MAAO,IAAa,CAAK,EAE3B,GAAI,GAAqB,CAAK,EAC5B,MAAO,IAAuB,CAAK,EAIvC,KAAM,IAAiC,CAAK,CAC9C,CAMM,YAAmC,EAAQ,CAC/C,MAAO,IAAI,GAAW,SAAC,EAAyB,CAC9C,GAAM,GAAM,EAAI,IAAkB,EAClC,GAAI,EAAW,EAAI,SAAS,EAC1B,MAAO,GAAI,UAAU,CAAU,EAGjC,KAAM,IAAI,WAAU,gEAAgE,CACtF,CAAC,CACH,CASM,YAA2B,EAAmB,CAClD,MAAO,IAAI,GAAW,SAAC,EAAyB,CAU9C,OAAS,GAAI,EAAG,EAAI,EAAM,QAAU,CAAC,EAAW,OAAQ,IACtD,EAAW,KAAK,EAAM,EAAE,EAE1B,EAAW,SAAQ,CACrB,CAAC,CACH,CAEM,YAAyB,EAAuB,CACpD,MAAO,IAAI,GAAW,SAAC,EAAyB,CAC9C,EACG,KACC,SAAC,EAAK,CACJ,AAAK,EAAW,QACd,GAAW,KAAK,CAAK,EACrB,EAAW,SAAQ,EAEvB,EACA,SAAC,EAAQ,CAAK,MAAA,GAAW,MAAM,CAAG,CAApB,CAAqB,EAEpC,KAAK,KAAM,EAAoB,CACpC,CAAC,CACH,CAEM,YAA0B,EAAqB,CACnD,MAAO,IAAI,GAAW,SAAC,EAAyB,aAC9C,OAAoB,GAAA,GAAA,CAAQ,EAAA,EAAA,EAAA,KAAA,EAAA,CAAA,EAAA,KAAA,EAAA,EAAA,KAAA,EAAE,CAAzB,GAAM,GAAK,EAAA,MAEd,GADA,EAAW,KAAK,CAAK,EACjB,EAAW,OACb,yGAGJ,EAAW,SAAQ,CACrB,CAAC,CACH,CAEM,YAA+B,EAA+B,CAClE,MAAO,IAAI,GAAW,SAAC,EAAyB,CAC9C,GAAQ,EAAe,CAAU,EAAE,MAAM,SAAC,EAAG,CAAK,MAAA,GAAW,MAAM,CAAG,CAApB,CAAqB,CACzE,CAAC,CACH,CAEM,YAAoC,EAAqC,CAC7E,MAAO,IAAkB,GAAmC,CAAc,CAAC,CAC7E,CAEA,YAA0B,EAAiC,EAAyB,uIACxD,EAAA,GAAA,CAAa,gFAIrC,GAJe,EAAK,EAAA,MACpB,EAAW,KAAK,CAAK,EAGjB,EAAW,OACb,MAAA,CAAA,CAAA,6RAGJ,SAAW,SAAQ,WC/Gf,YACJ,EACA,EACA,EACA,EACA,EAAc,CADd,AAAA,IAAA,QAAA,GAAA,GACA,IAAA,QAAA,GAAA,IAEA,GAAM,GAAuB,EAAU,SAAS,UAAA,CAC9C,EAAI,EAC
J,AAAI,EACF,EAAmB,IAAI,KAAK,SAAS,KAAM,CAAK,CAAC,EAEjD,KAAK,YAAW,CAEpB,EAAG,CAAK,EAIR,GAFA,EAAmB,IAAI,CAAoB,EAEvC,CAAC,EAKH,MAAO,EAEX,CCeM,YAAuB,EAA0B,EAAS,CAAT,MAAA,KAAA,QAAA,GAAA,GAC9C,EAAQ,SAAC,EAAQ,EAAU,CAChC,EAAO,UACL,EACE,EACA,SAAC,EAAK,CAAK,MAAA,IAAgB,EAAY,EAAW,UAAA,CAAM,MAAA,GAAW,KAAK,CAAK,CAArB,EAAwB,CAAK,CAA1E,EACX,UAAA,CAAM,MAAA,IAAgB,EAAY,EAAW,UAAA,CAAM,MAAA,GAAW,SAAQ,CAAnB,EAAuB,CAAK,CAAzE,EACN,SAAC,EAAG,CAAK,MAAA,IAAgB,EAAY,EAAW,UAAA,CAAM,MAAA,GAAW,MAAM,CAAG,CAApB,EAAuB,CAAK,CAAzE,CAA0E,CACpF,CAEL,CAAC,CACH,CCPM,YAAyB,EAA0B,EAAiB,CAAjB,MAAA,KAAA,QAAA,GAAA,GAChD,EAAQ,SAAC,EAAQ,EAAU,CAChC,EAAW,IAAI,EAAU,SAAS,UAAA,CAAM,MAAA,GAAO,UAAU,CAAU,CAA3B,EAA8B,CAAK,CAAC,CAC9E,CAAC,CACH,CC7DM,YAAgC,EAA6B,EAAwB,CACzF,MAAO,GAAU,CAAK,EAAE,KAAK,GAAY,CAAS,EAAG,GAAU,CAAS,CAAC,CAC3E,CCFM,YAA6B,EAAuB,EAAwB,CAChF,MAAO,GAAU,CAAK,EAAE,KAAK,GAAY,CAAS,EAAG,GAAU,CAAS,CAAC,CAC3E,CCJM,YAA2B,EAAqB,EAAwB,CAC5E,MAAO,IAAI,GAAc,SAAC,EAAU,CAElC,GAAI,GAAI,EAER,MAAO,GAAU,SAAS,UAAA,CACxB,AAAI,IAAM,EAAM,OAGd,EAAW,SAAQ,EAInB,GAAW,KAAK,EAAM,IAAI,EAIrB,EAAW,QACd,KAAK,SAAQ,EAGnB,CAAC,CACH,CAAC,CACH,CCfM,YAA8B,EAAoB,EAAwB,CAC9E,MAAO,IAAI,GAAc,SAAC,EAAU,CAClC,GAAI,GAKJ,UAAgB,EAAY,EAAW,UAAA,CAErC,EAAY,EAAc,IAAgB,EAE1C,GACE,EACA,EACA,UAAA,OACM,EACA,EACJ,GAAI,CAEF,AAAC,EAAkB,EAAS,KAAI,EAA7B,EAAK,EAAA,MAAE,EAAI,EAAA,WACP,EAAP,CAEA,EAAW,MAAM,CAAG,EACpB,OAGF,AAAI,EAKF,EAAW,SAAQ,EAGnB,EAAW,KAAK,CAAK,CAEzB,EACA,EACA,EAAI,CAER,CAAC,EAMM,UAAA,CAAM,MAAA,GAAW,GAAQ,KAAA,OAAR,EAAU,MAAM,GAAK,EAAS,OAAM,CAA/C,CACf,CAAC,CACH,CCvDM,YAAmC,EAAyB,EAAwB,CACxF,GAAI,CAAC,EACH,KAAM,IAAI,OAAM,yBAAyB,EAE3C,MAAO,IAAI,GAAc,SAAC,EAAU,CAClC,GAAgB,EAAY,EAAW,UAAA,CACrC,GAAM,GAAW,EAAM,OAAO,eAAc,EAC5C,GACE,EACA,EACA,UAAA,CACE,EAAS,KAAI,EAAG,KAAK,SAAC,EAAM,CAC1B,AAAI,EAAO,KAGT,EAAW,SAAQ,EAEnB,EAAW,KAAK,EAAO,KAAK,CAEhC,CAAC,CACH,EACA,EACA,EAAI,CAER,CAAC,CACH,CAAC,CACH,CCzBM,YAAwC,EAA8B,EAAwB,CAClG,MAAO,IAAsB,GAAmC,CAAK,EAAG,CAAS,CACnF,CCoBM,YAAuB,EAA2B,EAAwB,CAC9E,GAAI,GAAS,KAAM,CACjB,GAAI,GAAoB,CAAK,EAC3B,MAAO,IAAmB,EAAO,CAAS,EAE5C,GAAI,GAAY,CAAK,EACnB,MAAO,IAAc,EAAO,CAAS,EAEvC,GAAI,GAAU,CAAK,EACjB,MAAO,IAAgB,EAAO,CAAS,EAEzC,GAAI,GAAgB,CAAK,EACvB,MAAO,IAAsB,EAAO,CAAS,EAE/C,GAAI,GAAW,CAAK,EAClB,MAAO,IAAiB,EAAO,CAAS,EAE1C,GAAI,GAAqB,CAAK,EAC5B,MAAO,IAA2B,EAAO,CAAS,EAGtD,KAAM,IAAiC,CAAK,CAC9C,CCoDM,YAAkB,EAA2B,EAAyB,CAC1E,MAAO,GAAY,GAAU,EAAO,CAAS,EAAI,EAAU,CAAK,CAClE,CCxBM,YAAY,QAAI,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACpB,GAAM,GAAY,GAAa,CAAI,EACnC,MAAO,IAAK,EAAa,CAAS,CACpC,CCsCM,YAAqB,EAA0B,EAAyB,CAC5E,GAAM,GAAe,EAAW,CAAmB,EAAI,EAAsB,UAAA,CAAM,MAAA,EAAA,EAC7E,EAAO,SAAC,EAA6B,CAAK,MAAA,GAAW,MAAM,EAAY,CAAE,CAA/B,EAChD,MAAO,IAAI,GAAW,EAAY,SAAC,EAAU,CAAK,MAAA,GAAU,SAAS,EAAa,EAAG,CAAU,CAA7C,EAAiD,CAAI,CACzG,CCrHM,YAAsB,EAAU,CACpC,MAAO,aAAiB,OAAQ,CAAC,MAAM,CAAY,CACrD,CCsCM,WAAoB,EAAyC,EAAa,CAC9E,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAEhC,GAAI,GAAQ,EAGZ,EAAO,UACL,EAAyB,EAAY,SAAC,EAAQ,CAG5C,EAAW,KAAK,EAAQ,KAAK,EAAS,EAAO,GAAO,CAAC,CACvD,CAAC,CAAC,CAEN,CAAC,CACH,CC1DQ,GAAA,IAAY,MAAK,QAEzB,YAA2B,EAA6B,EAAW,CAC/D,MAAO,IAAQ,CAAI,EAAI,EAAE,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAI,CAAA,CAAA,EAAI,EAAG,CAAI,CAChD,CAMM,YAAiC,EAA2B,CAC9D,MAAO,GAAI,SAAA,EAAI,CAAI,MAAA,IAAY,EAAI,CAAI,CAApB,CAAqB,CAC5C,CCfQ,GAAA,IAAY,MAAK,QACjB,GAA0D,OAAM,eAArC,GAA+B,OAAM,UAAlB,GAAY,OAAM,KAQlE,YAA+D,EAAuB,CAC1F,GAAI,EAAK,SAAW,EAAG,CACrB,GAAM,GAAQ,EAAK,GACnB,GAAI,GAAQ,CAAK,EACf,MAAO,CAAE,KAAM,EAAO,KAAM,IAAI,EAElC,GAAI,GAAO,CAAK,EAAG,CACjB,GAAM,GAAO,GAAQ,CAAK,EAC1B,MAAO,CACL,KAAM,EAAK,IAAI,SAAC,EAAG,CAAK,MAAA,GAAM,EAAN,CAAU,EAClC,KAAI,IAKV,MAAO,CAAE,KAAM,EAAa,KAAM,IAAI,CACxC,CAEA,YAAgB,EAAQ,CAC
tB,MAAO,IAAO,MAAO,IAAQ,UAAY,GAAe,CAAG,IAAM,EACnE,CC7BM,YAAuB,EAAgB,EAAa,CACxD,MAAO,GAAK,OAAO,SAAC,EAAQ,EAAK,EAAC,CAAK,MAAE,GAAO,GAAO,EAAO,GAAK,CAA5B,EAAqC,CAAA,CAAS,CACvF,CCsMM,YAAuB,QAAoC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GAC/D,GAAM,GAAY,GAAa,CAAI,EAC7B,EAAiB,GAAkB,CAAI,EAEvC,EAA8B,GAAqB,CAAI,EAA/C,EAAW,EAAA,KAAE,EAAI,EAAA,KAE/B,GAAI,EAAY,SAAW,EAIzB,MAAO,IAAK,CAAA,EAAI,CAAgB,EAGlC,GAAM,GAAS,GAAI,GACjB,GACE,EACA,EACA,EAEI,SAAC,EAAM,CAAK,MAAA,IAAa,EAAM,CAAM,CAAzB,EAEZ,EAAQ,CACb,EAGH,MAAO,GAAkB,EAAO,KAAK,GAAiB,CAAc,CAAC,EAAsB,CAC7F,CAEM,YACJ,EACA,EACA,EAAiD,CAAjD,MAAA,KAAA,QAAA,GAAA,IAEO,SAAC,EAA2B,CAGjC,GACE,EACA,UAAA,CAaE,OAZQ,GAAW,EAAW,OAExB,EAAS,GAAI,OAAM,CAAM,EAG3B,EAAS,EAIT,EAAuB,aAGlB,EAAC,CACR,GACE,EACA,UAAA,CACE,GAAM,GAAS,GAAK,EAAY,GAAI,CAAgB,EAChD,EAAgB,GACpB,EAAO,UACL,EACE,EACA,SAAC,EAAK,CAEJ,EAAO,GAAK,EACP,GAEH,GAAgB,GAChB,KAEG,GAGH,EAAW,KAAK,EAAe,EAAO,MAAK,CAAE,CAAC,CAElD,EACA,UAAA,CACE,AAAK,EAAE,GAGL,EAAW,SAAQ,CAEvB,CAAC,CACF,CAEL,EACA,CAAU,GAjCL,EAAI,EAAG,EAAI,EAAQ,MAAnB,CAAC,CAoCZ,EACA,CAAU,CAEd,CACF,CAMA,YAAuB,EAAsC,EAAqB,EAA0B,CAC1G,AAAI,EACF,GAAgB,EAAc,EAAW,CAAO,EAEhD,EAAO,CAEX,CC3RM,YACJ,EACA,EACA,EACA,EACA,EACA,EACA,EACA,EAAgC,CAGhC,GAAM,GAAc,CAAA,EAEhB,EAAS,EAET,EAAQ,EAER,EAAa,GAKX,EAAgB,UAAA,CAIpB,AAAI,GAAc,CAAC,EAAO,QAAU,CAAC,GACnC,EAAW,SAAQ,CAEvB,EAGM,EAAY,SAAC,EAAQ,CAAK,MAAC,GAAS,EAAa,EAAW,CAAK,EAAI,EAAO,KAAK,CAAK,CAA5D,EAE1B,EAAa,SAAC,EAAQ,CAI1B,GAAU,EAAW,KAAK,CAAY,EAItC,IAKA,GAAI,GAAgB,GAGpB,EAAU,EAAQ,EAAO,GAAO,CAAC,EAAE,UACjC,EACE,EACA,SAAC,EAAU,CAGT,GAAY,MAAZ,EAAe,CAAU,EAEzB,AAAI,EAGF,EAAU,CAAiB,EAG3B,EAAW,KAAK,CAAU,CAE9B,EACA,UAAA,CAGE,EAAgB,EAClB,EAEA,OACA,UAAA,CAIE,GAAI,EAKF,GAAI,CAIF,IAKA,qBACE,GAAM,GAAgB,EAAO,MAAK,EAIlC,AAAI,EACF,GAAgB,EAAY,EAAmB,UAAA,CAAM,MAAA,GAAW,CAAa,CAAxB,CAAyB,EAE9E,EAAW,CAAa,GARrB,EAAO,QAAU,EAAS,OAYjC,EAAa,QACN,EAAP,CACA,EAAW,MAAM,CAAG,EAG1B,CAAC,CACF,CAEL,EAGA,SAAO,UACL,EAAyB,EAAY,EAAW,UAAA,CAE9C,EAAa,GACb,EAAa,CACf,CAAC,CAAC,EAKG,UAAA,CACL,GAAmB,MAAnB,EAAmB,CACrB,CACF,CClEM,YACJ,EACA,EACA,EAA6B,CAE7B,MAFA,KAAA,QAAA,GAAA,KAEI,EAAW,CAAc,EAEpB,GAAS,SAAC,EAAG,EAAC,CAAK,MAAA,GAAI,SAAC,EAAQ,EAAU,CAAK,MAAA,GAAe,EAAG,EAAG,EAAG,CAAE,CAA1B,CAA2B,EAAE,EAAU,EAAQ,EAAG,CAAC,CAAC,CAAC,CAAjF,EAAoF,CAAU,EAC/G,OAAO,IAAmB,UACnC,GAAa,GAGR,EAAQ,SAAC,EAAQ,EAAU,CAAK,MAAA,IAAe,EAAQ,EAAY,EAAS,CAAU,CAAtD,CAAuD,EAChG,CChCM,YAAmD,EAA6B,CAA7B,MAAA,KAAA,QAAA,GAAA,KAChD,GAAS,GAAU,CAAU,CACtC,CCNM,aAAmB,CACvB,MAAO,IAAS,CAAC,CACnB,CCmDM,aAAgB,QAAC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACrB,MAAO,IAAS,EAAG,GAAK,EAAM,GAAa,CAAI,CAAC,CAAC,CACnD,CC9DM,WAAgD,EAA0B,CAC9E,MAAO,IAAI,GAA+B,SAAC,EAAU,CACnD,EAAU,EAAiB,CAAE,EAAE,UAAU,CAAU,CACrD,CAAC,CACH,CChDA,GAAM,IAA0B,CAAC,cAAe,gBAAgB,EAC1D,GAAqB,CAAC,mBAAoB,qBAAqB,EAC/D,GAAgB,CAAC,KAAM,KAAK,EA8N5B,WACJ,EACA,EACA,EACA,EAAsC,CAMtC,GAJI,EAAW,CAAO,GACpB,GAAiB,EACjB,EAAU,QAER,EACF,MAAO,GAAa,EAAQ,EAAW,CAA+B,EAAE,KAAK,GAAiB,CAAc,CAAC,EAUzG,GAAA,GAAA,EAEJ,GAAc,CAAM,EAChB,GAAmB,IAAI,SAAC,EAAU,CAAK,MAAA,UAAC,EAAY,CAAK,MAAA,GAAO,GAAY,EAAW,EAAS,CAA+B,CAAtE,CAAlB,CAAyF,EAElI,GAAwB,CAAM,EAC5B,GAAwB,IAAI,GAAwB,EAAQ,CAAS,CAAC,EACtE,GAA0B,CAAM,EAChC,GAAc,IAAI,GAAwB,EAAQ,CAAS,CAAC,EAC5D,CAAA,EAAE,CAAA,EATD,EAAG,EAAA,GAAE,EAAM,EAAA,GAgBlB,GAAI,CAAC,GACC,GAAY,CAAM,EACpB,MAAO,IAAS,SAAC,EAAc,CAAK,MAAA,GAAU,EAAW,EAAW,CAA+B,CAA/D,CAAgE,EAClG,EAAU,CAAM,CAAC,EAOvB,GAAI,CAAC,EACH,KAAM,IAAI,WAAU,sBAAsB,EAG5C,MAAO,IAAI,GAAc,SAAC,EAAU,CAIlC,GAAM,GAAU,UAAA,QAAC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GAAmB,MAAA,GAAW,KAAK,EAAI,EAAK,OAAS,EAAO,EAAK,EAAE,CAAhD,EAEpC,SAAI,CAAO,EAEJ,UAAA,CAAM,M
AAA,GAAQ,CAAO,CAAf,CACf,CAAC,CACH,CASA,YAAiC,EAAa,EAAiB,CAC7D,MAAO,UAAC,EAAkB,CAAK,MAAA,UAAC,EAAY,CAAK,MAAA,GAAO,GAAY,EAAW,CAAO,CAArC,CAAlB,CACjC,CAOA,YAAiC,EAAW,CAC1C,MAAO,GAAW,EAAO,WAAW,GAAK,EAAW,EAAO,cAAc,CAC3E,CAOA,YAAmC,EAAW,CAC5C,MAAO,GAAW,EAAO,EAAE,GAAK,EAAW,EAAO,GAAG,CACvD,CAOA,YAAuB,EAAW,CAChC,MAAO,GAAW,EAAO,gBAAgB,GAAK,EAAW,EAAO,mBAAmB,CACrF,CC/LM,YACJ,EACA,EACA,EAAsC,CAEtC,MAAI,GACK,GAAoB,EAAY,CAAa,EAAE,KAAK,GAAiB,CAAc,CAAC,EAGtF,GAAI,GAAoB,SAAC,EAAU,CACxC,GAAM,GAAU,UAAA,QAAC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GAAc,MAAA,GAAW,KAAK,EAAE,SAAW,EAAI,EAAE,GAAK,CAAC,CAAzC,EACzB,EAAW,EAAW,CAAO,EACnC,MAAO,GAAW,CAAa,EAAI,UAAA,CAAM,MAAA,GAAc,EAAS,CAAQ,CAA/B,EAAmC,MAC9E,CAAC,CACH,CCtBM,YACJ,EACA,EACA,EAAyC,CAFzC,AAAA,IAAA,QAAA,GAAA,GAEA,IAAA,QAAA,GAAA,IAIA,GAAI,GAAmB,GAEvB,MAAI,IAAuB,MAIzB,CAAI,GAAY,CAAmB,EACjC,EAAY,EAIZ,EAAmB,GAIhB,GAAI,GAAW,SAAC,EAAU,CAI/B,GAAI,GAAM,GAAY,CAAO,EAAI,CAAC,EAAU,EAAW,IAAG,EAAK,EAE/D,AAAI,EAAM,GAER,GAAM,GAIR,GAAI,GAAI,EAGR,MAAO,GAAU,SAAS,UAAA,CACxB,AAAK,EAAW,QAEd,GAAW,KAAK,GAAG,EAEnB,AAAI,GAAK,EAGP,KAAK,SAAS,OAAW,CAAgB,EAGzC,EAAW,SAAQ,EAGzB,EAAG,CAAG,CACR,CAAC,CACH,CChGM,YAAe,QAAC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACpB,GAAM,GAAY,GAAa,CAAI,EAC7B,EAAa,GAAU,EAAM,GAAQ,EACrC,EAAU,EAChB,MAAO,AAAC,GAAQ,OAGZ,EAAQ,SAAW,EAEnB,EAAU,EAAQ,EAAE,EAEpB,GAAS,CAAU,EAAE,GAAK,EAAS,CAAS,CAAC,EAL7C,CAMN,CCjEO,GAAM,IAAQ,GAAI,GAAkB,EAAI,ECpCvC,GAAA,IAAY,MAAK,QAMnB,YAA4B,EAAiB,CACjD,MAAO,GAAK,SAAW,GAAK,GAAQ,EAAK,EAAE,EAAI,EAAK,GAAM,CAC5D,CCoDM,WAAoB,EAAiD,EAAa,CACtF,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAEhC,GAAI,GAAQ,EAIZ,EAAO,UAIL,EAAyB,EAAY,SAAC,EAAK,CAAK,MAAA,GAAU,KAAK,EAAS,EAAO,GAAO,GAAK,EAAW,KAAK,CAAK,CAAhE,CAAiE,CAAC,CAEtH,CAAC,CACH,CCxBM,aAAa,QAAC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GAClB,GAAM,GAAiB,GAAkB,CAAI,EAEvC,EAAU,GAAe,CAAI,EAEnC,MAAO,GAAQ,OACX,GAAI,GAAsB,SAAC,EAAU,CAGnC,GAAI,GAAuB,EAAQ,IAAI,UAAA,CAAM,MAAA,CAAA,CAAA,CAAE,EAK3C,EAAY,EAAQ,IAAI,UAAA,CAAM,MAAA,EAAA,CAAK,EAGvC,EAAW,IAAI,UAAA,CACb,EAAU,EAAY,IACxB,CAAC,EAKD,mBAAS,EAAW,CAClB,EAAU,EAAQ,EAAY,EAAE,UAC9B,EACE,EACA,SAAC,EAAK,CAKJ,GAJA,EAAQ,GAAa,KAAK,CAAK,EAI3B,EAAQ,MAAM,SAAC,EAAM,CAAK,MAAA,GAAO,MAAP,CAAa,EAAG,CAC5C,GAAM,GAAc,EAAQ,IAAI,SAAC,EAAM,CAAK,MAAA,GAAO,MAAK,CAAZ,CAAe,EAE3D,EAAW,KAAK,EAAiB,EAAc,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAM,CAAA,CAAA,EAAI,CAAM,EAI/D,EAAQ,KAAK,SAAC,EAAQ,EAAC,CAAK,MAAA,CAAC,EAAO,QAAU,EAAU,EAA5B,CAA8B,GAC5D,EAAW,SAAQ,EAGzB,EACA,UAAA,CAGE,EAAU,GAAe,GAIzB,CAAC,EAAQ,GAAa,QAAU,EAAW,SAAQ,CACrD,CAAC,CACF,GA9BI,EAAc,EAAG,CAAC,EAAW,QAAU,EAAc,EAAQ,OAAQ,MAArE,CAAW,EAmCpB,MAAO,WAAA,CACL,EAAU,EAAY,IACxB,CACF,CAAC,EACD,CACN,CC9DM,YAAmB,EAAoD,CAC3E,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAChC,GAAI,GAAW,GACX,EAAsB,KACtB,EAA6C,KAC7C,EAAa,GAEX,EAAc,UAAA,CAGlB,GAFA,GAAkB,MAAlB,EAAoB,YAAW,EAC/B,EAAqB,KACjB,EAAU,CACZ,EAAW,GACX,GAAM,GAAQ,EACd,EAAY,KACZ,EAAW,KAAK,CAAK,EAEvB,GAAc,EAAW,SAAQ,CACnC,EAEM,EAAkB,UAAA,CACtB,EAAqB,KACrB,GAAc,EAAW,SAAQ,CACnC,EAEA,EAAO,UACL,EACE,EACA,SAAC,EAAK,CACJ,EAAW,GACX,EAAY,EACP,GACH,EAAU,EAAiB,CAAK,CAAC,EAAE,UAChC,EAAqB,EAAyB,EAAY,EAAa,CAAe,CAAE,CAG/F,EACA,UAAA,CACE,EAAa,GACZ,EAAC,GAAY,CAAC,GAAsB,EAAmB,SAAW,EAAW,SAAQ,CACxF,CAAC,CACF,CAEL,CAAC,CACH,CC3CM,YAAuB,EAAkB,EAAyC,CAAzC,MAAA,KAAA,QAAA,GAAA,IACtC,GAAM,UAAA,CAAM,MAAA,IAAM,EAAU,CAAS,CAAzB,CAA0B,CAC/C,CCEM,YAAyB,EAAoB,EAAsC,CAAtC,MAAA,KAAA,QAAA,GAAA,MAGjD,EAAmB,GAAgB,KAAhB,EAAoB,EAEhC,EAAQ,SAAC,EAAQ,EAAU,CAChC,GAAI,GAAiB,CAAA,EACjB,EAAQ,EAEZ,EAAO,UACL,EACE,EACA,SAAC,EAAK,aACA,EAAuB,KAK3B,AAAI,IAAU,IAAsB,GAClC,EAAQ,KAAK,CAAA,CAAE,MAIjB,OAAqB,GAAA,GAAA,CAAO,EAAA,EAAA,EAAA,KAAA,EAAA,CAA
A,EAAA,KAAA,EAAA,EAAA,KAAA,EAAE,CAAzB,GAAM,GAAM,EAAA,MACf,EAAO,KAAK,CAAK,EAMb,GAAc,EAAO,QACvB,GAAS,GAAM,KAAN,EAAU,CAAA,EACnB,EAAO,KAAK,CAAM,qGAItB,GAAI,MAIF,OAAqB,GAAA,GAAA,CAAM,EAAA,EAAA,EAAA,KAAA,EAAA,CAAA,EAAA,KAAA,EAAA,EAAA,KAAA,EAAE,CAAxB,GAAM,GAAM,EAAA,MACf,GAAU,EAAS,CAAM,EACzB,EAAW,KAAK,CAAM,oGAG5B,EACA,UAAA,aAGE,OAAqB,GAAA,GAAA,CAAO,EAAA,EAAA,EAAA,KAAA,EAAA,CAAA,EAAA,KAAA,EAAA,EAAA,KAAA,EAAE,CAAzB,GAAM,GAAM,EAAA,MACf,EAAW,KAAK,CAAM,oGAExB,EAAW,SAAQ,CACrB,EAEA,OACA,UAAA,CAEE,EAAU,IACZ,CAAC,CACF,CAEL,CAAC,CACH,CCbM,YACJ,EAAgD,CAEhD,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAChC,GAAI,GAAgC,KAChC,EAAY,GACZ,EAEJ,EAAW,EAAO,UAChB,EAAyB,EAAY,OAAW,OAAW,SAAC,EAAG,CAC7D,EAAgB,EAAU,EAAS,EAAK,GAAW,CAAQ,EAAE,CAAM,CAAC,CAAC,EACrE,AAAI,EACF,GAAS,YAAW,EACpB,EAAW,KACX,EAAc,UAAU,CAAU,GAIlC,EAAY,EAEhB,CAAC,CAAC,EAGA,GAMF,GAAS,YAAW,EACpB,EAAW,KACX,EAAe,UAAU,CAAU,EAEvC,CAAC,CACH,CC/HM,YACJ,EACA,EACA,EACA,EACA,EAAqC,CAErC,MAAO,UAAC,EAAuB,EAA2B,CAIxD,GAAI,GAAW,EAIX,EAAa,EAEb,EAAQ,EAGZ,EAAO,UACL,EACE,EACA,SAAC,EAAK,CAEJ,GAAM,GAAI,IAEV,EAAQ,EAEJ,EAAY,EAAO,EAAO,CAAC,EAIzB,GAAW,GAAO,GAGxB,GAAc,EAAW,KAAK,CAAK,CACrC,EAGA,GACG,UAAA,CACC,GAAY,EAAW,KAAK,CAAK,EACjC,EAAW,SAAQ,CACrB,CAAE,CACL,CAEL,CACF,CCnCM,aAAuB,QAAO,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GAClC,GAAM,GAAiB,GAAkB,CAAI,EAC7C,MAAO,GACH,GAAK,GAAa,MAAA,OAAA,EAAA,CAAA,EAAA,EAAK,CAAoC,CAAA,CAAA,EAAG,GAAiB,CAAc,CAAC,EAC9F,EAAQ,SAAC,EAAQ,EAAU,CACzB,GAAiB,EAAA,CAAE,CAAM,EAAA,EAAK,GAAe,CAAI,CAAC,CAAA,CAAA,EAAG,CAAU,CACjE,CAAC,CACP,CCUM,aAA2B,QAC/B,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GAEA,MAAO,IAAa,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAY,CAAA,CAAA,CACtC,CC+BM,YACJ,EACA,EAA6G,CAE7G,MAAO,GAAW,CAAc,EAAI,GAAS,EAAS,EAAgB,CAAC,EAAI,GAAS,EAAS,CAAC,CAChG,CCpBM,YAA0B,EAAiB,EAAyC,CAAzC,MAAA,KAAA,QAAA,GAAA,IACxC,EAAQ,SAAC,EAAQ,EAAU,CAChC,GAAI,GAAkC,KAClC,EAAsB,KACtB,EAA0B,KAExB,EAAO,UAAA,CACX,GAAI,EAAY,CAEd,EAAW,YAAW,EACtB,EAAa,KACb,GAAM,GAAQ,EACd,EAAY,KACZ,EAAW,KAAK,CAAK,EAEzB,EACA,YAAqB,CAInB,GAAM,GAAa,EAAY,EACzB,EAAM,EAAU,IAAG,EACzB,GAAI,EAAM,EAAY,CAEpB,EAAa,KAAK,SAAS,OAAW,EAAa,CAAG,EACtD,EAAW,IAAI,CAAU,EACzB,OAGF,EAAI,CACN,CAEA,EAAO,UACL,EACE,EACA,SAAC,EAAQ,CACP,EAAY,EACZ,EAAW,EAAU,IAAG,EAGnB,GACH,GAAa,EAAU,SAAS,EAAc,CAAO,EACrD,EAAW,IAAI,CAAU,EAE7B,EACA,UAAA,CAGE,EAAI,EACJ,EAAW,SAAQ,CACrB,EAEA,OACA,UAAA,CAEE,EAAY,EAAa,IAC3B,CAAC,CACF,CAEL,CAAC,CACH,CCpFM,YAA+B,EAAe,CAClD,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAChC,GAAI,GAAW,GACf,EAAO,UACL,EACE,EACA,SAAC,EAAK,CACJ,EAAW,GACX,EAAW,KAAK,CAAK,CACvB,EACA,UAAA,CACE,AAAK,GACH,EAAW,KAAK,CAAa,EAE/B,EAAW,SAAQ,CACrB,CAAC,CACF,CAEL,CAAC,CACH,CCXM,YAAkB,EAAa,CACnC,MAAO,IAAS,EAEZ,UAAA,CAAM,MAAA,EAAA,EACN,EAAQ,SAAC,EAAQ,EAAU,CACzB,GAAI,GAAO,EACX,EAAO,UACL,EAAyB,EAAY,SAAC,EAAK,CAIzC,AAAI,EAAE,GAAQ,GACZ,GAAW,KAAK,CAAK,EAIjB,GAAS,GACX,EAAW,SAAQ,EAGzB,CAAC,CAAC,CAEN,CAAC,CACP,CC9BM,aAAwB,CAC5B,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAChC,EAAO,UAAU,EAAyB,EAAY,EAAI,CAAC,CAC7D,CAAC,CACH,CCCM,WAAmB,EAAQ,CAC/B,MAAO,GAAI,UAAA,CAAM,MAAA,EAAA,CAAK,CACxB,CC2BM,YACJ,EACA,EAAmC,CAEnC,MAAI,GAEK,SAAC,EAAqB,CAC3B,MAAA,IAAO,EAAkB,KAAK,GAAK,CAAC,EAAG,GAAc,CAAE,EAAG,EAAO,KAAK,GAAU,CAAqB,CAAC,CAAC,CAAvG,EAGG,GAAS,SAAC,EAAO,EAAK,CAAK,MAAA,GAAsB,EAAO,CAAK,EAAE,KAAK,GAAK,CAAC,EAAG,EAAM,CAAK,CAAC,CAA9D,CAA+D,CACnG,CCxBM,YAAmB,EAAoB,EAAyC,CAAzC,AAAA,IAAA,QAAA,GAAA,IAC3C,GAAM,GAAW,GAAM,EAAK,CAAS,EACrC,MAAO,IAAU,UAAA,CAAM,MAAA,EAAA,CAAQ,CACjC,CC4EM,WACJ,EACA,EAA0D,CAA1D,MAAA,KAAA,QAAA,GAA+B,IAK/B,EAAa,GAAU,KAAV,EAAc,GAEpB,EAAQ,SAAC,EAAQ,EAAU,CAGhC,GAAI,GAEA,EAAQ,GAEZ,EAAO,UACL,EAAyB,EAAY,SAAC,EAAK,CAEzC,GAAM,GAAa,EAAY,CAAK,EAKpC,AAAI,IAAS,CAAC,EAAY,EAAa,CAAU,IAM/C,G
AAQ,GACR,EAAc,EAGd,EAAW,KAAK,CAAK,EAEzB,CAAC,CAAC,CAEN,CAAC,CACH,CAEA,YAAwB,EAAQ,EAAM,CACpC,MAAO,KAAM,CACf,CCnHM,WAAwD,EAAQ,EAAuC,CAC3G,MAAO,GAAqB,SAAC,EAAM,EAAI,CAAK,MAAA,GAAU,EAAQ,EAAE,GAAM,EAAE,EAAI,EAAI,EAAE,KAAS,EAAE,EAAjD,CAAqD,CACnG,CCLM,aAAiB,QAAI,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACzB,MAAO,UAAC,EAAqB,CAAK,MAAA,IAAO,EAAQ,EAAE,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAM,CAAA,CAAA,CAAA,CAA3B,CACpC,CCHM,WAAsB,EAAoB,CAC9C,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAGhC,GAAI,CACF,EAAO,UAAU,CAAU,UAE3B,EAAW,IAAI,CAAQ,EAE3B,CAAC,CACH,CC9BM,YAAsB,EAAa,CACvC,MAAO,IAAS,EACZ,UAAA,CAAM,MAAA,EAAA,EACN,EAAQ,SAAC,EAAQ,EAAU,CAKzB,GAAI,GAAc,CAAA,EAClB,EAAO,UACL,EACE,EACA,SAAC,EAAK,CAEJ,EAAO,KAAK,CAAK,EAGjB,EAAQ,EAAO,QAAU,EAAO,MAAK,CACvC,EACA,UAAA,aAGE,OAAoB,GAAA,GAAA,CAAM,EAAA,EAAA,EAAA,KAAA,EAAA,CAAA,EAAA,KAAA,EAAA,EAAA,KAAA,EAAE,CAAvB,GAAM,GAAK,EAAA,MACd,EAAW,KAAK,CAAK,oGAEvB,EAAW,SAAQ,CACrB,EAEA,OACA,UAAA,CAEE,EAAS,IACX,CAAC,CACF,CAEL,CAAC,CACP,CC1DM,aAAe,QAAI,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACvB,GAAM,GAAY,GAAa,CAAI,EAC7B,EAAa,GAAU,EAAM,GAAQ,EAC3C,SAAO,GAAe,CAAI,EAEnB,EAAQ,SAAC,EAAQ,EAAU,CAChC,GAAS,CAAU,EAAE,GAAI,EAAA,CAAE,CAAM,EAAA,EAAM,CAA6B,CAAA,EAAG,CAAS,CAAC,EAAE,UAAU,CAAU,CACzG,CAAC,CACH,CCcM,aAAmB,QACvB,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GAEA,MAAO,IAAK,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAY,CAAA,CAAA,CAC9B,CCmEM,YAAoB,EAAqC,OACzD,EAAQ,IACR,EAEJ,MAAI,IAAiB,MACnB,CAAI,MAAO,IAAkB,SACxB,GAA4B,EAAa,MAAzC,EAAK,IAAA,OAAG,IAAQ,EAAE,EAAU,EAAa,OAE5C,EAAQ,GAIL,GAAS,EACZ,UAAA,CAAM,MAAA,EAAA,EACN,EAAQ,SAAC,EAAQ,EAAU,CACzB,GAAI,GAAQ,EACR,EAEE,EAAc,UAAA,CAGlB,GAFA,GAAS,MAAT,EAAW,YAAW,EACtB,EAAY,KACR,GAAS,KAAM,CACjB,GAAM,GAAW,MAAO,IAAU,SAAW,GAAM,CAAK,EAAI,EAAU,EAAM,CAAK,CAAC,EAC5E,EAAqB,EAAyB,EAAY,UAAA,CAC9D,EAAmB,YAAW,EAC9B,EAAiB,CACnB,CAAC,EACD,EAAS,UAAU,CAAkB,MAErC,GAAiB,CAErB,EAEM,EAAoB,UAAA,CACxB,GAAI,GAAY,GAChB,EAAY,EAAO,UACjB,EAAyB,EAAY,OAAW,UAAA,CAC9C,AAAI,EAAE,EAAQ,EACZ,AAAI,EACF,EAAW,EAEX,EAAY,GAGd,EAAW,SAAQ,CAEvB,CAAC,CAAC,EAGA,GACF,EAAW,CAEf,EAEA,EAAiB,CACnB,CAAC,CACP,CC7HM,YAAoB,EAAyB,CACjD,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAChC,GAAI,GAAW,GACX,EAAsB,KAC1B,EAAO,UACL,EAAyB,EAAY,SAAC,EAAK,CACzC,EAAW,GACX,EAAY,CACd,CAAC,CAAC,EAEJ,EAAS,UACP,EACE,EACA,UAAA,CACE,GAAI,EAAU,CACZ,EAAW,GACX,GAAM,GAAQ,EACd,EAAY,KACZ,EAAW,KAAK,CAAK,EAEzB,EACA,EAAI,CACL,CAEL,CAAC,CACH,CCgBM,YAAwB,EAA6D,EAAQ,CAMjG,MAAO,GAAQ,GAAc,EAAa,EAAW,UAAU,QAAU,EAAG,EAAI,CAAC,CACnF,CCiDM,YAAmB,EAA4B,CAA5B,AAAA,IAAA,QAAA,GAAA,CAAA,GACf,GAAA,GAAgH,EAAO,UAAvH,EAAS,IAAA,OAAG,UAAA,CAAM,MAAA,IAAI,EAAJ,EAAgB,EAAE,EAA4E,EAAO,aAAnF,EAAY,IAAA,OAAG,GAAI,EAAE,EAAuD,EAAO,gBAA9D,EAAe,IAAA,OAAG,GAAI,EAAE,EAA+B,EAAO,oBAAtC,EAAmB,IAAA,OAAG,GAAI,EAUnH,MAAO,UAAC,EAAa,CACnB,GAAI,GAAuC,KACvC,EAAuC,KACvC,EAAiC,KACjC,EAAW,EACX,EAAe,GACf,EAAa,GAEX,EAAc,UAAA,CAClB,GAAe,MAAf,EAAiB,YAAW,EAC5B,EAAkB,IACpB,EAGM,EAAQ,UAAA,CACZ,EAAW,EACX,EAAa,EAAU,KACvB,EAAe,EAAa,EAC9B,EACM,EAAsB,UAAA,CAG1B,GAAM,GAAO,EACb,EAAK,EACL,GAAI,MAAJ,EAAM,YAAW,CACnB,EAEA,MAAO,GAAc,SAAC,EAAQ,GAAU,CACtC,IACI,CAAC,GAAc,CAAC,GAClB,EAAW,EAOb,GAAM,IAAQ,EAAU,GAAO,KAAP,EAAW,EAAS,EAO5C,GAAW,IAAI,UAAA,CACb,IAKI,IAAa,GAAK,CAAC,GAAc,CAAC,GACpC,GAAkB,GAAY,EAAqB,CAAmB,EAE1E,CAAC,EAID,GAAK,UAAU,EAAU,EAEpB,GAMH,GAAa,GAAI,IAAe,CAC9B,KAAM,SAAC,GAAK,CAAK,MAAA,IAAK,KAAK,EAAK,CAAf,EACjB,MAAO,SAAC,GAAG,CACT,EAAa,GACb,EAAW,EACX,EAAkB,GAAY,EAAO,EAAc,EAAG,EACtD,GAAK,MAAM,EAAG,CAChB,EACA,SAAU,UAAA,CACR,EAAe,GACf,EAAW,EACX,EAAkB,GAAY,EAAO,CAAe,EACpD,GAAK,SAAQ,CACf,EACD,EACD,GAAK,CAAM,EAAE,UAAU,CAAU,EAErC,CAAC,EAAE,CAAa,CAClB,CACF,CAEA,YACE,EACA,EAA+C,QAC/C,GAAA,CAAA,EAAA,EAAA,E
AAA,EAAA,UAAA,OAAA,IAAA,EAAA,EAAA,GAAA,UAAA,GAEA,MAAI,KAAO,GACT,GAAK,EAEE,MAGL,IAAO,GACF,KAGF,EAAE,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAI,CAAA,CAAA,EACd,KAAK,GAAK,CAAC,CAAC,EACZ,UAAU,UAAA,CAAM,MAAA,GAAK,CAAL,CAAO,CAC5B,CCzGM,WACJ,EACA,EACA,EAAyB,WAErB,EACA,EAAW,GACf,MAAI,IAAsB,MAAO,IAAuB,SACnD,GAA8E,EAAkB,WAAhG,EAAU,IAAA,OAAG,IAAQ,EAAE,EAAuD,EAAkB,WAAzE,EAAU,IAAA,OAAG,IAAQ,EAAE,EAAgC,EAAkB,SAAlD,EAAQ,IAAA,OAAG,GAAK,EAAE,EAAc,EAAkB,WAEnG,EAAa,GAAkB,KAAlB,EAAsB,IAE9B,GAAS,CACd,UAAW,UAAA,CAAM,MAAA,IAAI,IAAc,EAAY,EAAY,CAAS,CAAnD,EACjB,aAAc,GACd,gBAAiB,GACjB,oBAAqB,EACtB,CACH,CCvIM,YAAkB,EAAa,CACnC,MAAO,GAAO,SAAC,EAAG,EAAK,CAAK,MAAA,IAAS,CAAT,CAAc,CAC5C,CCWM,YAAuB,EAAyB,CACpD,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAChC,GAAI,GAAS,GAEP,EAAiB,EACrB,EACA,UAAA,CACE,GAAc,MAAd,EAAgB,YAAW,EAC3B,EAAS,EACX,EACA,EAAI,EAGN,EAAU,CAAQ,EAAE,UAAU,CAAc,EAE5C,EAAO,UAAU,EAAyB,EAAY,SAAC,EAAK,CAAK,MAAA,IAAU,EAAW,KAAK,CAAK,CAA/B,CAAgC,CAAC,CACpG,CAAC,CACH,CCRM,YAAmB,QAAO,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GAC9B,GAAM,GAAY,GAAa,CAAM,EACrC,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAIhC,AAAC,GAAY,GAAO,EAAQ,EAAQ,CAAS,EAAI,GAAO,EAAQ,CAAM,GAAG,UAAU,CAAU,CAC/F,CAAC,CACH,CCmBM,WACJ,EACA,EAA6G,CAE7G,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAChC,GAAI,GAAyD,KACzD,EAAQ,EAER,EAAa,GAIX,EAAgB,UAAA,CAAM,MAAA,IAAc,CAAC,GAAmB,EAAW,SAAQ,CAArD,EAE5B,EAAO,UACL,EACE,EACA,SAAC,EAAK,CAEJ,GAAe,MAAf,EAAiB,YAAW,EAC5B,GAAI,GAAa,EACX,EAAa,IAEnB,EAAU,EAAQ,EAAO,CAAU,CAAC,EAAE,UACnC,EAAkB,EACjB,EAIA,SAAC,EAAU,CAAK,MAAA,GAAW,KAAK,EAAiB,EAAe,EAAO,EAAY,EAAY,GAAY,EAAI,CAAU,CAAzG,EAChB,UAAA,CAIE,EAAkB,KAClB,EAAa,CACf,CAAC,CACD,CAEN,EACA,UAAA,CACE,EAAa,GACb,EAAa,CACf,CAAC,CACF,CAEL,CAAC,CACH,CC1EM,YACJ,EACA,EAA6G,CAE7G,MAAO,GAAW,CAAc,EAAI,EAAU,UAAA,CAAM,MAAA,EAAA,EAAiB,CAAc,EAAI,EAAU,UAAA,CAAM,MAAA,EAAA,CAAe,CACxH,CClBM,YAAuB,EAA8B,CACzD,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAChC,EAAU,CAAQ,EAAE,UAAU,EAAyB,EAAY,UAAA,CAAM,MAAA,GAAW,SAAQ,CAAnB,EAAuB,EAAI,CAAC,EACrG,CAAC,EAAW,QAAU,EAAO,UAAU,CAAU,CACnD,CAAC,CACH,CCIM,YAAuB,EAAiD,EAAiB,CAAjB,MAAA,KAAA,QAAA,GAAA,IACrE,EAAQ,SAAC,EAAQ,EAAU,CAChC,GAAI,GAAQ,EACZ,EAAO,UACL,EAAyB,EAAY,SAAC,EAAK,CACzC,GAAM,GAAS,EAAU,EAAO,GAAO,EACvC,AAAC,IAAU,IAAc,EAAW,KAAK,CAAK,EAC9C,CAAC,GAAU,EAAW,SAAQ,CAChC,CAAC,CAAC,CAEN,CAAC,CACH,CCyCM,WACJ,EACA,EACA,EAA8B,CAK9B,GAAM,GACJ,EAAW,CAAc,GAAK,GAAS,EAElC,CAAE,KAAM,EAA2E,MAAK,EAAE,SAAQ,CAAA,EACnG,EAEN,MAAO,GACH,EAAQ,SAAC,EAAQ,EAAU,OACzB,AAAA,GAAA,EAAY,aAAS,MAAA,IAAA,QAAA,EAAA,KAArB,CAAW,EACX,GAAI,GAAU,GACd,EAAO,UACL,EACE,EACA,SAAC,EAAK,OACJ,AAAA,GAAA,EAAY,QAAI,MAAA,IAAA,QAAA,EAAA,KAAhB,EAAmB,CAAK,EACxB,EAAW,KAAK,CAAK,CACvB,EACA,UAAA,OACE,EAAU,GACV,GAAA,EAAY,YAAQ,MAAA,IAAA,QAAA,EAAA,KAApB,CAAW,EACX,EAAW,SAAQ,CACrB,EACA,SAAC,EAAG,OACF,EAAU,GACV,GAAA,EAAY,SAAK,MAAA,IAAA,QAAA,EAAA,KAAjB,EAAoB,CAAG,EACvB,EAAW,MAAM,CAAG,CACtB,EACA,UAAA,SACE,AAAI,GACF,IAAA,EAAY,eAAW,MAAA,IAAA,QAAA,EAAA,KAAvB,CAAW,GAEb,GAAA,EAAY,YAAQ,MAAA,IAAA,QAAA,EAAA,KAApB,CAAW,CACb,CAAC,CACF,CAEL,CAAC,EAID,EACN,CC9IO,GAAM,IAAwC,CACnD,QAAS,GACT,SAAU,IAiDN,YACJ,EACA,EAA8C,CAA9C,MAAA,KAAA,QAAA,GAAA,IAEO,EAAQ,SAAC,EAAQ,EAAU,CACxB,GAAA,GAAsB,EAAM,QAAnB,EAAa,EAAM,SAChC,EAAW,GACX,EAAsB,KACtB,EAAiC,KACjC,EAAa,GAEX,EAAgB,UAAA,CACpB,GAAS,MAAT,EAAW,YAAW,EACtB,EAAY,KACR,GACF,GAAI,EACJ,GAAc,EAAW,SAAQ,EAErC,EAEM,EAAoB,UAAA,CACxB,EAAY,KACZ,GAAc,EAAW,SAAQ,CACnC,EAEM,EAAgB,SAAC,EAAQ,CAC7B,MAAC,GAAY,EAAU,EAAiB,CAAK,CAAC,EAAE,UAAU,EAAyB,EAAY,EAAe,CAAiB,CAAC,CAAhI,EAEI,EAAO,UAAA,CACX,GAAI,EAAU,CAIZ,EAAW,GACX,GAAM,GAAQ,EACd,EAAY,KAEZ,EAAW,KAAK,CAAK,EACrB,CAAC,GAAc,EAAc,CAAK,EAEtC,EAEA,EAAO,UACL,EACE,EAMA,SAAC,EAAK,CACJ,EAAW,GACX,EAAY,EACZ,CAAE,IAAa,CAAC,EAAU,SAAY,GAAU,EAAI,EAAK,EAA
c,CAAK,EAC9E,EACA,UAAA,CACE,EAAa,GACb,CAAE,IAAY,GAAY,GAAa,CAAC,EAAU,SAAW,EAAW,SAAQ,CAClF,CAAC,CACF,CAEL,CAAC,CACH,CCvEM,YACJ,EACA,EACA,EAA8B,CAD9B,AAAA,IAAA,QAAA,GAAA,IACA,IAAA,QAAA,GAAA,IAEA,GAAM,GAAY,GAAM,EAAU,CAAS,EAC3C,MAAO,IAAS,UAAA,CAAM,MAAA,EAAA,EAAW,CAAM,CACzC,CCJM,aAAwB,QAAO,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACnC,GAAM,GAAU,GAAkB,CAAM,EAExC,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAehC,OAdM,GAAM,EAAO,OACb,EAAc,GAAI,OAAM,CAAG,EAI7B,EAAW,EAAO,IAAI,UAAA,CAAM,MAAA,EAAA,CAAK,EAGjC,EAAQ,cAMH,EAAC,CACR,EAAU,EAAO,EAAE,EAAE,UACnB,EACE,EACA,SAAC,EAAK,CACJ,EAAY,GAAK,EACb,CAAC,GAAS,CAAC,EAAS,IAEtB,GAAS,GAAK,GAKb,GAAQ,EAAS,MAAM,EAAQ,IAAO,GAAW,MAEtD,EAGA,EAAI,CACL,GAnBI,EAAI,EAAG,EAAI,EAAK,MAAhB,CAAC,EAwBV,EAAO,UACL,EAAyB,EAAY,SAAC,EAAK,CACzC,GAAI,EAAO,CAET,GAAM,GAAM,EAAA,CAAI,CAAK,EAAA,EAAK,CAAW,CAAA,EACrC,EAAW,KAAK,EAAU,EAAO,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAM,CAAA,CAAA,EAAI,CAAM,EAEzD,CAAC,CAAC,CAEN,CAAC,CACH,CCxFM,aAAa,QAAO,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACxB,MAAO,GAAQ,SAAC,EAAQ,EAAU,CAChC,GAAS,MAAA,OAAA,EAAA,CAAC,CAA8B,EAAA,EAAM,CAAuC,CAAA,CAAA,EAAE,UAAU,CAAU,CAC7G,CAAC,CACH,CCCM,aAAiB,QAAkC,GAAA,CAAA,EAAA,EAAA,EAAA,EAAA,UAAA,OAAA,IAAA,EAAA,GAAA,UAAA,GACvD,MAAO,IAAG,MAAA,OAAA,EAAA,CAAA,EAAA,EAAI,CAAW,CAAA,CAAA,CAC3B,CCYO,aAA4C,CACjD,GAAM,GAAY,GAAI,IAAwB,CAAC,EAC/C,SAAU,SAAU,mBAAoB,CAAE,KAAM,EAAK,CAAC,EACnD,UAAU,IAAM,EAAU,KAAK,QAAQ,CAAC,EAGpC,CACT,CCHO,WACL,EAAkB,EAAmB,SAChC,CACL,MAAO,OAAM,KAAK,EAAK,iBAAoB,CAAQ,CAAC,CACtD,CAuBO,WACL,EAAkB,EAAmB,SAClC,CACH,GAAM,GAAK,GAAsB,EAAU,CAAI,EAC/C,GAAI,MAAO,IAAO,YAChB,KAAM,IAAI,gBACR,8BAA8B,kBAChC,EAGF,MAAO,EACT,CAsBO,YACL,EAAkB,EAAmB,SACtB,CACf,MAAO,GAAK,cAAiB,CAAQ,GAAK,MAC5C,CAOO,aAAqD,CAC1D,MAAO,UAAS,wBAAyB,cACrC,SAAS,eAAiB,MAEhC,CClEO,YACL,EACqB,CACrB,MAAO,GACL,EAAU,SAAS,KAAM,SAAS,EAClC,EAAU,SAAS,KAAM,UAAU,CACrC,EACG,KACC,GAAa,CAAC,EACd,EAAI,IAAM,CACR,GAAM,GAAS,GAAiB,EAChC,MAAO,OAAO,IAAW,YACrB,EAAG,SAAS,CAAM,EAClB,EACN,CAAC,EACD,EAAU,IAAO,GAAiB,CAAC,EACnC,EAAqB,CACvB,CACJ,CChBO,YACL,EACe,CACf,MAAO,CACL,EAAG,EAAG,WACN,EAAG,EAAG,SACR,CACF,CAWO,YACL,EAC2B,CAC3B,MAAO,GACL,EAAU,OAAQ,MAAM,EACxB,EAAU,OAAQ,QAAQ,CAC5B,EACG,KACC,GAAU,EAAG,EAAuB,EACpC,EAAI,IAAM,GAAiB,CAAE,CAAC,EAC9B,EAAU,GAAiB,CAAE,CAAC,CAChC,CACJ,CCxCO,YACL,EACe,CACf,MAAO,CACL,EAAG,EAAG,WACN,EAAG,EAAG,SACR,CACF,CAWO,YACL,EAC2B,CAC3B,MAAO,GACL,EAAU,EAAI,QAAQ,EACtB,EAAU,OAAQ,QAAQ,CAC5B,EACG,KACC,GAAU,EAAG,EAAuB,EACpC,EAAI,IAAM,GAAwB,CAAE,CAAC,EACrC,EAAU,GAAwB,CAAE,CAAC,CACvC,CACJ,CCpEA,GAAI,IAAW,UAAY,CACvB,GAAI,MAAO,MAAQ,YACf,MAAO,KASX,WAAkB,EAAK,EAAK,CACxB,GAAI,GAAS,GACb,SAAI,KAAK,SAAU,EAAO,EAAO,CAC7B,MAAI,GAAM,KAAO,EACb,GAAS,EACF,IAEJ,EACX,CAAC,EACM,CACX,CACA,MAAsB,WAAY,CAC9B,YAAmB,CACf,KAAK,YAAc,CAAC,CACxB,CACA,cAAO,eAAe,EAAQ,UAAW,OAAQ,CAI7C,IAAK,UAAY,CACb,MAAO,MAAK,YAAY,MAC5B,EACA,WAAY,GACZ,aAAc,EAClB,CAAC,EAKD,EAAQ,UAAU,IAAM,SAAU,EAAK,CACnC,GAAI,GAAQ,EAAS,KAAK,YAAa,CAAG,EACtC,EAAQ,KAAK,YAAY,GAC7B,MAAO,IAAS,EAAM,EAC1B,EAMA,EAAQ,UAAU,IAAM,SAAU,EAAK,EAAO,CAC1C,GAAI,GAAQ,EAAS,KAAK,YAAa,CAAG,EAC1C,AAAI,CAAC,EACD,KAAK,YAAY,GAAO,GAAK,EAG7B,KAAK,YAAY,KAAK,CAAC,EAAK,CAAK,CAAC,CAE1C,EAKA,EAAQ,UAAU,OAAS,SAAU,EAAK,CACtC,GAAI,GAAU,KAAK,YACf,EAAQ,EAAS,EAAS,CAAG,EACjC,AAAI,CAAC,GACD,EAAQ,OAAO,EAAO,CAAC,CAE/B,EAKA,EAAQ,UAAU,IAAM,SAAU,EAAK,CACnC,MAAO,CAAC,CAAC,CAAC,EAAS,KAAK,YAAa,CAAG,CAC5C,EAIA,EAAQ,UAAU,MAAQ,UAAY,CAClC,KAAK,YAAY,OAAO,CAAC,CAC7B,EAMA,EAAQ,UAAU,QAAU,SAAU,EAAU,EAAK,CACjD,AAAI,IAAQ,QAAU,GAAM,MAC5B,OAAS,GAAK,EAAG,EAAK,KAAK,YAAa,EAAK,EAAG,OAAQ,IAAM,CAC1D,GAAI,GAAQ,EAAG,GACf,EAAS,KAAK,EAAK,EAAM,GAAI,EAAM,EAAE,CACzC,CACJ,EACO,CACX,EAAE,CACN,EAAG,EAKC,GAAY
,MAAO,SAAW,aAAe,MAAO,WAAa,aAAe,OAAO,WAAa,SAGpG,GAAY,UAAY,CACxB,MAAI,OAAO,SAAW,aAAe,OAAO,OAAS,KAC1C,OAEP,MAAO,OAAS,aAAe,KAAK,OAAS,KACtC,KAEP,MAAO,SAAW,aAAe,OAAO,OAAS,KAC1C,OAGJ,SAAS,aAAa,EAAE,CACnC,EAAG,EAQC,GAA2B,UAAY,CACvC,MAAI,OAAO,wBAA0B,WAI1B,sBAAsB,KAAK,EAAQ,EAEvC,SAAU,EAAU,CAAE,MAAO,YAAW,UAAY,CAAE,MAAO,GAAS,KAAK,IAAI,CAAC,CAAG,EAAG,IAAO,EAAE,CAAG,CAC7G,EAAG,EAGC,GAAkB,EAStB,YAAmB,EAAU,EAAO,CAChC,GAAI,GAAc,GAAO,EAAe,GAAO,EAAe,EAO9D,YAA0B,CACtB,AAAI,GACA,GAAc,GACd,EAAS,GAET,GACA,EAAM,CAEd,CAQA,YAA2B,CACvB,GAAwB,CAAc,CAC1C,CAMA,YAAiB,CACb,GAAI,GAAY,KAAK,IAAI,EACzB,GAAI,EAAa,CAEb,GAAI,EAAY,EAAe,GAC3B,OAMJ,EAAe,EACnB,KAEI,GAAc,GACd,EAAe,GACf,WAAW,EAAiB,CAAK,EAErC,EAAe,CACnB,CACA,MAAO,EACX,CAGA,GAAI,IAAgB,GAGhB,GAAiB,CAAC,MAAO,QAAS,SAAU,OAAQ,QAAS,SAAU,OAAQ,QAAQ,EAEvF,GAA4B,MAAO,mBAAqB,YAIxD,GAA0C,UAAY,CAMtD,YAAoC,CAMhC,KAAK,WAAa,GAMlB,KAAK,qBAAuB,GAM5B,KAAK,mBAAqB,KAM1B,KAAK,WAAa,CAAC,EACnB,KAAK,iBAAmB,KAAK,iBAAiB,KAAK,IAAI,EACvD,KAAK,QAAU,GAAS,KAAK,QAAQ,KAAK,IAAI,EAAG,EAAa,CAClE,CAOA,SAAyB,UAAU,YAAc,SAAU,EAAU,CACjE,AAAK,CAAC,KAAK,WAAW,QAAQ,CAAQ,GAClC,KAAK,WAAW,KAAK,CAAQ,EAG5B,KAAK,YACN,KAAK,SAAS,CAEtB,EAOA,EAAyB,UAAU,eAAiB,SAAU,EAAU,CACpE,GAAI,GAAY,KAAK,WACjB,EAAQ,EAAU,QAAQ,CAAQ,EAEtC,AAAI,CAAC,GACD,EAAU,OAAO,EAAO,CAAC,EAGzB,CAAC,EAAU,QAAU,KAAK,YAC1B,KAAK,YAAY,CAEzB,EAOA,EAAyB,UAAU,QAAU,UAAY,CACrD,GAAI,GAAkB,KAAK,iBAAiB,EAG5C,AAAI,GACA,KAAK,QAAQ,CAErB,EASA,EAAyB,UAAU,iBAAmB,UAAY,CAE9D,GAAI,GAAkB,KAAK,WAAW,OAAO,SAAU,EAAU,CAC7D,MAAO,GAAS,aAAa,EAAG,EAAS,UAAU,CACvD,CAAC,EAMD,SAAgB,QAAQ,SAAU,EAAU,CAAE,MAAO,GAAS,gBAAgB,CAAG,CAAC,EAC3E,EAAgB,OAAS,CACpC,EAOA,EAAyB,UAAU,SAAW,UAAY,CAGtD,AAAI,CAAC,IAAa,KAAK,YAMvB,UAAS,iBAAiB,gBAAiB,KAAK,gBAAgB,EAChE,OAAO,iBAAiB,SAAU,KAAK,OAAO,EAC9C,AAAI,GACA,MAAK,mBAAqB,GAAI,kBAAiB,KAAK,OAAO,EAC3D,KAAK,mBAAmB,QAAQ,SAAU,CACtC,WAAY,GACZ,UAAW,GACX,cAAe,GACf,QAAS,EACb,CAAC,GAGD,UAAS,iBAAiB,qBAAsB,KAAK,OAAO,EAC5D,KAAK,qBAAuB,IAEhC,KAAK,WAAa,GACtB,EAOA,EAAyB,UAAU,YAAc,UAAY,CAGzD,AAAI,CAAC,IAAa,CAAC,KAAK,YAGxB,UAAS,oBAAoB,gBAAiB,KAAK,gBAAgB,EACnE,OAAO,oBAAoB,SAAU,KAAK,OAAO,EAC7C,KAAK,oBACL,KAAK,mBAAmB,WAAW,EAEnC,KAAK,sBACL,SAAS,oBAAoB,qBAAsB,KAAK,OAAO,EAEnE,KAAK,mBAAqB,KAC1B,KAAK,qBAAuB,GAC5B,KAAK,WAAa,GACtB,EAQA,EAAyB,UAAU,iBAAmB,SAAU,EAAI,CAChE,GAAI,GAAK,EAAG,aAAc,EAAe,IAAO,OAAS,GAAK,EAE1D,EAAmB,GAAe,KAAK,SAAU,EAAK,CACtD,MAAO,CAAC,CAAC,CAAC,EAAa,QAAQ,CAAG,CACtC,CAAC,EACD,AAAI,GACA,KAAK,QAAQ,CAErB,EAMA,EAAyB,YAAc,UAAY,CAC/C,MAAK,MAAK,WACN,MAAK,UAAY,GAAI,IAElB,KAAK,SAChB,EAMA,EAAyB,UAAY,KAC9B,CACX,EAAE,EASE,GAAsB,SAAU,EAAQ,EAAO,CAC/C,OAAS,GAAK,EAAG,EAAK,OAAO,KAAK,CAAK,EAAG,EAAK,EAAG,OAAQ,IAAM,CAC5D,GAAI,GAAM,EAAG,GACb,OAAO,eAAe,EAAQ,EAAK,CAC/B,MAAO,EAAM,GACb,WAAY,GACZ,SAAU,GACV,aAAc,EAClB,CAAC,CACL,CACA,MAAO,EACX,EAQI,GAAe,SAAU,EAAQ,CAIjC,GAAI,GAAc,GAAU,EAAO,eAAiB,EAAO,cAAc,YAGzE,MAAO,IAAe,EAC1B,EAGI,GAAY,GAAe,EAAG,EAAG,EAAG,CAAC,EAOzC,YAAiB,EAAO,CACpB,MAAO,YAAW,CAAK,GAAK,CAChC,CAQA,YAAwB,EAAQ,CAE5B,OADI,GAAY,CAAC,EACR,EAAK,EAAG,EAAK,UAAU,OAAQ,IACpC,EAAU,EAAK,GAAK,UAAU,GAElC,MAAO,GAAU,OAAO,SAAU,EAAM,EAAU,CAC9C,GAAI,GAAQ,EAAO,UAAY,EAAW,UAC1C,MAAO,GAAO,GAAQ,CAAK,CAC/B,EAAG,CAAC,CACR,CAOA,YAAqB,EAAQ,CAGzB,OAFI,GAAY,CAAC,MAAO,QAAS,SAAU,MAAM,EAC7C,EAAW,CAAC,EACP,EAAK,EAAG,EAAc,EAAW,EAAK,EAAY,OAAQ,IAAM,CACrE,GAAI,GAAW,EAAY,GACvB,EAAQ,EAAO,WAAa,GAChC,EAAS,GAAY,GAAQ,CAAK,CACtC,CACA,MAAO,EACX,CAQA,YAA2B,EAAQ,CAC/B,GAAI,GAAO,EAAO,QAAQ,EAC1B,MAAO,IAAe,EAAG,EAAG,EAAK,MAAO,EAAK,MAAM,CACvD,CAOA,YAAmC,EAAQ,CAGvC,GAAI,GAAc,EAAO,YAAa,EAAe,EAAO,aAS5D,GAAI,CAAC,GAAe,CAAC,EACjB,MAAO,IAEX,GAAI,GAAS,GAAY,CAAM,EAAE,iBAAiB,CAAM,EACpD,EAAW,GAAY,CAAM,EAC7B,EAAW,EAAS,KAAO,EAAS,MACpC,EAAU,EAAS,IAAM,EA
AS,OAKlC,EAAQ,GAAQ,EAAO,KAAK,EAAG,EAAS,GAAQ,EAAO,MAAM,EAqBjE,GAlBI,EAAO,YAAc,cAOjB,MAAK,MAAM,EAAQ,CAAQ,IAAM,GACjC,IAAS,GAAe,EAAQ,OAAQ,OAAO,EAAI,GAEnD,KAAK,MAAM,EAAS,CAAO,IAAM,GACjC,IAAU,GAAe,EAAQ,MAAO,QAAQ,EAAI,IAOxD,CAAC,GAAkB,CAAM,EAAG,CAK5B,GAAI,GAAgB,KAAK,MAAM,EAAQ,CAAQ,EAAI,EAC/C,EAAiB,KAAK,MAAM,EAAS,CAAO,EAAI,EAMpD,AAAI,KAAK,IAAI,CAAa,IAAM,GAC5B,IAAS,GAET,KAAK,IAAI,CAAc,IAAM,GAC7B,IAAU,EAElB,CACA,MAAO,IAAe,EAAS,KAAM,EAAS,IAAK,EAAO,CAAM,CACpE,CAOA,GAAI,IAAwB,UAAY,CAGpC,MAAI,OAAO,qBAAuB,YACvB,SAAU,EAAQ,CAAE,MAAO,aAAkB,IAAY,CAAM,EAAE,kBAAoB,EAKzF,SAAU,EAAQ,CAAE,MAAQ,aAAkB,IAAY,CAAM,EAAE,YACrE,MAAO,GAAO,SAAY,UAAa,CAC/C,EAAG,EAOH,YAA2B,EAAQ,CAC/B,MAAO,KAAW,GAAY,CAAM,EAAE,SAAS,eACnD,CAOA,YAAwB,EAAQ,CAC5B,MAAK,IAGD,GAAqB,CAAM,EACpB,GAAkB,CAAM,EAE5B,GAA0B,CAAM,EAL5B,EAMf,CAQA,YAA4B,EAAI,CAC5B,GAAI,GAAI,EAAG,EAAG,EAAI,EAAG,EAAG,EAAQ,EAAG,MAAO,EAAS,EAAG,OAElD,EAAS,MAAO,kBAAoB,YAAc,gBAAkB,OACpE,EAAO,OAAO,OAAO,EAAO,SAAS,EAEzC,UAAmB,EAAM,CACrB,EAAG,EAAG,EAAG,EAAG,MAAO,EAAO,OAAQ,EAClC,IAAK,EACL,MAAO,EAAI,EACX,OAAQ,EAAS,EACjB,KAAM,CACV,CAAC,EACM,CACX,CAWA,YAAwB,EAAG,EAAG,EAAO,EAAQ,CACzC,MAAO,CAAE,EAAG,EAAG,EAAG,EAAG,MAAO,EAAO,OAAQ,CAAO,CACtD,CAMA,GAAI,IAAmC,UAAY,CAM/C,WAA2B,EAAQ,CAM/B,KAAK,eAAiB,EAMtB,KAAK,gBAAkB,EAMvB,KAAK,aAAe,GAAe,EAAG,EAAG,EAAG,CAAC,EAC7C,KAAK,OAAS,CAClB,CAOA,SAAkB,UAAU,SAAW,UAAY,CAC/C,GAAI,GAAO,GAAe,KAAK,MAAM,EACrC,YAAK,aAAe,EACZ,EAAK,QAAU,KAAK,gBACxB,EAAK,SAAW,KAAK,eAC7B,EAOA,EAAkB,UAAU,cAAgB,UAAY,CACpD,GAAI,GAAO,KAAK,aAChB,YAAK,eAAiB,EAAK,MAC3B,KAAK,gBAAkB,EAAK,OACrB,CACX,EACO,CACX,EAAE,EAEE,GAAqC,UAAY,CAOjD,WAA6B,EAAQ,EAAU,CAC3C,GAAI,GAAc,GAAmB,CAAQ,EAO7C,GAAmB,KAAM,CAAE,OAAQ,EAAQ,YAAa,CAAY,CAAC,CACzE,CACA,MAAO,EACX,EAAE,EAEE,GAAmC,UAAY,CAW/C,WAA2B,EAAU,EAAY,EAAa,CAc1D,GAPA,KAAK,oBAAsB,CAAC,EAM5B,KAAK,cAAgB,GAAI,IACrB,MAAO,IAAa,WACpB,KAAM,IAAI,WAAU,yDAAyD,EAEjF,KAAK,UAAY,EACjB,KAAK,YAAc,EACnB,KAAK,aAAe,CACxB,CAOA,SAAkB,UAAU,QAAU,SAAU,EAAQ,CACpD,GAAI,CAAC,UAAU,OACX,KAAM,IAAI,WAAU,0CAA0C,EAGlE,GAAI,QAAO,UAAY,aAAe,CAAE,mBAAmB,UAG3D,IAAI,CAAE,aAAkB,IAAY,CAAM,EAAE,SACxC,KAAM,IAAI,WAAU,uCAAuC,EAE/D,GAAI,GAAe,KAAK,cAExB,AAAI,EAAa,IAAI,CAAM,GAG3B,GAAa,IAAI,EAAQ,GAAI,IAAkB,CAAM,CAAC,EACtD,KAAK,YAAY,YAAY,IAAI,EAEjC,KAAK,YAAY,QAAQ,GAC7B,EAOA,EAAkB,UAAU,UAAY,SAAU,EAAQ,CACtD,GAAI,CAAC,UAAU,OACX,KAAM,IAAI,WAAU,0CAA0C,EAGlE,GAAI,QAAO,UAAY,aAAe,CAAE,mBAAmB,UAG3D,IAAI,CAAE,aAAkB,IAAY,CAAM,EAAE,SACxC,KAAM,IAAI,WAAU,uCAAuC,EAE/D,GAAI,GAAe,KAAK,cAExB,AAAI,CAAC,EAAa,IAAI,CAAM,GAG5B,GAAa,OAAO,CAAM,EACrB,EAAa,MACd,KAAK,YAAY,eAAe,IAAI,GAE5C,EAMA,EAAkB,UAAU,WAAa,UAAY,CACjD,KAAK,YAAY,EACjB,KAAK,cAAc,MAAM,EACzB,KAAK,YAAY,eAAe,IAAI,CACxC,EAOA,EAAkB,UAAU,aAAe,UAAY,CACnD,GAAI,GAAQ,KACZ,KAAK,YAAY,EACjB,KAAK,cAAc,QAAQ,SAAU,EAAa,CAC9C,AAAI,EAAY,SAAS,GACrB,EAAM,oBAAoB,KAAK,CAAW,CAElD,CAAC,CACL,EAOA,EAAkB,UAAU,gBAAkB,UAAY,CAEtD,GAAI,EAAC,KAAK,UAAU,EAGpB,IAAI,GAAM,KAAK,aAEX,EAAU,KAAK,oBAAoB,IAAI,SAAU,EAAa,CAC9D,MAAO,IAAI,IAAoB,EAAY,OAAQ,EAAY,cAAc,CAAC,CAClF,CAAC,EACD,KAAK,UAAU,KAAK,EAAK,EAAS,CAAG,EACrC,KAAK,YAAY,EACrB,EAMA,EAAkB,UAAU,YAAc,UAAY,CAClD,KAAK,oBAAoB,OAAO,CAAC,CACrC,EAMA,EAAkB,UAAU,UAAY,UAAY,CAChD,MAAO,MAAK,oBAAoB,OAAS,CAC7C,EACO,CACX,EAAE,EAKE,GAAY,MAAO,UAAY,YAAc,GAAI,SAAY,GAAI,IAKjE,GAAgC,UAAY,CAO5C,WAAwB,EAAU,CAC9B,GAAI,CAAE,gBAAgB,IAClB,KAAM,IAAI,WAAU,oCAAoC,EAE5D,GAAI,CAAC,UAAU,OACX,KAAM,IAAI,WAAU,0CAA0C,EAElE,GAAI,GAAa,GAAyB,YAAY,EAClD,EAAW,GAAI,IAAkB,EAAU,EAAY,IAAI,EAC/D,GAAU,IAAI,KAAM,CAAQ,CAChC,CACA,MAAO,EACX,EAAE,EAEF,CACI,UACA,YACA,YACJ,EAAE,QAAQ,SAAU,EAAQ,CACxB,GAAe,UAAU,GAAU,UAAY,CAC3C,GAAI,GACJ,MAAQ,GAAK,GAAU,IAAI,IAAI,GAAG,GAAQ,MAAM,EAAI,SAAS,CACjE,CACJ,CAAC,EAED,GAAI,IAAS,UAAY,CAErB,
MAAI,OAAO,IAAS,gBAAmB,YAC5B,GAAS,eAEb,EACX,EAAG,EAEI,GAAQ,GCr2Bf,GAAM,IAAS,GAAI,GAYb,GAAY,EAAM,IAAM,EAC5B,GAAI,IAAe,GAAW,CAC5B,OAAW,KAAS,GAClB,GAAO,KAAK,CAAK,CACrB,CAAC,CACH,CAAC,EACE,KACC,EAAU,GAAY,EAAM,GAAO,EAAG,CAAQ,CAAC,EAC5C,KACC,EAAS,IAAM,EAAS,WAAW,CAAC,CACtC,CACF,EACA,EAAY,CAAC,CACf,EAaK,YACL,EACa,CACb,MAAO,CACL,MAAQ,EAAG,YACX,OAAQ,EAAG,YACb,CACF,CAuBO,YACL,EACyB,CACzB,MAAO,IACJ,KACC,EAAI,GAAY,EAAS,QAAQ,CAAE,CAAC,EACpC,EAAU,GAAY,GACnB,KACC,EAAO,CAAC,CAAE,YAAa,IAAW,CAAE,EACpC,EAAS,IAAM,EAAS,UAAU,CAAE,CAAC,EACrC,EAAI,IAAM,GAAe,CAAE,CAAC,CAC9B,CACF,EACA,EAAU,GAAe,CAAE,CAAC,CAC9B,CACJ,CC1GO,YACL,EACa,CACb,MAAO,CACL,MAAQ,EAAG,YACX,OAAQ,EAAG,YACb,CACF,CCSA,GAAM,IAAS,GAAI,GAUb,GAAY,EAAM,IAAM,EAC5B,GAAI,sBAAqB,GAAW,CAClC,OAAW,KAAS,GAClB,GAAO,KAAK,CAAK,CACrB,EAAG,CACD,UAAW,CACb,CAAC,CACH,CAAC,EACE,KACC,EAAU,GAAY,EAAM,GAAO,EAAG,CAAQ,CAAC,EAC5C,KACC,EAAS,IAAM,EAAS,WAAW,CAAC,CACtC,CACF,EACA,EAAY,CAAC,CACf,EAaK,YACL,EACqB,CACrB,MAAO,IACJ,KACC,EAAI,GAAY,EAAS,QAAQ,CAAE,CAAC,EACpC,EAAU,GAAY,GACnB,KACC,EAAO,CAAC,CAAE,YAAa,IAAW,CAAE,EACpC,EAAS,IAAM,EAAS,UAAU,CAAE,CAAC,EACrC,EAAI,CAAC,CAAE,oBAAqB,CAAc,CAC5C,CACF,CACF,CACJ,CAaO,YACL,EAAiB,EAAY,GACR,CACrB,MAAO,IAA0B,CAAE,EAChC,KACC,EAAI,CAAC,CAAE,OAAQ,CACb,GAAM,GAAU,GAAe,CAAE,EAC3B,EAAU,GAAsB,CAAE,EACxC,MAAO,IACL,EAAQ,OAAS,EAAQ,OAAS,CAEtC,CAAC,EACD,EAAqB,CACvB,CACJ,CCjFA,GAAM,IAA4C,CAChD,OAAQ,EAAW,yBAAyB,EAC5C,OAAQ,EAAW,yBAAyB,CAC9C,EAaO,YAAmB,EAAuB,CAC/C,MAAO,IAAQ,GAAM,OACvB,CAaO,YAAmB,EAAc,EAAsB,CAC5D,AAAI,GAAQ,GAAM,UAAY,GAC5B,GAAQ,GAAM,MAAM,CACxB,CAWO,YAAqB,EAAmC,CAC7D,GAAM,GAAK,GAAQ,GACnB,MAAO,GAAU,EAAI,QAAQ,EAC1B,KACC,EAAI,IAAM,EAAG,OAAO,EACpB,EAAU,EAAG,OAAO,CACtB,CACJ,CClCA,YACE,EAAiB,EACR,CACT,OAAQ,EAAG,iBAGJ,kBAEH,MAAI,GAAG,OAAS,QACP,SAAS,KAAK,CAAI,EAElB,OAGN,uBACA,qBACH,MAAO,WAIP,MAAO,GAAG,kBAEhB,CAWO,aAA+C,CACpD,MAAO,GAAyB,OAAQ,SAAS,EAC9C,KACC,EAAO,GAAM,CAAE,GAAG,SAAW,EAAG,QAAQ,EACxC,EAAI,GAAO,EACT,KAAM,GAAU,QAAQ,EAAI,SAAW,SACvC,KAAM,EAAG,IACT,OAAQ,CACN,EAAG,eAAe,EAClB,EAAG,gBAAgB,CACrB,CACF,EAAc,EACd,EAAO,CAAC,CAAE,OAAM,UAAW,CACzB,GAAI,IAAS,SAAU,CACrB,GAAM,GAAS,GAAiB,EAChC,GAAI,MAAO,IAAW,YACpB,MAAO,CAAC,GAAwB,EAAQ,CAAI,CAChD,CACA,MAAO,EACT,CAAC,EACD,GAAM,CACR,CACJ,CCpFO,aAA4B,CACjC,MAAO,IAAI,KAAI,SAAS,IAAI,CAC9B,CAOO,YAAqB,EAAgB,CAC1C,SAAS,KAAO,EAAI,IACtB,CASO,aAAuC,CAC5C,MAAO,IAAI,EACb,CCLA,YAAqB,EAAiB,EAA8B,CAGlE,GAAI,MAAO,IAAU,UAAY,MAAO,IAAU,SAChD,EAAG,WAAa,EAAM,SAAS,UAGtB,YAAiB,MAC1B,EAAG,YAAY,CAAK,UAGX,MAAM,QAAQ,CAAK,EAC5B,OAAW,KAAQ,GACjB,GAAY,EAAI,CAAI,CAE1B,CAyBO,WACL,EAAa,KAAmC,EAC7C,CACH,GAAM,GAAK,SAAS,cAAc,CAAG,EAGrC,GAAI,EACF,OAAW,KAAQ,QAAO,KAAK,CAAU,EACvC,AAAI,MAAO,GAAW,IAAU,UAC9B,EAAG,aAAa,EAAM,EAAW,EAAK,EAC/B,EAAW,IAClB,EAAG,aAAa,EAAM,EAAE,EAG9B,OAAW,KAAS,GAClB,GAAY,EAAI,CAAK,EAGvB,MAAO,EACT,CC3EO,YAAkB,EAAe,EAAmB,CACzD,GAAI,GAAI,EACR,GAAI,EAAM,OAAS,EAAG,CACpB,KAAO,EAAM,KAAO,KAAO,EAAE,EAAI,GAAG,CACpC,MAAO,GAAG,EAAM,UAAU,EAAG,CAAC,MAChC,CACA,MAAO,EACT,CAkBO,YAAe,EAAuB,CAC3C,GAAI,EAAQ,IAAK,CACf,GAAM,GAAS,CAAG,IAAQ,KAAO,IAAO,IACxC,MAAO,GAAK,IAAQ,MAAY,KAAM,QAAQ,CAAM,IACtD,KACE,OAAO,GAAM,SAAS,CAE1B,CC5BO,aAAmC,CACxC,MAAO,UAAS,KAAK,UAAU,CAAC,CAClC,CAYO,YAAyB,EAAoB,CAClD,GAAM,GAAK,EAAE,IAAK,CAAE,KAAM,CAAK,CAAC,EAChC,EAAG,iBAAiB,QAAS,GAAM,EAAG,gBAAgB,CAAC,EACvD,EAAG,MAAM,CACX,CASO,aAAiD,CACtD,MAAO,GAA2B,OAAQ,YAAY,EACnD,KACC,EAAI,EAAe,EACnB,EAAU,GAAgB,CAAC,EAC3B,EAAO,GAAQ,EAAK,OAAS,CAAC,EAC9B,EAAY,CAAC,CACf,CACJ,CAOO,aAAwD,CAC7D,MAAO,IAAkB,EACtB,KACC,EAAI,GAAM,GAAmB,QAAQ,KAAM,CAAE,EAC7C,EAAO,GAAM,MAAO,IAAO,WAAW,CACxC,CACJ,CC1CO,YAAoB,EAAoC,CAC7D,GAAM,GAAQ,WAAW,CAAK,EAC9B,MAAO,IAA0B,GAC/B,EAAM,YAAY,IAAM,EAAK,EAAM,OAAO,CAAC,CAC5C,EACE,KACC,
EAAU,EAAM,OAAO,CACzB,CACJ,CAOO,aAA2C,CAChD,GAAM,GAAQ,WAAW,OAAO,EAChC,MAAO,GACL,EAAU,OAAQ,aAAa,EAAE,KAAK,EAAM,EAAI,CAAC,EACjD,EAAU,OAAQ,YAAY,EAAE,KAAK,EAAM,EAAK,CAAC,CACnD,EACG,KACC,EAAU,EAAM,OAAO,CACzB,CACJ,CAcO,YACL,EAA6B,EACd,CACf,MAAO,GACJ,KACC,EAAU,GAAU,EAAS,EAAQ,EAAI,CAAK,CAChD,CACJ,CC9CO,YACL,EAAmB,EAAuB,CAAE,YAAa,aAAc,EACjD,CACtB,MAAO,IAAK,MAAM,GAAG,IAAO,CAAO,CAAC,EACjC,KACC,EAAO,GAAO,EAAI,SAAW,GAAG,EAChC,GAAW,IAAM,CAAK,CACxB,CACJ,CAYO,YACL,EAAmB,EACJ,CACf,MAAO,IAAQ,EAAK,CAAO,EACxB,KACC,EAAU,GAAO,EAAI,KAAK,CAAC,EAC3B,EAAY,CAAC,CACf,CACJ,CAUO,YACL,EAAmB,EACG,CACtB,GAAM,GAAM,GAAI,WAChB,MAAO,IAAQ,EAAK,CAAO,EACxB,KACC,EAAU,GAAO,EAAI,KAAK,CAAC,EAC3B,EAAI,GAAO,EAAI,gBAAgB,EAAK,UAAU,CAAC,EAC/C,EAAY,CAAC,CACf,CACJ,CC9CO,YAAqB,EAA+B,CACzD,GAAM,GAAS,EAAE,SAAU,CAAE,KAAI,CAAC,EAClC,MAAO,GAAM,IACX,UAAS,KAAK,YAAY,CAAM,EACzB,EACL,EAAU,EAAQ,MAAM,EACxB,EAAU,EAAQ,OAAO,EACtB,KACC,EAAU,IACR,GAAW,IAAM,GAAI,gBAAe,mBAAmB,GAAK,CAAC,CAC9D,CACH,CACJ,EACG,KACC,EAAM,MAAS,EACf,EAAS,IAAM,SAAS,KAAK,YAAY,CAAM,CAAC,EAChD,GAAK,CAAC,CACR,EACH,CACH,CCfO,aAA6C,CAClD,MAAO,CACL,EAAG,KAAK,IAAI,EAAG,OAAO,EACtB,EAAG,KAAK,IAAI,EAAG,OAAO,CACxB,CACF,CASO,aAA2D,CAChE,MAAO,GACL,EAAU,OAAQ,SAAU,CAAE,QAAS,EAAK,CAAC,EAC7C,EAAU,OAAQ,SAAU,CAAE,QAAS,EAAK,CAAC,CAC/C,EACG,KACC,EAAI,EAAiB,EACrB,EAAU,GAAkB,CAAC,CAC/B,CACJ,CC3BO,aAAyC,CAC9C,MAAO,CACL,MAAQ,WACR,OAAQ,WACV,CACF,CASO,aAAuD,CAC5D,MAAO,GAAU,OAAQ,SAAU,CAAE,QAAS,EAAK,CAAC,EACjD,KACC,EAAI,EAAe,EACnB,EAAU,GAAgB,CAAC,CAC7B,CACJ,CCXO,aAA+C,CACpD,MAAO,GAAc,CACnB,GAAoB,EACpB,GAAkB,CACpB,CAAC,EACE,KACC,EAAI,CAAC,CAAC,EAAQ,KAAW,EAAE,SAAQ,MAAK,EAAE,EAC1C,EAAY,CAAC,CACf,CACJ,CCVO,YACL,EAAiB,CAAE,YAAW,WACR,CACtB,GAAM,GAAQ,EACX,KACC,EAAwB,MAAM,CAChC,EAGI,EAAU,EAAc,CAAC,EAAO,CAAO,CAAC,EAC3C,KACC,EAAI,IAAM,GAAiB,CAAE,CAAC,CAChC,EAGF,MAAO,GAAc,CAAC,EAAS,EAAW,CAAO,CAAC,EAC/C,KACC,EAAI,CAAC,CAAC,CAAE,UAAU,CAAE,SAAQ,QAAQ,CAAE,IAAG,QAAU,EACjD,OAAQ,CACN,EAAG,EAAO,EAAI,EACd,EAAG,EAAO,EAAI,EAAI,CACpB,EACA,MACF,EAAE,CACJ,CACJ,CCIO,YACL,EAAgB,CAAE,OACH,CAGf,GAAM,GAAM,EAAwB,EAAQ,SAAS,EAClD,KACC,EAAI,CAAC,CAAE,UAAW,CAAS,CAC7B,EAGF,MAAO,GACJ,KACC,GAAS,IAAM,EAAK,CAAE,QAAS,GAAM,SAAU,EAAK,CAAC,EACrD,EAAI,GAAW,EAAO,YAAY,CAAO,CAAC,EAC1C,GAAY,CAAG,EACf,GAAM,CACR,CACJ,CCJA,GAAM,IAAS,EAAW,WAAW,EAC/B,GAAiB,KAAK,MAAM,GAAO,WAAY,EACrD,GAAO,KAAO,GAAG,GAAI,KAAI,GAAO,KAAM,GAAY,CAAC,IAW5C,aAAiC,CACtC,MAAO,GACT,CASO,YAAiB,EAAqB,CAC3C,MAAO,IAAO,SAAS,SAAS,CAAI,CACtC,CAUO,YACL,EAAkB,EACV,CACR,MAAO,OAAO,IAAU,YACpB,GAAO,aAAa,GAAK,QAAQ,IAAK,EAAM,SAAS,CAAC,EACtD,GAAO,aAAa,EAC1B,CC9BO,YACL,EAAS,EAAmB,SACP,CACrB,MAAO,GAAW,sBAAsB,KAAS,CAAI,CACvD,CAYO,YACL,EAAS,EAAmB,SACL,CACvB,MAAO,GAAY,sBAAsB,KAAS,CAAI,CACxD,CC/GA,OAAwB,SCajB,YAA0B,EAAyB,CACxD,MACE,GAAC,SAAM,MAAM,gBAAgB,SAAU,GACrC,EAAC,OAAI,MAAM,mCACT,EAAC,OAAI,MAAM,+BAA+B,CAC5C,EACA,EAAC,QAAK,MAAM,wBACV,EAAC,QAAK,wBAAuB,EAAI,CACnC,CACF,CAEJ,CCVO,YAA+B,EAAyB,CAC7D,MACE,GAAC,UACC,MAAM,uBACN,MAAO,GAAY,gBAAgB,EACnC,wBAAuB,IAAI,WAC5B,CAEL,CCYA,YACE,EAA2C,EAC9B,CACb,GAAM,GAAS,EAAO,EAChB,EAAS,EAAO,EAGhB,EAAU,OAAO,KAAK,EAAS,KAAK,EACvC,OAAO,GAAO,CAAC,EAAS,MAAM,EAAI,EAClC,OAAyB,CAAC,EAAM,IAAQ,CACvC,GAAG,EAAM,EAAC,WAAK,CAAI,EAAQ,GAC7B,EAAG,CAAC,CAAC,EACJ,MAAM,EAAG,EAAE,EAGR,EAAM,GAAI,KAAI,EAAS,QAAQ,EACrC,MAAI,IAAQ,kBAAkB,GAC5B,EAAI,aAAa,IAAI,IAAK,OAAO,QAAQ,EAAS,KAAK,EACpD,OAAO,CAAC,CAAC,CAAE,KAAW,CAAK,EAC3B,OAAO,CAAC,EAAW,CAAC,KAAW,GAAG,KAAa,IAAQ,KAAK,EAAG,EAAE,CACpE,EAIA,EAAC,KAAE,KAAM,GAAG,IAAO,MAAM,yBAAyB,SAAU,IAC1D,EAAC,WACC,MAAO,CAAC,4BAA6B,GAAG,EACpC,CAAC,qCAAqC,EACtC,CAAC,CACL,EAAE,KAAK,GAAG,EACV,gBAAe,EAAS,MAAM,QAAQ,CAAC,GAEtC,EAAS,GAAK,EAAC,OAAI,MAAM,iCAAiC,EAC3D,EAAC,MAAG,MAAM,2BAA2B,
EAAS,KAAM,EACnD,EAAS,GAAK,EAAS,KAAK,OAAS,GACpC,EAAC,KAAE,MAAM,4BACN,GAAS,EAAS,KAAM,GAAG,CAC9B,EAED,EAAS,MAAQ,EAAS,KAAK,IAAI,GAClC,EAAC,QAAK,MAAM,UAAU,CAAI,CAC3B,EACA,EAAS,GAAK,EAAQ,OAAS,GAC9B,EAAC,KAAE,MAAM,2BACN,GAAY,4BAA4B,EAAE,KAAM,CACnD,CAEJ,CACF,CAEJ,CAaO,YACL,EACa,CACb,GAAM,GAAY,EAAO,GAAG,MACtB,EAAO,CAAC,GAAG,CAAM,EAGjB,EAAS,EAAK,UAAU,GAAO,CAAC,EAAI,SAAS,SAAS,GAAG,CAAC,EAC1D,CAAC,GAAW,EAAK,OAAO,EAAQ,CAAC,EAGnC,EAAQ,EAAK,UAAU,GAAO,EAAI,MAAQ,CAAS,EACvD,AAAI,IAAU,IACZ,GAAQ,EAAK,QAGf,GAAM,GAAO,EAAK,MAAM,EAAG,CAAK,EAC1B,EAAO,EAAK,MAAM,CAAK,EAGvB,EAAW,CACf,GAAqB,EAAS,EAAc,CAAE,EAAC,GAAU,IAAU,EAAE,EACrE,GAAG,EAAK,IAAI,GAAW,GAAqB,EAAS,CAAW,CAAC,EACjE,GAAG,EAAK,OAAS,CACf,EAAC,WAAQ,MAAM,0BACb,EAAC,WAAQ,SAAU,IAChB,EAAK,OAAS,GAAK,EAAK,SAAW,EAChC,GAAY,wBAAwB,EACpC,GAAY,2BAA4B,EAAK,MAAM,CAEzD,EACI,EAAK,IAAI,GAAW,GAAqB,EAAS,CAAW,CAAC,CACpE,CACF,EAAI,CAAC,CACP,EAGA,MACE,GAAC,MAAG,MAAM,0BACP,CACH,CAEJ,CC7HO,YAA2B,EAAiC,CACjE,MACE,GAAC,MAAG,MAAM,oBACP,OAAO,QAAQ,CAAK,EAAE,IAAI,CAAC,CAAC,EAAK,KAChC,EAAC,MAAG,MAAO,oCAAoC,KAC5C,MAAO,IAAU,SAAW,GAAM,CAAK,EAAI,CAC9C,CACD,CACH,CAEJ,CCXO,YAAqB,EAAiC,CAC3D,MACE,GAAC,OAAI,MAAM,0BACT,EAAC,OAAI,MAAM,qBACR,CACH,CACF,CAEJ,CCMA,YAAuB,EAA+B,CACpD,GAAM,GAAS,GAAc,EAGvB,EAAM,GAAI,KAAI,MAAM,EAAQ,WAAY,EAAO,IAAI,EACzD,MACE,GAAC,MAAG,MAAM,oBACR,EAAC,KAAE,KAAM,EAAI,SAAS,EAAG,MAAM,oBAC5B,EAAQ,KACX,CACF,CAEJ,CAcO,YACL,EAAqB,EACR,CACb,MACE,GAAC,OAAI,MAAM,cACT,EAAC,UACC,MAAM,sBACN,aAAY,GAAY,sBAAsB,GAE7C,EAAO,KACV,EACA,EAAC,MAAG,MAAM,oBACP,EAAS,IAAI,EAAa,CAC7B,CACF,CAEJ,CClBO,YACL,EAAiB,EACO,CACxB,GAAM,GAAU,EAAM,IAAM,EAAc,CACxC,GAAmB,CAAE,EACrB,GAA0B,CAAS,CACrC,CAAC,CAAC,EACC,KACC,EAAI,CAAC,CAAC,CAAE,IAAG,KAAK,KAAY,CAC1B,GAAM,CAAE,SAAU,GAAe,CAAE,EACnC,MAAQ,CACN,EAAG,EAAI,EAAO,EAAI,EAAQ,EAC1B,EAAG,EAAI,EAAO,CAChB,CACF,CAAC,CACH,EAGF,MAAO,IAAkB,CAAE,EACxB,KACC,EAAU,GAAU,EACjB,KACC,EAAI,GAAW,EAAE,SAAQ,QAAO,EAAE,EAClC,GAAK,CAAC,CAAC,GAAU,GAAQ,CAC3B,CACF,CACF,CACJ,CAUO,YACL,EAAiB,EACkB,CACnC,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,EAAM,UAAU,CAGd,KAAK,CAAE,UAAU,CACf,EAAG,MAAM,YAAY,iBAAkB,GAAG,EAAO,KAAK,EACtD,EAAG,MAAM,YAAY,iBAAkB,GAAG,EAAO,KAAK,CACxD,EAGA,UAAW,CACT,EAAG,MAAM,eAAe,gBAAgB,EACxC,EAAG,MAAM,eAAe,gBAAgB,CAC1C,CACF,CAAC,EAGD,EACG,KACC,GAAa,IAAK,EAAuB,EACzC,EAAI,IAAM,EAAU,sBAAsB,CAAC,EAC3C,EAAI,CAAC,CAAE,OAAQ,CAAC,CAClB,EACG,UAAU,CAGT,KAAK,EAAQ,CACX,AAAI,EACF,EAAG,MAAM,YAAY,iBAAkB,GAAG,CAAC,KAAU,EAErD,EAAG,MAAM,eAAe,gBAAgB,CAC5C,EAGA,UAAW,CACT,EAAG,MAAM,eAAe,gBAAgB,CAC1C,CACF,CAAC,EAGL,GAAM,GAAQ,EAAW,uBAAwB,CAAE,EAC7C,EAAQ,EAAU,EAAO,YAAa,CAAE,KAAM,EAAK,CAAC,EAC1D,SACG,KACC,EAAU,CAAC,CAAE,YAAa,EAAS,EAAQ,CAAK,EAChD,EAAI,GAAM,EAAG,eAAe,CAAC,CAC/B,EACG,UAAU,IAAM,EAAG,KAAK,CAAC,EAGvB,GAAgB,EAAI,CAAS,EACjC,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CCtGA,YAA+B,EAAgC,CAC7D,GAAM,GAAkB,CAAC,EACzB,OAAW,KAAW,GAAY,eAAgB,CAAS,EAAG,CAC5D,GAAI,GAGA,EAAO,EAAQ,WACnB,GAAI,YAAgB,MAClB,KAAQ,EAAQ,YAAY,KAAK,EAAK,WAAY,GAAI,CACpD,GAAM,GAAS,EAAK,UAAU,EAAM,KAAK,EACzC,EAAO,EAAO,UAAU,EAAM,GAAG,MAAM,EACvC,EAAQ,KAAK,CAAM,CACrB,CACJ,CACA,MAAO,EACT,CAQA,YAAc,EAAqB,EAA2B,CAC5D,EAAO,OAAO,GAAG,MAAM,KAAK,EAAO,UAAU,CAAC,CAChD,CAoBO,YACL,EAAiB,EAAwB,CAAE,UACR,CAGnC,GAAM,GAAc,GAAI,KACxB,OAAW,KAAU,IAAsB,CAAS,EAAG,CACrD,GAAM,CAAC,CAAE,GAAM,EAAO,YAAa,MAAM,WAAW,EACpD,AAAI,GAAmB,gBAAgB,KAAO,CAAE,GAC9C,GAAY,IAAI,CAAC,EAAI,GAAiB,CAAC,CAAE,CAAC,EAC1C,EAAO,YAAY,EAAY,IAAI,CAAC,CAAE,CAAE,EAE5C,CAGA,MAAI,GAAY,OAAS,EAChB,EAGF,EAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAGlB,SACG,KACC,GAAU,EAAM,KAAK,GAAS,CAAC,CAAC,CAAC,CACnC,EACG,UAAU,GAAU,CACnB,EAAG,OAAS,CAAC,E
AGb,OAAW,CAAC,EAAI,IAAe,GAAa,CAC1C,GAAM,GAAQ,EAAW,cAAe,CAAU,EAC5C,EAAQ,EAAW,gBAAgB,KAAO,CAAE,EAClD,AAAK,EAGH,GAAK,EAAO,CAAK,EAFjB,GAAK,EAAO,CAAK,CAGrB,CACF,CAAC,EAGE,EAAM,GAAG,CAAC,GAAG,CAAW,EAC5B,IAAI,CAAC,CAAC,CAAE,KACP,GAAgB,EAAY,CAAS,CACtC,CACH,EACG,KACC,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,GAAM,CACR,CACJ,CAAC,CACH,CRlFA,GAAI,IAAW,EAaf,YAA2B,EAA0C,CACnE,GAAI,EAAG,mBAAoB,CACzB,GAAM,GAAU,EAAG,mBACnB,GAAI,EAAQ,UAAY,KACtB,MAAO,GAGJ,GAAI,EAAQ,UAAY,KAAO,CAAC,EAAQ,SAAS,OACpD,MAAO,IAAkB,CAAO,CACpC,CAIF,CAgBO,YACL,EACuB,CACvB,MAAO,IAAiB,CAAE,EACvB,KACC,EAAI,CAAC,CAAE,WAEE,EACL,WAAY,AAFE,GAAsB,CAAE,EAElB,MAAQ,CAC9B,EACD,EACD,EAAwB,YAAY,CACtC,CACJ,CAeO,YACL,EAAiB,EAC8B,CAC/C,GAAM,CAAE,QAAS,GAAU,WAAW,SAAS,EAGzC,EAAW,EAAM,IAAM,CAC3B,GAAM,GAAQ,GAAI,GASlB,GARA,EAAM,UAAU,CAAC,CAAE,gBAAiB,CAClC,AAAI,GAAc,EAChB,EAAG,aAAa,WAAY,GAAG,EAE/B,EAAG,gBAAgB,UAAU,CACjC,CAAC,EAGG,WAAY,YAAY,EAAG,CAC7B,GAAM,GAAS,EAAG,QAAQ,KAAK,EAC/B,EAAO,GAAK,UAAU,EAAE,KACxB,EAAO,aACL,GAAsB,EAAO,EAAE,EAC/B,CACF,CACF,CAGA,GAAM,GAAY,EAAG,QAAQ,CAC3B,mCACA,iBACF,EAAE,KAAK,IAAI,CAAC,EACZ,GAAI,YAAqB,aAAa,CACpC,GAAM,GAAO,GAAkB,CAAS,EAGxC,GAAI,MAAO,IAAS,aAClB,GAAU,UAAU,SAAS,UAAU,GACvC,GAAQ,uBAAuB,GAC9B,CACD,GAAM,GAAe,GAAoB,EAAM,EAAI,CAAO,EAG1D,MAAO,IAAe,CAAE,EACrB,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,EACpC,GACE,GAAiB,CAAS,EACvB,KACC,GAAU,EAAM,KAAK,GAAS,CAAC,CAAC,CAAC,EACjC,EAAI,CAAC,CAAE,QAAO,YAAa,GAAS,CAAM,EAC1C,EAAqB,EACrB,EAAU,GAAU,EAAS,EAAe,CAAK,CACnD,CACJ,CACF,CACJ,CACF,CAGA,MAAO,IAAe,CAAE,EACrB,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,EAGD,MAAO,IAAuB,CAAE,EAC7B,KACC,EAAO,GAAW,CAAO,EACzB,GAAK,CAAC,EACN,EAAU,IAAM,CAAQ,CAC1B,CACJ,68IShLA,GAAI,IAKA,GAAQ,EAWZ,aAA0C,CACxC,MAAO,OAAO,UAAY,aAAe,kBAAmB,SACxD,GAAY,sDAAsD,EAClE,EAAG,MAAS,CAClB,CAaO,YACL,EACgC,CAChC,SAAG,UAAU,OAAO,SAAS,EAC7B,QAAa,GAAa,EACvB,KACC,EAAI,IAAM,QAAQ,WAAW,CAC3B,YAAa,GACb,WACF,CAAC,CAAC,EACF,EAAM,MAAS,EACf,EAAY,CAAC,CACf,GAGF,GAAS,UAAU,IAAM,CACvB,EAAG,UAAU,IAAI,SAAS,EAC1B,GAAM,GAAK,aAAa,OAClB,EAAO,EAAE,MAAO,CAAE,MAAO,SAAU,CAAC,EAC1C,QAAQ,WAAW,OAAO,EAAI,EAAG,YAAa,AAAC,GAAgB,CAG7D,GAAM,GAAS,EAAK,aAAa,CAAE,KAAM,QAAS,CAAC,EACnD,EAAO,UAAY,EAGnB,EAAG,YAAY,CAAI,CACrB,CAAC,CACH,CAAC,EAGM,GACJ,KACC,EAAM,CAAE,IAAK,CAAG,CAAC,CACnB,CACJ,CCzCO,YACL,EAAwB,CAAE,UAAS,UACd,CACrB,GAAI,GAAO,GACX,MAAO,GAGL,EACG,KACC,EAAI,GAAU,EAAO,QAAQ,qBAAqB,CAAE,EACpD,EAAO,GAAW,IAAO,CAAO,EAChC,EAAe,CAAE,OAAQ,OAAQ,OAAQ,EAAK,CAAC,CACjD,EAGF,EACG,KACC,EAAO,GAAU,GAAU,CAAC,CAAI,EAChC,EAAI,IAAM,EAAO,EAAG,IAAI,EACxB,EAAI,GAAW,EACb,OAAQ,EAAS,OAAS,OAC5B,EAAa,CACf,CACJ,CACF,CAaO,YACL,EAAwB,EACQ,CAChC,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,SAAM,UAAU,CAAC,CAAE,SAAQ,YAAa,CACtC,AAAI,IAAW,OACb,EAAG,aAAa,OAAQ,EAAE,EAE1B,EAAG,gBAAgB,MAAM,EACvB,GACF,EAAG,eAAe,CACtB,CAAC,EAGM,GAAa,EAAI,CAAO,EAC5B,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CC9FA,GAAM,IAAW,EAAE,OAAO,EAgBnB,YACL,EACkC,CAClC,SAAG,YAAY,EAAQ,EACvB,GAAS,YAAY,GAAY,CAAE,CAAC,EAG7B,EAAG,CAAE,IAAK,CAAG,CAAC,CACvB,CCKO,YACL,EACyB,CACzB,GAAM,GAAS,EAA8B,iBAAkB,CAAE,EAC3D,EAAS,EAAO,KAAK,GAAS,EAAM,OAAO,GAAK,EAAO,GAC7D,MAAO,GAAM,GAAG,EAAO,IAAI,GAAS,EAAU,EAAO,QAAQ,EAC1D,KACC,EAAmB,CACjB,OAAQ,EAAW,aAAa,EAAM,KAAK,CAC7C,CAAC,CACH,CACF,CAAC,EACE,KACC,EAAU,CACR,OAAQ,EAAW,aAAa,EAAO,KAAK,CAC9C,CAAgB,CAClB,CACJ,CAcO,YACL,EACoC,CACpC,GAAM,GAAY,EAAW,iBAAkB,CAAE,EACjD,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,SAAc,CAAC,EAAO,GAAiB,CAAE,CAAC,CAAC,EACxC,KACC,GAAU,EAAG,EAAuB,E
ACpC,GAAU,EAAM,KAAK,GAAS,CAAC,CAAC,CAAC,CACnC,EACG,UAAU,CAGT,KAAK,CAAC,CAAE,WAAW,CACjB,GAAM,GAAS,GAAiB,CAAM,EAChC,CAAE,SAAU,GAAe,CAAM,EAGvC,EAAG,MAAM,YAAY,mBAAoB,GAAG,EAAO,KAAK,EACxD,EAAG,MAAM,YAAY,uBAAwB,GAAG,KAAS,EAGzD,EAAU,SAAS,CACjB,SAAU,SACV,KAAM,EAAO,CACf,CAAC,CACH,EAGA,UAAW,CACT,EAAG,MAAM,eAAe,kBAAkB,EAC1C,EAAG,MAAM,eAAe,sBAAsB,CAChD,CACF,CAAC,EAGE,GAAiB,CAAE,EACvB,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,EACE,KACC,GAAY,EAAc,CAC5B,CACJ,CC/DO,YACL,EAAiB,CAAE,UAAS,UACI,CAChC,MAAO,GAGL,GAAG,EAAY,2BAA4B,CAAE,EAC1C,IAAI,GAAS,GAAe,EAAO,CAAE,QAAO,CAAC,CAAC,EAGjD,GAAG,EAAY,cAAe,CAAE,EAC7B,IAAI,GAAS,GAAa,CAAK,CAAC,EAGnC,GAAG,EAAY,qBAAsB,CAAE,EACpC,IAAI,GAAS,GAAe,CAAK,CAAC,EAGrC,GAAG,EAAY,UAAW,CAAE,EACzB,IAAI,GAAS,GAAa,EAAO,CAAE,UAAS,QAAO,CAAC,CAAC,EAGxD,GAAG,EAAY,cAAe,CAAE,EAC7B,IAAI,GAAS,GAAiB,CAAK,CAAC,CACzC,CACF,CCjCO,YACL,EAAkB,CAAE,UACA,CACpB,MAAO,GACJ,KACC,EAAU,GAAW,EACnB,EAAG,EAAI,EACP,EAAG,EAAK,EAAE,KAAK,GAAM,GAAI,CAAC,CAC5B,EACG,KACC,EAAI,GAAW,EAAE,UAAS,QAAO,EAAE,CACrC,CACF,CACF,CACJ,CAaO,YACL,EAAiB,EACc,CAC/B,GAAM,GAAQ,EAAW,cAAe,CAAE,EAC1C,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,SAAM,UAAU,CAAC,CAAE,UAAS,YAAa,CACvC,EAAM,YAAc,EACpB,AAAI,EACF,EAAG,aAAa,gBAAiB,MAAM,EAEvC,EAAG,gBAAgB,eAAe,CACtC,CAAC,EAGM,GAAY,EAAI,CAAO,EAC3B,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CChCA,YAAkB,CAAE,aAAgD,CAClE,GAAI,CAAC,GAAQ,iBAAiB,EAC5B,MAAO,GAAG,EAAK,EAGjB,GAAM,GAAa,EAChB,KACC,EAAI,CAAC,CAAE,OAAQ,CAAE,QAAU,CAAC,EAC5B,GAAY,EAAG,CAAC,EAChB,EAAI,CAAC,CAAC,EAAG,KAAO,CAAC,EAAI,EAAG,CAAC,CAAU,EACnC,EAAwB,CAAC,CAC3B,EAGI,EAAU,EAAc,CAAC,EAAW,CAAU,CAAC,EAClD,KACC,EAAO,CAAC,CAAC,CAAE,UAAU,CAAC,CAAE,MAAQ,KAAK,IAAI,EAAI,EAAO,CAAC,EAAI,GAAG,EAC5D,EAAI,CAAC,CAAC,CAAE,CAAC,MAAgB,CAAS,EAClC,EAAqB,CACvB,EAGI,EAAU,GAAY,QAAQ,EACpC,MAAO,GAAc,CAAC,EAAW,CAAO,CAAC,EACtC,KACC,EAAI,CAAC,CAAC,CAAE,UAAU,KAAY,EAAO,EAAI,KAAO,CAAC,CAAM,EACvD,EAAqB,EACrB,EAAU,GAAU,EAAS,EAAU,EAAG,EAAK,CAAC,EAChD,EAAU,EAAK,CACjB,CACJ,CAcO,YACL,EAAiB,EACG,CACpB,MAAO,GAAM,IAAM,CACjB,GAAM,GAAS,iBAAiB,CAAE,EAClC,MAAO,GACL,EAAO,WAAa,UACpB,EAAO,WAAa,gBACtB,CACF,CAAC,EACE,KACC,GAAkB,GAAiB,CAAE,EAAG,GAAS,CAAO,CAAC,EACzD,EAAI,CAAC,CAAC,EAAQ,CAAE,UAAU,KAAa,EACrC,OAAQ,EAAS,EAAS,EAC1B,SACA,QACF,EAAE,EACF,EAAqB,CAAC,EAAG,IACvB,EAAE,SAAW,EAAE,QACf,EAAE,SAAW,EAAE,QACf,EAAE,SAAW,EAAE,MAChB,EACD,EAAY,CAAC,CACf,CACJ,CAaO,YACL,EAAiB,CAAE,UAAS,SACG,CAC/B,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,SACG,KACC,EAAwB,QAAQ,EAChC,GAAkB,CAAO,CAC3B,EACG,UAAU,CAAC,CAAC,CAAE,UAAU,CAAE,aAAc,CACvC,AAAI,EACF,EAAG,aAAa,gBAAiB,EAAS,SAAW,QAAQ,EAE7D,EAAG,gBAAgB,eAAe,CACtC,CAAC,EAGL,EAAM,UAAU,CAAK,EAGd,EACJ,KACC,GAAU,EAAM,KAAK,GAAS,CAAC,CAAC,CAAC,EACjC,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CCxHO,YACL,EAAiB,CAAE,YAAW,WACL,CACzB,MAAO,IAAgB,EAAI,CAAE,YAAW,SAAQ,CAAC,EAC9C,KACC,EAAI,CAAC,CAAE,OAAQ,CAAE,QAAU,CACzB,GAAM,CAAE,UAAW,GAAe,CAAE,EACpC,MAAO,CACL,OAAQ,GAAK,CACf,CACF,CAAC,EACD,EAAwB,QAAQ,CAClC,CACJ,CAaO,YACL,EAAiB,EACmB,CACpC,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,EAAM,UAAU,CAAC,CAAE,YAAa,CAC9B,AAAI,EACF,EAAG,aAAa,gBAAiB,QAAQ,EAEzC,EAAG,gBAAgB,eAAe,CACtC,CAAC,EAGD,GAAM,GAAU,GAAmB,YAAY,EAC/C,MAAI,OAAO,IAAY,YACd,EAGF,GAAiB,EAAS,CAAO,EACrC,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CC1DO,YACL,EAAiB,CAAE,YAAW,WACZ,CAGlB,GAAM,GAAU,EACb,KACC,EAAI,CAAC,CAAE,YAAa,CAAM,EAC1B,EAAqB,CACvB,EAGI,EAAU,EACb,KACC,EAAU,IAAM,GAAiB,CAAE,EAChC,KACC,EAAI,CAA
C,CAAE,YAAc,EACnB,IAAQ,EAAG,UACX,OAAQ,EAAG,UAAY,CACzB,EAAE,EACF,EAAwB,QAAQ,CAClC,CACF,CACF,EAGF,MAAO,GAAc,CAAC,EAAS,EAAS,CAAS,CAAC,EAC/C,KACC,EAAI,CAAC,CAAC,EAAQ,CAAE,MAAK,UAAU,CAAE,OAAQ,CAAE,KAAK,KAAM,CAAE,cACtD,GAAS,KAAK,IAAI,EAAG,EACjB,KAAK,IAAI,EAAG,EAAS,EAAI,CAAM,EAC/B,KAAK,IAAI,EAAG,EAAS,EAAI,CAAM,CACnC,EACO,CACL,OAAQ,EAAM,EACd,SACA,OAAQ,EAAM,GAAU,CAC1B,EACD,EACD,EAAqB,CAAC,EAAG,IACvB,EAAE,SAAW,EAAE,QACf,EAAE,SAAW,EAAE,QACf,EAAE,SAAW,EAAE,MAChB,CACH,CACJ,CCnDO,YACL,EACqB,CACrB,GAAM,GAAU,SAAkB,WAAW,GAAK,CAChD,MAAO,EAAO,UAAU,GAAS,WAC/B,EAAM,aAAa,qBAAqB,CAC1C,EAAE,OAAO,CACX,EAGA,MAAO,GAAG,GAAG,CAAM,EAChB,KACC,GAAS,GAAS,EAAU,EAAO,QAAQ,EACxC,KACC,EAAM,CAAK,CACb,CACF,EACA,EAAU,EAAO,KAAK,IAAI,EAAG,EAAQ,KAAK,EAAE,EAC5C,EAAI,GAAU,EACZ,MAAO,EAAO,QAAQ,CAAK,EAC3B,MAAO,CACL,OAAS,EAAM,aAAa,sBAAsB,EAClD,QAAS,EAAM,aAAa,uBAAuB,EACnD,OAAS,EAAM,aAAa,sBAAsB,CACpD,CACF,EAAa,EACb,EAAY,CAAC,CACf,CACJ,CASO,YACL,EACgC,CAChC,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,EAAM,UAAU,GAAW,CAGzB,OAAW,CAAC,EAAK,IAAU,QAAO,QAAQ,EAAQ,KAAK,EACrD,SAAS,KAAK,aAAa,iBAAiB,IAAO,CAAK,EAG1D,OAAS,GAAQ,EAAG,EAAQ,EAAO,OAAQ,IAAS,CAClD,GAAM,GAAQ,EAAO,GAAO,mBAC5B,AAAI,YAAiB,cACnB,GAAM,OAAS,EAAQ,QAAU,EACrC,CAGA,SAAS,YAAa,CAAO,CAC/B,CAAC,EAGD,GAAM,GAAS,EAA8B,QAAS,CAAE,EACxD,MAAO,IAAa,CAAM,EACvB,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CCvHA,OAAwB,SAiCxB,YAAiB,EAAyB,CACxC,EAAG,aAAa,kBAAmB,EAAE,EACrC,GAAM,GAAO,EAAG,UAChB,SAAG,gBAAgB,iBAAiB,EAC7B,CACT,CAWO,YACL,CAAE,UACI,CACN,AAAI,WAAY,YAAY,GAC1B,GAAI,GAA8B,GAAc,CAC9C,GAAI,YAAY,iDAAkD,CAChE,KAAM,GACJ,EAAG,aAAa,qBAAqB,GACrC,GAAQ,EACN,EAAG,aAAa,uBAAuB,CACzC,CAAC,CAEL,CAAC,EACE,GAAG,UAAW,GAAM,EAAW,KAAK,CAAE,CAAC,CAC5C,CAAC,EACE,KACC,EAAI,GAAM,CAER,AADgB,EAAG,QACX,MAAM,CAChB,CAAC,EACD,EAAM,GAAY,kBAAkB,CAAC,CACvC,EACG,UAAU,CAAM,CAEzB,CCvCA,YAAoB,EAAwB,CAC1C,GAAI,EAAK,OAAS,EAChB,MAAO,CAAC,EAAE,EAGZ,GAAM,CAAC,EAAM,GAAQ,CAAC,GAAG,CAAI,EAC1B,KAAK,CAAC,EAAG,IAAM,EAAE,OAAS,EAAE,MAAM,EAClC,IAAI,GAAO,EAAI,QAAQ,SAAU,EAAE,CAAC,EAGnC,EAAQ,EACZ,GAAI,IAAS,EACX,EAAQ,EAAK,WAEb,MAAO,EAAK,WAAW,CAAK,IAAM,EAAK,WAAW,CAAK,GACrD,IAGJ,MAAO,GAAK,IAAI,GAAO,EAAI,QAAQ,EAAK,MAAM,EAAG,CAAK,EAAG,EAAE,CAAC,CAC9D,CAaO,YAAsB,EAAiC,CAC5D,GAAM,GAAS,SAAkB,YAAa,eAAgB,CAAI,EAClE,GAAI,EACF,MAAO,GAAG,CAAM,EACX,CACL,GAAM,GAAS,GAAc,EAC7B,MAAO,IAAW,GAAI,KAAI,cAAe,GAAQ,EAAO,IAAI,CAAC,EAC1D,KACC,EAAI,GAAW,GAAW,EAAY,MAAO,CAAO,EACjD,IAAI,GAAQ,EAAK,WAAY,CAChC,CAAC,EACD,GAAe,CAAC,CAAC,EACjB,EAAI,GAAW,SAAS,YAAa,EAAS,eAAgB,CAAI,CAAC,CACrE,CACJ,CACF,CCOO,YACL,CAAE,YAAW,YAAW,aAClB,CACN,GAAM,GAAS,GAAc,EAC7B,GAAI,SAAS,WAAa,QACxB,OAGF,AAAI,qBAAuB,UACzB,SAAQ,kBAAoB,SAG5B,EAAU,OAAQ,cAAc,EAC7B,UAAU,IAAM,CACf,QAAQ,kBAAoB,MAC9B,CAAC,GAIL,GAAM,GAAU,GAAoC,gBAAgB,EACpE,AAAI,MAAO,IAAY,aACrB,GAAQ,KAAO,EAAQ,MAGzB,GAAM,GAAQ,GAAa,EACxB,KACC,EAAI,GAAS,EAAM,IAAI,GAAQ,GAAG,GAAI,KAAI,EAAM,EAAO,IAAI,GAAG,CAAC,EAC/D,EAAU,GAAQ,EAAsB,SAAS,KAAM,OAAO,EAC3D,KACC,EAAO,GAAM,CAAC,EAAG,SAAW,CAAC,EAAG,OAAO,EACvC,EAAU,GAAM,CACd,GAAI,EAAG,iBAAkB,SAAS,CAChC,GAAM,GAAK,EAAG,OAAO,QAAQ,GAAG,EAChC,GAAI,GAAM,CAAC,EAAG,OAAQ,CACpB,GAAM,GAAM,GAAI,KAAI,EAAG,IAAI,EAO3B,GAJA,EAAI,OAAS,GACb,EAAI,KAAO,GAIT,EAAI,WAAa,SAAS,UAC1B,EAAK,SAAS,EAAI,SAAS,CAAC,EAE5B,SAAG,eAAe,EACX,EAAG,CACR,IAAK,GAAI,KAAI,EAAG,IAAI,CACtB,CAAC,CAEL,CACF,CACA,MAAO,GACT,CAAC,CACH,CACF,EACA,GAAoB,CACtB,EAGI,EAAO,EAAyB,OAAQ,UAAU,EACrD,KACC,EAAO,GAAM,EAAG,QAAU,IAAI,EAC9B,EAAI,GAAO,EACT,IAAK,GAAI,KAAI,SAAS,IAAI,EAC1B,OAAQ,EAAG,KACb,EAAE,EACF,GAAoB,CACtB,EAGF,EAAM,EAAO,CAAI,EACd,KACC,EAAqB,CAAC,EAAG,IAAM,EAAE,IAAI,OAAS,EAAE,IAAI,IAAI,EACxD,EAAI,CAAC,
CAAE,SAAU,CAAG,CACtB,EACG,UAAU,CAAS,EAGxB,GAAM,GAAY,EACf,KACC,EAAwB,UAAU,EAClC,EAAU,GAAO,GAAQ,EAAI,IAAI,EAC9B,KACC,GAAW,IACT,IAAY,CAAG,EACR,GACR,CACH,CACF,EACA,GAAM,CACR,EAGF,EACG,KACC,GAAO,CAAS,CAClB,EACG,UAAU,CAAC,CAAE,SAAU,CACtB,QAAQ,UAAU,CAAC,EAAG,GAAI,GAAG,GAAK,CACpC,CAAC,EAGL,GAAM,GAAM,GAAI,WAChB,EACG,KACC,EAAU,GAAO,EAAI,KAAK,CAAC,EAC3B,EAAI,GAAO,EAAI,gBAAgB,EAAK,WAAW,CAAC,CAClD,EACG,UAAU,CAAS,EAGxB,EACG,KACC,GAAK,CAAC,CACR,EACG,UAAU,GAAe,CACxB,OAAW,KAAY,CAGrB,QACA,sBACA,oBACA,yBAGA,+BACA,gCACA,mCACA,+BACA,2BACA,2BACA,GAAG,GAAQ,wBAAwB,EAC/B,CAAC,0BAA0B,EAC3B,CAAC,CACP,EAAG,CACD,GAAM,GAAS,GAAmB,CAAQ,EACpC,EAAS,GAAmB,EAAU,CAAW,EACvD,AACE,MAAO,IAAW,aAClB,MAAO,IAAW,aAElB,EAAO,YAAY,CAAM,CAE7B,CACF,CAAC,EAGL,EACG,KACC,GAAK,CAAC,EACN,EAAI,IAAM,GAAoB,WAAW,CAAC,EAC1C,EAAU,GAAM,EAAY,SAAU,CAAE,CAAC,EACzC,GAAU,GAAM,CACd,GAAM,GAAS,EAAE,QAAQ,EACzB,GAAI,EAAG,IAAK,CACV,OAAW,KAAQ,GAAG,kBAAkB,EACtC,EAAO,aAAa,EAAM,EAAG,aAAa,CAAI,CAAE,EAClD,SAAG,YAAY,CAAM,EAGd,GAAI,GAAW,GAAY,CAChC,EAAO,OAAS,IAAM,EAAS,SAAS,CAC1C,CAAC,CAGH,KACE,UAAO,YAAc,EAAG,YACxB,EAAG,YAAY,CAAM,EACd,CAEX,CAAC,CACH,EACG,UAAU,EAGf,EAAM,EAAO,CAAI,EACd,KACC,GAAO,CAAS,CAClB,EACG,UAAU,CAAC,CAAE,MAAK,YAAa,CAC9B,AAAI,EAAI,MAAQ,CAAC,EACf,GAAgB,EAAI,IAAI,EAExB,OAAO,SAAS,EAAG,kBAAQ,IAAK,CAAC,CAErC,CAAC,EAGL,EACG,KACC,GAAU,CAAK,EACf,GAAa,GAAG,EAChB,EAAwB,QAAQ,CAClC,EACG,UAAU,CAAC,CAAE,YAAa,CACzB,QAAQ,aAAa,EAAQ,EAAE,CACjC,CAAC,EAGL,EAAM,EAAO,CAAI,EACd,KACC,GAAY,EAAG,CAAC,EAChB,EAAO,CAAC,CAAC,EAAG,KAAO,EAAE,IAAI,WAAa,EAAE,IAAI,QAAQ,EACpD,EAAI,CAAC,CAAC,CAAE,KAAW,CAAK,CAC1B,EACG,UAAU,CAAC,CAAE,YAAa,CACzB,OAAO,SAAS,EAAG,kBAAQ,IAAK,CAAC,CACnC,CAAC,CACP,CCzSA,OAAuB,SCAvB,OAAuB,SAsChB,YACL,EAA2B,EACD,CAC1B,GAAM,GAAY,GAAI,QAAO,EAAO,UAAW,KAAK,EAC9C,EAAY,CAAC,EAAY,EAAc,IACpC,GAAG,4BAA+B,WAI3C,MAAO,AAAC,IAAkB,CACxB,EAAQ,EACL,QAAQ,gBAAiB,GAAG,EAC5B,KAAK,EAGR,GAAM,GAAQ,GAAI,QAAO,MAAM,EAAO,cACpC,EACG,QAAQ,uBAAwB,MAAM,EACtC,QAAQ,EAAW,GAAG,KACtB,KAAK,EAGV,MAAO,IACL,GACI,eAAW,CAAK,EAChB,GAED,QAAQ,EAAO,CAAS,EACxB,QAAQ,8BAA+B,IAAI,CAClD,CACF,CC9BO,YAA0B,EAAuB,CACtD,MAAO,GACJ,MAAM,YAAY,EAChB,IAAI,CAAC,EAAO,IAAU,EAAQ,EAC3B,EAAM,QAAQ,+BAAgC,IAAI,EAClD,CACJ,EACC,KAAK,EAAE,EACT,QAAQ,kCAAmC,EAAE,EAC7C,KAAK,CACV,CCoCO,YACL,EAC+B,CAC/B,MAAO,GAAQ,OAAS,CAC1B,CASO,YACL,EAC+B,CAC/B,MAAO,GAAQ,OAAS,CAC1B,CASO,YACL,EACgC,CAChC,MAAO,GAAQ,OAAS,CAC1B,CCvEA,YAA0B,CAAE,SAAQ,QAAkC,CAGpE,AAAI,EAAO,KAAK,SAAW,GAAK,EAAO,KAAK,KAAO,MACjD,GAAO,KAAO,CACZ,GAAY,oBAAoB,CAClC,GAGE,EAAO,YAAc,aACvB,GAAO,UAAY,GAAY,yBAAyB,GAQ1D,GAAM,GAAyB,CAC7B,SANe,GAAY,wBAAwB,EAClD,MAAM,SAAS,EACf,OAAO,OAAO,EAKf,YAAa,GAAQ,gBAAgB,CACvC,EAGA,MAAO,CAAE,SAAQ,OAAM,SAAQ,CACjC,CAkBO,YACL,EAAa,EACC,CACd,GAAM,GAAS,GAAc,EACvB,EAAS,GAAI,QAAO,CAAG,EAGvB,EAAM,GAAI,GACV,EAAM,GAAY,EAAQ,CAAE,KAAI,CAAC,EACpC,KACC,EAAI,GAAW,CACb,GAAI,GAAsB,CAAO,EAC/B,OAAW,KAAU,GAAQ,KAAK,MAChC,OAAW,KAAY,GACrB,EAAS,SAAW,GAAG,GAAI,KAAI,EAAS,SAAU,EAAO,IAAI,IAEnE,MAAO,EACT,CAAC,EACD,GAAM,CACR,EAGF,UAAK,CAAK,EACP,KACC,EAAI,GAAS,EACX,KAAM,EACN,KAAM,GAAiB,CAAI,CAC7B,EAAwB,CAC1B,EACG,UAAU,EAAI,KAAK,KAAK,CAAG,CAAC,EAG1B,CAAE,MAAK,KAAI,CACpB,CCxEO,YACL,CAAE,aACI,CACN,GAAM,GAAS,GAAc,EACvB,EAAY,GAChB,GAAI,KAAI,mBAAoB,EAAO,IAAI,CACzC,EAGM,EAAW,EACd,KACC,EAAI,GAAY,CACd,GAAM,CAAC,CAAE,GAAW,EAAO,KAAK,MAAM,aAAa,EACnD,MAAO,GAAS,KAAK,CAAC,CAAE,UAAS,aAC/B,IAAY,GAAW,EAAQ,SAAS,CAAO,CAChD,GAAK,EAAS,EACjB,CAAC,CACH,EAGF,EAAc,CAAC,EAAW,CAAQ,CAAC,EAChC,KACC,EAAI,CAAC,CAAC,EAAU,KAAa,GAAI,KAAI,EAClC,OAAO,GAAW,IAAY,CAAO,EACrC,IAAI,GAAW,CACd,GAAG,GAAI,KAAI,MAAM,EAAQ,WAAY,EAAO,IAAI,IAChD,CACF,CAAC,CACH,CAAC,EACD,EAAU,GAAQ,EAAsB,SAAS,KAAM,OAAO,EAC3D,KACC,EAAO,GAAM,CAAC,EAAG,SAAW,CAAC,EA
AG,OAAO,EACvC,EAAU,GAAM,CACd,GAAI,EAAG,iBAAkB,SAAS,CAChC,GAAM,GAAK,EAAG,OAAO,QAAQ,GAAG,EAChC,GAAI,GAAM,CAAC,EAAG,QAAU,EAAK,IAAI,EAAG,IAAI,EACtC,SAAG,eAAe,EACX,EAAG,EAAG,IAAI,CAErB,CACA,MAAO,EACT,CAAC,EACD,EAAU,GAAO,CACf,GAAM,CAAE,WAAY,EAAK,IAAI,CAAG,EAChC,MAAO,IAAa,GAAI,KAAI,CAAG,CAAC,EAC7B,KACC,EAAI,GAAW,CAEb,GAAM,GAAO,AADI,GAAY,EACP,KAAK,QAAQ,EAAO,KAAM,EAAE,EAClD,MAAO,GAAQ,SAAS,CAAI,EACxB,GAAI,KAAI,MAAM,KAAW,IAAQ,EAAO,IAAI,EAC5C,GAAI,KAAI,CAAG,CACjB,CAAC,CACH,CACJ,CAAC,CACH,CACF,CACF,EACG,UAAU,GAAO,GAAY,CAAG,CAAC,EAGtC,EAAc,CAAC,EAAW,CAAQ,CAAC,EAChC,UAAU,CAAC,CAAC,EAAU,KAAa,CAElC,AADc,EAAW,mBAAmB,EACtC,YAAY,GAAsB,EAAU,CAAO,CAAC,CAC5D,CAAC,EAGH,EAAU,KAAK,GAAY,CAAQ,CAAC,EACjC,UAAU,GAAW,CA1I1B,MA6IM,GAAI,GAAW,SAAS,aAAc,cAAc,EACpD,GAAI,IAAa,KAAM,CACrB,GAAM,GAAS,MAAO,UAAP,cAAgB,UAAW,SAC1C,EAAW,CAAC,EAAQ,QAAQ,SAAS,CAAM,EAG3C,SAAS,aAAc,EAAU,cAAc,CACjD,CAGA,GAAI,EACF,OAAW,KAAW,IAAqB,UAAU,EACnD,EAAQ,OAAS,EACvB,CAAC,CACL,CCrEO,YACL,EAAsB,CAAE,OACC,CACzB,GAAM,GAAK,gCAAU,YAAa,GAG5B,CAAE,gBAAiB,GAAY,EACrC,AAAI,EAAa,IAAI,GAAG,GACtB,GAAU,SAAU,EAAI,EAG1B,GAAM,GAAS,EACZ,KACC,EAAO,EAAoB,EAC3B,GAAK,CAAC,EACN,EAAI,IAAM,EAAa,IAAI,GAAG,GAAK,EAAE,CACvC,EAGF,GAAY,QAAQ,EACjB,KACC,EAAO,GAAU,CAAC,CAAM,EACxB,GAAK,CAAC,CACR,EACG,UAAU,IAAM,CACf,GAAM,GAAM,GAAI,KAAI,SAAS,IAAI,EACjC,EAAI,aAAa,OAAO,GAAG,EAC3B,QAAQ,aAAa,CAAC,EAAG,GAAI,GAAG,GAAK,CACvC,CAAC,EAGL,EAAO,UAAU,GAAS,CACxB,AAAI,GACF,GAAG,MAAQ,EACf,CAAC,EAGD,GAAM,GAAS,GAAkB,CAAE,EAC7B,EAAS,EACb,EAAU,EAAI,OAAO,EACrB,EAAU,EAAI,OAAO,EAAE,KAAK,GAAM,CAAC,CAAC,EACpC,CACF,EACG,KACC,EAAI,IAAM,EAAG,EAAG,KAAK,CAAC,EACtB,EAAU,EAAE,EACZ,EAAqB,CACvB,EAGF,MAAO,GAAc,CAAC,EAAQ,CAAM,CAAC,EAClC,KACC,EAAI,CAAC,CAAC,EAAO,KAAY,EAAE,QAAO,OAAM,EAAE,EAC1C,EAAY,CAAC,CACf,CACJ,CAUO,YACL,EAAsB,CAAE,MAAK,OACyB,CACtD,GAAM,GAAQ,GAAI,GAGlB,SACG,KACC,EAAwB,OAAO,EAC/B,EAAI,CAAC,CAAE,WAAiC,EACtC,KAAM,EACN,KAAM,CACR,EAAE,CACJ,EACG,UAAU,EAAI,KAAK,KAAK,CAAG,CAAC,EAGjC,EACG,KACC,EAAwB,OAAO,CACjC,EACG,UAAU,CAAC,CAAE,WAAY,CACxB,AAAI,EACF,IAAU,SAAU,CAAK,EACzB,EAAG,YAAc,IAEjB,EAAG,YAAc,GAAY,oBAAoB,CAErD,CAAC,EAGL,EAAU,EAAG,KAAO,OAAO,EACxB,KACC,GAAU,EAAM,KAAK,GAAS,CAAC,CAAC,CAAC,CACnC,EACG,UAAU,IAAM,EAAG,MAAM,CAAC,EAGxB,GAAiB,EAAI,CAAE,MAAK,KAAI,CAAC,EACrC,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CChHO,YACL,EAAiB,CAAE,OAAqB,CAAE,UACL,CACrC,GAAM,GAAQ,GAAI,GACZ,EAAY,GAAqB,EAAG,aAAc,EACrD,KACC,EAAO,OAAO,CAChB,EAGI,EAAO,EAAW,wBAAyB,CAAE,EAC7C,EAAO,EAAW,uBAAwB,CAAE,EAG5C,EAAS,EACZ,KACC,EAAO,EAAoB,EAC3B,GAAK,CAAC,CACR,EAGF,SACG,KACC,GAAe,CAAM,EACrB,GAAU,CAAM,CAClB,EACG,UAAU,CAAC,CAAC,CAAE,SAAS,CAAE,YAAa,CACrC,GAAI,EACF,OAAQ,EAAM,YAGP,GACH,EAAK,YAAc,GAAY,oBAAoB,EACnD,UAGG,GACH,EAAK,YAAc,GAAY,mBAAmB,EAClD,cAIA,EAAK,YAAc,GACjB,sBACA,GAAM,EAAM,MAAM,CACpB,MAGJ,GAAK,YAAc,GAAY,2BAA2B,CAE9D,CAAC,EAGL,EACG,KACC,EAAI,IAAM,EAAK,UAAY,EAAE,EAC7B,EAAU,CAAC,CAAE,WAAY,EACvB,EAAG,GAAG,EAAM,MAAM,EAAG,EAAE,CAAC,EACxB,EAAG,GAAG,EAAM,MAAM,EAAE,CAAC,EAClB,KACC,GAAY,CAAC,EACb,GAAQ,CAAS,EACjB,EAAU,CAAC,CAAC,KAAW,CAAK,CAC9B,CACJ,CAAC,CACH,EACG,UAAU,GAAU,EAAK,YACxB,GAAuB,CAAM,CAC/B,CAAC,EAUE,AAPS,EACb,KACC,EAAO,EAAqB,EAC5B,EAAI,CAAC,CAAE,UAAW,CAAI,CACxB,EAIC,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CC1FO,YACL,EAAkB,CAAE,UACK,CACzB,MAAO,GACJ,KACC,EAAI,CAAC,CAAE,WAAY,CACjB,GAAM,GAAM,GAAY,EACxB,SAAI,KAAO,GACX,EAAI,aAAa,OAAO,GAAG,EAC3B,EAAI,aAAa,IAAI,IAAK,CAAK,EACxB,CAAE,KAAI,CACf,CAAC,CACH,CACJ,CAUO,YACL,EAAuB,EACa,CACpC,GAAM,GAAQ,GAAI,GAClB,SAAM,UAAU,CAAC,CAAE,SAAU,CAC3B,EAAG,aAAa,sBAAuB,EAAG,IAAI,EA
C9C,EAAG,KAAO,GAAG,GACf,CAAC,EAGD,EAAU,EAAI,OAAO,EAClB,UAAU,GAAM,EAAG,eAAe,CAAC,EAG/B,GAAiB,EAAI,CAAO,EAChC,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CCtCO,YACL,EAAiB,CAAE,OAAqB,CAAE,aACJ,CACtC,GAAM,GAAQ,GAAI,GAGZ,EAAS,GAAoB,cAAc,EAC3C,EAAS,EACb,EAAU,EAAO,SAAS,EAC1B,EAAU,EAAO,OAAO,CAC1B,EACG,KACC,GAAU,EAAc,EACxB,EAAI,IAAM,EAAM,KAAK,EACrB,EAAqB,CACvB,EAGF,SACG,KACC,GAAkB,CAAM,EACxB,EAAI,CAAC,CAAC,CAAE,eAAe,KAAW,CAChC,GAAM,GAAQ,EAAM,MAAM,UAAU,EACpC,GAAI,kBAAa,SAAU,EAAM,EAAM,OAAS,GAAI,CAClD,GAAM,GAAO,EAAY,EAAY,OAAS,GAC9C,AAAI,EAAK,WAAW,EAAM,EAAM,OAAS,EAAE,GACzC,GAAM,EAAM,OAAS,GAAK,EAC9B,KACE,GAAM,OAAS,EAEjB,MAAO,EACT,CAAC,CACH,EACG,UAAU,GAAS,EAAG,UAAY,EAChC,KAAK,EAAE,EACP,QAAQ,MAAO,QAAQ,CAC1B,EAGJ,EACG,KACC,EAAO,CAAC,CAAE,UAAW,IAAS,QAAQ,CACxC,EACG,UAAU,GAAO,CAChB,OAAQ,EAAI,UAGL,aACH,AACE,EAAG,UAAU,QACb,EAAM,iBAAmB,EAAM,MAAM,QAErC,GAAM,MAAQ,EAAG,WACnB,MAEN,CAAC,EAUE,AAPS,EACb,KACC,EAAO,EAAqB,EAC5B,EAAI,CAAC,CAAE,UAAW,CAAI,CACxB,EAIC,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,IAAO,EAAE,IAAK,CAAG,EAAE,CACzB,CACJ,CC9CO,YACL,EAAiB,CAAE,SAAQ,aACI,CAC/B,GAAM,GAAS,GAAc,EAC7B,GAAI,CACF,GAAM,GAAM,gCAAU,SAAU,EAAO,OACjC,EAAS,GAAkB,EAAK,CAAM,EAGtC,EAAS,GAAoB,eAAgB,CAAE,EAC/C,EAAS,GAAoB,gBAAiB,CAAE,EAGhD,CAAE,MAAK,OAAQ,EACrB,EACG,KACC,EAAO,EAAoB,EAC3B,GAAO,EAAI,KAAK,EAAO,EAAoB,CAAC,CAAC,EAC7C,GAAK,CAAC,CACR,EACG,UAAU,EAAI,KAAK,KAAK,CAAG,CAAC,EAGjC,EACG,KACC,EAAO,CAAC,CAAE,UAAW,IAAS,QAAQ,CACxC,EACG,UAAU,GAAO,CAChB,GAAM,GAAS,GAAiB,EAChC,OAAQ,EAAI,UAGL,QACH,GAAI,IAAW,EAAO,CACpB,GAAM,GAAU,GAAI,KACpB,OAAW,KAAU,GACnB,sBAAuB,CACzB,EAAG,CACD,GAAM,GAAU,EAAO,kBACvB,EAAQ,IAAI,EAAQ,WAClB,EAAQ,aAAa,eAAe,CACtC,CAAC,CACH,CAGA,GAAI,EAAQ,KAAM,CAChB,GAAM,CAAC,CAAC,IAAS,CAAC,GAAG,CAAO,EAAE,KAAK,CAAC,CAAC,CAAE,GAAI,CAAC,CAAE,KAAO,EAAI,CAAC,EAC1D,EAAK,MAAM,CACb,CAGA,EAAI,MAAM,CACZ,CACA,UAGG,aACA,MACH,GAAU,SAAU,EAAK,EACzB,EAAM,KAAK,EACX,UAGG,cACA,YACH,GAAI,MAAO,IAAW,YACpB,EAAM,MAAM,MACP,CACL,GAAM,GAAM,CAAC,EAAO,GAAG,EACrB,wDACA,CACF,CAAC,EACK,EAAI,KAAK,IAAI,EACjB,MAAK,IAAI,EAAG,EAAI,QAAQ,CAAM,CAAC,EAAI,EAAI,OACrC,GAAI,OAAS,UAAY,GAAK,IAE9B,EAAI,MAAM,EACd,EAAI,GAAG,MAAM,CACf,CAGA,EAAI,MAAM,EACV,cAIA,AAAI,IAAU,GAAiB,GAC7B,EAAM,MAAM,EAEpB,CAAC,EAGL,EACG,KACC,EAAO,CAAC,CAAE,UAAW,IAAS,QAAQ,CACxC,EACG,UAAU,GAAO,CAChB,OAAQ,EAAI,UAGL,QACA,QACA,IACH,EAAM,MAAM,EACZ,EAAM,OAAO,EAGb,EAAI,MAAM,EACV,MAEN,CAAC,EAGL,GAAM,GAAU,GAAiB,EAAO,CAAM,EACxC,EAAU,GAAkB,EAAQ,EAAQ,CAAE,QAAO,CAAC,EAC5D,MAAO,GAAM,EAAQ,CAAO,EACzB,KACC,GAGE,GAAG,GAAqB,eAAgB,CAAE,EACvC,IAAI,GAAS,GAAiB,EAAO,CAAE,QAAO,CAAC,CAAC,EAGnD,GAAG,GAAqB,iBAAkB,CAAE,EACzC,IAAI,GAAS,GAAmB,EAAO,EAAQ,CAAE,WAAU,CAAC,CAAC,CAClE,CACF,CAGJ,OAAS,EAAP,CACA,SAAG,OAAS,GACL,EACT,CACF,CCtKO,YACL,EAAiB,CAAE,SAAQ,aACa,CACxC,MAAO,GAAc,CACnB,EACA,EACG,KACC,EAAU,GAAY,CAAC,EACvB,EAAO,GAAO,CAAC,CAAC,EAAI,aAAa,IAAI,GAAG,CAAC,CAC3C,CACJ,CAAC,EACE,KACC,EAAI,CAAC,CAAC,EAAO,KAAS,GAAuB,EAAM,OAAQ,EAAI,EAC7D,EAAI,aAAa,IAAI,GAAG,CAC1B,CAAC,EACD,EAAI,GAAM,CA1FhB,MA2FQ,GAAM,GAAQ,GAAI,KAGZ,EAAK,SAAS,mBAAmB,EAAI,WAAW,SAAS,EAC/D,OAAS,GAAO,EAAG,SAAS,EAAG,EAAM,EAAO,EAAG,SAAS,EACtD,GAAI,KAAK,gBAAL,QAAoB,aAAc,CACpC,GAAM,GAAW,EAAK,YAChB,EAAW,EAAG,CAAQ,EAC5B,AAAI,EAAS,OAAS,EAAS,QAC7B,EAAM,IAAI,EAAmB,CAAQ,CACzC,CAIF,OAAW,CAAC,EAAM,IAAS,GAAO,CAChC,GAAM,CAAE,cAAe,EAAE,OAAQ,KAAM,CAAI,EAC3C,EAAK,YAAY,GAAG,MAAM,KAAK,CAAU,CAAC,CAC5C,CAGA,MAAO,CAAE,IAAK,EAAI,OAAM,CAC1B,CAAC,CACH,CACJ,CClBO,YACL,EAAiB,CAAE,YAAW,SACT,CACrB,GAAM,GAAS,EAAG,cACZ,EACJ,EAAO,UACP,EAAO,cAAe,UAGxB,MAAO,GAAc,CAAC,EAAO,CAAS,CAAC,EACpC,KACC,EAAI,CAAC,CAAC,CAAE,SAAQ,
UAAU,CAAE,OAAQ,CAAE,SACpC,GAAS,EACL,KAAK,IAAI,EAAQ,KAAK,IAAI,EAAG,EAAI,CAAM,CAAC,EACxC,EACG,CACL,SACA,OAAQ,GAAK,EAAS,CACxB,EACD,EACD,EAAqB,CAAC,EAAG,IACvB,EAAE,SAAW,EAAE,QACf,EAAE,SAAW,EAAE,MAChB,CACH,CACJ,CAuBO,YACL,EAAiB,EACe,CADf,QAAE,YAAF,EAAc,KAAd,EAAc,CAAZ,YAEnB,GAAM,GAAQ,EAAW,0BAA2B,CAAE,EAChD,CAAE,KAAM,GAAiB,CAAK,EACpC,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,SACG,KACC,GAAU,EAAG,EAAuB,EACpC,GAAe,CAAO,CACxB,EACG,UAAU,CAGT,KAAK,CAAC,CAAE,UAAU,CAAE,OAAQ,IAAW,CACrC,EAAM,MAAM,OAAS,GAAG,EAAS,EAAI,MACrC,EAAG,MAAM,IAAY,GAAG,KAC1B,EAGA,UAAW,CACT,EAAM,MAAM,OAAS,GACrB,EAAG,MAAM,IAAY,EACvB,CACF,CAAC,EAGE,GAAa,EAAI,CAAO,EAC5B,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CC1HO,YACL,EAAc,EACW,CACzB,GAAI,MAAO,IAAS,YAAa,CAC/B,GAAM,GAAM,gCAAgC,KAAQ,IACpD,MAAO,IAGL,GAAqB,GAAG,mBAAqB,EAC1C,KACC,EAAI,GAAY,EACd,QAAS,EAAQ,QACnB,EAAE,EACF,GAAe,CAAC,CAAC,CACnB,EAGF,GAAkB,CAAG,EAClB,KACC,EAAI,GAAS,EACX,MAAO,EAAK,iBACZ,MAAO,EAAK,WACd,EAAE,EACF,GAAe,CAAC,CAAC,CACnB,CACJ,EACG,KACC,EAAI,CAAC,CAAC,EAAS,KAAW,OAAK,GAAY,EAAO,CACpD,CAGJ,KAAO,CACL,GAAM,GAAM,gCAAgC,IAC5C,MAAO,IAAkB,CAAG,EACzB,KACC,EAAI,GAAS,EACX,aAAc,EAAK,YACrB,EAAE,EACF,GAAe,CAAC,CAAC,CACnB,CACJ,CACF,CCrDO,YACL,EAAc,EACW,CACzB,GAAM,GAAM,WAAW,qBAAwB,mBAAmB,CAAO,IACzE,MAAO,IAA2B,CAAG,EAClC,KACC,EAAI,CAAC,CAAE,aAAY,iBAAmB,EACpC,MAAO,EACP,MAAO,CACT,EAAE,EACF,GAAe,CAAC,CAAC,CACnB,CACJ,CCUO,YACL,EACyB,CACzB,GAAM,CAAC,GAAQ,EAAI,MAAM,mBAAmB,GAAK,CAAC,EAClD,OAAQ,EAAK,YAAY,OAGlB,SACH,GAAM,CAAC,CAAE,EAAM,GAAQ,EAAI,MAAM,qCAAqC,EACtE,MAAO,IAA2B,EAAM,CAAI,MAGzC,SACH,GAAM,CAAC,CAAE,EAAM,GAAQ,EAAI,MAAM,oCAAoC,EACrE,MAAO,IAA2B,EAAM,CAAI,UAI5C,MAAO,GAEb,CCxBA,GAAI,IAgBG,YACL,EACoB,CACpB,MAAO,SAAW,EAAM,IAAM,CAC5B,GAAM,GAAS,SAAsB,WAAY,cAAc,EAC/D,MAAI,GACK,EAAG,CAAM,EAET,GAAiB,EAAG,IAAI,EAC5B,KACC,EAAI,GAAS,SAAS,WAAY,EAAO,cAAc,CAAC,CAC1D,CACN,CAAC,EACE,KACC,GAAW,IAAM,CAAK,EACtB,EAAO,GAAS,OAAO,KAAK,CAAK,EAAE,OAAS,CAAC,EAC7C,EAAI,GAAU,EAAE,OAAM,EAAE,EACxB,EAAY,CAAC,CACf,EACJ,CASO,YACL,EAC+B,CAC/B,GAAM,GAAQ,EAAW,uBAAwB,CAAE,EACnD,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,SAAM,UAAU,CAAC,CAAE,WAAY,CAC7B,EAAM,YAAY,GAAkB,CAAK,CAAC,EAC1C,EAAM,aAAa,gBAAiB,MAAM,CAC5C,CAAC,EAGM,GAAY,CAAE,EAClB,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CCvCO,YACL,EAAiB,CAAE,YAAW,WACZ,CAClB,MAAO,IAAiB,SAAS,IAAI,EAClC,KACC,EAAU,IAAM,GAAgB,EAAI,CAAE,UAAS,WAAU,CAAC,CAAC,EAC3D,EAAI,CAAC,CAAE,OAAQ,CAAE,QACR,EACL,OAAQ,GAAK,EACf,EACD,EACD,EAAwB,QAAQ,CAClC,CACJ,CAaO,YACL,EAAiB,EACY,CAC7B,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,SAAM,UAAU,CAGd,KAAK,CAAE,UAAU,CACf,AAAI,EACF,EAAG,aAAa,gBAAiB,QAAQ,EAEzC,EAAG,gBAAgB,eAAe,CACtC,EAGA,UAAW,CACT,EAAG,gBAAgB,eAAe,CACpC,CACF,CAAC,EAIC,IAAQ,wBAAwB,EAC5B,EAAG,CAAE,OAAQ,EAAM,CAAC,EACpB,GAAU,EAAI,CAAO,GAExB,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CC3BO,YACL,EAAiB,CAAE,YAAW,WACD,CAC7B,GAAM,GAAQ,GAAI,KAGZ,EAAU,EAA+B,cAAe,CAAE,EAChE,OAAW,KAAU,GAAS,CAC5B,GAAM,GAAK,mBAAmB,EAAO,KAAK,UAAU,CAAC,CAAC,EAChD,EAAS,GAAmB,QAAQ,KAAM,EAChD,AAAI,MAAO,IAAW,aACpB,EAAM,IAAI,EAAQ,CAAM,CAC5B,CAGA,GAAM,GAAU,EACb,KACC,EAAwB,QAAQ,EAChC,EAAI,CAAC,CAAE,YAAa,CAClB,GAAM,GAAO,GAAoB,MAAM,EACjC,EAAO,EAAW,wBAAyB,CAAI,EACrD,MAAO,GAAS,GACd,GAAK,UACL,EAAK,UAET,CAAC,EACD,GAAM,CACR,EAgFF,MAAO,AA7EY,IAAiB,SAAS,IAAI,EAC9C,KACC,EAAwB,QAAQ,EAGhC,EAAU,GAAQ,EAAM,IAAM,CAC5B,GAAI,GAA4B,CAAC,EACjC,MAAO,GAAG,CAAC,GAAG,CAAK,EAAE,OAAO,CAAC,EAAO,CAAC,EAAQ,KAAY,CACvD,KAAO,EAAK,QAEN,AAD
S,EAAM,IAAI,EAAK,EAAK,OAAS,EAAE,EACnC,SAAW,EAAO,SACzB,EAAK,IAAI,EAOb,GAAI,GAAS,EAAO,UACpB,KAAO,CAAC,GAAU,EAAO,eACvB,EAAS,EAAO,cAChB,EAAS,EAAO,UAIlB,MAAO,GAAM,IACX,CAAC,GAAG,EAAO,CAAC,GAAG,EAAM,CAAM,CAAC,EAAE,QAAQ,EACtC,CACF,CACF,EAAG,GAAI,IAAkC,CAAC,CAC5C,CAAC,EACE,KAGC,EAAI,GAAS,GAAI,KAAI,CAAC,GAAG,CAAK,EAAE,KAAK,CAAC,CAAC,CAAE,GAAI,CAAC,CAAE,KAAO,EAAI,CAAC,CAAC,CAAC,EAC9D,GAAkB,CAAO,EAGzB,EAAU,CAAC,CAAC,EAAO,KAAY,EAC5B,KACC,GAAK,CAAC,CAAC,EAAM,GAAO,CAAE,OAAQ,CAAE,KAAK,UAAW,CAC9C,GAAM,GAAO,EAAI,EAAK,QAAU,KAAK,MAAM,EAAK,MAAM,EAGtD,KAAO,EAAK,QAAQ,CAClB,GAAM,CAAC,CAAE,GAAU,EAAK,GACxB,GAAI,EAAS,EAAS,GAAK,EACzB,EAAO,CAAC,GAAG,EAAM,EAAK,MAAM,CAAE,MAE9B,MAEJ,CAGA,KAAO,EAAK,QAAQ,CAClB,GAAM,CAAC,CAAE,GAAU,EAAK,EAAK,OAAS,GACtC,GAAI,EAAS,GAAU,GAAK,CAAC,EAC3B,EAAO,CAAC,EAAK,IAAI,EAAI,GAAG,CAAI,MAE5B,MAEJ,CAGA,MAAO,CAAC,EAAM,CAAI,CACpB,EAAG,CAAC,CAAC,EAAG,CAAC,GAAG,CAAK,CAAC,CAAC,EACnB,EAAqB,CAAC,EAAG,IACvB,EAAE,KAAO,EAAE,IACX,EAAE,KAAO,EAAE,EACZ,CACH,CACF,CACF,CACF,CACF,EAIC,KACC,EAAI,CAAC,CAAC,EAAM,KAAW,EACrB,KAAM,EAAK,IAAI,CAAC,CAAC,KAAU,CAAI,EAC/B,KAAM,EAAK,IAAI,CAAC,CAAC,KAAU,CAAI,CACjC,EAAE,EAGF,EAAU,CAAE,KAAM,CAAC,EAAG,KAAM,CAAC,CAAE,CAAC,EAChC,GAAY,EAAG,CAAC,EAChB,EAAI,CAAC,CAAC,EAAG,KAGH,EAAE,KAAK,OAAS,EAAE,KAAK,OAClB,CACL,KAAM,EAAE,KAAK,MAAM,KAAK,IAAI,EAAG,EAAE,KAAK,OAAS,CAAC,EAAG,EAAE,KAAK,MAAM,EAChE,KAAM,CAAC,CACT,EAIO,CACL,KAAM,EAAE,KAAK,MAAM,EAAE,EACrB,KAAM,EAAE,KAAK,MAAM,EAAG,EAAE,KAAK,OAAS,EAAE,KAAK,MAAM,CACrD,CAEH,CACH,CACJ,CAYO,YACL,EAAiB,CAAE,YAAW,UAAS,WACC,CACxC,MAAO,GAAM,IAAM,CACjB,GAAM,GAAQ,GAAI,GAClB,SAAM,UAAU,CAAC,CAAE,OAAM,UAAW,CAGlC,OAAW,CAAC,IAAW,GACrB,EAAO,gBAAgB,eAAe,EACtC,EAAO,UAAU,OACf,sBACF,EAIF,OAAW,CAAC,EAAO,CAAC,KAAY,GAAK,QAAQ,EAC3C,EAAO,aAAa,gBAAiB,MAAM,EAC3C,EAAO,UAAU,OACf,uBACA,IAAU,EAAK,OAAS,CAC1B,CAEJ,CAAC,EAGG,GAAQ,qBAAqB,GAC/B,EACG,KACC,GAAU,EAAM,KAAK,GAAS,CAAC,CAAC,CAAC,EACjC,EAAwB,QAAQ,EAChC,GAAa,GAAG,EAChB,GAAK,CAAC,EACN,GAAU,EAAQ,KAAK,GAAK,CAAC,CAAC,CAAC,EAC/B,GAAO,CAAE,MAAO,GAAI,CAAC,EACrB,GAAe,CAAK,CACtB,EACG,UAAU,CAAC,CAAC,CAAE,CAAE,WAAY,CAC3B,GAAM,GAAM,GAAY,EAGlB,EAAS,EAAK,EAAK,OAAS,GAClC,GAAI,GAAU,EAAO,OAAQ,CAC3B,GAAM,CAAC,GAAU,EACX,CAAE,QAAS,GAAI,KAAI,EAAO,IAAI,EACpC,AAAI,EAAI,OAAS,GACf,GAAI,KAAO,EACX,QAAQ,aAAa,CAAC,EAAG,GAAI,GAAG,GAAK,EAIzC,KACE,GAAI,KAAO,GACX,QAAQ,aAAa,CAAC,EAAG,GAAI,GAAG,GAAK,CAEzC,CAAC,EAGA,GAAqB,EAAI,CAAE,YAAW,SAAQ,CAAC,EACnD,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ,CAAC,CACH,CChPO,YACL,EAAkB,CAAE,YAAW,QAAO,WACf,CAGvB,GAAM,GAAa,EAChB,KACC,EAAI,CAAC,CAAE,OAAQ,CAAE,QAAU,CAAC,EAC5B,GAAY,EAAG,CAAC,EAChB,EAAI,CAAC,CAAC,EAAG,KAAO,EAAI,GAAK,EAAI,CAAC,EAC9B,EAAqB,CACvB,EAGI,EAAU,EACb,KACC,EAAI,CAAC,CAAE,YAAa,CAAM,CAC5B,EAGF,MAAO,GAAc,CAAC,EAAS,CAAU,CAAC,EACvC,KACC,EAAI,CAAC,CAAC,EAAQ,KAAe,CAAE,IAAU,EAAU,EACnD,EAAqB,EACrB,GAAU,EAAQ,KAAK,GAAK,CAAC,CAAC,CAAC,EAC/B,GAAQ,EAAI,EACZ,GAAO,CAAE,MAAO,GAAI,CAAC,EACrB,EAAI,GAAW,EAAE,QAAO,EAAE,CAC5B,CACJ,CAYO,YACL,EAAiB,CAAE,YAAW,UAAS,QAAO,WACZ,CAClC,GAAM,GAAQ,GAAI,GAClB,SAAM,UAAU,CAGd,KAAK,CAAE,UAAU,CACf,AAAI,EACF,GAAG,aAAa,gBAAiB,QAAQ,EACzC,EAAG,aAAa,WAAY,IAAI,EAChC,EAAG,KAAK,GAER,GAAG,gBAAgB,eAAe,EAClC,EAAG,gBAAgB,UAAU,EAEjC,EAGA,UAAW,CACT,EAAG,MAAM,IAAM,GACf,EAAG,aAAa,gBAAiB,QAAQ,EACzC,EAAG,gBAAgB,UAAU,CAC/B,CACF,CAAC,EAGD,EACG,KACC,GAAU,EAAM,KAAK,GAAQ,CAAC,EAAG,GAAS,CAAC,CAAC,CAAC,EAC7C,EAAwB,QAAQ,CAClC,EACG,UAAU,CAAC,CAAE,YAAa,CACzB,EAAG,MAAM,IAAM,GAAG,EAAS,MAC7B,CAAC,EAGE,GAAe,EAAI,CAAE,YAAW,QAAO,SAAQ,CAAC,EACpD,KACC,EAAI,GAAS,EAAM,KAAK,CAAK,CAAC,EAC9B,EAAS,IAAM,EAAM,SAAS,CAAC,EAC/B,EAAI,GAAU,GAAE,IAAK,GAAO,EAAQ,CACtC,CACJ
,CCpHO,YACL,CAAE,YAAW,WACP,CACN,EACG,KACC,EAAU,IAAM,EACd,+BACF,CAAC,EACD,EAAI,GAAM,CACR,EAAG,cAAgB,GACnB,EAAG,QAAU,EACf,CAAC,EACD,GAAS,GAAM,EAAU,EAAI,QAAQ,EAClC,KACC,GAAU,IAAM,EAAG,aAAa,eAAe,CAAC,EAChD,EAAM,CAAE,CACV,CACF,EACA,GAAe,CAAO,CACxB,EACG,UAAU,CAAC,CAAC,EAAI,KAAY,CAC3B,EAAG,gBAAgB,eAAe,EAC9B,GACF,GAAG,QAAU,GACjB,CAAC,CACP,CC9BA,aAAkC,CAChC,MAAO,qBAAqB,KAAK,UAAU,SAAS,CACtD,CAiBO,YACL,CAAE,aACI,CACN,EACG,KACC,EAAU,IAAM,EAAY,qBAAqB,CAAC,EAClD,EAAI,GAAM,EAAG,gBAAgB,mBAAmB,CAAC,EACjD,EAAO,EAAa,EACpB,GAAS,GAAM,EAAU,EAAI,YAAY,EACtC,KACC,EAAM,CAAE,CACV,CACF,CACF,EACG,UAAU,GAAM,CACf,GAAM,GAAM,EAAG,UAGf,AAAI,IAAQ,EACV,EAAG,UAAY,EAGN,EAAM,EAAG,eAAiB,EAAG,cACtC,GAAG,UAAY,EAAM,EAEzB,CAAC,CACP,CCpCO,YACL,CAAE,YAAW,WACP,CACN,EAAc,CAAC,GAAY,QAAQ,EAAG,CAAO,CAAC,EAC3C,KACC,EAAI,CAAC,CAAC,EAAQ,KAAY,GAAU,CAAC,CAAM,EAC3C,EAAU,GAAU,EAAG,CAAM,EAC1B,KACC,GAAM,EAAS,IAAM,GAAG,CAC1B,CACF,EACA,GAAe,CAAS,CAC1B,EACG,UAAU,CAAC,CAAC,EAAQ,CAAE,OAAQ,CAAE,SAAU,CACzC,GAAI,EACF,SAAS,KAAK,aAAa,gBAAiB,MAAM,EAClD,SAAS,KAAK,MAAM,IAAM,IAAI,UACzB,CACL,GAAM,GAAQ,GAAK,SAAS,SAAS,KAAK,MAAM,IAAK,EAAE,EACvD,SAAS,KAAK,gBAAgB,eAAe,EAC7C,SAAS,KAAK,MAAM,IAAM,GACtB,GACF,OAAO,SAAS,EAAG,CAAK,CAC5B,CACF,CAAC,CACP,CC7DA,AAAK,OAAO,SACV,QAAO,QAAU,SAAU,EAAa,CACtC,GAAM,GAA2B,CAAC,EAClC,OAAW,KAAO,QAAO,KAAK,CAAG,EAE/B,EAAK,KAAK,CAAC,EAAK,EAAI,EAAI,CAAC,EAG3B,MAAO,EACT,GAGF,AAAK,OAAO,QACV,QAAO,OAAS,SAAU,EAAa,CACrC,GAAM,GAAiB,CAAC,EACxB,OAAW,KAAO,QAAO,KAAK,CAAG,EAE/B,EAAK,KAAK,EAAI,EAAI,EAGpB,MAAO,EACT,GAKF,AAAI,MAAO,UAAY,aAGhB,SAAQ,UAAU,UACrB,SAAQ,UAAU,SAAW,SAC3B,EAA8B,EACxB,CACN,AAAI,MAAO,IAAM,SACf,MAAK,WAAa,EAAE,KACpB,KAAK,UAAY,EAAE,KAEnB,MAAK,WAAa,EAClB,KAAK,UAAY,EAErB,GAGG,QAAQ,UAAU,aACrB,SAAQ,UAAU,YAAc,YAC3B,EACG,CACN,GAAM,GAAS,KAAK,WACpB,GAAI,EAAQ,CACV,AAAI,EAAM,SAAW,GACnB,EAAO,YAAY,IAAI,EAGzB,OAAS,GAAI,EAAM,OAAS,EAAG,GAAK,EAAG,IAAK,CAC1C,GAAI,GAAO,EAAM,GACjB,AAAI,MAAO,IAAS,SAClB,EAAO,SAAS,eAAe,CAAI,EAC5B,EAAK,YACZ,EAAK,WAAW,YAAY,CAAI,EAGlC,AAAK,EAGH,EAAO,aAAa,KAAK,gBAAkB,CAAI,EAF/C,EAAO,aAAa,EAAM,IAAI,CAGlC,CACF,CACF,I9LHJ,SAAS,gBAAgB,UAAU,OAAO,OAAO,EACjD,SAAS,gBAAgB,UAAU,IAAI,IAAI,EAG3C,GAAM,IAAY,GAAc,EAC1B,GAAY,GAAc,EAC1B,GAAY,GAAoB,EAChC,GAAY,GAAc,EAG1B,GAAY,GAAc,EAC1B,GAAY,GAAW,oBAAoB,EAC3C,GAAY,GAAW,qBAAqB,EAC5C,GAAY,GAAW,EAGvB,GAAS,GAAc,EACvB,GAAS,SAAS,MAAM,UAAU,QAAQ,EAC5C,gCAAU,QAAS,GACnB,GAAI,KAAI,2BAA4B,GAAO,IAAI,CACjD,EACE,GAGE,GAAS,GAAI,GACnB,GAAiB,CAAE,SAAO,CAAC,EAG3B,AAAI,GAAQ,oBAAoB,GAC9B,GAAoB,CAAE,aAAW,aAAW,YAAU,CAAC,EAxHzD,OA2HA,AAAI,QAAO,UAAP,eAAgB,YAAa,QAC/B,GAAqB,CAAE,YAAU,CAAC,EAGpC,EAAM,GAAW,EAAO,EACrB,KACC,GAAM,GAAG,CACX,EACG,UAAU,IAAM,CACf,GAAU,SAAU,EAAK,EACzB,GAAU,SAAU,EAAK,CAC3B,CAAC,EAGL,GACG,KACC,EAAO,CAAC,CAAE,UAAW,IAAS,QAAQ,CACxC,EACG,UAAU,GAAO,CAChB,OAAQ,EAAI,UAGL,QACA,IACH,GAAM,GAAO,GAAmB,kBAAkB,EAClD,AAAI,MAAO,IAAS,aAClB,EAAK,MAAM,EACb,UAGG,QACA,IACH,GAAM,GAAO,GAAmB,kBAAkB,EAClD,AAAI,MAAO,IAAS,aAClB,EAAK,MAAM,EACb,MAEN,CAAC,EAGL,GAAmB,CAAE,aAAW,UAAQ,CAAC,EACzC,GAAe,CAAE,YAAU,CAAC,EAC5B,GAAgB,CAAE,aAAW,UAAQ,CAAC,EAGtC,GAAM,IAAU,GAAY,GAAoB,QAAQ,EAAG,CAAE,YAAU,CAAC,EAClE,GAAQ,GACX,KACC,EAAI,IAAM,GAAoB,MAAM,CAAC,EACrC,EAAU,GAAM,GAAU,EAAI,CAAE,aAAW,UAAQ,CAAC,CAAC,EACrD,EAAY,CAAC,CACf,EAGI,GAAW,EAGf,GAAG,GAAqB,QAAQ,EAC7B,IAAI,GAAM,GAAY,EAAI,CAAE,SAAO,CAAC,CAAC,EAGxC,GAAG,GAAqB,QAAQ,EAC7B,IAAI,GAAM,GAAY,EAAI,CAAE,aAAW,WAAS,QAAM,CAAC,CAAC,EAG3D,GAAG,GAAqB,SAAS,EAC9B,IAAI,GAAM,GAAa,CAAE,CAAC,EAG7B,GAAG,GAAqB,QAAQ,EAC7B,IAAI,GAAM,GAAY,EAAI,CAAE,UAAQ,YAAU,CAAC,CAAC,EAGnD,GAAG,GAAqB,QAAQ,EAC7B,IAAI,GAAM,GAAY,CAAE,CAAC,CAC9B,EAGM,GAAW,EAAM,IAAM,EAG3B,GAAG,GAAqB,SAAS,EAC9B,IAAI,GAAM,GAAa,EAAI,CAAE,WAAS,SAAO,CAAC,CAAC,EA
GlD,GAAG,GAAqB,SAAS,EAC9B,IAAI,GAAM,GAAQ,kBAAkB,EACjC,GAAoB,EAAI,CAAE,UAAQ,YAAU,CAAC,EAC7C,CACJ,EAGF,GAAG,GAAqB,cAAc,EACnC,IAAI,GAAM,GAAiB,EAAI,CAAE,aAAW,UAAQ,CAAC,CAAC,EAGzD,GAAG,GAAqB,SAAS,EAC9B,IAAI,GAAM,EAAG,aAAa,cAAc,IAAM,aAC3C,GAAG,GAAS,IAAM,GAAa,EAAI,CAAE,aAAW,WAAS,QAAM,CAAC,CAAC,EACjE,GAAG,GAAS,IAAM,GAAa,EAAI,CAAE,aAAW,WAAS,QAAM,CAAC,CAAC,CACrE,EAGF,GAAG,GAAqB,MAAM,EAC3B,IAAI,GAAM,GAAU,EAAI,CAAE,aAAW,UAAQ,CAAC,CAAC,EAGlD,GAAG,GAAqB,KAAK,EAC1B,IAAI,GAAM,GAAqB,EAAI,CAAE,aAAW,WAAS,UAAQ,CAAC,CAAC,EAGtE,GAAG,GAAqB,KAAK,EAC1B,IAAI,GAAM,GAAe,EAAI,CAAE,aAAW,WAAS,SAAO,UAAQ,CAAC,CAAC,CACzE,CAAC,EAGK,GAAa,GAChB,KACC,EAAU,IAAM,EAAQ,EACxB,GAAU,EAAQ,EAClB,EAAY,CAAC,CACf,EAGF,GAAW,UAAU,EAMrB,OAAO,UAAa,GACpB,OAAO,UAAa,GACpB,OAAO,QAAa,GACpB,OAAO,UAAa,GACpB,OAAO,UAAa,GACpB,OAAO,QAAa,GACpB,OAAO,QAAa,GACpB,OAAO,OAAa,GACpB,OAAO,OAAa,GACpB,OAAO,WAAa", + "names": [] +} diff --git a/assets/javascripts/lunr/min/lunr.ar.min.js b/assets/javascripts/lunr/min/lunr.ar.min.js new file mode 100644 index 00000000..248ddc5d --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ar.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ar=function(){this.pipeline.reset(),this.pipeline.add(e.ar.trimmer,e.ar.stopWordFilter,e.ar.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ar.stemmer))},e.ar.wordCharacters="ء-ٛٱـ",e.ar.trimmer=e.trimmerSupport.generateTrimmer(e.ar.wordCharacters),e.Pipeline.registerFunction(e.ar.trimmer,"trimmer-ar"),e.ar.stemmer=function(){var e=this;return e.result=!1,e.preRemoved=!1,e.sufRemoved=!1,e.pre={pre1:"ف ك ب و س ل ن ا ي ت",pre2:"ال لل",pre3:"بال وال فال تال كال ولل",pre4:"فبال كبال وبال وكال"},e.suf={suf1:"ه ك ت ن ا ي",suf2:"نك نه ها وك يا اه ون ين تن تم نا وا ان كم كن ني نن ما هم هن تك ته ات يه",suf3:"تين كهم نيه نهم ونه وها يهم ونا ونك وني وهم تكم تنا تها تني تهم كما كها ناه نكم هنا تان يها",suf4:"كموه ناها ونني ونهم تكما تموه تكاه كماه ناكم ناهم نيها 
وننا"},e.patterns=JSON.parse('{"pt43":[{"pt":[{"c":"ا","l":1}]},{"pt":[{"c":"ا,ت,ن,ي","l":0}],"mPt":[{"c":"ف","l":0,"m":1},{"c":"ع","l":1,"m":2},{"c":"ل","l":2,"m":3}]},{"pt":[{"c":"و","l":2}],"mPt":[{"c":"ف","l":0,"m":0},{"c":"ع","l":1,"m":1},{"c":"ل","l":2,"m":3}]},{"pt":[{"c":"ا","l":2}]},{"pt":[{"c":"ي","l":2}],"mPt":[{"c":"ف","l":0,"m":0},{"c":"ع","l":1,"m":1},{"c":"ا","l":2},{"c":"ل","l":3,"m":3}]},{"pt":[{"c":"م","l":0}]}],"pt53":[{"pt":[{"c":"ت","l":0},{"c":"ا","l":2}]},{"pt":[{"c":"ا,ن,ت,ي","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":0},{"c":"ا","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":3},{"c":"ل","l":3,"m":4},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":0},{"c":"ا","l":3}],"mPt":[{"c":"ف","l":0,"m":1},{"c":"ع","l":1,"m":2},{"c":"ل","l":2,"m":4}]},{"pt":[{"c":"ا","l":3},{"c":"ن","l":4}]},{"pt":[{"c":"ت","l":0},{"c":"ي","l":3}]},{"pt":[{"c":"م","l":0},{"c":"و","l":3}]},{"pt":[{"c":"ا","l":1},{"c":"و","l":3}]},{"pt":[{"c":"و","l":1},{"c":"ا","l":2}]},{"pt":[{"c":"م","l":0},{"c":"ا","l":3}]},{"pt":[{"c":"م","l":0},{"c":"ي","l":3}]},{"pt":[{"c":"ا","l":2},{"c":"ن","l":3}]},{"pt":[{"c":"م","l":0},{"c":"ن","l":1}],"mPt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ف","l":2,"m":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"م","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"م","l":0},{"c":"ا","l":2}]},{"pt":[{"c":"م","l":1},{"c":"ا","l":3}]},{"pt":[{"c":"ي,ت,ا,ن","l":0},{"c":"ت","l":1}],"mPt":[{"c":"ف","l":0,"m":2},{"c":"ع","l":1,"m":3},{"c":"ا","l":2},{"c":"ل","l":3,"m":4}]},{"pt":[{"c":"ت,ي,ا,ن","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":2},{"c":"ي","l":3}]},{"pt":[{"c":"ا,ي,ت,ن","l":0},{"c":"ن","l":1}],"mPt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ف","l":2,"m":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":3},{"c":"ء","l":4}]}],"pt63":[{"pt":[{"c":"ا","l":0},{"c":"ت","l":2},{"c":"ا","l":4}]},{"pt":[{"c":"ا,ت,ن,ي","l":0},{"c":"س","l":1},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ف","l":3,"m":3},{"c":"ع","l":4,"m":4},{"c":"ا","l":5},{"c":"ل","l":6,"m":5}]},{"pt":[{"c":"ا,ن,ت,ي","l":0},{"c":"و","l":3}]},{"pt":[{"c":"م","l":0},{"c":"س","l":1},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ف","l":3,"m":3},{"c":"ع","l":4,"m":4},{"c":"ا","l":5},{"c":"ل","l":6,"m":5}]},{"pt":[{"c":"ي","l":1},{"c":"ي","l":3},{"c":"ا","l":4},{"c":"ء","l":5}]},{"pt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ا","l":4}]}],"pt54":[{"pt":[{"c":"ت","l":0}]},{"pt":[{"c":"ا,ي,ت,ن","l":0}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":2},{"c":"ل","l":3,"m":3},{"c":"ر","l":4,"m":4},{"c":"ا","l":5},{"c":"ر","l":6,"m":4}]},{"pt":[{"c":"م","l":0}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":2},{"c":"ل","l":3,"m":3},{"c":"ر","l":4,"m":4},{"c":"ا","l":5},{"c":"ر","l":6,"m":4}]},{"pt":[{"c":"ا","l":2}]},{"pt":[{"c":"ا","l":0},{"c":"ن","l":2}]}],"pt64":[{"pt":[{"c":"ا","l":0},{"c":"ا","l":4}]},{"pt":[{"c":"م","l":0},{"c":"ت","l":1}]}],"pt73":[{"pt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ا","l":5}]}],"pt75":[{"pt":[{"c":"ا","l":0},{"c":"ا","l":5}]}]}'),e.ex
ecArray=["cleanWord","removeDiacritics","cleanAlef","removeStopWords","normalizeHamzaAndAlef","removeStartWaw","removePre432","removeEndTaa","wordCheck"],e.stem=function(){var r=0;for(e.result=!1,e.preRemoved=!1,e.sufRemoved=!1;r=0)return!0},e.normalizeHamzaAndAlef=function(){return e.word=e.word.replace("ؤ","ء"),e.word=e.word.replace("ئ","ء"),e.word=e.word.replace(/([\u0627])\1+/gi,"ا"),!1},e.removeEndTaa=function(){return!(e.word.length>2)||(e.word=e.word.replace(/[\u0627]$/,""),e.word=e.word.replace("ة",""),!1)},e.removeStartWaw=function(){return e.word.length>3&&"و"==e.word[0]&&"و"==e.word[1]&&(e.word=e.word.slice(1)),!1},e.removePre432=function(){var r=e.word;if(e.word.length>=7){var t=new RegExp("^("+e.pre.pre4.split(" ").join("|")+")");e.word=e.word.replace(t,"")}if(e.word==r&&e.word.length>=6){var c=new RegExp("^("+e.pre.pre3.split(" ").join("|")+")");e.word=e.word.replace(c,"")}if(e.word==r&&e.word.length>=5){var l=new RegExp("^("+e.pre.pre2.split(" ").join("|")+")");e.word=e.word.replace(l,"")}return r!=e.word&&(e.preRemoved=!0),!1},e.patternCheck=function(r){for(var t=0;t3){var t=new RegExp("^("+e.pre.pre1.split(" ").join("|")+")");e.word=e.word.replace(t,"")}return r!=e.word&&(e.preRemoved=!0),!1},e.removeSuf1=function(){var r=e.word;if(0==e.sufRemoved&&e.word.length>3){var t=new RegExp("("+e.suf.suf1.split(" ").join("|")+")$");e.word=e.word.replace(t,"")}return r!=e.word&&(e.sufRemoved=!0),!1},e.removeSuf432=function(){var r=e.word;if(e.word.length>=6){var t=new RegExp("("+e.suf.suf4.split(" ").join("|")+")$");e.word=e.word.replace(t,"")}if(e.word==r&&e.word.length>=5){var c=new RegExp("("+e.suf.suf3.split(" ").join("|")+")$");e.word=e.word.replace(c,"")}if(e.word==r&&e.word.length>=4){var l=new RegExp("("+e.suf.suf2.split(" ").join("|")+")$");e.word=e.word.replace(l,"")}return r!=e.word&&(e.sufRemoved=!0),!1},e.wordCheck=function(){for(var r=(e.word,[e.removeSuf432,e.removeSuf1,e.removePre1]),t=0,c=!1;e.word.length>=7&&!e.result&&t=f.limit)return;f.cursor++}for(;!f.out_grouping(w,97,248);){if(f.cursor>=f.limit)return;f.cursor++}d=f.cursor,d=d&&(r=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,e=f.find_among_b(c,32),f.limit_backward=r,e))switch(f.bra=f.cursor,e){case 1:f.slice_del();break;case 2:f.in_grouping_b(p,97,229)&&f.slice_del()}}function t(){var e,r=f.limit-f.cursor;f.cursor>=d&&(e=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,f.find_among_b(l,4)?(f.bra=f.cursor,f.limit_backward=e,f.cursor=f.limit-r,f.cursor>f.limit_backward&&(f.cursor--,f.bra=f.cursor,f.slice_del())):f.limit_backward=e)}function s(){var e,r,i,n=f.limit-f.cursor;if(f.ket=f.cursor,f.eq_s_b(2,"st")&&(f.bra=f.cursor,f.eq_s_b(2,"ig")&&f.slice_del()),f.cursor=f.limit-n,f.cursor>=d&&(r=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,e=f.find_among_b(m,5),f.limit_backward=r,e))switch(f.bra=f.cursor,e){case 1:f.slice_del(),i=f.limit-f.cursor,t(),f.cursor=f.limit-i;break;case 2:f.slice_from("løs")}}function o(){var e;f.cursor>=d&&(e=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,f.out_grouping_b(w,97,248)?(f.bra=f.cursor,u=f.slice_to(u),f.limit_backward=e,f.eq_v_b(u)&&f.slice_del()):f.limit_backward=e)}var a,d,u,c=[new r("hed",-1,1),new r("ethed",0,1),new r("ered",-1,1),new r("e",-1,1),new r("erede",3,1),new r("ende",3,1),new r("erende",5,1),new r("ene",3,1),new r("erne",3,1),new r("ere",3,1),new r("en",-1,1),new r("heden",10,1),new r("eren",10,1),new r("er",-1,1),new r("heder",13,1),new r("erer",13,1),new r("s",-1,2),new r("heds",16,1),new r("es",16,1),new r("endes",18,1),new 
r("erendes",19,1),new r("enes",18,1),new r("ernes",18,1),new r("eres",18,1),new r("ens",16,1),new r("hedens",24,1),new r("erens",24,1),new r("ers",16,1),new r("ets",16,1),new r("erets",28,1),new r("et",-1,1),new r("eret",30,1)],l=[new r("gd",-1,-1),new r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1)],m=[new r("ig",-1,1),new r("lig",0,1),new r("elig",1,1),new r("els",-1,1),new r("løst",-1,2)],w=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],p=[239,254,42,3,0,0,0,0,0,0,0,0,0,0,0,0,16],f=new i;this.setCurrent=function(e){f.setCurrent(e)},this.getCurrent=function(){return f.getCurrent()},this.stem=function(){var r=f.cursor;return e(),f.limit_backward=r,f.cursor=f.limit,n(),f.cursor=f.limit,t(),f.cursor=f.limit,s(),f.cursor=f.limit,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.da.stemmer,"stemmer-da"),e.da.stopWordFilter=e.generateStopWordFilter("ad af alle alt anden at blev blive bliver da de dem den denne der deres det dette dig din disse dog du efter eller en end er et for fra ham han hans har havde have hende hendes her hos hun hvad hvis hvor i ikke ind jeg jer jo kunne man mange med meget men mig min mine mit mod ned noget nogle nu når og også om op os over på selv sig sin sine sit skal skulle som sådan thi til ud under var vi vil ville vor være været".split(" ")),e.Pipeline.registerFunction(e.da.stopWordFilter,"stopWordFilter-da")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.de.min.js b/assets/javascripts/lunr/min/lunr.de.min.js new file mode 100644 index 00000000..f3b5c108 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.de.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `German` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.de=function(){this.pipeline.reset(),this.pipeline.add(e.de.trimmer,e.de.stopWordFilter,e.de.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.de.stemmer))},e.de.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.de.trimmer=e.trimmerSupport.generateTrimmer(e.de.wordCharacters),e.Pipeline.registerFunction(e.de.trimmer,"trimmer-de"),e.de.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,n){return!(!v.eq_s(1,e)||(v.ket=v.cursor,!v.in_grouping(p,97,252)))&&(v.slice_from(r),v.cursor=n,!0)}function i(){for(var r,n,i,s,t=v.cursor;;)if(r=v.cursor,v.bra=r,v.eq_s(1,"ß"))v.ket=v.cursor,v.slice_from("ss");else{if(r>=v.limit)break;v.cursor=r+1}for(v.cursor=t;;)for(n=v.cursor;;){if(i=v.cursor,v.in_grouping(p,97,252)){if(s=v.cursor,v.bra=s,e("u","U",i))break;if(v.cursor=s,e("y","Y",i))break}if(i>=v.limit)return void(v.cursor=n);v.cursor=i+1}}function s(){for(;!v.in_grouping(p,97,252);){if(v.cursor>=v.limit)return!0;v.cursor++}for(;!v.out_grouping(p,97,252);){if(v.cursor>=v.limit)return!0;v.cursor++}return!1}function t(){m=v.limit,l=m;var e=v.cursor+3;0<=e&&e<=v.limit&&(d=e,s()||(m=v.cursor,m=v.limit)return;v.cursor++}}}function c(){return m<=v.cursor}function u(){return l<=v.cursor}function a(){var e,r,n,i,s=v.limit-v.cursor;if(v.ket=v.cursor,(e=v.find_among_b(w,7))&&(v.bra=v.cursor,c()))switch(e){case 1:v.slice_del();break;case 2:v.slice_del(),v.ket=v.cursor,v.eq_s_b(1,"s")&&(v.bra=v.cursor,v.eq_s_b(3,"nis")&&v.slice_del());break;case 3:v.in_grouping_b(g,98,116)&&v.slice_del()}if(v.cursor=v.limit-s,v.ket=v.cursor,(e=v.find_among_b(f,4))&&(v.bra=v.cursor,c()))switch(e){case 1:v.slice_del();break;case 2:if(v.in_grouping_b(k,98,116)){var t=v.cursor-3;v.limit_backward<=t&&t<=v.limit&&(v.cursor=t,v.slice_del())}}if(v.cursor=v.limit-s,v.ket=v.cursor,(e=v.find_among_b(_,8))&&(v.bra=v.cursor,u()))switch(e){case 1:v.slice_del(),v.ket=v.cursor,v.eq_s_b(2,"ig")&&(v.bra=v.cursor,r=v.limit-v.cursor,v.eq_s_b(1,"e")||(v.cursor=v.limit-r,u()&&v.slice_del()));break;case 2:n=v.limit-v.cursor,v.eq_s_b(1,"e")||(v.cursor=v.limit-n,v.slice_del());break;case 3:if(v.slice_del(),v.ket=v.cursor,i=v.limit-v.cursor,!v.eq_s_b(2,"er")&&(v.cursor=v.limit-i,!v.eq_s_b(2,"en")))break;v.bra=v.cursor,c()&&v.slice_del();break;case 4:v.slice_del(),v.ket=v.cursor,e=v.find_among_b(b,2),e&&(v.bra=v.cursor,u()&&1==e&&v.slice_del())}}var d,l,m,h=[new r("",-1,6),new r("U",0,2),new r("Y",0,1),new r("ä",0,3),new r("ö",0,4),new r("ü",0,5)],w=[new r("e",-1,2),new r("em",-1,1),new r("en",-1,2),new r("ern",-1,1),new r("er",-1,1),new r("s",-1,3),new r("es",5,2)],f=[new r("en",-1,1),new r("er",-1,1),new r("st",-1,2),new r("est",2,1)],b=[new r("ig",-1,1),new r("lich",-1,1)],_=[new r("end",-1,1),new r("ig",-1,2),new r("ung",-1,1),new r("lich",-1,3),new r("isch",-1,2),new r("ik",-1,2),new r("heit",-1,3),new r("keit",-1,4)],p=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32,8],g=[117,30,5],k=[117,30,4],v=new n;this.setCurrent=function(e){v.setCurrent(e)},this.getCurrent=function(){return v.getCurrent()},this.stem=function(){var e=v.cursor;return i(),v.cursor=e,t(),v.limit_backward=e,v.cursor=v.limit,a(),v.cursor=v.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return 
i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.de.stemmer,"stemmer-de"),e.de.stopWordFilter=e.generateStopWordFilter("aber alle allem allen aller alles als also am an ander andere anderem anderen anderer anderes anderm andern anderr anders auch auf aus bei bin bis bist da damit dann das dasselbe dazu daß dein deine deinem deinen deiner deines dem demselben den denn denselben der derer derselbe derselben des desselben dessen dich die dies diese dieselbe dieselben diesem diesen dieser dieses dir doch dort du durch ein eine einem einen einer eines einig einige einigem einigen einiger einiges einmal er es etwas euch euer eure eurem euren eurer eures für gegen gewesen hab habe haben hat hatte hatten hier hin hinter ich ihm ihn ihnen ihr ihre ihrem ihren ihrer ihres im in indem ins ist jede jedem jeden jeder jedes jene jenem jenen jener jenes jetzt kann kein keine keinem keinen keiner keines können könnte machen man manche manchem manchen mancher manches mein meine meinem meinen meiner meines mich mir mit muss musste nach nicht nichts noch nun nur ob oder ohne sehr sein seine seinem seinen seiner seines selbst sich sie sind so solche solchem solchen solcher solches soll sollte sondern sonst um und uns unse unsem unsen unser unses unter viel vom von vor war waren warst was weg weil weiter welche welchem welchen welcher welches wenn werde werden wie wieder will wir wird wirst wo wollen wollte während würde würden zu zum zur zwar zwischen über".split(" ")),e.Pipeline.registerFunction(e.de.stopWordFilter,"stopWordFilter-de")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.du.min.js b/assets/javascripts/lunr/min/lunr.du.min.js new file mode 100644 index 00000000..49a0f3f0 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.du.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Dutch` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");console.warn('[Lunr Languages] Please use the "nl" instead of the "du". 
The "nl" code is the standard code for Dutch language, and "du" will be removed in the next major versions.'),e.du=function(){this.pipeline.reset(),this.pipeline.add(e.du.trimmer,e.du.stopWordFilter,e.du.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.du.stemmer))},e.du.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.du.trimmer=e.trimmerSupport.generateTrimmer(e.du.wordCharacters),e.Pipeline.registerFunction(e.du.trimmer,"trimmer-du"),e.du.stemmer=function(){var r=e.stemmerSupport.Among,i=e.stemmerSupport.SnowballProgram,n=new function(){function e(){for(var e,r,i,o=C.cursor;;){if(C.bra=C.cursor,e=C.find_among(b,11))switch(C.ket=C.cursor,e){case 1:C.slice_from("a");continue;case 2:C.slice_from("e");continue;case 3:C.slice_from("i");continue;case 4:C.slice_from("o");continue;case 5:C.slice_from("u");continue;case 6:if(C.cursor>=C.limit)break;C.cursor++;continue}break}for(C.cursor=o,C.bra=o,C.eq_s(1,"y")?(C.ket=C.cursor,C.slice_from("Y")):C.cursor=o;;)if(r=C.cursor,C.in_grouping(q,97,232)){if(i=C.cursor,C.bra=i,C.eq_s(1,"i"))C.ket=C.cursor,C.in_grouping(q,97,232)&&(C.slice_from("I"),C.cursor=r);else if(C.cursor=i,C.eq_s(1,"y"))C.ket=C.cursor,C.slice_from("Y"),C.cursor=r;else if(n(r))break}else if(n(r))break}function n(e){return C.cursor=e,e>=C.limit||(C.cursor++,!1)}function o(){_=C.limit,f=_,t()||(_=C.cursor,_<3&&(_=3),t()||(f=C.cursor))}function t(){for(;!C.in_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}for(;!C.out_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}return!1}function s(){for(var e;;)if(C.bra=C.cursor,e=C.find_among(p,3))switch(C.ket=C.cursor,e){case 1:C.slice_from("y");break;case 2:C.slice_from("i");break;case 3:if(C.cursor>=C.limit)return;C.cursor++}}function u(){return _<=C.cursor}function c(){return f<=C.cursor}function a(){var e=C.limit-C.cursor;C.find_among_b(g,3)&&(C.cursor=C.limit-e,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del()))}function l(){var e;w=!1,C.ket=C.cursor,C.eq_s_b(1,"e")&&(C.bra=C.cursor,u()&&(e=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-e,C.slice_del(),w=!0,a())))}function m(){var e;u()&&(e=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-e,C.eq_s_b(3,"gem")||(C.cursor=C.limit-e,C.slice_del(),a())))}function d(){var e,r,i,n,o,t,s=C.limit-C.cursor;if(C.ket=C.cursor,e=C.find_among_b(h,5))switch(C.bra=C.cursor,e){case 1:u()&&C.slice_from("heid");break;case 2:m();break;case 3:u()&&C.out_grouping_b(z,97,232)&&C.slice_del()}if(C.cursor=C.limit-s,l(),C.cursor=C.limit-s,C.ket=C.cursor,C.eq_s_b(4,"heid")&&(C.bra=C.cursor,c()&&(r=C.limit-C.cursor,C.eq_s_b(1,"c")||(C.cursor=C.limit-r,C.slice_del(),C.ket=C.cursor,C.eq_s_b(2,"en")&&(C.bra=C.cursor,m())))),C.cursor=C.limit-s,C.ket=C.cursor,e=C.find_among_b(k,6))switch(C.bra=C.cursor,e){case 1:if(c()){if(C.slice_del(),i=C.limit-C.cursor,C.ket=C.cursor,C.eq_s_b(2,"ig")&&(C.bra=C.cursor,c()&&(n=C.limit-C.cursor,!C.eq_s_b(1,"e")))){C.cursor=C.limit-n,C.slice_del();break}C.cursor=C.limit-i,a()}break;case 2:c()&&(o=C.limit-C.cursor,C.eq_s_b(1,"e")||(C.cursor=C.limit-o,C.slice_del()));break;case 3:c()&&(C.slice_del(),l());break;case 4:c()&&C.slice_del();break;case 5:c()&&w&&C.slice_del()}C.cursor=C.limit-s,C.out_grouping_b(j,73,232)&&(t=C.limit-C.cursor,C.find_among_b(v,4)&&C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-t,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del())))}var f,_,w,b=[new r("",-1,6),new 
r("á",0,1),new r("ä",0,1),new r("é",0,2),new r("ë",0,2),new r("í",0,3),new r("ï",0,3),new r("ó",0,4),new r("ö",0,4),new r("ú",0,5),new r("ü",0,5)],p=[new r("",-1,3),new r("I",0,2),new r("Y",0,1)],g=[new r("dd",-1,-1),new r("kk",-1,-1),new r("tt",-1,-1)],h=[new r("ene",-1,2),new r("se",-1,3),new r("en",-1,2),new r("heden",2,1),new r("s",-1,3)],k=[new r("end",-1,1),new r("ig",-1,2),new r("ing",-1,1),new r("lijk",-1,3),new r("baar",-1,4),new r("bar",-1,5)],v=[new r("aa",-1,-1),new r("ee",-1,-1),new r("oo",-1,-1),new r("uu",-1,-1)],q=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],j=[1,0,0,17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],z=[17,67,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],C=new i;this.setCurrent=function(e){C.setCurrent(e)},this.getCurrent=function(){return C.getCurrent()},this.stem=function(){var r=C.cursor;return e(),C.cursor=r,o(),C.limit_backward=r,C.cursor=C.limit,d(),C.cursor=C.limit_backward,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.du.stemmer,"stemmer-du"),e.du.stopWordFilter=e.generateStopWordFilter(" aan al alles als altijd andere ben bij daar dan dat de der deze die dit doch doen door dus een eens en er ge geen geweest haar had heb hebben heeft hem het hier hij hoe hun iemand iets ik in is ja je kan kon kunnen maar me meer men met mij mijn moet na naar niet niets nog nu of om omdat onder ons ook op over reeds te tegen toch toen tot u uit uw van veel voor want waren was wat werd wezen wie wil worden wordt zal ze zelf zich zij zijn zo zonder zou".split(" ")),e.Pipeline.registerFunction(e.du.stopWordFilter,"stopWordFilter-du")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.es.min.js b/assets/javascripts/lunr/min/lunr.es.min.js new file mode 100644 index 00000000..2989d342 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.es.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Spanish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,s){"function"==typeof define&&define.amd?define(s):"object"==typeof exports?module.exports=s():s()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.es=function(){this.pipeline.reset(),this.pipeline.add(e.es.trimmer,e.es.stopWordFilter,e.es.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.es.stemmer))},e.es.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.es.trimmer=e.trimmerSupport.generateTrimmer(e.es.wordCharacters),e.Pipeline.registerFunction(e.es.trimmer,"trimmer-es"),e.es.stemmer=function(){var s=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,n=new function(){function e(){if(A.out_grouping(x,97,252)){for(;!A.in_grouping(x,97,252);){if(A.cursor>=A.limit)return!0;A.cursor++}return!1}return!0}function n(){if(A.in_grouping(x,97,252)){var s=A.cursor;if(e()){if(A.cursor=s,!A.in_grouping(x,97,252))return!0;for(;!A.out_grouping(x,97,252);){if(A.cursor>=A.limit)return!0;A.cursor++}}return!1}return!0}function i(){var s,r=A.cursor;if(n()){if(A.cursor=r,!A.out_grouping(x,97,252))return;if(s=A.cursor,e()){if(A.cursor=s,!A.in_grouping(x,97,252)||A.cursor>=A.limit)return;A.cursor++}}g=A.cursor}function a(){for(;!A.in_grouping(x,97,252);){if(A.cursor>=A.limit)return!1;A.cursor++}for(;!A.out_grouping(x,97,252);){if(A.cursor>=A.limit)return!1;A.cursor++}return!0}function t(){var e=A.cursor;g=A.limit,p=g,v=g,i(),A.cursor=e,a()&&(p=A.cursor,a()&&(v=A.cursor))}function o(){for(var e;;){if(A.bra=A.cursor,e=A.find_among(k,6))switch(A.ket=A.cursor,e){case 1:A.slice_from("a");continue;case 2:A.slice_from("e");continue;case 3:A.slice_from("i");continue;case 4:A.slice_from("o");continue;case 5:A.slice_from("u");continue;case 6:if(A.cursor>=A.limit)break;A.cursor++;continue}break}}function u(){return g<=A.cursor}function w(){return p<=A.cursor}function c(){return v<=A.cursor}function m(){var e;if(A.ket=A.cursor,A.find_among_b(y,13)&&(A.bra=A.cursor,(e=A.find_among_b(q,11))&&u()))switch(e){case 1:A.bra=A.cursor,A.slice_from("iendo");break;case 2:A.bra=A.cursor,A.slice_from("ando");break;case 3:A.bra=A.cursor,A.slice_from("ar");break;case 4:A.bra=A.cursor,A.slice_from("er");break;case 5:A.bra=A.cursor,A.slice_from("ir");break;case 6:A.slice_del();break;case 7:A.eq_s_b(1,"u")&&A.slice_del()}}function l(e,s){if(!c())return!0;A.slice_del(),A.ket=A.cursor;var r=A.find_among_b(e,s);return r&&(A.bra=A.cursor,1==r&&c()&&A.slice_del()),!1}function d(e){return!c()||(A.slice_del(),A.ket=A.cursor,A.eq_s_b(2,e)&&(A.bra=A.cursor,c()&&A.slice_del()),!1)}function b(){var e;if(A.ket=A.cursor,e=A.find_among_b(S,46)){switch(A.bra=A.cursor,e){case 1:if(!c())return!1;A.slice_del();break;case 2:if(d("ic"))return!1;break;case 3:if(!c())return!1;A.slice_from("log");break;case 4:if(!c())return!1;A.slice_from("u");break;case 5:if(!c())return!1;A.slice_from("ente");break;case 6:if(!w())return!1;A.slice_del(),A.ket=A.cursor,e=A.find_among_b(C,4),e&&(A.bra=A.cursor,c()&&(A.slice_del(),1==e&&(A.ket=A.cursor,A.eq_s_b(2,"at")&&(A.bra=A.cursor,c()&&A.slice_del()))));break;case 7:if(l(P,3))return!1;break;case 8:if(l(F,3))return!1;break;case 9:if(d("at"))return!1}return!0}return!1}function f(){var e,s;if(A.cursor>=g&&(s=A.limit_backward,A.limit_backward=g,A.ket=A.cursor,e=A.find_among_b(W,12),A.limit_backward=s,e)){if(A.bra=A.cursor,1==e){if(!A.eq_s_b(1,"u"))return!1;A.slice_del()}return!0}return!1}function _(){var e,s,r,n;if(A.cursor>=g&&(s=A.limit_backward,A.limit_backward=g,A.ket=A.cursor,e=A.find_among_b(L,96),A.limit_backward=s,e))switch(A.bra=A.cursor,e){case 
1:r=A.limit-A.cursor,A.eq_s_b(1,"u")?(n=A.limit-A.cursor,A.eq_s_b(1,"g")?A.cursor=A.limit-n:A.cursor=A.limit-r):A.cursor=A.limit-r,A.bra=A.cursor;case 2:A.slice_del()}}function h(){var e,s;if(A.ket=A.cursor,e=A.find_among_b(z,8))switch(A.bra=A.cursor,e){case 1:u()&&A.slice_del();break;case 2:u()&&(A.slice_del(),A.ket=A.cursor,A.eq_s_b(1,"u")&&(A.bra=A.cursor,s=A.limit-A.cursor,A.eq_s_b(1,"g")&&(A.cursor=A.limit-s,u()&&A.slice_del())))}}var v,p,g,k=[new s("",-1,6),new s("á",0,1),new s("é",0,2),new s("í",0,3),new s("ó",0,4),new s("ú",0,5)],y=[new s("la",-1,-1),new s("sela",0,-1),new s("le",-1,-1),new s("me",-1,-1),new s("se",-1,-1),new s("lo",-1,-1),new s("selo",5,-1),new s("las",-1,-1),new s("selas",7,-1),new s("les",-1,-1),new s("los",-1,-1),new s("selos",10,-1),new s("nos",-1,-1)],q=[new s("ando",-1,6),new s("iendo",-1,6),new s("yendo",-1,7),new s("ándo",-1,2),new s("iéndo",-1,1),new s("ar",-1,6),new s("er",-1,6),new s("ir",-1,6),new s("ár",-1,3),new s("ér",-1,4),new s("ír",-1,5)],C=[new s("ic",-1,-1),new s("ad",-1,-1),new s("os",-1,-1),new s("iv",-1,1)],P=[new s("able",-1,1),new s("ible",-1,1),new s("ante",-1,1)],F=[new s("ic",-1,1),new s("abil",-1,1),new s("iv",-1,1)],S=[new s("ica",-1,1),new s("ancia",-1,2),new s("encia",-1,5),new s("adora",-1,2),new s("osa",-1,1),new s("ista",-1,1),new s("iva",-1,9),new s("anza",-1,1),new s("logía",-1,3),new s("idad",-1,8),new s("able",-1,1),new s("ible",-1,1),new s("ante",-1,2),new s("mente",-1,7),new s("amente",13,6),new s("ación",-1,2),new s("ución",-1,4),new s("ico",-1,1),new s("ismo",-1,1),new s("oso",-1,1),new s("amiento",-1,1),new s("imiento",-1,1),new s("ivo",-1,9),new s("ador",-1,2),new s("icas",-1,1),new s("ancias",-1,2),new s("encias",-1,5),new s("adoras",-1,2),new s("osas",-1,1),new s("istas",-1,1),new s("ivas",-1,9),new s("anzas",-1,1),new s("logías",-1,3),new s("idades",-1,8),new s("ables",-1,1),new s("ibles",-1,1),new s("aciones",-1,2),new s("uciones",-1,4),new s("adores",-1,2),new s("antes",-1,2),new s("icos",-1,1),new s("ismos",-1,1),new s("osos",-1,1),new s("amientos",-1,1),new s("imientos",-1,1),new s("ivos",-1,9)],W=[new s("ya",-1,1),new s("ye",-1,1),new s("yan",-1,1),new s("yen",-1,1),new s("yeron",-1,1),new s("yendo",-1,1),new s("yo",-1,1),new s("yas",-1,1),new s("yes",-1,1),new s("yais",-1,1),new s("yamos",-1,1),new s("yó",-1,1)],L=[new s("aba",-1,2),new s("ada",-1,2),new s("ida",-1,2),new s("ara",-1,2),new s("iera",-1,2),new s("ía",-1,2),new s("aría",5,2),new s("ería",5,2),new s("iría",5,2),new s("ad",-1,2),new s("ed",-1,2),new s("id",-1,2),new s("ase",-1,2),new s("iese",-1,2),new s("aste",-1,2),new s("iste",-1,2),new s("an",-1,2),new s("aban",16,2),new s("aran",16,2),new s("ieran",16,2),new s("ían",16,2),new s("arían",20,2),new s("erían",20,2),new s("irían",20,2),new s("en",-1,1),new s("asen",24,2),new s("iesen",24,2),new s("aron",-1,2),new s("ieron",-1,2),new s("arán",-1,2),new s("erán",-1,2),new s("irán",-1,2),new s("ado",-1,2),new s("ido",-1,2),new s("ando",-1,2),new s("iendo",-1,2),new s("ar",-1,2),new s("er",-1,2),new s("ir",-1,2),new s("as",-1,2),new s("abas",39,2),new s("adas",39,2),new s("idas",39,2),new s("aras",39,2),new s("ieras",39,2),new s("ías",39,2),new s("arías",45,2),new s("erías",45,2),new s("irías",45,2),new s("es",-1,1),new s("ases",49,2),new s("ieses",49,2),new s("abais",-1,2),new s("arais",-1,2),new s("ierais",-1,2),new s("íais",-1,2),new s("aríais",55,2),new s("eríais",55,2),new s("iríais",55,2),new s("aseis",-1,2),new s("ieseis",-1,2),new s("asteis",-1,2),new s("isteis",-1,2),new s("áis",-1,2),new 
s("éis",-1,1),new s("aréis",64,2),new s("eréis",64,2),new s("iréis",64,2),new s("ados",-1,2),new s("idos",-1,2),new s("amos",-1,2),new s("ábamos",70,2),new s("áramos",70,2),new s("iéramos",70,2),new s("íamos",70,2),new s("aríamos",74,2),new s("eríamos",74,2),new s("iríamos",74,2),new s("emos",-1,1),new s("aremos",78,2),new s("eremos",78,2),new s("iremos",78,2),new s("ásemos",78,2),new s("iésemos",78,2),new s("imos",-1,2),new s("arás",-1,2),new s("erás",-1,2),new s("irás",-1,2),new s("ís",-1,2),new s("ará",-1,2),new s("erá",-1,2),new s("irá",-1,2),new s("aré",-1,2),new s("eré",-1,2),new s("iré",-1,2),new s("ió",-1,2)],z=[new s("a",-1,1),new s("e",-1,2),new s("o",-1,1),new s("os",-1,1),new s("á",-1,1),new s("é",-1,2),new s("í",-1,1),new s("ó",-1,1)],x=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,1,17,4,10],A=new r;this.setCurrent=function(e){A.setCurrent(e)},this.getCurrent=function(){return A.getCurrent()},this.stem=function(){var e=A.cursor;return t(),A.limit_backward=e,A.cursor=A.limit,m(),A.cursor=A.limit,b()||(A.cursor=A.limit,f()||(A.cursor=A.limit,_())),A.cursor=A.limit,h(),A.cursor=A.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.es.stemmer,"stemmer-es"),e.es.stopWordFilter=e.generateStopWordFilter("a al algo algunas algunos ante antes como con contra cual cuando de del desde donde durante e el ella ellas ellos en entre era erais eran eras eres es esa esas ese eso esos esta estaba estabais estaban estabas estad estada estadas estado estados estamos estando estar estaremos estará estarán estarás estaré estaréis estaría estaríais estaríamos estarían estarías estas este estemos esto estos estoy estuve estuviera estuvierais estuvieran estuvieras estuvieron estuviese estuvieseis estuviesen estuvieses estuvimos estuviste estuvisteis estuviéramos estuviésemos estuvo está estábamos estáis están estás esté estéis estén estés fue fuera fuerais fueran fueras fueron fuese fueseis fuesen fueses fui fuimos fuiste fuisteis fuéramos fuésemos ha habida habidas habido habidos habiendo habremos habrá habrán habrás habré habréis habría habríais habríamos habrían habrías habéis había habíais habíamos habían habías han has hasta hay haya hayamos hayan hayas hayáis he hemos hube hubiera hubierais hubieran hubieras hubieron hubiese hubieseis hubiesen hubieses hubimos hubiste hubisteis hubiéramos hubiésemos hubo la las le les lo los me mi mis mucho muchos muy más mí mía mías mío míos nada ni no nos nosotras nosotros nuestra nuestras nuestro nuestros o os otra otras otro otros para pero poco por porque que quien quienes qué se sea seamos sean seas seremos será serán serás seré seréis sería seríais seríamos serían serías seáis sido siendo sin sobre sois somos son soy su sus suya suyas suyo suyos sí también tanto te tendremos tendrá tendrán tendrás tendré tendréis tendría tendríais tendríamos tendrían tendrías tened tenemos tenga tengamos tengan tengas tengo tengáis tenida tenidas tenido tenidos teniendo tenéis tenía teníais teníamos tenían tenías ti tiene tienen tienes todo todos tu tus tuve tuviera tuvierais tuvieran tuvieras tuvieron tuviese tuvieseis tuviesen tuvieses tuvimos tuviste tuvisteis tuviéramos tuviésemos tuvo tuya tuyas tuyo tuyos tú un una uno unos vosotras vosotros vuestra vuestras vuestro vuestros y ya yo él éramos".split(" ")),e.Pipeline.registerFunction(e.es.stopWordFilter,"stopWordFilter-es")}}); \ No newline at end of file 
diff --git a/assets/javascripts/lunr/min/lunr.fi.min.js b/assets/javascripts/lunr/min/lunr.fi.min.js new file mode 100644 index 00000000..29f5dfce --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.fi.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Finnish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(i,e){"function"==typeof define&&define.amd?define(e):"object"==typeof exports?module.exports=e():e()(i.lunr)}(this,function(){return function(i){if(void 0===i)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===i.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");i.fi=function(){this.pipeline.reset(),this.pipeline.add(i.fi.trimmer,i.fi.stopWordFilter,i.fi.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(i.fi.stemmer))},i.fi.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",i.fi.trimmer=i.trimmerSupport.generateTrimmer(i.fi.wordCharacters),i.Pipeline.registerFunction(i.fi.trimmer,"trimmer-fi"),i.fi.stemmer=function(){var e=i.stemmerSupport.Among,r=i.stemmerSupport.SnowballProgram,n=new function(){function i(){f=A.limit,d=f,n()||(f=A.cursor,n()||(d=A.cursor))}function n(){for(var i;;){if(i=A.cursor,A.in_grouping(W,97,246))break;if(A.cursor=i,i>=A.limit)return!0;A.cursor++}for(A.cursor=i;!A.out_grouping(W,97,246);){if(A.cursor>=A.limit)return!0;A.cursor++}return!1}function t(){return d<=A.cursor}function s(){var i,e;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(h,10)){switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:if(!A.in_grouping_b(x,97,246))return;break;case 2:if(!t())return}A.slice_del()}else A.limit_backward=e}function o(){var i,e,r;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(v,9))switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:r=A.limit-A.cursor,A.eq_s_b(1,"k")||(A.cursor=A.limit-r,A.slice_del());break;case 2:A.slice_del(),A.ket=A.cursor,A.eq_s_b(3,"kse")&&(A.bra=A.cursor,A.slice_from("ksi"));break;case 3:A.slice_del();break;case 4:A.find_among_b(p,6)&&A.slice_del();break;case 5:A.find_among_b(g,6)&&A.slice_del();break;case 6:A.find_among_b(j,2)&&A.slice_del()}else A.limit_backward=e}function l(){return A.find_among_b(q,7)}function a(){return A.eq_s_b(1,"i")&&A.in_grouping_b(L,97,246)}function u(){var i,e,r;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(C,30)){switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:if(!A.eq_s_b(1,"a"))return;break;case 2:case 9:if(!A.eq_s_b(1,"e"))return;break;case 3:if(!A.eq_s_b(1,"i"))return;break;case 4:if(!A.eq_s_b(1,"o"))return;break;case 5:if(!A.eq_s_b(1,"ä"))return;break;case 6:if(!A.eq_s_b(1,"ö"))return;break;case 7:if(r=A.limit-A.cursor,!l()&&(A.cursor=A.limit-r,!A.eq_s_b(2,"ie"))){A.cursor=A.limit-r;break}if(A.cursor=A.limit-r,A.cursor<=A.limit_backward){A.cursor=A.limit-r;break}A.cursor--,A.bra=A.cursor;break;case 8:if(!A.in_grouping_b(W,97,246)||!A.out_grouping_b(W,97,246))return}A.slice_del(),k=!0}else A.limit_backward=e}function c(){var 
i,e,r;if(A.cursor>=d)if(e=A.limit_backward,A.limit_backward=d,A.ket=A.cursor,i=A.find_among_b(P,14)){if(A.bra=A.cursor,A.limit_backward=e,1==i){if(r=A.limit-A.cursor,A.eq_s_b(2,"po"))return;A.cursor=A.limit-r}A.slice_del()}else A.limit_backward=e}function m(){var i;A.cursor>=f&&(i=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,A.find_among_b(F,2)?(A.bra=A.cursor,A.limit_backward=i,A.slice_del()):A.limit_backward=i)}function w(){var i,e,r,n,t,s;if(A.cursor>=f){if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,A.eq_s_b(1,"t")&&(A.bra=A.cursor,r=A.limit-A.cursor,A.in_grouping_b(W,97,246)&&(A.cursor=A.limit-r,A.slice_del(),A.limit_backward=e,n=A.limit-A.cursor,A.cursor>=d&&(A.cursor=d,t=A.limit_backward,A.limit_backward=A.cursor,A.cursor=A.limit-n,A.ket=A.cursor,i=A.find_among_b(S,2))))){if(A.bra=A.cursor,A.limit_backward=t,1==i){if(s=A.limit-A.cursor,A.eq_s_b(2,"po"))return;A.cursor=A.limit-s}return void A.slice_del()}A.limit_backward=e}}function _(){var i,e,r,n;if(A.cursor>=f){for(i=A.limit_backward,A.limit_backward=f,e=A.limit-A.cursor,l()&&(A.cursor=A.limit-e,A.ket=A.cursor,A.cursor>A.limit_backward&&(A.cursor--,A.bra=A.cursor,A.slice_del())),A.cursor=A.limit-e,A.ket=A.cursor,A.in_grouping_b(y,97,228)&&(A.bra=A.cursor,A.out_grouping_b(W,97,246)&&A.slice_del()),A.cursor=A.limit-e,A.ket=A.cursor,A.eq_s_b(1,"j")&&(A.bra=A.cursor,r=A.limit-A.cursor,A.eq_s_b(1,"o")?A.slice_del():(A.cursor=A.limit-r,A.eq_s_b(1,"u")&&A.slice_del())),A.cursor=A.limit-e,A.ket=A.cursor,A.eq_s_b(1,"o")&&(A.bra=A.cursor,A.eq_s_b(1,"j")&&A.slice_del()),A.cursor=A.limit-e,A.limit_backward=i;;){if(n=A.limit-A.cursor,A.out_grouping_b(W,97,246)){A.cursor=A.limit-n;break}if(A.cursor=A.limit-n,A.cursor<=A.limit_backward)return;A.cursor--}A.ket=A.cursor,A.cursor>A.limit_backward&&(A.cursor--,A.bra=A.cursor,b=A.slice_to(),A.eq_v_b(b)&&A.slice_del())}}var k,b,d,f,h=[new e("pa",-1,1),new e("sti",-1,2),new e("kaan",-1,1),new e("han",-1,1),new e("kin",-1,1),new e("hän",-1,1),new e("kään",-1,1),new e("ko",-1,1),new e("pä",-1,1),new e("kö",-1,1)],p=[new e("lla",-1,-1),new e("na",-1,-1),new e("ssa",-1,-1),new e("ta",-1,-1),new e("lta",3,-1),new e("sta",3,-1)],g=[new e("llä",-1,-1),new e("nä",-1,-1),new e("ssä",-1,-1),new e("tä",-1,-1),new e("ltä",3,-1),new e("stä",3,-1)],j=[new e("lle",-1,-1),new e("ine",-1,-1)],v=[new e("nsa",-1,3),new e("mme",-1,3),new e("nne",-1,3),new e("ni",-1,2),new e("si",-1,1),new e("an",-1,4),new e("en",-1,6),new e("än",-1,5),new e("nsä",-1,3)],q=[new e("aa",-1,-1),new e("ee",-1,-1),new e("ii",-1,-1),new e("oo",-1,-1),new e("uu",-1,-1),new e("ää",-1,-1),new e("öö",-1,-1)],C=[new e("a",-1,8),new e("lla",0,-1),new e("na",0,-1),new e("ssa",0,-1),new e("ta",0,-1),new e("lta",4,-1),new e("sta",4,-1),new e("tta",4,9),new e("lle",-1,-1),new e("ine",-1,-1),new e("ksi",-1,-1),new e("n",-1,7),new e("han",11,1),new e("den",11,-1,a),new e("seen",11,-1,l),new e("hen",11,2),new e("tten",11,-1,a),new e("hin",11,3),new e("siin",11,-1,a),new e("hon",11,4),new e("hän",11,5),new e("hön",11,6),new e("ä",-1,8),new e("llä",22,-1),new e("nä",22,-1),new e("ssä",22,-1),new e("tä",22,-1),new e("ltä",26,-1),new e("stä",26,-1),new e("ttä",26,9)],P=[new e("eja",-1,-1),new e("mma",-1,1),new e("imma",1,-1),new e("mpa",-1,1),new e("impa",3,-1),new e("mmi",-1,1),new e("immi",5,-1),new e("mpi",-1,1),new e("impi",7,-1),new e("ejä",-1,-1),new e("mmä",-1,1),new e("immä",10,-1),new e("mpä",-1,1),new e("impä",12,-1)],F=[new e("i",-1,-1),new e("j",-1,-1)],S=[new e("mma",-1,1),new 
e("imma",0,-1)],y=[17,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8],W=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],L=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],x=[17,97,24,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],A=new r;this.setCurrent=function(i){A.setCurrent(i)},this.getCurrent=function(){return A.getCurrent()},this.stem=function(){var e=A.cursor;return i(),k=!1,A.limit_backward=e,A.cursor=A.limit,s(),A.cursor=A.limit,o(),A.cursor=A.limit,u(),A.cursor=A.limit,c(),A.cursor=A.limit,k?(m(),A.cursor=A.limit):(A.cursor=A.limit,w(),A.cursor=A.limit),_(),!0}};return function(i){return"function"==typeof i.update?i.update(function(i){return n.setCurrent(i),n.stem(),n.getCurrent()}):(n.setCurrent(i),n.stem(),n.getCurrent())}}(),i.Pipeline.registerFunction(i.fi.stemmer,"stemmer-fi"),i.fi.stopWordFilter=i.generateStopWordFilter("ei eivät emme en et ette että he heidän heidät heihin heille heillä heiltä heissä heistä heitä hän häneen hänelle hänellä häneltä hänen hänessä hänestä hänet häntä itse ja johon joiden joihin joiksi joilla joille joilta joina joissa joista joita joka joksi jolla jolle jolta jona jonka jos jossa josta jota jotka kanssa keiden keihin keiksi keille keillä keiltä keinä keissä keistä keitä keneen keneksi kenelle kenellä keneltä kenen kenenä kenessä kenestä kenet ketkä ketkä ketä koska kuin kuka kun me meidän meidät meihin meille meillä meiltä meissä meistä meitä mihin miksi mikä mille millä miltä minkä minkä minua minulla minulle minulta minun minussa minusta minut minuun minä minä missä mistä mitkä mitä mukaan mutta ne niiden niihin niiksi niille niillä niiltä niin niin niinä niissä niistä niitä noiden noihin noiksi noilla noille noilta noin noina noissa noista noita nuo nyt näiden näihin näiksi näille näillä näiltä näinä näissä näistä näitä nämä ole olemme olen olet olette oli olimme olin olisi olisimme olisin olisit olisitte olisivat olit olitte olivat olla olleet ollut on ovat poikki se sekä sen siihen siinä siitä siksi sille sillä sillä siltä sinua sinulla sinulle sinulta sinun sinussa sinusta sinut sinuun sinä sinä sitä tai te teidän teidät teihin teille teillä teiltä teissä teistä teitä tuo tuohon tuoksi tuolla tuolle tuolta tuon tuona tuossa tuosta tuota tähän täksi tälle tällä tältä tämä tämän tänä tässä tästä tätä vaan vai vaikka yli".split(" ")),i.Pipeline.registerFunction(i.fi.stopWordFilter,"stopWordFilter-fi")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.fr.min.js b/assets/javascripts/lunr/min/lunr.fr.min.js new file mode 100644 index 00000000..68cd0094 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.fr.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `French` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.fr=function(){this.pipeline.reset(),this.pipeline.add(e.fr.trimmer,e.fr.stopWordFilter,e.fr.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.fr.stemmer))},e.fr.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.fr.trimmer=e.trimmerSupport.generateTrimmer(e.fr.wordCharacters),e.Pipeline.registerFunction(e.fr.trimmer,"trimmer-fr"),e.fr.stemmer=function(){var r=e.stemmerSupport.Among,s=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,s){return!(!W.eq_s(1,e)||(W.ket=W.cursor,!W.in_grouping(F,97,251)))&&(W.slice_from(r),W.cursor=s,!0)}function i(e,r,s){return!!W.eq_s(1,e)&&(W.ket=W.cursor,W.slice_from(r),W.cursor=s,!0)}function n(){for(var r,s;;){if(r=W.cursor,W.in_grouping(F,97,251)){if(W.bra=W.cursor,s=W.cursor,e("u","U",r))continue;if(W.cursor=s,e("i","I",r))continue;if(W.cursor=s,i("y","Y",r))continue}if(W.cursor=r,W.bra=r,!e("y","Y",r)){if(W.cursor=r,W.eq_s(1,"q")&&(W.bra=W.cursor,i("u","U",r)))continue;if(W.cursor=r,r>=W.limit)return;W.cursor++}}}function t(){for(;!W.in_grouping(F,97,251);){if(W.cursor>=W.limit)return!0;W.cursor++}for(;!W.out_grouping(F,97,251);){if(W.cursor>=W.limit)return!0;W.cursor++}return!1}function u(){var e=W.cursor;if(q=W.limit,g=q,p=q,W.in_grouping(F,97,251)&&W.in_grouping(F,97,251)&&W.cursor=W.limit){W.cursor=q;break}W.cursor++}while(!W.in_grouping(F,97,251))}q=W.cursor,W.cursor=e,t()||(g=W.cursor,t()||(p=W.cursor))}function o(){for(var e,r;;){if(r=W.cursor,W.bra=r,!(e=W.find_among(h,4)))break;switch(W.ket=W.cursor,e){case 1:W.slice_from("i");break;case 2:W.slice_from("u");break;case 3:W.slice_from("y");break;case 4:if(W.cursor>=W.limit)return;W.cursor++}}}function c(){return q<=W.cursor}function a(){return g<=W.cursor}function l(){return p<=W.cursor}function w(){var e,r;if(W.ket=W.cursor,e=W.find_among_b(C,43)){switch(W.bra=W.cursor,e){case 1:if(!l())return!1;W.slice_del();break;case 2:if(!l())return!1;W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"ic")&&(W.bra=W.cursor,l()?W.slice_del():W.slice_from("iqU"));break;case 3:if(!l())return!1;W.slice_from("log");break;case 4:if(!l())return!1;W.slice_from("u");break;case 5:if(!l())return!1;W.slice_from("ent");break;case 6:if(!c())return!1;if(W.slice_del(),W.ket=W.cursor,e=W.find_among_b(z,6))switch(W.bra=W.cursor,e){case 1:l()&&(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"at")&&(W.bra=W.cursor,l()&&W.slice_del()));break;case 2:l()?W.slice_del():a()&&W.slice_from("eux");break;case 3:l()&&W.slice_del();break;case 4:c()&&W.slice_from("i")}break;case 7:if(!l())return!1;if(W.slice_del(),W.ket=W.cursor,e=W.find_among_b(y,3))switch(W.bra=W.cursor,e){case 1:l()?W.slice_del():W.slice_from("abl");break;case 2:l()?W.slice_del():W.slice_from("iqU");break;case 3:l()&&W.slice_del()}break;case 8:if(!l())return!1;if(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"at")&&(W.bra=W.cursor,l()&&(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"ic")))){W.bra=W.cursor,l()?W.slice_del():W.slice_from("iqU");break}break;case 9:W.slice_from("eau");break;case 10:if(!a())return!1;W.slice_from("al");break;case 11:if(l())W.slice_del();else{if(!a())return!1;W.slice_from("eux")}break;case 12:if(!a()||!W.out_grouping_b(F,97,251))return!1;W.slice_del();break;case 13:return c()&&W.slice_from("ant"),!1;case 14:return c()&&W.slice_from("ent"),!1;case 15:return r=W.limit-W.cursor,W.in_grouping_b(F,97,251)&&c()&&(W.cursor=W.limit-r,W.slice_del()),!1}return!0}return!1}function f(){var 
e,r;if(W.cursor=q){if(s=W.limit_backward,W.limit_backward=q,W.ket=W.cursor,e=W.find_among_b(P,7))switch(W.bra=W.cursor,e){case 1:if(l()){if(i=W.limit-W.cursor,!W.eq_s_b(1,"s")&&(W.cursor=W.limit-i,!W.eq_s_b(1,"t")))break;W.slice_del()}break;case 2:W.slice_from("i");break;case 3:W.slice_del();break;case 4:W.eq_s_b(2,"gu")&&W.slice_del()}W.limit_backward=s}}function b(){var e=W.limit-W.cursor;W.find_among_b(U,5)&&(W.cursor=W.limit-e,W.ket=W.cursor,W.cursor>W.limit_backward&&(W.cursor--,W.bra=W.cursor,W.slice_del()))}function d(){for(var e,r=1;W.out_grouping_b(F,97,251);)r--;if(r<=0){if(W.ket=W.cursor,e=W.limit-W.cursor,!W.eq_s_b(1,"é")&&(W.cursor=W.limit-e,!W.eq_s_b(1,"è")))return;W.bra=W.cursor,W.slice_from("e")}}function k(){if(!w()&&(W.cursor=W.limit,!f()&&(W.cursor=W.limit,!m())))return W.cursor=W.limit,void _();W.cursor=W.limit,W.ket=W.cursor,W.eq_s_b(1,"Y")?(W.bra=W.cursor,W.slice_from("i")):(W.cursor=W.limit,W.eq_s_b(1,"ç")&&(W.bra=W.cursor,W.slice_from("c")))}var p,g,q,v=[new r("col",-1,-1),new r("par",-1,-1),new r("tap",-1,-1)],h=[new r("",-1,4),new r("I",0,1),new r("U",0,2),new r("Y",0,3)],z=[new r("iqU",-1,3),new r("abl",-1,3),new r("Ièr",-1,4),new r("ièr",-1,4),new r("eus",-1,2),new r("iv",-1,1)],y=[new r("ic",-1,2),new r("abil",-1,1),new r("iv",-1,3)],C=[new r("iqUe",-1,1),new r("atrice",-1,2),new r("ance",-1,1),new r("ence",-1,5),new r("logie",-1,3),new r("able",-1,1),new r("isme",-1,1),new r("euse",-1,11),new r("iste",-1,1),new r("ive",-1,8),new r("if",-1,8),new r("usion",-1,4),new r("ation",-1,2),new r("ution",-1,4),new r("ateur",-1,2),new r("iqUes",-1,1),new r("atrices",-1,2),new r("ances",-1,1),new r("ences",-1,5),new r("logies",-1,3),new r("ables",-1,1),new r("ismes",-1,1),new r("euses",-1,11),new r("istes",-1,1),new r("ives",-1,8),new r("ifs",-1,8),new r("usions",-1,4),new r("ations",-1,2),new r("utions",-1,4),new r("ateurs",-1,2),new r("ments",-1,15),new r("ements",30,6),new r("issements",31,12),new r("ités",-1,7),new r("ment",-1,15),new r("ement",34,6),new r("issement",35,12),new r("amment",34,13),new r("emment",34,14),new r("aux",-1,10),new r("eaux",39,9),new r("eux",-1,1),new r("ité",-1,7)],x=[new r("ira",-1,1),new r("ie",-1,1),new r("isse",-1,1),new r("issante",-1,1),new r("i",-1,1),new r("irai",4,1),new r("ir",-1,1),new r("iras",-1,1),new r("ies",-1,1),new r("îmes",-1,1),new r("isses",-1,1),new r("issantes",-1,1),new r("îtes",-1,1),new r("is",-1,1),new r("irais",13,1),new r("issais",13,1),new r("irions",-1,1),new r("issions",-1,1),new r("irons",-1,1),new r("issons",-1,1),new r("issants",-1,1),new r("it",-1,1),new r("irait",21,1),new r("issait",21,1),new r("issant",-1,1),new r("iraIent",-1,1),new r("issaIent",-1,1),new r("irent",-1,1),new r("issent",-1,1),new r("iront",-1,1),new r("ît",-1,1),new r("iriez",-1,1),new r("issiez",-1,1),new r("irez",-1,1),new r("issez",-1,1)],I=[new r("a",-1,3),new r("era",0,2),new r("asse",-1,3),new r("ante",-1,3),new r("ée",-1,2),new r("ai",-1,3),new r("erai",5,2),new r("er",-1,2),new r("as",-1,3),new r("eras",8,2),new r("âmes",-1,3),new r("asses",-1,3),new r("antes",-1,3),new r("âtes",-1,3),new r("ées",-1,2),new r("ais",-1,3),new r("erais",15,2),new r("ions",-1,1),new r("erions",17,2),new r("assions",17,3),new r("erons",-1,2),new r("ants",-1,3),new r("és",-1,2),new r("ait",-1,3),new r("erait",23,2),new r("ant",-1,3),new r("aIent",-1,3),new r("eraIent",26,2),new r("èrent",-1,2),new r("assent",-1,3),new r("eront",-1,2),new r("ât",-1,3),new r("ez",-1,2),new r("iez",32,2),new r("eriez",33,2),new r("assiez",33,3),new r("erez",32,2),new 
r("é",-1,2)],P=[new r("e",-1,3),new r("Ière",0,2),new r("ière",0,2),new r("ion",-1,1),new r("Ier",-1,2),new r("ier",-1,2),new r("ë",-1,4)],U=[new r("ell",-1,-1),new r("eill",-1,-1),new r("enn",-1,-1),new r("onn",-1,-1),new r("ett",-1,-1)],F=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,128,130,103,8,5],S=[1,65,20,0,0,0,0,0,0,0,0,0,0,0,0,0,128],W=new s;this.setCurrent=function(e){W.setCurrent(e)},this.getCurrent=function(){return W.getCurrent()},this.stem=function(){var e=W.cursor;return n(),W.cursor=e,u(),W.limit_backward=e,W.cursor=W.limit,k(),W.cursor=W.limit,b(),W.cursor=W.limit,d(),W.cursor=W.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.fr.stemmer,"stemmer-fr"),e.fr.stopWordFilter=e.generateStopWordFilter("ai aie aient aies ait as au aura aurai auraient aurais aurait auras aurez auriez aurions aurons auront aux avaient avais avait avec avez aviez avions avons ayant ayez ayons c ce ceci celà ces cet cette d dans de des du elle en es est et eu eue eues eurent eus eusse eussent eusses eussiez eussions eut eux eûmes eût eûtes furent fus fusse fussent fusses fussiez fussions fut fûmes fût fûtes ici il ils j je l la le les leur leurs lui m ma mais me mes moi mon même n ne nos notre nous on ont ou par pas pour qu que quel quelle quelles quels qui s sa sans se sera serai seraient serais serait seras serez seriez serions serons seront ses soi soient sois soit sommes son sont soyez soyons suis sur t ta te tes toi ton tu un une vos votre vous y à étaient étais était étant étiez étions été étée étées étés êtes".split(" ")),e.Pipeline.registerFunction(e.fr.stopWordFilter,"stopWordFilter-fr")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.hi.min.js b/assets/javascripts/lunr/min/lunr.hi.min.js new file mode 100644 index 00000000..7dbc4140 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.hi.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.hi=function(){this.pipeline.reset(),this.pipeline.add(e.hi.trimmer,e.hi.stopWordFilter,e.hi.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.hi.stemmer))},e.hi.wordCharacters="ऀ-ःऄ-एऐ-टठ-यर-िी-ॏॐ-य़ॠ-९॰-ॿa-zA-Za-zA-Z0-90-9",e.hi.trimmer=e.trimmerSupport.generateTrimmer(e.hi.wordCharacters),e.Pipeline.registerFunction(e.hi.trimmer,"trimmer-hi"),e.hi.stopWordFilter=e.generateStopWordFilter("अत अपना अपनी अपने अभी अंदर आदि आप इत्यादि इन इनका इन्हीं इन्हें इन्हों इस इसका इसकी इसके इसमें इसी इसे उन उनका उनकी उनके उनको उन्हीं उन्हें उन्हों उस उसके उसी उसे एक एवं एस ऐसे और कई कर करता करते करना करने करें कहते कहा का काफ़ी कि कितना किन्हें किन्हों किया किर किस किसी किसे की कुछ कुल के को कोई कौन कौनसा गया घर जब जहाँ जा जितना जिन जिन्हें जिन्हों जिस जिसे जीधर जैसा जैसे जो तक तब तरह तिन तिन्हें तिन्हों तिस तिसे तो था थी थे दबारा दिया दुसरा दूसरे दो द्वारा न नके नहीं ना निहायत नीचे ने पर पहले पूरा पे फिर बनी बही बहुत बाद बाला बिलकुल भी भीतर मगर मानो मे में यदि यह यहाँ यही या यिह ये रखें रहा रहे ऱ्वासा लिए लिये लेकिन व वग़ैरह वर्ग वह वहाँ वहीं वाले वुह वे वो सकता सकते सबसे सभी साथ साबुत साभ सारा से सो संग ही हुआ हुई हुए है हैं हो होता होती होते होना होने".split(" ")),e.hi.stemmer=function(){return function(e){return"function"==typeof e.update?e.update(function(e){return e}):e}}();var r=e.wordcut;r.init(),e.hi.tokenizer=function(i){if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(r){return isLunr2?new e.Token(r.toLowerCase()):r.toLowerCase()});var t=i.toString().toLowerCase().replace(/^\s+/,"");return r.cut(t).split("|")},e.Pipeline.registerFunction(e.hi.stemmer,"stemmer-hi"),e.Pipeline.registerFunction(e.hi.stopWordFilter,"stopWordFilter-hi")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.hu.min.js b/assets/javascripts/lunr/min/lunr.hu.min.js new file mode 100644 index 00000000..ed9d909f --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.hu.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Hungarian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,n){"function"==typeof define&&define.amd?define(n):"object"==typeof exports?module.exports=n():n()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.hu=function(){this.pipeline.reset(),this.pipeline.add(e.hu.trimmer,e.hu.stopWordFilter,e.hu.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.hu.stemmer))},e.hu.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.hu.trimmer=e.trimmerSupport.generateTrimmer(e.hu.wordCharacters),e.Pipeline.registerFunction(e.hu.trimmer,"trimmer-hu"),e.hu.stemmer=function(){var n=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,i=new function(){function e(){var e,n=L.cursor;if(d=L.limit,L.in_grouping(W,97,252))for(;;){if(e=L.cursor,L.out_grouping(W,97,252))return L.cursor=e,L.find_among(g,8)||(L.cursor=e,e=L.limit)return void(d=e);L.cursor++}if(L.cursor=n,L.out_grouping(W,97,252)){for(;!L.in_grouping(W,97,252);){if(L.cursor>=L.limit)return;L.cursor++}d=L.cursor}}function i(){return d<=L.cursor}function a(){var e;if(L.ket=L.cursor,(e=L.find_among_b(h,2))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("a");break;case 2:L.slice_from("e")}}function t(){var e=L.limit-L.cursor;return!!L.find_among_b(p,23)&&(L.cursor=L.limit-e,!0)}function s(){if(L.cursor>L.limit_backward){L.cursor--,L.ket=L.cursor;var e=L.cursor-1;L.limit_backward<=e&&e<=L.limit&&(L.cursor=e,L.bra=e,L.slice_del())}}function c(){var e;if(L.ket=L.cursor,(e=L.find_among_b(_,2))&&(L.bra=L.cursor,i())){if((1==e||2==e)&&!t())return;L.slice_del(),s()}}function o(){L.ket=L.cursor,L.find_among_b(v,44)&&(L.bra=L.cursor,i()&&(L.slice_del(),a()))}function w(){var e;if(L.ket=L.cursor,(e=L.find_among_b(z,3))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("e");break;case 2:case 3:L.slice_from("a")}}function l(){var e;if(L.ket=L.cursor,(e=L.find_among_b(y,6))&&(L.bra=L.cursor,i()))switch(e){case 1:case 2:L.slice_del();break;case 3:L.slice_from("a");break;case 4:L.slice_from("e")}}function u(){var e;if(L.ket=L.cursor,(e=L.find_among_b(j,2))&&(L.bra=L.cursor,i())){if((1==e||2==e)&&!t())return;L.slice_del(),s()}}function m(){var e;if(L.ket=L.cursor,(e=L.find_among_b(C,7))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("a");break;case 2:L.slice_from("e");break;case 3:case 4:case 5:case 6:case 7:L.slice_del()}}function k(){var e;if(L.ket=L.cursor,(e=L.find_among_b(P,12))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 7:case 9:L.slice_del();break;case 2:case 5:case 8:L.slice_from("e");break;case 3:case 6:L.slice_from("a")}}function f(){var e;if(L.ket=L.cursor,(e=L.find_among_b(F,31))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 7:case 8:case 9:case 12:case 13:case 16:case 17:case 18:L.slice_del();break;case 2:case 5:case 10:case 14:case 19:L.slice_from("a");break;case 3:case 6:case 11:case 15:case 20:L.slice_from("e")}}function b(){var e;if(L.ket=L.cursor,(e=L.find_among_b(S,42))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 5:case 6:case 9:case 10:case 11:case 14:case 15:case 16:case 17:case 20:case 21:case 24:case 25:case 26:case 29:L.slice_del();break;case 2:case 7:case 12:case 18:case 22:case 27:L.slice_from("a");break;case 3:case 8:case 13:case 19:case 23:case 28:L.slice_from("e")}}var d,g=[new n("cs",-1,-1),new n("dzs",-1,-1),new n("gy",-1,-1),new n("ly",-1,-1),new n("ny",-1,-1),new n("sz",-1,-1),new n("ty",-1,-1),new n("zs",-1,-1)],h=[new n("á",-1,1),new n("é",-1,2)],p=[new n("bb",-1,-1),new n("cc",-1,-1),new n("dd",-1,-1),new n("ff",-1,-1),new n("gg",-1,-1),new n("jj",-1,-1),new n("kk",-1,-1),new n("ll",-1,-1),new n("mm",-1,-1),new n("nn",-1,-1),new n("pp",-1,-1),new 
n("rr",-1,-1),new n("ccs",-1,-1),new n("ss",-1,-1),new n("zzs",-1,-1),new n("tt",-1,-1),new n("vv",-1,-1),new n("ggy",-1,-1),new n("lly",-1,-1),new n("nny",-1,-1),new n("tty",-1,-1),new n("ssz",-1,-1),new n("zz",-1,-1)],_=[new n("al",-1,1),new n("el",-1,2)],v=[new n("ba",-1,-1),new n("ra",-1,-1),new n("be",-1,-1),new n("re",-1,-1),new n("ig",-1,-1),new n("nak",-1,-1),new n("nek",-1,-1),new n("val",-1,-1),new n("vel",-1,-1),new n("ul",-1,-1),new n("nál",-1,-1),new n("nél",-1,-1),new n("ból",-1,-1),new n("ról",-1,-1),new n("tól",-1,-1),new n("bõl",-1,-1),new n("rõl",-1,-1),new n("tõl",-1,-1),new n("ül",-1,-1),new n("n",-1,-1),new n("an",19,-1),new n("ban",20,-1),new n("en",19,-1),new n("ben",22,-1),new n("képpen",22,-1),new n("on",19,-1),new n("ön",19,-1),new n("képp",-1,-1),new n("kor",-1,-1),new n("t",-1,-1),new n("at",29,-1),new n("et",29,-1),new n("ként",29,-1),new n("anként",32,-1),new n("enként",32,-1),new n("onként",32,-1),new n("ot",29,-1),new n("ért",29,-1),new n("öt",29,-1),new n("hez",-1,-1),new n("hoz",-1,-1),new n("höz",-1,-1),new n("vá",-1,-1),new n("vé",-1,-1)],z=[new n("án",-1,2),new n("én",-1,1),new n("ánként",-1,3)],y=[new n("stul",-1,2),new n("astul",0,1),new n("ástul",0,3),new n("stül",-1,2),new n("estül",3,1),new n("éstül",3,4)],j=[new n("á",-1,1),new n("é",-1,2)],C=[new n("k",-1,7),new n("ak",0,4),new n("ek",0,6),new n("ok",0,5),new n("ák",0,1),new n("ék",0,2),new n("ök",0,3)],P=[new n("éi",-1,7),new n("áéi",0,6),new n("ééi",0,5),new n("é",-1,9),new n("ké",3,4),new n("aké",4,1),new n("eké",4,1),new n("oké",4,1),new n("áké",4,3),new n("éké",4,2),new n("öké",4,1),new n("éé",3,8)],F=[new n("a",-1,18),new n("ja",0,17),new n("d",-1,16),new n("ad",2,13),new n("ed",2,13),new n("od",2,13),new n("ád",2,14),new n("éd",2,15),new n("öd",2,13),new n("e",-1,18),new n("je",9,17),new n("nk",-1,4),new n("unk",11,1),new n("ánk",11,2),new n("énk",11,3),new n("ünk",11,1),new n("uk",-1,8),new n("juk",16,7),new n("ájuk",17,5),new n("ük",-1,8),new n("jük",19,7),new n("éjük",20,6),new n("m",-1,12),new n("am",22,9),new n("em",22,9),new n("om",22,9),new n("ám",22,10),new n("ém",22,11),new n("o",-1,18),new n("á",-1,19),new n("é",-1,20)],S=[new n("id",-1,10),new n("aid",0,9),new n("jaid",1,6),new n("eid",0,9),new n("jeid",3,6),new n("áid",0,7),new n("éid",0,8),new n("i",-1,15),new n("ai",7,14),new n("jai",8,11),new n("ei",7,14),new n("jei",10,11),new n("ái",7,12),new n("éi",7,13),new n("itek",-1,24),new n("eitek",14,21),new n("jeitek",15,20),new n("éitek",14,23),new n("ik",-1,29),new n("aik",18,26),new n("jaik",19,25),new n("eik",18,26),new n("jeik",21,25),new n("áik",18,27),new n("éik",18,28),new n("ink",-1,20),new n("aink",25,17),new n("jaink",26,16),new n("eink",25,17),new n("jeink",28,16),new n("áink",25,18),new n("éink",25,19),new n("aitok",-1,21),new n("jaitok",32,20),new n("áitok",-1,22),new n("im",-1,5),new n("aim",35,4),new n("jaim",36,1),new n("eim",35,4),new n("jeim",38,1),new n("áim",35,2),new n("éim",35,3)],W=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,1,17,52,14],L=new r;this.setCurrent=function(e){L.setCurrent(e)},this.getCurrent=function(){return L.getCurrent()},this.stem=function(){var n=L.cursor;return e(),L.limit_backward=n,L.cursor=L.limit,c(),L.cursor=L.limit,o(),L.cursor=L.limit,w(),L.cursor=L.limit,l(),L.cursor=L.limit,u(),L.cursor=L.limit,k(),L.cursor=L.limit,f(),L.cursor=L.limit,b(),L.cursor=L.limit,m(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return 
i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.hu.stemmer,"stemmer-hu"),e.hu.stopWordFilter=e.generateStopWordFilter("a abban ahhoz ahogy ahol aki akik akkor alatt amely amelyek amelyekben amelyeket amelyet amelynek ami amikor amit amolyan amíg annak arra arról az azok azon azonban azt aztán azután azzal azért be belül benne bár cikk cikkek cikkeket csak de e ebben eddig egy egyes egyetlen egyik egyre egyéb egész ehhez ekkor el ellen elsõ elég elõ elõször elõtt emilyen ennek erre ez ezek ezen ezt ezzel ezért fel felé hanem hiszen hogy hogyan igen ill ill. illetve ilyen ilyenkor ismét ison itt jobban jó jól kell kellett keressünk keresztül ki kívül között közül legalább legyen lehet lehetett lenne lenni lesz lett maga magát majd majd meg mellett mely melyek mert mi mikor milyen minden mindenki mindent mindig mint mintha mit mivel miért most már más másik még míg nagy nagyobb nagyon ne nekem neki nem nincs néha néhány nélkül olyan ott pedig persze rá s saját sem semmi sok sokat sokkal szemben szerint szinte számára talán tehát teljes tovább továbbá több ugyanis utolsó után utána vagy vagyis vagyok valaki valami valamint való van vannak vele vissza viszont volna volt voltak voltam voltunk által általában át én éppen és így õ õk õket össze úgy új újabb újra".split(" ")),e.Pipeline.registerFunction(e.hu.stopWordFilter,"stopWordFilter-hu")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.it.min.js b/assets/javascripts/lunr/min/lunr.it.min.js new file mode 100644 index 00000000..344b6a3c --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.it.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Italian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.it=function(){this.pipeline.reset(),this.pipeline.add(e.it.trimmer,e.it.stopWordFilter,e.it.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.it.stemmer))},e.it.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.it.trimmer=e.trimmerSupport.generateTrimmer(e.it.wordCharacters),e.Pipeline.registerFunction(e.it.trimmer,"trimmer-it"),e.it.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,n){return!(!x.eq_s(1,e)||(x.ket=x.cursor,!x.in_grouping(L,97,249)))&&(x.slice_from(r),x.cursor=n,!0)}function i(){for(var r,n,i,o,t=x.cursor;;){if(x.bra=x.cursor,r=x.find_among(h,7))switch(x.ket=x.cursor,r){case 1:x.slice_from("à");continue;case 2:x.slice_from("è");continue;case 3:x.slice_from("ì");continue;case 4:x.slice_from("ò");continue;case 5:x.slice_from("ù");continue;case 6:x.slice_from("qU");continue;case 7:if(x.cursor>=x.limit)break;x.cursor++;continue}break}for(x.cursor=t;;)for(n=x.cursor;;){if(i=x.cursor,x.in_grouping(L,97,249)){if(x.bra=x.cursor,o=x.cursor,e("u","U",i))break;if(x.cursor=o,e("i","I",i))break}if(x.cursor=i,x.cursor>=x.limit)return void(x.cursor=n);x.cursor++}}function o(e){if(x.cursor=e,!x.in_grouping(L,97,249))return!1;for(;!x.out_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}return!0}function t(){if(x.in_grouping(L,97,249)){var e=x.cursor;if(x.out_grouping(L,97,249)){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return o(e);x.cursor++}return!0}return o(e)}return!1}function s(){var e,r=x.cursor;if(!t()){if(x.cursor=r,!x.out_grouping(L,97,249))return;if(e=x.cursor,x.out_grouping(L,97,249)){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return x.cursor=e,void(x.in_grouping(L,97,249)&&x.cursor=x.limit)return;x.cursor++}k=x.cursor}function a(){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}for(;!x.out_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}return!0}function u(){var e=x.cursor;k=x.limit,p=k,g=k,s(),x.cursor=e,a()&&(p=x.cursor,a()&&(g=x.cursor))}function c(){for(var e;;){if(x.bra=x.cursor,!(e=x.find_among(q,3)))break;switch(x.ket=x.cursor,e){case 1:x.slice_from("i");break;case 2:x.slice_from("u");break;case 3:if(x.cursor>=x.limit)return;x.cursor++}}}function w(){return k<=x.cursor}function l(){return p<=x.cursor}function m(){return g<=x.cursor}function f(){var e;if(x.ket=x.cursor,x.find_among_b(C,37)&&(x.bra=x.cursor,(e=x.find_among_b(z,5))&&w()))switch(e){case 1:x.slice_del();break;case 2:x.slice_from("e")}}function v(){var e;if(x.ket=x.cursor,!(e=x.find_among_b(S,51)))return!1;switch(x.bra=x.cursor,e){case 1:if(!m())return!1;x.slice_del();break;case 2:if(!m())return!1;x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"ic")&&(x.bra=x.cursor,m()&&x.slice_del());break;case 3:if(!m())return!1;x.slice_from("log");break;case 4:if(!m())return!1;x.slice_from("u");break;case 5:if(!m())return!1;x.slice_from("ente");break;case 6:if(!w())return!1;x.slice_del();break;case 7:if(!l())return!1;x.slice_del(),x.ket=x.cursor,e=x.find_among_b(P,4),e&&(x.bra=x.cursor,m()&&(x.slice_del(),1==e&&(x.ket=x.cursor,x.eq_s_b(2,"at")&&(x.bra=x.cursor,m()&&x.slice_del()))));break;case 8:if(!m())return!1;x.slice_del(),x.ket=x.cursor,e=x.find_among_b(F,3),e&&(x.bra=x.cursor,1==e&&m()&&x.slice_del());break;case 
9:if(!m())return!1;x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"at")&&(x.bra=x.cursor,m()&&(x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"ic")&&(x.bra=x.cursor,m()&&x.slice_del())))}return!0}function b(){var e,r;x.cursor>=k&&(r=x.limit_backward,x.limit_backward=k,x.ket=x.cursor,e=x.find_among_b(W,87),e&&(x.bra=x.cursor,1==e&&x.slice_del()),x.limit_backward=r)}function d(){var e=x.limit-x.cursor;if(x.ket=x.cursor,x.in_grouping_b(y,97,242)&&(x.bra=x.cursor,w()&&(x.slice_del(),x.ket=x.cursor,x.eq_s_b(1,"i")&&(x.bra=x.cursor,w()))))return void x.slice_del();x.cursor=x.limit-e}function _(){d(),x.ket=x.cursor,x.eq_s_b(1,"h")&&(x.bra=x.cursor,x.in_grouping_b(U,99,103)&&w()&&x.slice_del())}var g,p,k,h=[new r("",-1,7),new r("qu",0,6),new r("á",0,1),new r("é",0,2),new r("í",0,3),new r("ó",0,4),new r("ú",0,5)],q=[new r("",-1,3),new r("I",0,1),new r("U",0,2)],C=[new r("la",-1,-1),new r("cela",0,-1),new r("gliela",0,-1),new r("mela",0,-1),new r("tela",0,-1),new r("vela",0,-1),new r("le",-1,-1),new r("cele",6,-1),new r("gliele",6,-1),new r("mele",6,-1),new r("tele",6,-1),new r("vele",6,-1),new r("ne",-1,-1),new r("cene",12,-1),new r("gliene",12,-1),new r("mene",12,-1),new r("sene",12,-1),new r("tene",12,-1),new r("vene",12,-1),new r("ci",-1,-1),new r("li",-1,-1),new r("celi",20,-1),new r("glieli",20,-1),new r("meli",20,-1),new r("teli",20,-1),new r("veli",20,-1),new r("gli",20,-1),new r("mi",-1,-1),new r("si",-1,-1),new r("ti",-1,-1),new r("vi",-1,-1),new r("lo",-1,-1),new r("celo",31,-1),new r("glielo",31,-1),new r("melo",31,-1),new r("telo",31,-1),new r("velo",31,-1)],z=[new r("ando",-1,1),new r("endo",-1,1),new r("ar",-1,2),new r("er",-1,2),new r("ir",-1,2)],P=[new r("ic",-1,-1),new r("abil",-1,-1),new r("os",-1,-1),new r("iv",-1,1)],F=[new r("ic",-1,1),new r("abil",-1,1),new r("iv",-1,1)],S=[new r("ica",-1,1),new r("logia",-1,3),new r("osa",-1,1),new r("ista",-1,1),new r("iva",-1,9),new r("anza",-1,1),new r("enza",-1,5),new r("ice",-1,1),new r("atrice",7,1),new r("iche",-1,1),new r("logie",-1,3),new r("abile",-1,1),new r("ibile",-1,1),new r("usione",-1,4),new r("azione",-1,2),new r("uzione",-1,4),new r("atore",-1,2),new r("ose",-1,1),new r("ante",-1,1),new r("mente",-1,1),new r("amente",19,7),new r("iste",-1,1),new r("ive",-1,9),new r("anze",-1,1),new r("enze",-1,5),new r("ici",-1,1),new r("atrici",25,1),new r("ichi",-1,1),new r("abili",-1,1),new r("ibili",-1,1),new r("ismi",-1,1),new r("usioni",-1,4),new r("azioni",-1,2),new r("uzioni",-1,4),new r("atori",-1,2),new r("osi",-1,1),new r("anti",-1,1),new r("amenti",-1,6),new r("imenti",-1,6),new r("isti",-1,1),new r("ivi",-1,9),new r("ico",-1,1),new r("ismo",-1,1),new r("oso",-1,1),new r("amento",-1,6),new r("imento",-1,6),new r("ivo",-1,9),new r("ità",-1,8),new r("istà",-1,1),new r("istè",-1,1),new r("istì",-1,1)],W=[new r("isca",-1,1),new r("enda",-1,1),new r("ata",-1,1),new r("ita",-1,1),new r("uta",-1,1),new r("ava",-1,1),new r("eva",-1,1),new r("iva",-1,1),new r("erebbe",-1,1),new r("irebbe",-1,1),new r("isce",-1,1),new r("ende",-1,1),new r("are",-1,1),new r("ere",-1,1),new r("ire",-1,1),new r("asse",-1,1),new r("ate",-1,1),new r("avate",16,1),new r("evate",16,1),new r("ivate",16,1),new r("ete",-1,1),new r("erete",20,1),new r("irete",20,1),new r("ite",-1,1),new r("ereste",-1,1),new r("ireste",-1,1),new r("ute",-1,1),new r("erai",-1,1),new r("irai",-1,1),new r("isci",-1,1),new r("endi",-1,1),new r("erei",-1,1),new r("irei",-1,1),new r("assi",-1,1),new r("ati",-1,1),new r("iti",-1,1),new r("eresti",-1,1),new r("iresti",-1,1),new r("uti",-1,1),new 
r("avi",-1,1),new r("evi",-1,1),new r("ivi",-1,1),new r("isco",-1,1),new r("ando",-1,1),new r("endo",-1,1),new r("Yamo",-1,1),new r("iamo",-1,1),new r("avamo",-1,1),new r("evamo",-1,1),new r("ivamo",-1,1),new r("eremo",-1,1),new r("iremo",-1,1),new r("assimo",-1,1),new r("ammo",-1,1),new r("emmo",-1,1),new r("eremmo",54,1),new r("iremmo",54,1),new r("immo",-1,1),new r("ano",-1,1),new r("iscano",58,1),new r("avano",58,1),new r("evano",58,1),new r("ivano",58,1),new r("eranno",-1,1),new r("iranno",-1,1),new r("ono",-1,1),new r("iscono",65,1),new r("arono",65,1),new r("erono",65,1),new r("irono",65,1),new r("erebbero",-1,1),new r("irebbero",-1,1),new r("assero",-1,1),new r("essero",-1,1),new r("issero",-1,1),new r("ato",-1,1),new r("ito",-1,1),new r("uto",-1,1),new r("avo",-1,1),new r("evo",-1,1),new r("ivo",-1,1),new r("ar",-1,1),new r("ir",-1,1),new r("erà",-1,1),new r("irà",-1,1),new r("erò",-1,1),new r("irò",-1,1)],L=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2,1],y=[17,65,0,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2],U=[17],x=new n;this.setCurrent=function(e){x.setCurrent(e)},this.getCurrent=function(){return x.getCurrent()},this.stem=function(){var e=x.cursor;return i(),x.cursor=e,u(),x.limit_backward=e,x.cursor=x.limit,f(),x.cursor=x.limit,v()||(x.cursor=x.limit,b()),x.cursor=x.limit,_(),x.cursor=x.limit_backward,c(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.it.stemmer,"stemmer-it"),e.it.stopWordFilter=e.generateStopWordFilter("a abbia abbiamo abbiano abbiate ad agl agli ai al all alla alle allo anche avemmo avendo avesse avessero avessi avessimo aveste avesti avete aveva avevamo avevano avevate avevi avevo avrai avranno avrebbe avrebbero avrei avremmo avremo avreste avresti avrete avrà avrò avuta avute avuti avuto c che chi ci coi col come con contro cui da dagl dagli dai dal dall dalla dalle dallo degl degli dei del dell della delle dello di dov dove e ebbe ebbero ebbi ed era erano eravamo eravate eri ero essendo faccia facciamo facciano facciate faccio facemmo facendo facesse facessero facessi facessimo faceste facesti faceva facevamo facevano facevate facevi facevo fai fanno farai faranno farebbe farebbero farei faremmo faremo fareste faresti farete farà farò fece fecero feci fosse fossero fossi fossimo foste fosti fu fui fummo furono gli ha hai hanno ho i il in io l la le lei li lo loro lui ma mi mia mie miei mio ne negl negli nei nel nell nella nelle nello noi non nostra nostre nostri nostro o per perché più quale quanta quante quanti quanto quella quelle quelli quello questa queste questi questo sarai saranno sarebbe sarebbero sarei saremmo saremo sareste saresti sarete sarà sarò se sei si sia siamo siano siate siete sono sta stai stando stanno starai staranno starebbe starebbero starei staremmo staremo stareste staresti starete starà starò stava stavamo stavano stavate stavi stavo stemmo stesse stessero stessi stessimo steste stesti stette stettero stetti stia stiamo stiano stiate sto su sua sue sugl sugli sui sul sull sulla sulle sullo suo suoi ti tra tu tua tue tuo tuoi tutti tutto un una uno vi voi vostra vostre vostri vostro è".split(" ")),e.Pipeline.registerFunction(e.it.stopWordFilter,"stopWordFilter-it")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ja.min.js b/assets/javascripts/lunr/min/lunr.ja.min.js new file mode 100644 index 00000000..5f254ebe --- /dev/null +++ 
b/assets/javascripts/lunr/min/lunr.ja.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r="2"==e.version[0];e.ja=function(){this.pipeline.reset(),this.pipeline.add(e.ja.trimmer,e.ja.stopWordFilter,e.ja.stemmer),r?this.tokenizer=e.ja.tokenizer:(e.tokenizer&&(e.tokenizer=e.ja.tokenizer),this.tokenizerFn&&(this.tokenizerFn=e.ja.tokenizer))};var t=new e.TinySegmenter;e.ja.tokenizer=function(i){var n,o,s,p,a,u,m,l,c,f;if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(t){return r?new e.Token(t.toLowerCase()):t.toLowerCase()});for(o=i.toString().toLowerCase().replace(/^\s+/,""),n=o.length-1;n>=0;n--)if(/\S/.test(o.charAt(n))){o=o.substring(0,n+1);break}for(a=[],s=o.length,c=0,l=0;c<=s;c++)if(u=o.charAt(c),m=c-l,u.match(/\s/)||c==s){if(m>0)for(p=t.segment(o.slice(l,c)).filter(function(e){return!!e}),f=l,n=0;n=C.limit)break;C.cursor++;continue}break}for(C.cursor=o,C.bra=o,C.eq_s(1,"y")?(C.ket=C.cursor,C.slice_from("Y")):C.cursor=o;;)if(e=C.cursor,C.in_grouping(q,97,232)){if(i=C.cursor,C.bra=i,C.eq_s(1,"i"))C.ket=C.cursor,C.in_grouping(q,97,232)&&(C.slice_from("I"),C.cursor=e);else if(C.cursor=i,C.eq_s(1,"y"))C.ket=C.cursor,C.slice_from("Y"),C.cursor=e;else if(n(e))break}else if(n(e))break}function n(r){return C.cursor=r,r>=C.limit||(C.cursor++,!1)}function o(){_=C.limit,d=_,t()||(_=C.cursor,_<3&&(_=3),t()||(d=C.cursor))}function t(){for(;!C.in_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}for(;!C.out_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}return!1}function s(){for(var r;;)if(C.bra=C.cursor,r=C.find_among(p,3))switch(C.ket=C.cursor,r){case 1:C.slice_from("y");break;case 2:C.slice_from("i");break;case 3:if(C.cursor>=C.limit)return;C.cursor++}}function u(){return _<=C.cursor}function c(){return d<=C.cursor}function a(){var r=C.limit-C.cursor;C.find_among_b(g,3)&&(C.cursor=C.limit-r,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del()))}function l(){var r;w=!1,C.ket=C.cursor,C.eq_s_b(1,"e")&&(C.bra=C.cursor,u()&&(r=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-r,C.slice_del(),w=!0,a())))}function m(){var r;u()&&(r=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-r,C.eq_s_b(3,"gem")||(C.cursor=C.limit-r,C.slice_del(),a())))}function f(){var r,e,i,n,o,t,s=C.limit-C.cursor;if(C.ket=C.cursor,r=C.find_among_b(h,5))switch(C.bra=C.cursor,r){case 1:u()&&C.slice_from("heid");break;case 2:m();break;case 3:u()&&C.out_grouping_b(j,97,232)&&C.slice_del()}if(C.cursor=C.limit-s,l(),C.cursor=C.limit-s,C.ket=C.cursor,C.eq_s_b(4,"heid")&&(C.bra=C.cursor,c()&&(e=C.limit-C.cursor,C.eq_s_b(1,"c")||(C.cursor=C.limit-e,C.slice_del(),C.ket=C.cursor,C.eq_s_b(2,"en")&&(C.bra=C.cursor,m())))),C.cursor=C.limit-s,C.ket=C.cursor,r=C.find_among_b(k,6))switch(C.bra=C.cursor,r){case 1:if(c()){if(C.slice_del(),i=C.limit-C.cursor,C.ket=C.cursor,C.eq_s_b(2,"ig")&&(C.bra=C.cursor,c()&&(n=C.limit-C.cursor,!C.eq_s_b(1,"e")))){C.cursor=C.limit-n,C.slice_del();break}C.cursor=C.limit-i,a()}break;case 2:c()&&(o=C.limit-C.cursor,C.eq_s_b(1,"e")||(C.cursor=C.limit-o,C.slice_del()));break;case 
3:c()&&(C.slice_del(),l());break;case 4:c()&&C.slice_del();break;case 5:c()&&w&&C.slice_del()}C.cursor=C.limit-s,C.out_grouping_b(z,73,232)&&(t=C.limit-C.cursor,C.find_among_b(v,4)&&C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-t,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del())))}var d,_,w,b=[new e("",-1,6),new e("á",0,1),new e("ä",0,1),new e("é",0,2),new e("ë",0,2),new e("í",0,3),new e("ï",0,3),new e("ó",0,4),new e("ö",0,4),new e("ú",0,5),new e("ü",0,5)],p=[new e("",-1,3),new e("I",0,2),new e("Y",0,1)],g=[new e("dd",-1,-1),new e("kk",-1,-1),new e("tt",-1,-1)],h=[new e("ene",-1,2),new e("se",-1,3),new e("en",-1,2),new e("heden",2,1),new e("s",-1,3)],k=[new e("end",-1,1),new e("ig",-1,2),new e("ing",-1,1),new e("lijk",-1,3),new e("baar",-1,4),new e("bar",-1,5)],v=[new e("aa",-1,-1),new e("ee",-1,-1),new e("oo",-1,-1),new e("uu",-1,-1)],q=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],z=[1,0,0,17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],j=[17,67,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],C=new i;this.setCurrent=function(r){C.setCurrent(r)},this.getCurrent=function(){return C.getCurrent()},this.stem=function(){var e=C.cursor;return r(),C.cursor=e,o(),C.limit_backward=e,C.cursor=C.limit,f(),C.cursor=C.limit_backward,s(),!0}};return function(r){return"function"==typeof r.update?r.update(function(r){return n.setCurrent(r),n.stem(),n.getCurrent()}):(n.setCurrent(r),n.stem(),n.getCurrent())}}(),r.Pipeline.registerFunction(r.nl.stemmer,"stemmer-nl"),r.nl.stopWordFilter=r.generateStopWordFilter(" aan al alles als altijd andere ben bij daar dan dat de der deze die dit doch doen door dus een eens en er ge geen geweest haar had heb hebben heeft hem het hier hij hoe hun iemand iets ik in is ja je kan kon kunnen maar me meer men met mij mijn moet na naar niet niets nog nu of om omdat onder ons ook op over reeds te tegen toch toen tot u uit uw van veel voor want waren was wat werd wezen wie wil worden wordt zal ze zelf zich zij zijn zo zonder zou".split(" ")),r.Pipeline.registerFunction(r.nl.stopWordFilter,"stopWordFilter-nl")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.no.min.js b/assets/javascripts/lunr/min/lunr.no.min.js new file mode 100644 index 00000000..92bc7e4e --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.no.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Norwegian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.no=function(){this.pipeline.reset(),this.pipeline.add(e.no.trimmer,e.no.stopWordFilter,e.no.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.no.stemmer))},e.no.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.no.trimmer=e.trimmerSupport.generateTrimmer(e.no.wordCharacters),e.Pipeline.registerFunction(e.no.trimmer,"trimmer-no"),e.no.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(){var e,r=w.cursor+3;if(a=w.limit,0<=r||r<=w.limit){for(s=r;;){if(e=w.cursor,w.in_grouping(d,97,248)){w.cursor=e;break}if(e>=w.limit)return;w.cursor=e+1}for(;!w.out_grouping(d,97,248);){if(w.cursor>=w.limit)return;w.cursor++}a=w.cursor,a=a&&(r=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,e=w.find_among_b(m,29),w.limit_backward=r,e))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:n=w.limit-w.cursor,w.in_grouping_b(c,98,122)?w.slice_del():(w.cursor=w.limit-n,w.eq_s_b(1,"k")&&w.out_grouping_b(d,97,248)&&w.slice_del());break;case 3:w.slice_from("er")}}function t(){var e,r=w.limit-w.cursor;w.cursor>=a&&(e=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,w.find_among_b(u,2)?(w.bra=w.cursor,w.limit_backward=e,w.cursor=w.limit-r,w.cursor>w.limit_backward&&(w.cursor--,w.bra=w.cursor,w.slice_del())):w.limit_backward=e)}function o(){var e,r;w.cursor>=a&&(r=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,e=w.find_among_b(l,11),e?(w.bra=w.cursor,w.limit_backward=r,1==e&&w.slice_del()):w.limit_backward=r)}var s,a,m=[new r("a",-1,1),new r("e",-1,1),new r("ede",1,1),new r("ande",1,1),new r("ende",1,1),new r("ane",1,1),new r("ene",1,1),new r("hetene",6,1),new r("erte",1,3),new r("en",-1,1),new r("heten",9,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",12,1),new r("s",-1,2),new r("as",14,1),new r("es",14,1),new r("edes",16,1),new r("endes",16,1),new r("enes",16,1),new r("hetenes",19,1),new r("ens",14,1),new r("hetens",21,1),new r("ers",14,1),new r("ets",14,1),new r("et",-1,1),new r("het",25,1),new r("ert",-1,3),new r("ast",-1,1)],u=[new r("dt",-1,-1),new r("vt",-1,-1)],l=[new r("leg",-1,1),new r("eleg",0,1),new r("ig",-1,1),new r("eig",2,1),new r("lig",2,1),new r("elig",4,1),new r("els",-1,1),new r("lov",-1,1),new r("elov",7,1),new r("slov",7,1),new r("hetslov",9,1)],d=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],c=[119,125,149,1],w=new n;this.setCurrent=function(e){w.setCurrent(e)},this.getCurrent=function(){return w.getCurrent()},this.stem=function(){var r=w.cursor;return e(),w.limit_backward=r,w.cursor=w.limit,i(),w.cursor=w.limit,t(),w.cursor=w.limit,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.no.stemmer,"stemmer-no"),e.no.stopWordFilter=e.generateStopWordFilter("alle at av bare begge ble blei bli blir blitt både båe da de deg dei deim deira deires dem den denne der dere deres det dette di din disse ditt du dykk dykkar då eg ein eit eitt eller elles en enn er et ett etter for fordi fra før ha hadde han hans har hennar henne hennes her hjå ho hoe honom hoss hossen hun hva hvem hver hvilke hvilken hvis hvor hvordan hvorfor i ikke ikkje ikkje ingen ingi inkje inn inni ja jeg kan kom korleis korso kun kunne kva kvar kvarhelst kven kvi kvifor man mange me med medan meg meget mellom men mi min mine mitt mot mykje ned no noe noen 
noka noko nokon nokor nokre nå når og også om opp oss over på samme seg selv si si sia sidan siden sin sine sitt sjøl skal skulle slik so som som somme somt så sånn til um upp ut uten var vart varte ved vere verte vi vil ville vore vors vort vår være være vært å".split(" ")),e.Pipeline.registerFunction(e.no.stopWordFilter,"stopWordFilter-no")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.pt.min.js b/assets/javascripts/lunr/min/lunr.pt.min.js new file mode 100644 index 00000000..6c16996d --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.pt.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Portuguese` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.pt=function(){this.pipeline.reset(),this.pipeline.add(e.pt.trimmer,e.pt.stopWordFilter,e.pt.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.pt.stemmer))},e.pt.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.pt.trimmer=e.trimmerSupport.generateTrimmer(e.pt.wordCharacters),e.Pipeline.registerFunction(e.pt.trimmer,"trimmer-pt"),e.pt.stemmer=function(){var r=e.stemmerSupport.Among,s=e.stemmerSupport.SnowballProgram,n=new function(){function e(){for(var e;;){if(z.bra=z.cursor,e=z.find_among(k,3))switch(z.ket=z.cursor,e){case 1:z.slice_from("a~");continue;case 2:z.slice_from("o~");continue;case 3:if(z.cursor>=z.limit)break;z.cursor++;continue}break}}function n(){if(z.out_grouping(y,97,250)){for(;!z.in_grouping(y,97,250);){if(z.cursor>=z.limit)return!0;z.cursor++}return!1}return!0}function i(){if(z.in_grouping(y,97,250))for(;!z.out_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}return g=z.cursor,!0}function o(){var e,r,s=z.cursor;if(z.in_grouping(y,97,250))if(e=z.cursor,n()){if(z.cursor=e,i())return}else g=z.cursor;if(z.cursor=s,z.out_grouping(y,97,250)){if(r=z.cursor,n()){if(z.cursor=r,!z.in_grouping(y,97,250)||z.cursor>=z.limit)return;z.cursor++}g=z.cursor}}function t(){for(;!z.in_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}for(;!z.out_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}return!0}function a(){var e=z.cursor;g=z.limit,b=g,h=g,o(),z.cursor=e,t()&&(b=z.cursor,t()&&(h=z.cursor))}function u(){for(var e;;){if(z.bra=z.cursor,e=z.find_among(q,3))switch(z.ket=z.cursor,e){case 1:z.slice_from("ã");continue;case 2:z.slice_from("õ");continue;case 3:if(z.cursor>=z.limit)break;z.cursor++;continue}break}}function w(){return g<=z.cursor}function m(){return b<=z.cursor}function c(){return h<=z.cursor}function l(){var e;if(z.ket=z.cursor,!(e=z.find_among_b(F,45)))return!1;switch(z.bra=z.cursor,e){case 1:if(!c())return!1;z.slice_del();break;case 2:if(!c())return!1;z.slice_from("log");break;case 3:if(!c())return!1;z.slice_from("u");break;case 4:if(!c())return!1;z.slice_from("ente");break;case 
5:if(!m())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(j,4),e&&(z.bra=z.cursor,c()&&(z.slice_del(),1==e&&(z.ket=z.cursor,z.eq_s_b(2,"at")&&(z.bra=z.cursor,c()&&z.slice_del()))));break;case 6:if(!c())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(C,3),e&&(z.bra=z.cursor,1==e&&c()&&z.slice_del());break;case 7:if(!c())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(P,3),e&&(z.bra=z.cursor,1==e&&c()&&z.slice_del());break;case 8:if(!c())return!1;z.slice_del(),z.ket=z.cursor,z.eq_s_b(2,"at")&&(z.bra=z.cursor,c()&&z.slice_del());break;case 9:if(!w()||!z.eq_s_b(1,"e"))return!1;z.slice_from("ir")}return!0}function f(){var e,r;if(z.cursor>=g){if(r=z.limit_backward,z.limit_backward=g,z.ket=z.cursor,e=z.find_among_b(S,120))return z.bra=z.cursor,1==e&&z.slice_del(),z.limit_backward=r,!0;z.limit_backward=r}return!1}function d(){var e;z.ket=z.cursor,(e=z.find_among_b(W,7))&&(z.bra=z.cursor,1==e&&w()&&z.slice_del())}function v(e,r){if(z.eq_s_b(1,e)){z.bra=z.cursor;var s=z.limit-z.cursor;if(z.eq_s_b(1,r))return z.cursor=z.limit-s,w()&&z.slice_del(),!1}return!0}function p(){var e;if(z.ket=z.cursor,e=z.find_among_b(L,4))switch(z.bra=z.cursor,e){case 1:w()&&(z.slice_del(),z.ket=z.cursor,z.limit-z.cursor,v("u","g")&&v("i","c"));break;case 2:z.slice_from("c")}}function _(){if(!l()&&(z.cursor=z.limit,!f()))return z.cursor=z.limit,void d();z.cursor=z.limit,z.ket=z.cursor,z.eq_s_b(1,"i")&&(z.bra=z.cursor,z.eq_s_b(1,"c")&&(z.cursor=z.limit,w()&&z.slice_del()))}var h,b,g,k=[new r("",-1,3),new r("ã",0,1),new r("õ",0,2)],q=[new r("",-1,3),new r("a~",0,1),new r("o~",0,2)],j=[new r("ic",-1,-1),new r("ad",-1,-1),new r("os",-1,-1),new r("iv",-1,1)],C=[new r("ante",-1,1),new r("avel",-1,1),new r("ível",-1,1)],P=[new r("ic",-1,1),new r("abil",-1,1),new r("iv",-1,1)],F=[new r("ica",-1,1),new r("ância",-1,1),new r("ência",-1,4),new r("ira",-1,9),new r("adora",-1,1),new r("osa",-1,1),new r("ista",-1,1),new r("iva",-1,8),new r("eza",-1,1),new r("logía",-1,2),new r("idade",-1,7),new r("ante",-1,1),new r("mente",-1,6),new r("amente",12,5),new r("ável",-1,1),new r("ível",-1,1),new r("ución",-1,3),new r("ico",-1,1),new r("ismo",-1,1),new r("oso",-1,1),new r("amento",-1,1),new r("imento",-1,1),new r("ivo",-1,8),new r("aça~o",-1,1),new r("ador",-1,1),new r("icas",-1,1),new r("ências",-1,4),new r("iras",-1,9),new r("adoras",-1,1),new r("osas",-1,1),new r("istas",-1,1),new r("ivas",-1,8),new r("ezas",-1,1),new r("logías",-1,2),new r("idades",-1,7),new r("uciones",-1,3),new r("adores",-1,1),new r("antes",-1,1),new r("aço~es",-1,1),new r("icos",-1,1),new r("ismos",-1,1),new r("osos",-1,1),new r("amentos",-1,1),new r("imentos",-1,1),new r("ivos",-1,8)],S=[new r("ada",-1,1),new r("ida",-1,1),new r("ia",-1,1),new r("aria",2,1),new r("eria",2,1),new r("iria",2,1),new r("ara",-1,1),new r("era",-1,1),new r("ira",-1,1),new r("ava",-1,1),new r("asse",-1,1),new r("esse",-1,1),new r("isse",-1,1),new r("aste",-1,1),new r("este",-1,1),new r("iste",-1,1),new r("ei",-1,1),new r("arei",16,1),new r("erei",16,1),new r("irei",16,1),new r("am",-1,1),new r("iam",20,1),new r("ariam",21,1),new r("eriam",21,1),new r("iriam",21,1),new r("aram",20,1),new r("eram",20,1),new r("iram",20,1),new r("avam",20,1),new r("em",-1,1),new r("arem",29,1),new r("erem",29,1),new r("irem",29,1),new r("assem",29,1),new r("essem",29,1),new r("issem",29,1),new r("ado",-1,1),new r("ido",-1,1),new r("ando",-1,1),new r("endo",-1,1),new r("indo",-1,1),new r("ara~o",-1,1),new r("era~o",-1,1),new r("ira~o",-1,1),new r("ar",-1,1),new r("er",-1,1),new 
r("ir",-1,1),new r("as",-1,1),new r("adas",47,1),new r("idas",47,1),new r("ias",47,1),new r("arias",50,1),new r("erias",50,1),new r("irias",50,1),new r("aras",47,1),new r("eras",47,1),new r("iras",47,1),new r("avas",47,1),new r("es",-1,1),new r("ardes",58,1),new r("erdes",58,1),new r("irdes",58,1),new r("ares",58,1),new r("eres",58,1),new r("ires",58,1),new r("asses",58,1),new r("esses",58,1),new r("isses",58,1),new r("astes",58,1),new r("estes",58,1),new r("istes",58,1),new r("is",-1,1),new r("ais",71,1),new r("eis",71,1),new r("areis",73,1),new r("ereis",73,1),new r("ireis",73,1),new r("áreis",73,1),new r("éreis",73,1),new r("íreis",73,1),new r("ásseis",73,1),new r("ésseis",73,1),new r("ísseis",73,1),new r("áveis",73,1),new r("íeis",73,1),new r("aríeis",84,1),new r("eríeis",84,1),new r("iríeis",84,1),new r("ados",-1,1),new r("idos",-1,1),new r("amos",-1,1),new r("áramos",90,1),new r("éramos",90,1),new r("íramos",90,1),new r("ávamos",90,1),new r("íamos",90,1),new r("aríamos",95,1),new r("eríamos",95,1),new r("iríamos",95,1),new r("emos",-1,1),new r("aremos",99,1),new r("eremos",99,1),new r("iremos",99,1),new r("ássemos",99,1),new r("êssemos",99,1),new r("íssemos",99,1),new r("imos",-1,1),new r("armos",-1,1),new r("ermos",-1,1),new r("irmos",-1,1),new r("ámos",-1,1),new r("arás",-1,1),new r("erás",-1,1),new r("irás",-1,1),new r("eu",-1,1),new r("iu",-1,1),new r("ou",-1,1),new r("ará",-1,1),new r("erá",-1,1),new r("irá",-1,1)],W=[new r("a",-1,1),new r("i",-1,1),new r("o",-1,1),new r("os",-1,1),new r("á",-1,1),new r("í",-1,1),new r("ó",-1,1)],L=[new r("e",-1,1),new r("ç",-1,2),new r("é",-1,1),new r("ê",-1,1)],y=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,3,19,12,2],z=new s;this.setCurrent=function(e){z.setCurrent(e)},this.getCurrent=function(){return z.getCurrent()},this.stem=function(){var r=z.cursor;return e(),z.cursor=r,a(),z.limit_backward=r,z.cursor=z.limit,_(),z.cursor=z.limit,p(),z.cursor=z.limit_backward,u(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.pt.stemmer,"stemmer-pt"),e.pt.stopWordFilter=e.generateStopWordFilter("a ao aos aquela aquelas aquele aqueles aquilo as até com como da das de dela delas dele deles depois do dos e ela elas ele eles em entre era eram essa essas esse esses esta estamos estas estava estavam este esteja estejam estejamos estes esteve estive estivemos estiver estivera estiveram estiverem estivermos estivesse estivessem estivéramos estivéssemos estou está estávamos estão eu foi fomos for fora foram forem formos fosse fossem fui fôramos fôssemos haja hajam hajamos havemos hei houve houvemos houver houvera houveram houverei houverem houveremos houveria houveriam houvermos houverá houverão houveríamos houvesse houvessem houvéramos houvéssemos há hão isso isto já lhe lhes mais mas me mesmo meu meus minha minhas muito na nas nem no nos nossa nossas nosso nossos num numa não nós o os ou para pela pelas pelo pelos por qual quando que quem se seja sejam sejamos sem serei seremos seria seriam será serão seríamos seu seus somos sou sua suas são só também te tem temos tenha tenham tenhamos tenho terei teremos teria teriam terá terão teríamos teu teus teve tinha tinham tive tivemos tiver tivera tiveram tiverem tivermos tivesse tivessem tivéramos tivéssemos tu tua tuas tém tínhamos um uma você vocês vos à às éramos".split(" ")),e.Pipeline.registerFunction(e.pt.stopWordFilter,"stopWordFilter-pt")}}); \ No newline at end of 
file diff --git a/assets/javascripts/lunr/min/lunr.ro.min.js b/assets/javascripts/lunr/min/lunr.ro.min.js new file mode 100644 index 00000000..72771401 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ro.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Romanian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,i){"function"==typeof define&&define.amd?define(i):"object"==typeof exports?module.exports=i():i()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ro=function(){this.pipeline.reset(),this.pipeline.add(e.ro.trimmer,e.ro.stopWordFilter,e.ro.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ro.stemmer))},e.ro.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.ro.trimmer=e.trimmerSupport.generateTrimmer(e.ro.wordCharacters),e.Pipeline.registerFunction(e.ro.trimmer,"trimmer-ro"),e.ro.stemmer=function(){var i=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,n=new function(){function e(e,i){L.eq_s(1,e)&&(L.ket=L.cursor,L.in_grouping(W,97,259)&&L.slice_from(i))}function n(){for(var i,r;;){if(i=L.cursor,L.in_grouping(W,97,259)&&(r=L.cursor,L.bra=r,e("u","U"),L.cursor=r,e("i","I")),L.cursor=i,L.cursor>=L.limit)break;L.cursor++}}function t(){if(L.out_grouping(W,97,259)){for(;!L.in_grouping(W,97,259);){if(L.cursor>=L.limit)return!0;L.cursor++}return!1}return!0}function a(){if(L.in_grouping(W,97,259))for(;!L.out_grouping(W,97,259);){if(L.cursor>=L.limit)return!0;L.cursor++}return!1}function o(){var e,i,r=L.cursor;if(L.in_grouping(W,97,259)){if(e=L.cursor,!t())return void(h=L.cursor);if(L.cursor=e,!a())return void(h=L.cursor)}L.cursor=r,L.out_grouping(W,97,259)&&(i=L.cursor,t()&&(L.cursor=i,L.in_grouping(W,97,259)&&L.cursor=L.limit)return!1;L.cursor++}for(;!L.out_grouping(W,97,259);){if(L.cursor>=L.limit)return!1;L.cursor++}return!0}function c(){var e=L.cursor;h=L.limit,k=h,g=h,o(),L.cursor=e,u()&&(k=L.cursor,u()&&(g=L.cursor))}function s(){for(var e;;){if(L.bra=L.cursor,e=L.find_among(z,3))switch(L.ket=L.cursor,e){case 1:L.slice_from("i");continue;case 2:L.slice_from("u");continue;case 3:if(L.cursor>=L.limit)break;L.cursor++;continue}break}}function w(){return h<=L.cursor}function m(){return k<=L.cursor}function l(){return g<=L.cursor}function f(){var e,i;if(L.ket=L.cursor,(e=L.find_among_b(C,16))&&(L.bra=L.cursor,m()))switch(e){case 1:L.slice_del();break;case 2:L.slice_from("a");break;case 3:L.slice_from("e");break;case 4:L.slice_from("i");break;case 5:i=L.limit-L.cursor,L.eq_s_b(2,"ab")||(L.cursor=L.limit-i,L.slice_from("i"));break;case 6:L.slice_from("at");break;case 7:L.slice_from("aţi")}}function p(){var e,i=L.limit-L.cursor;if(L.ket=L.cursor,(e=L.find_among_b(P,46))&&(L.bra=L.cursor,m())){switch(e){case 1:L.slice_from("abil");break;case 2:L.slice_from("ibil");break;case 3:L.slice_from("iv");break;case 4:L.slice_from("ic");break;case 5:L.slice_from("at");break;case 6:L.slice_from("it")}return _=!0,L.cursor=L.limit-i,!0}return!1}function d(){var 
e,i;for(_=!1;;)if(i=L.limit-L.cursor,!p()){L.cursor=L.limit-i;break}if(L.ket=L.cursor,(e=L.find_among_b(F,62))&&(L.bra=L.cursor,l())){switch(e){case 1:L.slice_del();break;case 2:L.eq_s_b(1,"ţ")&&(L.bra=L.cursor,L.slice_from("t"));break;case 3:L.slice_from("ist")}_=!0}}function b(){var e,i,r;if(L.cursor>=h){if(i=L.limit_backward,L.limit_backward=h,L.ket=L.cursor,e=L.find_among_b(q,94))switch(L.bra=L.cursor,e){case 1:if(r=L.limit-L.cursor,!L.out_grouping_b(W,97,259)&&(L.cursor=L.limit-r,!L.eq_s_b(1,"u")))break;case 2:L.slice_del()}L.limit_backward=i}}function v(){var e;L.ket=L.cursor,(e=L.find_among_b(S,5))&&(L.bra=L.cursor,w()&&1==e&&L.slice_del())}var _,g,k,h,z=[new i("",-1,3),new i("I",0,1),new i("U",0,2)],C=[new i("ea",-1,3),new i("aţia",-1,7),new i("aua",-1,2),new i("iua",-1,4),new i("aţie",-1,7),new i("ele",-1,3),new i("ile",-1,5),new i("iile",6,4),new i("iei",-1,4),new i("atei",-1,6),new i("ii",-1,4),new i("ului",-1,1),new i("ul",-1,1),new i("elor",-1,3),new i("ilor",-1,4),new i("iilor",14,4)],P=[new i("icala",-1,4),new i("iciva",-1,4),new i("ativa",-1,5),new i("itiva",-1,6),new i("icale",-1,4),new i("aţiune",-1,5),new i("iţiune",-1,6),new i("atoare",-1,5),new i("itoare",-1,6),new i("ătoare",-1,5),new i("icitate",-1,4),new i("abilitate",-1,1),new i("ibilitate",-1,2),new i("ivitate",-1,3),new i("icive",-1,4),new i("ative",-1,5),new i("itive",-1,6),new i("icali",-1,4),new i("atori",-1,5),new i("icatori",18,4),new i("itori",-1,6),new i("ători",-1,5),new i("icitati",-1,4),new i("abilitati",-1,1),new i("ivitati",-1,3),new i("icivi",-1,4),new i("ativi",-1,5),new i("itivi",-1,6),new i("icităi",-1,4),new i("abilităi",-1,1),new i("ivităi",-1,3),new i("icităţi",-1,4),new i("abilităţi",-1,1),new i("ivităţi",-1,3),new i("ical",-1,4),new i("ator",-1,5),new i("icator",35,4),new i("itor",-1,6),new i("ător",-1,5),new i("iciv",-1,4),new i("ativ",-1,5),new i("itiv",-1,6),new i("icală",-1,4),new i("icivă",-1,4),new i("ativă",-1,5),new i("itivă",-1,6)],F=[new i("ica",-1,1),new i("abila",-1,1),new i("ibila",-1,1),new i("oasa",-1,1),new i("ata",-1,1),new i("ita",-1,1),new i("anta",-1,1),new i("ista",-1,3),new i("uta",-1,1),new i("iva",-1,1),new i("ic",-1,1),new i("ice",-1,1),new i("abile",-1,1),new i("ibile",-1,1),new i("isme",-1,3),new i("iune",-1,2),new i("oase",-1,1),new i("ate",-1,1),new i("itate",17,1),new i("ite",-1,1),new i("ante",-1,1),new i("iste",-1,3),new i("ute",-1,1),new i("ive",-1,1),new i("ici",-1,1),new i("abili",-1,1),new i("ibili",-1,1),new i("iuni",-1,2),new i("atori",-1,1),new i("osi",-1,1),new i("ati",-1,1),new i("itati",30,1),new i("iti",-1,1),new i("anti",-1,1),new i("isti",-1,3),new i("uti",-1,1),new i("işti",-1,3),new i("ivi",-1,1),new i("ităi",-1,1),new i("oşi",-1,1),new i("ităţi",-1,1),new i("abil",-1,1),new i("ibil",-1,1),new i("ism",-1,3),new i("ator",-1,1),new i("os",-1,1),new i("at",-1,1),new i("it",-1,1),new i("ant",-1,1),new i("ist",-1,3),new i("ut",-1,1),new i("iv",-1,1),new i("ică",-1,1),new i("abilă",-1,1),new i("ibilă",-1,1),new i("oasă",-1,1),new i("ată",-1,1),new i("ită",-1,1),new i("antă",-1,1),new i("istă",-1,3),new i("ută",-1,1),new i("ivă",-1,1)],q=[new i("ea",-1,1),new i("ia",-1,1),new i("esc",-1,1),new i("ăsc",-1,1),new i("ind",-1,1),new i("ând",-1,1),new i("are",-1,1),new i("ere",-1,1),new i("ire",-1,1),new i("âre",-1,1),new i("se",-1,2),new i("ase",10,1),new i("sese",10,2),new i("ise",10,1),new i("use",10,1),new i("âse",10,1),new i("eşte",-1,1),new i("ăşte",-1,1),new i("eze",-1,1),new i("ai",-1,1),new i("eai",19,1),new i("iai",19,1),new i("sei",-1,2),new 
i("eşti",-1,1),new i("ăşti",-1,1),new i("ui",-1,1),new i("ezi",-1,1),new i("âi",-1,1),new i("aşi",-1,1),new i("seşi",-1,2),new i("aseşi",29,1),new i("seseşi",29,2),new i("iseşi",29,1),new i("useşi",29,1),new i("âseşi",29,1),new i("işi",-1,1),new i("uşi",-1,1),new i("âşi",-1,1),new i("aţi",-1,2),new i("eaţi",38,1),new i("iaţi",38,1),new i("eţi",-1,2),new i("iţi",-1,2),new i("âţi",-1,2),new i("arăţi",-1,1),new i("serăţi",-1,2),new i("aserăţi",45,1),new i("seserăţi",45,2),new i("iserăţi",45,1),new i("userăţi",45,1),new i("âserăţi",45,1),new i("irăţi",-1,1),new i("urăţi",-1,1),new i("ârăţi",-1,1),new i("am",-1,1),new i("eam",54,1),new i("iam",54,1),new i("em",-1,2),new i("asem",57,1),new i("sesem",57,2),new i("isem",57,1),new i("usem",57,1),new i("âsem",57,1),new i("im",-1,2),new i("âm",-1,2),new i("ăm",-1,2),new i("arăm",65,1),new i("serăm",65,2),new i("aserăm",67,1),new i("seserăm",67,2),new i("iserăm",67,1),new i("userăm",67,1),new i("âserăm",67,1),new i("irăm",65,1),new i("urăm",65,1),new i("ârăm",65,1),new i("au",-1,1),new i("eau",76,1),new i("iau",76,1),new i("indu",-1,1),new i("ându",-1,1),new i("ez",-1,1),new i("ească",-1,1),new i("ară",-1,1),new i("seră",-1,2),new i("aseră",84,1),new i("seseră",84,2),new i("iseră",84,1),new i("useră",84,1),new i("âseră",84,1),new i("iră",-1,1),new i("ură",-1,1),new i("âră",-1,1),new i("ează",-1,1)],S=[new i("a",-1,1),new i("e",-1,1),new i("ie",1,1),new i("i",-1,1),new i("ă",-1,1)],W=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,2,32,0,0,4],L=new r;this.setCurrent=function(e){L.setCurrent(e)},this.getCurrent=function(){return L.getCurrent()},this.stem=function(){var e=L.cursor;return n(),L.cursor=e,c(),L.limit_backward=e,L.cursor=L.limit,f(),L.cursor=L.limit,d(),L.cursor=L.limit,_||(L.cursor=L.limit,b(),L.cursor=L.limit),v(),L.cursor=L.limit_backward,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.ro.stemmer,"stemmer-ro"),e.ro.stopWordFilter=e.generateStopWordFilter("acea aceasta această aceea acei aceia acel acela acele acelea acest acesta aceste acestea aceşti aceştia acolo acord acum ai aia aibă aici al ale alea altceva altcineva am ar are asemenea asta astea astăzi asupra au avea avem aveţi azi aş aşadar aţi bine bucur bună ca care caut ce cel ceva chiar cinci cine cineva contra cu cum cumva curând curînd când cât câte câtva câţi cînd cît cîte cîtva cîţi că căci cărei căror cărui către da dacă dar datorită dată dau de deci deja deoarece departe deşi din dinaintea dintr- dintre doi doilea două drept după dă ea ei el ele eram este eu eşti face fata fi fie fiecare fii fim fiu fiţi frumos fără graţie halbă iar ieri la le li lor lui lângă lîngă mai mea mei mele mereu meu mi mie mine mult multă mulţi mulţumesc mâine mîine mă ne nevoie nici nicăieri nimeni nimeri nimic nişte noastre noastră noi noroc nostru nouă noştri nu opt ori oricare orice oricine oricum oricând oricât oricînd oricît oriunde patra patru patrulea pe pentru peste pic poate pot prea prima primul prin puţin puţina puţină până pînă rog sa sale sau se spate spre sub sunt suntem sunteţi sută sînt sîntem sînteţi să săi său ta tale te timp tine toate toată tot totuşi toţi trei treia treilea tu tăi tău un una unde undeva unei uneia unele uneori unii unor unora unu unui unuia unul vi voastre voastră voi vostru vouă voştri vreme vreo vreun vă zece zero zi zice îi îl îmi împotriva în înainte înaintea încotro încât încît între întrucât întrucît îţi 
ăla ălea ăsta ăstea ăştia şapte şase şi ştiu ţi ţie".split(" ")),e.Pipeline.registerFunction(e.ro.stopWordFilter,"stopWordFilter-ro")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ru.min.js b/assets/javascripts/lunr/min/lunr.ru.min.js new file mode 100644 index 00000000..186cc485 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ru.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Russian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,n){"function"==typeof define&&define.amd?define(n):"object"==typeof exports?module.exports=n():n()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ru=function(){this.pipeline.reset(),this.pipeline.add(e.ru.trimmer,e.ru.stopWordFilter,e.ru.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ru.stemmer))},e.ru.wordCharacters="Ѐ-҄҇-ԯᴫᵸⷠ-ⷿꙀ-ꚟ︮︯",e.ru.trimmer=e.trimmerSupport.generateTrimmer(e.ru.wordCharacters),e.Pipeline.registerFunction(e.ru.trimmer,"trimmer-ru"),e.ru.stemmer=function(){var n=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,t=new function(){function e(){for(;!W.in_grouping(S,1072,1103);){if(W.cursor>=W.limit)return!1;W.cursor++}return!0}function t(){for(;!W.out_grouping(S,1072,1103);){if(W.cursor>=W.limit)return!1;W.cursor++}return!0}function w(){b=W.limit,_=b,e()&&(b=W.cursor,t()&&e()&&t()&&(_=W.cursor))}function i(){return _<=W.cursor}function u(e,n){var r,t;if(W.ket=W.cursor,r=W.find_among_b(e,n)){switch(W.bra=W.cursor,r){case 1:if(t=W.limit-W.cursor,!W.eq_s_b(1,"а")&&(W.cursor=W.limit-t,!W.eq_s_b(1,"я")))return!1;case 2:W.slice_del()}return!0}return!1}function o(){return u(h,9)}function s(e,n){var r;return W.ket=W.cursor,!!(r=W.find_among_b(e,n))&&(W.bra=W.cursor,1==r&&W.slice_del(),!0)}function c(){return s(g,26)}function m(){return!!c()&&(u(C,8),!0)}function f(){return s(k,2)}function l(){return u(P,46)}function a(){s(v,36)}function p(){var e;W.ket=W.cursor,(e=W.find_among_b(F,2))&&(W.bra=W.cursor,i()&&1==e&&W.slice_del())}function d(){var e;if(W.ket=W.cursor,e=W.find_among_b(q,4))switch(W.bra=W.cursor,e){case 1:if(W.slice_del(),W.ket=W.cursor,!W.eq_s_b(1,"н"))break;W.bra=W.cursor;case 2:if(!W.eq_s_b(1,"н"))break;case 3:W.slice_del()}}var _,b,h=[new n("в",-1,1),new n("ив",0,2),new n("ыв",0,2),new n("вши",-1,1),new n("ивши",3,2),new n("ывши",3,2),new n("вшись",-1,1),new n("ившись",6,2),new n("ывшись",6,2)],g=[new n("ее",-1,1),new n("ие",-1,1),new n("ое",-1,1),new n("ые",-1,1),new n("ими",-1,1),new n("ыми",-1,1),new n("ей",-1,1),new n("ий",-1,1),new n("ой",-1,1),new n("ый",-1,1),new n("ем",-1,1),new n("им",-1,1),new n("ом",-1,1),new n("ым",-1,1),new n("его",-1,1),new n("ого",-1,1),new n("ему",-1,1),new n("ому",-1,1),new n("их",-1,1),new n("ых",-1,1),new n("ею",-1,1),new n("ою",-1,1),new n("ую",-1,1),new n("юю",-1,1),new n("ая",-1,1),new n("яя",-1,1)],C=[new n("ем",-1,1),new n("нн",-1,1),new n("вш",-1,1),new n("ивш",2,2),new n("ывш",2,2),new n("щ",-1,1),new n("ющ",5,1),new n("ующ",6,2)],k=[new n("сь",-1,1),new 
n("ся",-1,1)],P=[new n("ла",-1,1),new n("ила",0,2),new n("ыла",0,2),new n("на",-1,1),new n("ена",3,2),new n("ете",-1,1),new n("ите",-1,2),new n("йте",-1,1),new n("ейте",7,2),new n("уйте",7,2),new n("ли",-1,1),new n("или",10,2),new n("ыли",10,2),new n("й",-1,1),new n("ей",13,2),new n("уй",13,2),new n("л",-1,1),new n("ил",16,2),new n("ыл",16,2),new n("ем",-1,1),new n("им",-1,2),new n("ым",-1,2),new n("н",-1,1),new n("ен",22,2),new n("ло",-1,1),new n("ило",24,2),new n("ыло",24,2),new n("но",-1,1),new n("ено",27,2),new n("нно",27,1),new n("ет",-1,1),new n("ует",30,2),new n("ит",-1,2),new n("ыт",-1,2),new n("ют",-1,1),new n("уют",34,2),new n("ят",-1,2),new n("ны",-1,1),new n("ены",37,2),new n("ть",-1,1),new n("ить",39,2),new n("ыть",39,2),new n("ешь",-1,1),new n("ишь",-1,2),new n("ю",-1,2),new n("ую",44,2)],v=[new n("а",-1,1),new n("ев",-1,1),new n("ов",-1,1),new n("е",-1,1),new n("ие",3,1),new n("ье",3,1),new n("и",-1,1),new n("еи",6,1),new n("ии",6,1),new n("ами",6,1),new n("ями",6,1),new n("иями",10,1),new n("й",-1,1),new n("ей",12,1),new n("ией",13,1),new n("ий",12,1),new n("ой",12,1),new n("ам",-1,1),new n("ем",-1,1),new n("ием",18,1),new n("ом",-1,1),new n("ям",-1,1),new n("иям",21,1),new n("о",-1,1),new n("у",-1,1),new n("ах",-1,1),new n("ях",-1,1),new n("иях",26,1),new n("ы",-1,1),new n("ь",-1,1),new n("ю",-1,1),new n("ию",30,1),new n("ью",30,1),new n("я",-1,1),new n("ия",33,1),new n("ья",33,1)],F=[new n("ост",-1,1),new n("ость",-1,1)],q=[new n("ейше",-1,1),new n("н",-1,2),new n("ейш",-1,1),new n("ь",-1,3)],S=[33,65,8,232],W=new r;this.setCurrent=function(e){W.setCurrent(e)},this.getCurrent=function(){return W.getCurrent()},this.stem=function(){return w(),W.cursor=W.limit,!(W.cursor=i&&(e-=i,t[e>>3]&1<<(7&e)))return this.cursor++,!0}return!1},in_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e<=s&&e>=i&&(e-=i,t[e>>3]&1<<(7&e)))return this.cursor--,!0}return!1},out_grouping:function(t,i,s){if(this.cursors||e>3]&1<<(7&e)))return this.cursor++,!0}return!1},out_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e>s||e>3]&1<<(7&e)))return this.cursor--,!0}return!1},eq_s:function(t,i){if(this.limit-this.cursor>1),f=0,l=o0||e==s||c)break;c=!0}}for(;;){var _=t[s];if(o>=_.s_size){if(this.cursor=n+_.s_size,!_.method)return _.result;var b=_.method();if(this.cursor=n+_.s_size,b)return _.result}if((s=_.substring_i)<0)return 0}},find_among_b:function(t,i){for(var s=0,e=i,n=this.cursor,u=this.limit_backward,o=0,h=0,c=!1;;){for(var a=s+(e-s>>1),f=0,l=o=0;m--){if(n-l==u){f=-1;break}if(f=r.charCodeAt(n-1-l)-_.s[m])break;l++}if(f<0?(e=a,h=l):(s=a,o=l),e-s<=1){if(s>0||e==s||c)break;c=!0}}for(;;){var _=t[s];if(o>=_.s_size){if(this.cursor=n-_.s_size,!_.method)return _.result;var b=_.method();if(this.cursor=n-_.s_size,b)return _.result}if((s=_.substring_i)<0)return 0}},replace_s:function(t,i,s){var e=s.length-(i-t),n=r.substring(0,t),u=r.substring(i);return r=n+s+u,this.limit+=e,this.cursor>=i?this.cursor+=e:this.cursor>t&&(this.cursor=t),e},slice_check:function(){if(this.bra<0||this.bra>this.ket||this.ket>this.limit||this.limit>r.length)throw"faulty slice operation"},slice_from:function(r){this.slice_check(),this.replace_s(this.bra,this.ket,r)},slice_del:function(){this.slice_from("")},insert:function(r,t,i){var s=this.replace_s(r,t,i);r<=this.bra&&(this.bra+=s),r<=this.ket&&(this.ket+=s)},slice_to:function(){return this.slice_check(),r.substring(this.bra,this.ket)},eq_v_b:function(r){return 
this.eq_s_b(r.length,r)}}}},r.trimmerSupport={generateTrimmer:function(r){var t=new RegExp("^[^"+r+"]+"),i=new RegExp("[^"+r+"]+$");return function(r){return"function"==typeof r.update?r.update(function(r){return r.replace(t,"").replace(i,"")}):r.replace(t,"").replace(i,"")}}}}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.sv.min.js b/assets/javascripts/lunr/min/lunr.sv.min.js new file mode 100644 index 00000000..3e5eb640 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.sv.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Swedish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.sv=function(){this.pipeline.reset(),this.pipeline.add(e.sv.trimmer,e.sv.stopWordFilter,e.sv.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.sv.stemmer))},e.sv.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.sv.trimmer=e.trimmerSupport.generateTrimmer(e.sv.wordCharacters),e.Pipeline.registerFunction(e.sv.trimmer,"trimmer-sv"),e.sv.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,t=new function(){function e(){var e,r=w.cursor+3;if(o=w.limit,0<=r||r<=w.limit){for(a=r;;){if(e=w.cursor,w.in_grouping(l,97,246)){w.cursor=e;break}if(w.cursor=e,w.cursor>=w.limit)return;w.cursor++}for(;!w.out_grouping(l,97,246);){if(w.cursor>=w.limit)return;w.cursor++}o=w.cursor,o=o&&(w.limit_backward=o,w.cursor=w.limit,w.ket=w.cursor,e=w.find_among_b(u,37),w.limit_backward=r,e))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:w.in_grouping_b(d,98,121)&&w.slice_del()}}function i(){var e=w.limit_backward;w.cursor>=o&&(w.limit_backward=o,w.cursor=w.limit,w.find_among_b(c,7)&&(w.cursor=w.limit,w.ket=w.cursor,w.cursor>w.limit_backward&&(w.bra=--w.cursor,w.slice_del())),w.limit_backward=e)}function s(){var e,r;if(w.cursor>=o){if(r=w.limit_backward,w.limit_backward=o,w.cursor=w.limit,w.ket=w.cursor,e=w.find_among_b(m,5))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:w.slice_from("lös");break;case 3:w.slice_from("full")}w.limit_backward=r}}var a,o,u=[new r("a",-1,1),new r("arna",0,1),new r("erna",0,1),new r("heterna",2,1),new r("orna",0,1),new r("ad",-1,1),new r("e",-1,1),new r("ade",6,1),new r("ande",6,1),new r("arne",6,1),new r("are",6,1),new r("aste",6,1),new r("en",-1,1),new r("anden",12,1),new r("aren",12,1),new r("heten",12,1),new r("ern",-1,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",18,1),new r("or",-1,1),new r("s",-1,2),new r("as",21,1),new r("arnas",22,1),new r("ernas",22,1),new r("ornas",22,1),new r("es",21,1),new r("ades",26,1),new r("andes",26,1),new r("ens",21,1),new r("arens",29,1),new r("hetens",29,1),new r("erns",21,1),new r("at",-1,1),new r("andet",-1,1),new r("het",-1,1),new r("ast",-1,1)],c=[new r("dd",-1,-1),new r("gd",-1,-1),new r("nn",-1,-1),new 
r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1),new r("tt",-1,-1)],m=[new r("ig",-1,1),new r("lig",0,1),new r("els",-1,1),new r("fullt",-1,3),new r("löst",-1,2)],l=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,24,0,32],d=[119,127,149],w=new n;this.setCurrent=function(e){w.setCurrent(e)},this.getCurrent=function(){return w.getCurrent()},this.stem=function(){var r=w.cursor;return e(),w.limit_backward=r,w.cursor=w.limit,t(),w.cursor=w.limit,i(),w.cursor=w.limit,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return t.setCurrent(e),t.stem(),t.getCurrent()}):(t.setCurrent(e),t.stem(),t.getCurrent())}}(),e.Pipeline.registerFunction(e.sv.stemmer,"stemmer-sv"),e.sv.stopWordFilter=e.generateStopWordFilter("alla allt att av blev bli blir blivit de dem den denna deras dess dessa det detta dig din dina ditt du där då efter ej eller en er era ert ett från för ha hade han hans har henne hennes hon honom hur här i icke ingen inom inte jag ju kan kunde man med mellan men mig min mina mitt mot mycket ni nu när någon något några och om oss på samma sedan sig sin sina sitta själv skulle som så sådan sådana sådant till under upp ut utan vad var vara varför varit varje vars vart vem vi vid vilka vilkas vilken vilket vår våra vårt än är åt över".split(" ")),e.Pipeline.registerFunction(e.sv.stopWordFilter,"stopWordFilter-sv")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.th.min.js b/assets/javascripts/lunr/min/lunr.th.min.js new file mode 100644 index 00000000..dee3aac6 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.th.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r="2"==e.version[0];e.th=function(){this.pipeline.reset(),this.pipeline.add(e.th.trimmer),r?this.tokenizer=e.th.tokenizer:(e.tokenizer&&(e.tokenizer=e.th.tokenizer),this.tokenizerFn&&(this.tokenizerFn=e.th.tokenizer))},e.th.wordCharacters="[฀-๿]",e.th.trimmer=e.trimmerSupport.generateTrimmer(e.th.wordCharacters),e.Pipeline.registerFunction(e.th.trimmer,"trimmer-th");var t=e.wordcut;t.init(),e.th.tokenizer=function(i){if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(t){return r?new e.Token(t):t});var n=i.toString().replace(/^\s+/,"");return t.cut(n).split("|")}}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.tr.min.js b/assets/javascripts/lunr/min/lunr.tr.min.js new file mode 100644 index 00000000..563f6ec1 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.tr.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Turkish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(r,i){"function"==typeof define&&define.amd?define(i):"object"==typeof exports?module.exports=i():i()(r.lunr)}(this,function(){return function(r){if(void 0===r)throw new Error("Lunr is not present. 
Please include / require Lunr before this script.");if(void 0===r.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");r.tr=function(){this.pipeline.reset(),this.pipeline.add(r.tr.trimmer,r.tr.stopWordFilter,r.tr.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(r.tr.stemmer))},r.tr.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",r.tr.trimmer=r.trimmerSupport.generateTrimmer(r.tr.wordCharacters),r.Pipeline.registerFunction(r.tr.trimmer,"trimmer-tr"),r.tr.stemmer=function(){var i=r.stemmerSupport.Among,e=r.stemmerSupport.SnowballProgram,n=new function(){function r(r,i,e){for(;;){var n=Dr.limit-Dr.cursor;if(Dr.in_grouping_b(r,i,e)){Dr.cursor=Dr.limit-n;break}if(Dr.cursor=Dr.limit-n,Dr.cursor<=Dr.limit_backward)return!1;Dr.cursor--}return!0}function n(){var i,e;i=Dr.limit-Dr.cursor,r(Wr,97,305);for(var n=0;nDr.limit_backward&&(Dr.cursor--,e=Dr.limit-Dr.cursor,i()))?(Dr.cursor=Dr.limit-e,!0):(Dr.cursor=Dr.limit-n,r()?(Dr.cursor=Dr.limit-n,!1):(Dr.cursor=Dr.limit-n,!(Dr.cursor<=Dr.limit_backward)&&(Dr.cursor--,!!i()&&(Dr.cursor=Dr.limit-n,!0))))}function u(r){return t(r,function(){return Dr.in_grouping_b(Wr,97,305)})}function o(){return u(function(){return Dr.eq_s_b(1,"n")})}function s(){return u(function(){return Dr.eq_s_b(1,"s")})}function c(){return u(function(){return Dr.eq_s_b(1,"y")})}function l(){return t(function(){return Dr.in_grouping_b(Lr,105,305)},function(){return Dr.out_grouping_b(Wr,97,305)})}function a(){return Dr.find_among_b(ur,10)&&l()}function m(){return n()&&Dr.in_grouping_b(Lr,105,305)&&s()}function d(){return Dr.find_among_b(or,2)}function f(){return n()&&Dr.in_grouping_b(Lr,105,305)&&c()}function b(){return n()&&Dr.find_among_b(sr,4)}function w(){return n()&&Dr.find_among_b(cr,4)&&o()}function _(){return n()&&Dr.find_among_b(lr,2)&&c()}function k(){return n()&&Dr.find_among_b(ar,2)}function p(){return n()&&Dr.find_among_b(mr,4)}function g(){return n()&&Dr.find_among_b(dr,2)}function y(){return n()&&Dr.find_among_b(fr,4)}function z(){return n()&&Dr.find_among_b(br,2)}function v(){return n()&&Dr.find_among_b(wr,2)&&c()}function h(){return Dr.eq_s_b(2,"ki")}function q(){return n()&&Dr.find_among_b(_r,2)&&o()}function C(){return n()&&Dr.find_among_b(kr,4)&&c()}function P(){return n()&&Dr.find_among_b(pr,4)}function F(){return n()&&Dr.find_among_b(gr,4)&&c()}function S(){return Dr.find_among_b(yr,4)}function W(){return n()&&Dr.find_among_b(zr,2)}function L(){return n()&&Dr.find_among_b(vr,4)}function x(){return n()&&Dr.find_among_b(hr,8)}function A(){return Dr.find_among_b(qr,2)}function E(){return n()&&Dr.find_among_b(Cr,32)&&c()}function j(){return Dr.find_among_b(Pr,8)&&c()}function T(){return n()&&Dr.find_among_b(Fr,4)&&c()}function Z(){return Dr.eq_s_b(3,"ken")&&c()}function B(){var r=Dr.limit-Dr.cursor;return!(T()||(Dr.cursor=Dr.limit-r,E()||(Dr.cursor=Dr.limit-r,j()||(Dr.cursor=Dr.limit-r,Z()))))}function D(){if(A()){var r=Dr.limit-Dr.cursor;if(S()||(Dr.cursor=Dr.limit-r,W()||(Dr.cursor=Dr.limit-r,C()||(Dr.cursor=Dr.limit-r,P()||(Dr.cursor=Dr.limit-r,F()||(Dr.cursor=Dr.limit-r))))),T())return!1}return!0}function G(){if(W()){Dr.bra=Dr.cursor,Dr.slice_del();var r=Dr.limit-Dr.cursor;return Dr.ket=Dr.cursor,x()||(Dr.cursor=Dr.limit-r,E()||(Dr.cursor=Dr.limit-r,j()||(Dr.cursor=Dr.limit-r,T()||(Dr.cursor=Dr.limit-r)))),nr=!1,!1}return!0}function H(){if(!L())return!0;var 
r=Dr.limit-Dr.cursor;return!E()&&(Dr.cursor=Dr.limit-r,!j())}function I(){var r,i=Dr.limit-Dr.cursor;return!(S()||(Dr.cursor=Dr.limit-i,F()||(Dr.cursor=Dr.limit-i,P()||(Dr.cursor=Dr.limit-i,C()))))||(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,T()||(Dr.cursor=Dr.limit-r),!1)}function J(){var r,i=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,nr=!0,B()&&(Dr.cursor=Dr.limit-i,D()&&(Dr.cursor=Dr.limit-i,G()&&(Dr.cursor=Dr.limit-i,H()&&(Dr.cursor=Dr.limit-i,I()))))){if(Dr.cursor=Dr.limit-i,!x())return;Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,r=Dr.limit-Dr.cursor,S()||(Dr.cursor=Dr.limit-r,W()||(Dr.cursor=Dr.limit-r,C()||(Dr.cursor=Dr.limit-r,P()||(Dr.cursor=Dr.limit-r,F()||(Dr.cursor=Dr.limit-r))))),T()||(Dr.cursor=Dr.limit-r)}Dr.bra=Dr.cursor,Dr.slice_del()}function K(){var r,i,e,n;if(Dr.ket=Dr.cursor,h()){if(r=Dr.limit-Dr.cursor,p())return Dr.bra=Dr.cursor,Dr.slice_del(),i=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,W()?(Dr.bra=Dr.cursor,Dr.slice_del(),K()):(Dr.cursor=Dr.limit-i,a()&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()))),!0;if(Dr.cursor=Dr.limit-r,w()){if(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,e=Dr.limit-Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else{if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,!a()&&(Dr.cursor=Dr.limit-e,!m()&&(Dr.cursor=Dr.limit-e,!K())))return!0;Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())}return!0}if(Dr.cursor=Dr.limit-r,g()){if(n=Dr.limit-Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else if(Dr.cursor=Dr.limit-n,m())Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K());else if(Dr.cursor=Dr.limit-n,!K())return!1;return!0}}return!1}function M(r){if(Dr.ket=Dr.cursor,!g()&&(Dr.cursor=Dr.limit-r,!k()))return!1;var i=Dr.limit-Dr.cursor;if(d())Dr.bra=Dr.cursor,Dr.slice_del();else if(Dr.cursor=Dr.limit-i,m())Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K());else if(Dr.cursor=Dr.limit-i,!K())return!1;return!0}function N(r){if(Dr.ket=Dr.cursor,!z()&&(Dr.cursor=Dr.limit-r,!b()))return!1;var i=Dr.limit-Dr.cursor;return!(!m()&&(Dr.cursor=Dr.limit-i,!d()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()),!0)}function O(){var r,i=Dr.limit-Dr.cursor;return Dr.ket=Dr.cursor,!(!w()&&(Dr.cursor=Dr.limit-i,!v()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,!(!W()||(Dr.bra=Dr.cursor,Dr.slice_del(),!K()))||(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!(a()||(Dr.cursor=Dr.limit-r,m()||(Dr.cursor=Dr.limit-r,K())))||(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()),!0)))}function Q(){var r,i,e=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,!p()&&(Dr.cursor=Dr.limit-e,!f()&&(Dr.cursor=Dr.limit-e,!_())))return!1;if(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,r=Dr.limit-Dr.cursor,a())Dr.bra=Dr.cursor,Dr.slice_del(),i=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,W()||(Dr.cursor=Dr.limit-i);else if(Dr.cursor=Dr.limit-r,!W())return!0;return Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,K(),!0}function R(){var r,i,e=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,W())return Dr.bra=Dr.cursor,Dr.slice_del(),void 
K();if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,q())if(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else{if(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!a()&&(Dr.cursor=Dr.limit-r,!m())){if(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!W())return;if(Dr.bra=Dr.cursor,Dr.slice_del(),!K())return}Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())}else if(Dr.cursor=Dr.limit-e,!M(e)&&(Dr.cursor=Dr.limit-e,!N(e))){if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,y())return Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,i=Dr.limit-Dr.cursor,void(a()?(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())):(Dr.cursor=Dr.limit-i,W()?(Dr.bra=Dr.cursor,Dr.slice_del(),K()):(Dr.cursor=Dr.limit-i,K())));if(Dr.cursor=Dr.limit-e,!O()){if(Dr.cursor=Dr.limit-e,d())return Dr.bra=Dr.cursor,void Dr.slice_del();Dr.cursor=Dr.limit-e,K()||(Dr.cursor=Dr.limit-e,Q()||(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,(a()||(Dr.cursor=Dr.limit-e,m()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()))))}}}function U(){var r;if(Dr.ket=Dr.cursor,r=Dr.find_among_b(Sr,4))switch(Dr.bra=Dr.cursor,r){case 1:Dr.slice_from("p");break;case 2:Dr.slice_from("ç");break;case 3:Dr.slice_from("t");break;case 4:Dr.slice_from("k")}}function V(){for(;;){var r=Dr.limit-Dr.cursor;if(Dr.in_grouping_b(Wr,97,305)){Dr.cursor=Dr.limit-r;break}if(Dr.cursor=Dr.limit-r,Dr.cursor<=Dr.limit_backward)return!1;Dr.cursor--}return!0}function X(r,i,e){if(Dr.cursor=Dr.limit-r,V()){var n=Dr.limit-Dr.cursor;if(!Dr.eq_s_b(1,i)&&(Dr.cursor=Dr.limit-n,!Dr.eq_s_b(1,e)))return!0;Dr.cursor=Dr.limit-r;var t=Dr.cursor;return Dr.insert(Dr.cursor,Dr.cursor,e),Dr.cursor=t,!1}return!0}function Y(){var r=Dr.limit-Dr.cursor;(Dr.eq_s_b(1,"d")||(Dr.cursor=Dr.limit-r,Dr.eq_s_b(1,"g")))&&X(r,"a","ı")&&X(r,"e","i")&&X(r,"o","u")&&X(r,"ö","ü")}function $(){for(var r,i=Dr.cursor,e=2;;){for(r=Dr.cursor;!Dr.in_grouping(Wr,97,305);){if(Dr.cursor>=Dr.limit)return Dr.cursor=r,!(e>0)&&(Dr.cursor=i,!0);Dr.cursor++}e--}}function rr(r,i,e){for(;!Dr.eq_s(i,e);){if(Dr.cursor>=Dr.limit)return!0;Dr.cursor++}return(tr=i)!=Dr.limit||(Dr.cursor=r,!1)}function ir(){var r=Dr.cursor;return!rr(r,2,"ad")||(Dr.cursor=r,!rr(r,5,"soyad"))}function er(){var r=Dr.cursor;return!ir()&&(Dr.limit_backward=r,Dr.cursor=Dr.limit,Y(),Dr.cursor=Dr.limit,U(),!0)}var nr,tr,ur=[new i("m",-1,-1),new i("n",-1,-1),new i("miz",-1,-1),new i("niz",-1,-1),new i("muz",-1,-1),new i("nuz",-1,-1),new i("müz",-1,-1),new i("nüz",-1,-1),new i("mız",-1,-1),new i("nız",-1,-1)],or=[new i("leri",-1,-1),new i("ları",-1,-1)],sr=[new i("ni",-1,-1),new i("nu",-1,-1),new i("nü",-1,-1),new i("nı",-1,-1)],cr=[new i("in",-1,-1),new i("un",-1,-1),new i("ün",-1,-1),new i("ın",-1,-1)],lr=[new i("a",-1,-1),new i("e",-1,-1)],ar=[new i("na",-1,-1),new i("ne",-1,-1)],mr=[new i("da",-1,-1),new i("ta",-1,-1),new i("de",-1,-1),new i("te",-1,-1)],dr=[new i("nda",-1,-1),new i("nde",-1,-1)],fr=[new i("dan",-1,-1),new i("tan",-1,-1),new i("den",-1,-1),new i("ten",-1,-1)],br=[new i("ndan",-1,-1),new i("nden",-1,-1)],wr=[new i("la",-1,-1),new i("le",-1,-1)],_r=[new i("ca",-1,-1),new i("ce",-1,-1)],kr=[new i("im",-1,-1),new i("um",-1,-1),new i("üm",-1,-1),new i("ım",-1,-1)],pr=[new i("sin",-1,-1),new i("sun",-1,-1),new i("sün",-1,-1),new i("sın",-1,-1)],gr=[new i("iz",-1,-1),new i("uz",-1,-1),new i("üz",-1,-1),new i("ız",-1,-1)],yr=[new i("siniz",-1,-1),new i("sunuz",-1,-1),new i("sünüz",-1,-1),new 
i("sınız",-1,-1)],zr=[new i("lar",-1,-1),new i("ler",-1,-1)],vr=[new i("niz",-1,-1),new i("nuz",-1,-1),new i("nüz",-1,-1),new i("nız",-1,-1)],hr=[new i("dir",-1,-1),new i("tir",-1,-1),new i("dur",-1,-1),new i("tur",-1,-1),new i("dür",-1,-1),new i("tür",-1,-1),new i("dır",-1,-1),new i("tır",-1,-1)],qr=[new i("casına",-1,-1),new i("cesine",-1,-1)],Cr=[new i("di",-1,-1),new i("ti",-1,-1),new i("dik",-1,-1),new i("tik",-1,-1),new i("duk",-1,-1),new i("tuk",-1,-1),new i("dük",-1,-1),new i("tük",-1,-1),new i("dık",-1,-1),new i("tık",-1,-1),new i("dim",-1,-1),new i("tim",-1,-1),new i("dum",-1,-1),new i("tum",-1,-1),new i("düm",-1,-1),new i("tüm",-1,-1),new i("dım",-1,-1),new i("tım",-1,-1),new i("din",-1,-1),new i("tin",-1,-1),new i("dun",-1,-1),new i("tun",-1,-1),new i("dün",-1,-1),new i("tün",-1,-1),new i("dın",-1,-1),new i("tın",-1,-1),new i("du",-1,-1),new i("tu",-1,-1),new i("dü",-1,-1),new i("tü",-1,-1),new i("dı",-1,-1),new i("tı",-1,-1)],Pr=[new i("sa",-1,-1),new i("se",-1,-1),new i("sak",-1,-1),new i("sek",-1,-1),new i("sam",-1,-1),new i("sem",-1,-1),new i("san",-1,-1),new i("sen",-1,-1)],Fr=[new i("miş",-1,-1),new i("muş",-1,-1),new i("müş",-1,-1),new i("mış",-1,-1)],Sr=[new i("b",-1,1),new i("c",-1,2),new i("d",-1,3),new i("ğ",-1,4)],Wr=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,32,8,0,0,0,0,0,0,1],Lr=[1,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,0,0,0,0,0,0,1],xr=[1,64,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],Ar=[17,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,130],Er=[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],jr=[17],Tr=[65],Zr=[65],Br=[["a",xr,97,305],["e",Ar,101,252],["ı",Er,97,305],["i",jr,101,105],["o",Tr,111,117],["ö",Zr,246,252],["u",Tr,111,117]],Dr=new e;this.setCurrent=function(r){Dr.setCurrent(r)},this.getCurrent=function(){return Dr.getCurrent()},this.stem=function(){return!!($()&&(Dr.limit_backward=Dr.cursor,Dr.cursor=Dr.limit,J(),Dr.cursor=Dr.limit,nr&&(R(),Dr.cursor=Dr.limit_backward,er())))}};return function(r){return"function"==typeof r.update?r.update(function(r){return n.setCurrent(r),n.stem(),n.getCurrent()}):(n.setCurrent(r),n.stem(),n.getCurrent())}}(),r.Pipeline.registerFunction(r.tr.stemmer,"stemmer-tr"),r.tr.stopWordFilter=r.generateStopWordFilter("acaba altmış altı ama ancak arada aslında ayrıca bana bazı belki ben benden beni benim beri beş bile bin bir biri birkaç birkez birçok birşey birşeyi biz bizden bize bizi bizim bu buna bunda bundan bunlar bunları bunların bunu bunun burada böyle böylece da daha dahi de defa değil diye diğer doksan dokuz dolayı dolayısıyla dört edecek eden ederek edilecek ediliyor edilmesi ediyor elli en etmesi etti ettiği ettiğini eğer gibi göre halen hangi hatta hem henüz hep hepsi her herhangi herkesin hiç hiçbir iki ile ilgili ise itibaren itibariyle için işte kadar karşın katrilyon kendi kendilerine kendini kendisi kendisine kendisini kez ki kim kimden kime kimi kimse kırk milyar milyon mu mü mı nasıl ne neden nedenle nerde nerede nereye niye niçin o olan olarak oldu olduklarını olduğu olduğunu olmadı olmadığı olmak olması olmayan olmaz olsa olsun olup olur olursa oluyor on ona ondan onlar onlardan onları onların onu onun otuz oysa pek rağmen sadece sanki sekiz seksen sen senden seni senin siz sizden sizi sizin tarafından trilyon tüm var vardı ve veya ya yani yapacak yapmak yaptı yaptıkları yaptığı yaptığını yapılan yapılması yapıyor yedi yerine yetmiş yine yirmi yoksa yüz zaten çok çünkü öyle üzere üç şey şeyden şeyi şeyler şu şuna şunda şundan şunları şunu şöyle".split(" 
")),r.Pipeline.registerFunction(r.tr.stopWordFilter,"stopWordFilter-tr")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.vi.min.js b/assets/javascripts/lunr/min/lunr.vi.min.js new file mode 100644 index 00000000..22aed28c --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.vi.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.vi=function(){this.pipeline.reset(),this.pipeline.add(e.vi.stopWordFilter,e.vi.trimmer)},e.vi.wordCharacters="[A-Za-ẓ̀͐́͑̉̃̓ÂâÊêÔôĂ-ăĐ-đƠ-ơƯ-ư]",e.vi.trimmer=e.trimmerSupport.generateTrimmer(e.vi.wordCharacters),e.Pipeline.registerFunction(e.vi.trimmer,"trimmer-vi"),e.vi.stopWordFilter=e.generateStopWordFilter("là cái nhưng mà".split(" "))}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.zh.min.js b/assets/javascripts/lunr/min/lunr.zh.min.js new file mode 100644 index 00000000..7727bbe2 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.zh.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r(require("nodejieba")):r()(e.lunr)}(this,function(e){return function(r,t){if(void 0===r)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===r.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var i="2"==r.version[0];r.zh=function(){this.pipeline.reset(),this.pipeline.add(r.zh.trimmer,r.zh.stopWordFilter,r.zh.stemmer),i?this.tokenizer=r.zh.tokenizer:(r.tokenizer&&(r.tokenizer=r.zh.tokenizer),this.tokenizerFn&&(this.tokenizerFn=r.zh.tokenizer))},r.zh.tokenizer=function(n){if(!arguments.length||null==n||void 0==n)return[];if(Array.isArray(n))return n.map(function(e){return i?new r.Token(e.toLowerCase()):e.toLowerCase()});t&&e.load(t);var o=n.toString().trim().toLowerCase(),s=[];e.cut(o,!0).forEach(function(e){s=s.concat(e.split(" "))}),s=s.filter(function(e){return!!e});var u=0;return s.map(function(e,t){if(i){var n=o.indexOf(e,u),s={};return s.position=[n,e.length],s.index=t,u=n,new r.Token(e,s)}return e})},r.zh.wordCharacters="\\w一-龥",r.zh.trimmer=r.trimmerSupport.generateTrimmer(r.zh.wordCharacters),r.Pipeline.registerFunction(r.zh.trimmer,"trimmer-zh"),r.zh.stemmer=function(){return function(e){return e}}(),r.Pipeline.registerFunction(r.zh.stemmer,"stemmer-zh"),r.zh.stopWordFilter=r.generateStopWordFilter("的 一 不 在 人 有 是 为 以 于 上 他 而 后 之 来 及 了 因 下 可 到 由 这 与 也 此 但 并 个 其 已 无 小 我 们 起 最 再 今 去 好 只 又 或 很 亦 某 把 那 你 乃 它 吧 被 比 别 趁 当 从 到 得 打 凡 儿 尔 该 各 给 跟 和 何 还 即 几 既 看 据 距 靠 啦 了 另 么 每 们 嘛 拿 哪 那 您 凭 且 却 让 仍 啥 如 若 使 谁 虽 随 同 所 她 哇 嗡 往 哪 些 向 沿 哟 用 于 咱 则 怎 曾 至 致 着 诸 自".split(" ")),r.Pipeline.registerFunction(r.zh.stopWordFilter,"stopWordFilter-zh")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/tinyseg.js b/assets/javascripts/lunr/tinyseg.js new file mode 100644 index 00000000..167fa6dd --- /dev/null +++ b/assets/javascripts/lunr/tinyseg.js @@ -0,0 +1,206 @@ +/** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ +;(function (root, 
factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. + define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like environments that support module.exports, + * like Node. + */ + module.exports = factory() + } else { + // Browser globals (root is window) + factory()(root.lunr); + } +}(this, function () { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + + return function(lunr) { + // TinySegmenter 0.1 -- Super compact Japanese tokenizer in Javascript + // (c) 2008 Taku Kudo + // TinySegmenter is freely distributable under the terms of a new BSD licence. + // For details, see http://chasen.org/~taku/software/TinySegmenter/LICENCE.txt + + function TinySegmenter() { + var patterns = { + "[一二三四五六七八九十百千万億兆]":"M", + "[一-龠々〆ヵヶ]":"H", + "[ぁ-ん]":"I", + "[ァ-ヴーア-ン゙ー]":"K", + "[a-zA-Za-zA-Z]":"A", + "[0-90-9]":"N" + } + this.chartype_ = []; + for (var i in patterns) { + var regexp = new RegExp(i); + this.chartype_.push([regexp, patterns[i]]); + } + + this.BIAS__ = -332 + this.BC1__ = {"HH":6,"II":2461,"KH":406,"OH":-1378}; + this.BC2__ = {"AA":-3267,"AI":2744,"AN":-878,"HH":-4070,"HM":-1711,"HN":4012,"HO":3761,"IA":1327,"IH":-1184,"II":-1332,"IK":1721,"IO":5492,"KI":3831,"KK":-8741,"MH":-3132,"MK":3334,"OO":-2920}; + this.BC3__ = {"HH":996,"HI":626,"HK":-721,"HN":-1307,"HO":-836,"IH":-301,"KK":2762,"MK":1079,"MM":4034,"OA":-1652,"OH":266}; + this.BP1__ = {"BB":295,"OB":304,"OO":-125,"UB":352}; + this.BP2__ = {"BO":60,"OO":-1762}; + this.BQ1__ = {"BHH":1150,"BHM":1521,"BII":-1158,"BIM":886,"BMH":1208,"BNH":449,"BOH":-91,"BOO":-2597,"OHI":451,"OIH":-296,"OKA":1851,"OKH":-1020,"OKK":904,"OOO":2965}; + this.BQ2__ = {"BHH":118,"BHI":-1159,"BHM":466,"BIH":-919,"BKK":-1720,"BKO":864,"OHH":-1139,"OHM":-181,"OIH":153,"UHI":-1146}; + this.BQ3__ = {"BHH":-792,"BHI":2664,"BII":-299,"BKI":419,"BMH":937,"BMM":8335,"BNN":998,"BOH":775,"OHH":2174,"OHM":439,"OII":280,"OKH":1798,"OKI":-793,"OKO":-2242,"OMH":-2402,"OOO":11699}; + this.BQ4__ = {"BHH":-3895,"BIH":3761,"BII":-4654,"BIK":1348,"BKK":-1806,"BMI":-3385,"BOO":-12396,"OAH":926,"OHH":266,"OHK":-2036,"ONN":-973}; + this.BW1__ = {",と":660,",同":727,"B1あ":1404,"B1同":542,"、と":660,"、同":727,"」と":1682,"あっ":1505,"いう":1743,"いっ":-2055,"いる":672,"うし":-4817,"うん":665,"から":3472,"がら":600,"こう":-790,"こと":2083,"こん":-1262,"さら":-4143,"さん":4573,"した":2641,"して":1104,"すで":-3399,"そこ":1977,"それ":-871,"たち":1122,"ため":601,"った":3463,"つい":-802,"てい":805,"てき":1249,"でき":1127,"です":3445,"では":844,"とい":-4915,"とみ":1922,"どこ":3887,"ない":5713,"なっ":3015,"など":7379,"なん":-1113,"にし":2468,"には":1498,"にも":1671,"に対":-912,"の一":-501,"の中":741,"ませ":2448,"まで":1711,"まま":2600,"まる":-2155,"やむ":-1947,"よっ":-2565,"れた":2369,"れで":-913,"をし":1860,"を見":731,"亡く":-1886,"京都":2558,"取り":-2784,"大き":-2604,"大阪":1497,"平方":-2314,"引き":-1336,"日本":-195,"本当":-2423,"毎日":-2113,"目指":-724,"B1あ":1404,"B1同":542,"」と":1682}; + this.BW2__ = 
{"..":-11822,"11":-669,"――":-5730,"−−":-13175,"いう":-1609,"うか":2490,"かし":-1350,"かも":-602,"から":-7194,"かれ":4612,"がい":853,"がら":-3198,"きた":1941,"くな":-1597,"こと":-8392,"この":-4193,"させ":4533,"され":13168,"さん":-3977,"しい":-1819,"しか":-545,"した":5078,"して":972,"しな":939,"その":-3744,"たい":-1253,"たた":-662,"ただ":-3857,"たち":-786,"たと":1224,"たは":-939,"った":4589,"って":1647,"っと":-2094,"てい":6144,"てき":3640,"てく":2551,"ては":-3110,"ても":-3065,"でい":2666,"でき":-1528,"でし":-3828,"です":-4761,"でも":-4203,"とい":1890,"とこ":-1746,"とと":-2279,"との":720,"とみ":5168,"とも":-3941,"ない":-2488,"なが":-1313,"など":-6509,"なの":2614,"なん":3099,"にお":-1615,"にし":2748,"にな":2454,"によ":-7236,"に対":-14943,"に従":-4688,"に関":-11388,"のか":2093,"ので":-7059,"のに":-6041,"のの":-6125,"はい":1073,"はが":-1033,"はず":-2532,"ばれ":1813,"まし":-1316,"まで":-6621,"まれ":5409,"めて":-3153,"もい":2230,"もの":-10713,"らか":-944,"らし":-1611,"らに":-1897,"りし":651,"りま":1620,"れた":4270,"れて":849,"れば":4114,"ろう":6067,"われ":7901,"を通":-11877,"んだ":728,"んな":-4115,"一人":602,"一方":-1375,"一日":970,"一部":-1051,"上が":-4479,"会社":-1116,"出て":2163,"分の":-7758,"同党":970,"同日":-913,"大阪":-2471,"委員":-1250,"少な":-1050,"年度":-8669,"年間":-1626,"府県":-2363,"手権":-1982,"新聞":-4066,"日新":-722,"日本":-7068,"日米":3372,"曜日":-601,"朝鮮":-2355,"本人":-2697,"東京":-1543,"然と":-1384,"社会":-1276,"立て":-990,"第に":-1612,"米国":-4268,"11":-669}; + this.BW3__ = {"あた":-2194,"あり":719,"ある":3846,"い.":-1185,"い。":-1185,"いい":5308,"いえ":2079,"いく":3029,"いた":2056,"いっ":1883,"いる":5600,"いわ":1527,"うち":1117,"うと":4798,"えと":1454,"か.":2857,"か。":2857,"かけ":-743,"かっ":-4098,"かに":-669,"から":6520,"かり":-2670,"が,":1816,"が、":1816,"がき":-4855,"がけ":-1127,"がっ":-913,"がら":-4977,"がり":-2064,"きた":1645,"けど":1374,"こと":7397,"この":1542,"ころ":-2757,"さい":-714,"さを":976,"し,":1557,"し、":1557,"しい":-3714,"した":3562,"して":1449,"しな":2608,"しま":1200,"す.":-1310,"す。":-1310,"する":6521,"ず,":3426,"ず、":3426,"ずに":841,"そう":428,"た.":8875,"た。":8875,"たい":-594,"たの":812,"たり":-1183,"たる":-853,"だ.":4098,"だ。":4098,"だっ":1004,"った":-4748,"って":300,"てい":6240,"てお":855,"ても":302,"です":1437,"でに":-1482,"では":2295,"とう":-1387,"とし":2266,"との":541,"とも":-3543,"どう":4664,"ない":1796,"なく":-903,"など":2135,"に,":-1021,"に、":-1021,"にし":1771,"にな":1906,"には":2644,"の,":-724,"の、":-724,"の子":-1000,"は,":1337,"は、":1337,"べき":2181,"まし":1113,"ます":6943,"まっ":-1549,"まで":6154,"まれ":-793,"らし":1479,"られ":6820,"るる":3818,"れ,":854,"れ、":854,"れた":1850,"れて":1375,"れば":-3246,"れる":1091,"われ":-605,"んだ":606,"んで":798,"カ月":990,"会議":860,"入り":1232,"大会":2217,"始め":1681,"市":965,"新聞":-5055,"日,":974,"日、":974,"社会":2024,"カ月":990}; + this.TC1__ = {"AAA":1093,"HHH":1029,"HHM":580,"HII":998,"HOH":-390,"HOM":-331,"IHI":1169,"IOH":-142,"IOI":-1015,"IOM":467,"MMH":187,"OOI":-1832}; + this.TC2__ = {"HHO":2088,"HII":-1023,"HMM":-1154,"IHI":-1965,"KKH":703,"OII":-2649}; + this.TC3__ = {"AAA":-294,"HHH":346,"HHI":-341,"HII":-1088,"HIK":731,"HOH":-1486,"IHH":128,"IHI":-3041,"IHO":-1935,"IIH":-825,"IIM":-1035,"IOI":-542,"KHH":-1216,"KKA":491,"KKH":-1217,"KOK":-1009,"MHH":-2694,"MHM":-457,"MHO":123,"MMH":-471,"NNH":-1689,"NNO":662,"OHO":-3393}; + this.TC4__ = {"HHH":-203,"HHI":1344,"HHK":365,"HHM":-122,"HHN":182,"HHO":669,"HIH":804,"HII":679,"HOH":446,"IHH":695,"IHO":-2324,"IIH":321,"III":1497,"IIO":656,"IOO":54,"KAK":4845,"KKA":3386,"KKK":3065,"MHH":-405,"MHI":201,"MMH":-241,"MMM":661,"MOM":841}; + this.TQ1__ = {"BHHH":-227,"BHHI":316,"BHIH":-132,"BIHH":60,"BIII":1595,"BNHH":-744,"BOHH":225,"BOOO":-908,"OAKK":482,"OHHH":281,"OHIH":249,"OIHI":200,"OIIH":-68}; + this.TQ2__ = {"BIHH":-1401,"BIII":-1033,"BKAK":-543,"BOOO":-5591}; + this.TQ3__ = 
{"BHHH":478,"BHHM":-1073,"BHIH":222,"BHII":-504,"BIIH":-116,"BIII":-105,"BMHI":-863,"BMHM":-464,"BOMH":620,"OHHH":346,"OHHI":1729,"OHII":997,"OHMH":481,"OIHH":623,"OIIH":1344,"OKAK":2792,"OKHH":587,"OKKA":679,"OOHH":110,"OOII":-685}; + this.TQ4__ = {"BHHH":-721,"BHHM":-3604,"BHII":-966,"BIIH":-607,"BIII":-2181,"OAAA":-2763,"OAKK":180,"OHHH":-294,"OHHI":2446,"OHHO":480,"OHIH":-1573,"OIHH":1935,"OIHI":-493,"OIIH":626,"OIII":-4007,"OKAK":-8156}; + this.TW1__ = {"につい":-4681,"東京都":2026}; + this.TW2__ = {"ある程":-2049,"いった":-1256,"ころが":-2434,"しょう":3873,"その後":-4430,"だって":-1049,"ていた":1833,"として":-4657,"ともに":-4517,"もので":1882,"一気に":-792,"初めて":-1512,"同時に":-8097,"大きな":-1255,"対して":-2721,"社会党":-3216}; + this.TW3__ = {"いただ":-1734,"してい":1314,"として":-4314,"につい":-5483,"にとっ":-5989,"に当た":-6247,"ので,":-727,"ので、":-727,"のもの":-600,"れから":-3752,"十二月":-2287}; + this.TW4__ = {"いう.":8576,"いう。":8576,"からな":-2348,"してい":2958,"たが,":1516,"たが、":1516,"ている":1538,"という":1349,"ました":5543,"ません":1097,"ようと":-4258,"よると":5865}; + this.UC1__ = {"A":484,"K":93,"M":645,"O":-505}; + this.UC2__ = {"A":819,"H":1059,"I":409,"M":3987,"N":5775,"O":646}; + this.UC3__ = {"A":-1370,"I":2311}; + this.UC4__ = {"A":-2643,"H":1809,"I":-1032,"K":-3450,"M":3565,"N":3876,"O":6646}; + this.UC5__ = {"H":313,"I":-1238,"K":-799,"M":539,"O":-831}; + this.UC6__ = {"H":-506,"I":-253,"K":87,"M":247,"O":-387}; + this.UP1__ = {"O":-214}; + this.UP2__ = {"B":69,"O":935}; + this.UP3__ = {"B":189}; + this.UQ1__ = {"BH":21,"BI":-12,"BK":-99,"BN":142,"BO":-56,"OH":-95,"OI":477,"OK":410,"OO":-2422}; + this.UQ2__ = {"BH":216,"BI":113,"OK":1759}; + this.UQ3__ = {"BA":-479,"BH":42,"BI":1913,"BK":-7198,"BM":3160,"BN":6427,"BO":14761,"OI":-827,"ON":-3212}; + this.UW1__ = {",":156,"、":156,"「":-463,"あ":-941,"う":-127,"が":-553,"き":121,"こ":505,"で":-201,"と":-547,"ど":-123,"に":-789,"の":-185,"は":-847,"も":-466,"や":-470,"よ":182,"ら":-292,"り":208,"れ":169,"を":-446,"ん":-137,"・":-135,"主":-402,"京":-268,"区":-912,"午":871,"国":-460,"大":561,"委":729,"市":-411,"日":-141,"理":361,"生":-408,"県":-386,"都":-718,"「":-463,"・":-135}; + this.UW2__ = {",":-829,"、":-829,"〇":892,"「":-645,"」":3145,"あ":-538,"い":505,"う":134,"お":-502,"か":1454,"が":-856,"く":-412,"こ":1141,"さ":878,"ざ":540,"し":1529,"す":-675,"せ":300,"そ":-1011,"た":188,"だ":1837,"つ":-949,"て":-291,"で":-268,"と":-981,"ど":1273,"な":1063,"に":-1764,"の":130,"は":-409,"ひ":-1273,"べ":1261,"ま":600,"も":-1263,"や":-402,"よ":1639,"り":-579,"る":-694,"れ":571,"を":-2516,"ん":2095,"ア":-587,"カ":306,"キ":568,"ッ":831,"三":-758,"不":-2150,"世":-302,"中":-968,"主":-861,"事":492,"人":-123,"会":978,"保":362,"入":548,"初":-3025,"副":-1566,"北":-3414,"区":-422,"大":-1769,"天":-865,"太":-483,"子":-1519,"学":760,"実":1023,"小":-2009,"市":-813,"年":-1060,"強":1067,"手":-1519,"揺":-1033,"政":1522,"文":-1355,"新":-1682,"日":-1815,"明":-1462,"最":-630,"朝":-1843,"本":-1650,"東":-931,"果":-665,"次":-2378,"民":-180,"気":-1740,"理":752,"発":529,"目":-1584,"相":-242,"県":-1165,"立":-763,"第":810,"米":509,"自":-1353,"行":838,"西":-744,"見":-3874,"調":1010,"議":1198,"込":3041,"開":1758,"間":-1257,"「":-645,"」":3145,"ッ":831,"ア":-587,"カ":306,"キ":568}; + this.UW3__ = 
{",":4889,"1":-800,"−":-1723,"、":4889,"々":-2311,"〇":5827,"」":2670,"〓":-3573,"あ":-2696,"い":1006,"う":2342,"え":1983,"お":-4864,"か":-1163,"が":3271,"く":1004,"け":388,"げ":401,"こ":-3552,"ご":-3116,"さ":-1058,"し":-395,"す":584,"せ":3685,"そ":-5228,"た":842,"ち":-521,"っ":-1444,"つ":-1081,"て":6167,"で":2318,"と":1691,"ど":-899,"な":-2788,"に":2745,"の":4056,"は":4555,"ひ":-2171,"ふ":-1798,"へ":1199,"ほ":-5516,"ま":-4384,"み":-120,"め":1205,"も":2323,"や":-788,"よ":-202,"ら":727,"り":649,"る":5905,"れ":2773,"わ":-1207,"を":6620,"ん":-518,"ア":551,"グ":1319,"ス":874,"ッ":-1350,"ト":521,"ム":1109,"ル":1591,"ロ":2201,"ン":278,"・":-3794,"一":-1619,"下":-1759,"世":-2087,"両":3815,"中":653,"主":-758,"予":-1193,"二":974,"人":2742,"今":792,"他":1889,"以":-1368,"低":811,"何":4265,"作":-361,"保":-2439,"元":4858,"党":3593,"全":1574,"公":-3030,"六":755,"共":-1880,"円":5807,"再":3095,"分":457,"初":2475,"別":1129,"前":2286,"副":4437,"力":365,"動":-949,"務":-1872,"化":1327,"北":-1038,"区":4646,"千":-2309,"午":-783,"協":-1006,"口":483,"右":1233,"各":3588,"合":-241,"同":3906,"和":-837,"員":4513,"国":642,"型":1389,"場":1219,"外":-241,"妻":2016,"学":-1356,"安":-423,"実":-1008,"家":1078,"小":-513,"少":-3102,"州":1155,"市":3197,"平":-1804,"年":2416,"広":-1030,"府":1605,"度":1452,"建":-2352,"当":-3885,"得":1905,"思":-1291,"性":1822,"戸":-488,"指":-3973,"政":-2013,"教":-1479,"数":3222,"文":-1489,"新":1764,"日":2099,"旧":5792,"昨":-661,"時":-1248,"曜":-951,"最":-937,"月":4125,"期":360,"李":3094,"村":364,"東":-805,"核":5156,"森":2438,"業":484,"氏":2613,"民":-1694,"決":-1073,"法":1868,"海":-495,"無":979,"物":461,"特":-3850,"生":-273,"用":914,"町":1215,"的":7313,"直":-1835,"省":792,"県":6293,"知":-1528,"私":4231,"税":401,"立":-960,"第":1201,"米":7767,"系":3066,"約":3663,"級":1384,"統":-4229,"総":1163,"線":1255,"者":6457,"能":725,"自":-2869,"英":785,"見":1044,"調":-562,"財":-733,"費":1777,"車":1835,"軍":1375,"込":-1504,"通":-1136,"選":-681,"郎":1026,"郡":4404,"部":1200,"金":2163,"長":421,"開":-1432,"間":1302,"関":-1282,"雨":2009,"電":-1045,"非":2066,"駅":1620,"1":-800,"」":2670,"・":-3794,"ッ":-1350,"ア":551,"グ":1319,"ス":874,"ト":521,"ム":1109,"ル":1591,"ロ":2201,"ン":278}; + this.UW4__ = 
{",":3930,".":3508,"―":-4841,"、":3930,"。":3508,"〇":4999,"「":1895,"」":3798,"〓":-5156,"あ":4752,"い":-3435,"う":-640,"え":-2514,"お":2405,"か":530,"が":6006,"き":-4482,"ぎ":-3821,"く":-3788,"け":-4376,"げ":-4734,"こ":2255,"ご":1979,"さ":2864,"し":-843,"じ":-2506,"す":-731,"ず":1251,"せ":181,"そ":4091,"た":5034,"だ":5408,"ち":-3654,"っ":-5882,"つ":-1659,"て":3994,"で":7410,"と":4547,"な":5433,"に":6499,"ぬ":1853,"ね":1413,"の":7396,"は":8578,"ば":1940,"ひ":4249,"び":-4134,"ふ":1345,"へ":6665,"べ":-744,"ほ":1464,"ま":1051,"み":-2082,"む":-882,"め":-5046,"も":4169,"ゃ":-2666,"や":2795,"ょ":-1544,"よ":3351,"ら":-2922,"り":-9726,"る":-14896,"れ":-2613,"ろ":-4570,"わ":-1783,"を":13150,"ん":-2352,"カ":2145,"コ":1789,"セ":1287,"ッ":-724,"ト":-403,"メ":-1635,"ラ":-881,"リ":-541,"ル":-856,"ン":-3637,"・":-4371,"ー":-11870,"一":-2069,"中":2210,"予":782,"事":-190,"井":-1768,"人":1036,"以":544,"会":950,"体":-1286,"作":530,"側":4292,"先":601,"党":-2006,"共":-1212,"内":584,"円":788,"初":1347,"前":1623,"副":3879,"力":-302,"動":-740,"務":-2715,"化":776,"区":4517,"協":1013,"参":1555,"合":-1834,"和":-681,"員":-910,"器":-851,"回":1500,"国":-619,"園":-1200,"地":866,"場":-1410,"塁":-2094,"士":-1413,"多":1067,"大":571,"子":-4802,"学":-1397,"定":-1057,"寺":-809,"小":1910,"屋":-1328,"山":-1500,"島":-2056,"川":-2667,"市":2771,"年":374,"庁":-4556,"後":456,"性":553,"感":916,"所":-1566,"支":856,"改":787,"政":2182,"教":704,"文":522,"方":-856,"日":1798,"時":1829,"最":845,"月":-9066,"木":-485,"来":-442,"校":-360,"業":-1043,"氏":5388,"民":-2716,"気":-910,"沢":-939,"済":-543,"物":-735,"率":672,"球":-1267,"生":-1286,"産":-1101,"田":-2900,"町":1826,"的":2586,"目":922,"省":-3485,"県":2997,"空":-867,"立":-2112,"第":788,"米":2937,"系":786,"約":2171,"経":1146,"統":-1169,"総":940,"線":-994,"署":749,"者":2145,"能":-730,"般":-852,"行":-792,"規":792,"警":-1184,"議":-244,"谷":-1000,"賞":730,"車":-1481,"軍":1158,"輪":-1433,"込":-3370,"近":929,"道":-1291,"選":2596,"郎":-4866,"都":1192,"野":-1100,"銀":-2213,"長":357,"間":-2344,"院":-2297,"際":-2604,"電":-878,"領":-1659,"題":-792,"館":-1984,"首":1749,"高":2120,"「":1895,"」":3798,"・":-4371,"ッ":-724,"ー":-11870,"カ":2145,"コ":1789,"セ":1287,"ト":-403,"メ":-1635,"ラ":-881,"リ":-541,"ル":-856,"ン":-3637}; + this.UW5__ = {",":465,".":-299,"1":-514,"E2":-32768,"]":-2762,"、":465,"。":-299,"「":363,"あ":1655,"い":331,"う":-503,"え":1199,"お":527,"か":647,"が":-421,"き":1624,"ぎ":1971,"く":312,"げ":-983,"さ":-1537,"し":-1371,"す":-852,"だ":-1186,"ち":1093,"っ":52,"つ":921,"て":-18,"で":-850,"と":-127,"ど":1682,"な":-787,"に":-1224,"の":-635,"は":-578,"べ":1001,"み":502,"め":865,"ゃ":3350,"ょ":854,"り":-208,"る":429,"れ":504,"わ":419,"を":-1264,"ん":327,"イ":241,"ル":451,"ン":-343,"中":-871,"京":722,"会":-1153,"党":-654,"務":3519,"区":-901,"告":848,"員":2104,"大":-1296,"学":-548,"定":1785,"嵐":-1304,"市":-2991,"席":921,"年":1763,"思":872,"所":-814,"挙":1618,"新":-1682,"日":218,"月":-4353,"査":932,"格":1356,"機":-1508,"氏":-1347,"田":240,"町":-3912,"的":-3149,"相":1319,"省":-1052,"県":-4003,"研":-997,"社":-278,"空":-813,"統":1955,"者":-2233,"表":663,"語":-1073,"議":1219,"選":-1018,"郎":-368,"長":786,"間":1191,"題":2368,"館":-689,"1":-514,"E2":-32768,"「":363,"イ":241,"ル":451,"ン":-343}; + this.UW6__ = {",":227,".":808,"1":-270,"E1":306,"、":227,"。":808,"あ":-307,"う":189,"か":241,"が":-73,"く":-121,"こ":-200,"じ":1782,"す":383,"た":-428,"っ":573,"て":-1014,"で":101,"と":-105,"な":-253,"に":-149,"の":-417,"は":-236,"も":-206,"り":187,"る":-135,"を":195,"ル":-673,"ン":-496,"一":-277,"中":201,"件":-800,"会":624,"前":302,"区":1792,"員":-1212,"委":798,"学":-960,"市":887,"広":-695,"後":535,"業":-697,"相":753,"社":-507,"福":974,"空":-822,"者":1811,"連":463,"郎":1082,"1":-270,"E1":306,"ル":-673,"ン":-496}; + + return this; + } + TinySegmenter.prototype.ctype_ = function(str) { + for (var i in this.chartype_) { + if 
(str.match(this.chartype_[i][0])) { + return this.chartype_[i][1]; + } + } + return "O"; + } + + TinySegmenter.prototype.ts_ = function(v) { + if (v) { return v; } + return 0; + } + + TinySegmenter.prototype.segment = function(input) { + if (input == null || input == undefined || input == "") { + return []; + } + var result = []; + var seg = ["B3","B2","B1"]; + var ctype = ["O","O","O"]; + var o = input.split(""); + for (i = 0; i < o.length; ++i) { + seg.push(o[i]); + ctype.push(this.ctype_(o[i])) + } + seg.push("E1"); + seg.push("E2"); + seg.push("E3"); + ctype.push("O"); + ctype.push("O"); + ctype.push("O"); + var word = seg[3]; + var p1 = "U"; + var p2 = "U"; + var p3 = "U"; + for (var i = 4; i < seg.length - 3; ++i) { + var score = this.BIAS__; + var w1 = seg[i-3]; + var w2 = seg[i-2]; + var w3 = seg[i-1]; + var w4 = seg[i]; + var w5 = seg[i+1]; + var w6 = seg[i+2]; + var c1 = ctype[i-3]; + var c2 = ctype[i-2]; + var c3 = ctype[i-1]; + var c4 = ctype[i]; + var c5 = ctype[i+1]; + var c6 = ctype[i+2]; + score += this.ts_(this.UP1__[p1]); + score += this.ts_(this.UP2__[p2]); + score += this.ts_(this.UP3__[p3]); + score += this.ts_(this.BP1__[p1 + p2]); + score += this.ts_(this.BP2__[p2 + p3]); + score += this.ts_(this.UW1__[w1]); + score += this.ts_(this.UW2__[w2]); + score += this.ts_(this.UW3__[w3]); + score += this.ts_(this.UW4__[w4]); + score += this.ts_(this.UW5__[w5]); + score += this.ts_(this.UW6__[w6]); + score += this.ts_(this.BW1__[w2 + w3]); + score += this.ts_(this.BW2__[w3 + w4]); + score += this.ts_(this.BW3__[w4 + w5]); + score += this.ts_(this.TW1__[w1 + w2 + w3]); + score += this.ts_(this.TW2__[w2 + w3 + w4]); + score += this.ts_(this.TW3__[w3 + w4 + w5]); + score += this.ts_(this.TW4__[w4 + w5 + w6]); + score += this.ts_(this.UC1__[c1]); + score += this.ts_(this.UC2__[c2]); + score += this.ts_(this.UC3__[c3]); + score += this.ts_(this.UC4__[c4]); + score += this.ts_(this.UC5__[c5]); + score += this.ts_(this.UC6__[c6]); + score += this.ts_(this.BC1__[c2 + c3]); + score += this.ts_(this.BC2__[c3 + c4]); + score += this.ts_(this.BC3__[c4 + c5]); + score += this.ts_(this.TC1__[c1 + c2 + c3]); + score += this.ts_(this.TC2__[c2 + c3 + c4]); + score += this.ts_(this.TC3__[c3 + c4 + c5]); + score += this.ts_(this.TC4__[c4 + c5 + c6]); + // score += this.ts_(this.TC5__[c4 + c5 + c6]); + score += this.ts_(this.UQ1__[p1 + c1]); + score += this.ts_(this.UQ2__[p2 + c2]); + score += this.ts_(this.UQ3__[p3 + c3]); + score += this.ts_(this.BQ1__[p2 + c2 + c3]); + score += this.ts_(this.BQ2__[p2 + c3 + c4]); + score += this.ts_(this.BQ3__[p3 + c2 + c3]); + score += this.ts_(this.BQ4__[p3 + c3 + c4]); + score += this.ts_(this.TQ1__[p2 + c1 + c2 + c3]); + score += this.ts_(this.TQ2__[p2 + c2 + c3 + c4]); + score += this.ts_(this.TQ3__[p3 + c1 + c2 + c3]); + score += this.ts_(this.TQ4__[p3 + c2 + c3 + c4]); + var p = "O"; + if (score > 0) { + result.push(word); + word = ""; + p = "B"; + } + p1 = p2; + p2 = p3; + p3 = p; + word += seg[i]; + } + result.push(word); + + return result; + } + + lunr.TinySegmenter = TinySegmenter; + }; + +})); \ No newline at end of file diff --git a/assets/javascripts/lunr/wordcut.js b/assets/javascripts/lunr/wordcut.js new file mode 100644 index 00000000..146f4b44 --- /dev/null +++ b/assets/javascripts/lunr/wordcut.js @@ -0,0 +1,6708 @@ +(function(f){if(typeof exports==="object"&&typeof module!=="undefined"){module.exports=f()}else if(typeof define==="function"&&define.amd){define([],f)}else{var g;if(typeof window!=="undefined"){g=window}else if(typeof 
global!=="undefined"){g=global}else if(typeof self!=="undefined"){g=self}else{g=this}(g.lunr || (g.lunr = {})).wordcut = f()}})(function(){var define,module,exports;return (function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o 1; + }) + this.addWords(words, false) + } + if(finalize){ + this.finalizeDict(); + } + }, + + dictSeek: function (l, r, ch, strOffset, pos) { + var ans = null; + while (l <= r) { + var m = Math.floor((l + r) / 2), + dict_item = this.dict[m], + len = dict_item.length; + if (len <= strOffset) { + l = m + 1; + } else { + var ch_ = dict_item[strOffset]; + if (ch_ < ch) { + l = m + 1; + } else if (ch_ > ch) { + r = m - 1; + } else { + ans = m; + if (pos == LEFT) { + r = m - 1; + } else { + l = m + 1; + } + } + } + } + return ans; + }, + + isFinal: function (acceptor) { + return this.dict[acceptor.l].length == acceptor.strOffset; + }, + + createAcceptor: function () { + return { + l: 0, + r: this.dict.length - 1, + strOffset: 0, + isFinal: false, + dict: this, + transit: function (ch) { + return this.dict.transit(this, ch); + }, + isError: false, + tag: "DICT", + w: 1, + type: "DICT" + }; + }, + + transit: function (acceptor, ch) { + var l = this.dictSeek(acceptor.l, + acceptor.r, + ch, + acceptor.strOffset, + LEFT); + if (l !== null) { + var r = this.dictSeek(l, + acceptor.r, + ch, + acceptor.strOffset, + RIGHT); + acceptor.l = l; + acceptor.r = r; + acceptor.strOffset++; + acceptor.isFinal = this.isFinal(acceptor); + } else { + acceptor.isError = true; + } + return acceptor; + }, + + sortuniq: function(a){ + return a.sort().filter(function(item, pos, arr){ + return !pos || item != arr[pos - 1]; + }) + }, + + flatten: function(a){ + //[[1,2],[3]] -> [1,2,3] + return [].concat.apply([], a); + } +}; +module.exports = WordcutDict; + +}).call(this,"/dist/tmp") +},{"glob":16,"path":22}],3:[function(require,module,exports){ +var WordRule = { + createAcceptor: function(tag) { + if (tag["WORD_RULE"]) + return null; + + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + var lch = ch.toLowerCase(); + if (lch >= "a" && lch <= "z") { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "WORD_RULE", + type: "WORD_RULE", + w: 1}; + } +}; + +var NumberRule = { + createAcceptor: function(tag) { + if (tag["NUMBER_RULE"]) + return null; + + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (ch >= "0" && ch <= "9") { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "NUMBER_RULE", + type: "NUMBER_RULE", + w: 1}; + } +}; + +var SpaceRule = { + tag: "SPACE_RULE", + createAcceptor: function(tag) { + + if (tag["SPACE_RULE"]) + return null; + + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (ch == " " || ch == "\t" || ch == "\r" || ch == "\n" || + ch == "\u00A0" || ch=="\u2003"//nbsp and emsp + ) { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: SpaceRule.tag, + w: 1, + type: "SPACE_RULE"}; + } +} + +var SingleSymbolRule = { + tag: "SINSYM", + createAcceptor: 
function(tag) { + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (this.strOffset == 0 && ch.match(/^[\@\(\)\/\,\-\."`]$/)) { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "SINSYM", + w: 1, + type: "SINSYM"}; + } +} + + +var LatinRules = [WordRule, SpaceRule, SingleSymbolRule, NumberRule]; + +module.exports = LatinRules; + +},{}],4:[function(require,module,exports){ +var _ = require("underscore") + , WordcutCore = require("./wordcut_core"); +var PathInfoBuilder = { + + /* + buildByPartAcceptors: function(path, acceptors, i) { + var + var genInfos = partAcceptors.reduce(function(genInfos, acceptor) { + + }, []); + + return genInfos; + } + */ + + buildByAcceptors: function(path, finalAcceptors, i) { + var self = this; + var infos = finalAcceptors.map(function(acceptor) { + var p = i - acceptor.strOffset + 1 + , _info = path[p]; + + var info = {p: p, + mw: _info.mw + (acceptor.mw === undefined ? 0 : acceptor.mw), + w: acceptor.w + _info.w, + unk: (acceptor.unk ? acceptor.unk : 0) + _info.unk, + type: acceptor.type}; + + if (acceptor.type == "PART") { + for(var j = p + 1; j <= i; j++) { + path[j].merge = p; + } + info.merge = p; + } + + return info; + }); + return infos.filter(function(info) { return info; }); + }, + + fallback: function(path, leftBoundary, text, i) { + var _info = path[leftBoundary]; + if (text[i].match(/[\u0E48-\u0E4E]/)) { + if (leftBoundary != 0) + leftBoundary = path[leftBoundary].p; + return {p: leftBoundary, + mw: 0, + w: 1 + _info.w, + unk: 1 + _info.unk, + type: "UNK"}; +/* } else if(leftBoundary > 0 && path[leftBoundary].type !== "UNK") { + leftBoundary = path[leftBoundary].p; + return {p: leftBoundary, + w: 1 + _info.w, + unk: 1 + _info.unk, + type: "UNK"}; */ + } else { + return {p: leftBoundary, + mw: _info.mw, + w: 1 + _info.w, + unk: 1 + _info.unk, + type: "UNK"}; + } + }, + + build: function(path, finalAcceptors, i, leftBoundary, text) { + var basicPathInfos = this.buildByAcceptors(path, finalAcceptors, i); + if (basicPathInfos.length > 0) { + return basicPathInfos; + } else { + return [this.fallback(path, leftBoundary, text, i)]; + } + } +}; + +module.exports = function() { + return _.clone(PathInfoBuilder); +} + +},{"./wordcut_core":8,"underscore":25}],5:[function(require,module,exports){ +var _ = require("underscore"); + + +var PathSelector = { + selectPath: function(paths) { + var path = paths.reduce(function(selectedPath, path) { + if (selectedPath == null) { + return path; + } else { + if (path.unk < selectedPath.unk) + return path; + if (path.unk == selectedPath.unk) { + if (path.mw < selectedPath.mw) + return path + if (path.mw == selectedPath.mw) { + if (path.w < selectedPath.w) + return path; + } + } + return selectedPath; + } + }, null); + return path; + }, + + createPath: function() { + return [{p:null, w:0, unk:0, type: "INIT", mw:0}]; + } +}; + +module.exports = function() { + return _.clone(PathSelector); +}; + +},{"underscore":25}],6:[function(require,module,exports){ +function isMatch(pat, offset, ch) { + if (pat.length <= offset) + return false; + var _ch = pat[offset]; + return _ch == ch || + (_ch.match(/[กข]/) && ch.match(/[ก-ฮ]/)) || + (_ch.match(/[มบ]/) && ch.match(/[ก-ฮ]/)) || + (_ch.match(/\u0E49/) && ch.match(/[\u0E48-\u0E4B]/)); +} + +var Rule0 = { + pat: "เหก็ม", + createAcceptor: function(tag) { + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (isMatch(Rule0.pat, this.strOffset,ch)) { + this.isFinal = 
(this.strOffset + 1 == Rule0.pat.length); + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "THAI_RULE", + type: "THAI_RULE", + w: 1}; + } +}; + +var PartRule = { + createAcceptor: function(tag) { + return {strOffset: 0, + patterns: [ + "แก", "เก", "ก้", "กก์", "กา", "กี", "กิ", "กืก" + ], + isFinal: false, + transit: function(ch) { + var offset = this.strOffset; + this.patterns = this.patterns.filter(function(pat) { + return isMatch(pat, offset, ch); + }); + + if (this.patterns.length > 0) { + var len = 1 + offset; + this.isFinal = this.patterns.some(function(pat) { + return pat.length == len; + }); + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "PART", + type: "PART", + unk: 1, + w: 1}; + } +}; + +var ThaiRules = [Rule0, PartRule]; + +module.exports = ThaiRules; + +},{}],7:[function(require,module,exports){ +var sys = require("sys") + , WordcutDict = require("./dict") + , WordcutCore = require("./wordcut_core") + , PathInfoBuilder = require("./path_info_builder") + , PathSelector = require("./path_selector") + , Acceptors = require("./acceptors") + , latinRules = require("./latin_rules") + , thaiRules = require("./thai_rules") + , _ = require("underscore"); + + +var Wordcut = Object.create(WordcutCore); +Wordcut.defaultPathInfoBuilder = PathInfoBuilder; +Wordcut.defaultPathSelector = PathSelector; +Wordcut.defaultAcceptors = Acceptors; +Wordcut.defaultLatinRules = latinRules; +Wordcut.defaultThaiRules = thaiRules; +Wordcut.defaultDict = WordcutDict; + + +Wordcut.initNoDict = function(dict_path) { + var self = this; + self.pathInfoBuilder = new self.defaultPathInfoBuilder; + self.pathSelector = new self.defaultPathSelector; + self.acceptors = new self.defaultAcceptors; + self.defaultLatinRules.forEach(function(rule) { + self.acceptors.creators.push(rule); + }); + self.defaultThaiRules.forEach(function(rule) { + self.acceptors.creators.push(rule); + }); +}; + +Wordcut.init = function(dict_path, withDefault, additionalWords) { + withDefault = withDefault || false; + this.initNoDict(); + var dict = _.clone(this.defaultDict); + dict.init(dict_path, withDefault, additionalWords); + this.acceptors.creators.push(dict); +}; + +module.exports = Wordcut; + +},{"./acceptors":1,"./dict":2,"./latin_rules":3,"./path_info_builder":4,"./path_selector":5,"./thai_rules":6,"./wordcut_core":8,"sys":28,"underscore":25}],8:[function(require,module,exports){ +var WordcutCore = { + + buildPath: function(text) { + var self = this + , path = self.pathSelector.createPath() + , leftBoundary = 0; + self.acceptors.reset(); + for (var i = 0; i < text.length; i++) { + var ch = text[i]; + self.acceptors.transit(ch); + + var possiblePathInfos = self + .pathInfoBuilder + .build(path, + self.acceptors.getFinalAcceptors(), + i, + leftBoundary, + text); + var selectedPath = self.pathSelector.selectPath(possiblePathInfos) + + path.push(selectedPath); + if (selectedPath.type !== "UNK") { + leftBoundary = i; + } + } + return path; + }, + + pathToRanges: function(path) { + var e = path.length - 1 + , ranges = []; + + while (e > 0) { + var info = path[e] + , s = info.p; + + if (info.merge !== undefined && ranges.length > 0) { + var r = ranges[ranges.length - 1]; + r.s = info.merge; + s = r.s; + } else { + ranges.push({s:s, e:e}); + } + e = s; + } + return ranges.reverse(); + }, + + rangesToText: function(text, ranges, delimiter) { + return ranges.map(function(r) { + return text.substring(r.s, r.e); + }).join(delimiter); + 
}, + + cut: function(text, delimiter) { + var path = this.buildPath(text) + , ranges = this.pathToRanges(path); + return this + .rangesToText(text, ranges, + (delimiter === undefined ? "|" : delimiter)); + }, + + cutIntoRanges: function(text, noText) { + var path = this.buildPath(text) + , ranges = this.pathToRanges(path); + + if (!noText) { + ranges.forEach(function(r) { + r.text = text.substring(r.s, r.e); + }); + } + return ranges; + }, + + cutIntoArray: function(text) { + var path = this.buildPath(text) + , ranges = this.pathToRanges(path); + + return ranges.map(function(r) { + return text.substring(r.s, r.e) + }); + } +}; + +module.exports = WordcutCore; + +},{}],9:[function(require,module,exports){ +// http://wiki.commonjs.org/wiki/Unit_Testing/1.0 +// +// THIS IS NOT TESTED NOR LIKELY TO WORK OUTSIDE V8! +// +// Originally from narwhal.js (http://narwhaljs.org) +// Copyright (c) 2009 Thomas Robinson <280north.com> +// +// Permission is hereby granted, free of charge, to any person obtaining a copy +// of this software and associated documentation files (the 'Software'), to +// deal in the Software without restriction, including without limitation the +// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or +// sell copies of the Software, and to permit persons to whom the Software is +// furnished to do so, subject to the following conditions: +// +// The above copyright notice and this permission notice shall be included in +// all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +// AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +// ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +// when used in node, this will actually load the util module we depend on +// versus loading the builtin util module as happens otherwise +// this is a bug in node module loading as far as I am concerned +var util = require('util/'); + +var pSlice = Array.prototype.slice; +var hasOwn = Object.prototype.hasOwnProperty; + +// 1. The assert module provides functions that throw +// AssertionError's when particular conditions are not met. The +// assert module must conform to the following interface. + +var assert = module.exports = ok; + +// 2. The AssertionError is defined in assert. 
+// new assert.AssertionError({ message: message, +// actual: actual, +// expected: expected }) + +assert.AssertionError = function AssertionError(options) { + this.name = 'AssertionError'; + this.actual = options.actual; + this.expected = options.expected; + this.operator = options.operator; + if (options.message) { + this.message = options.message; + this.generatedMessage = false; + } else { + this.message = getMessage(this); + this.generatedMessage = true; + } + var stackStartFunction = options.stackStartFunction || fail; + + if (Error.captureStackTrace) { + Error.captureStackTrace(this, stackStartFunction); + } + else { + // non v8 browsers so we can have a stacktrace + var err = new Error(); + if (err.stack) { + var out = err.stack; + + // try to strip useless frames + var fn_name = stackStartFunction.name; + var idx = out.indexOf('\n' + fn_name); + if (idx >= 0) { + // once we have located the function frame + // we need to strip out everything before it (and its line) + var next_line = out.indexOf('\n', idx + 1); + out = out.substring(next_line + 1); + } + + this.stack = out; + } + } +}; + +// assert.AssertionError instanceof Error +util.inherits(assert.AssertionError, Error); + +function replacer(key, value) { + if (util.isUndefined(value)) { + return '' + value; + } + if (util.isNumber(value) && !isFinite(value)) { + return value.toString(); + } + if (util.isFunction(value) || util.isRegExp(value)) { + return value.toString(); + } + return value; +} + +function truncate(s, n) { + if (util.isString(s)) { + return s.length < n ? s : s.slice(0, n); + } else { + return s; + } +} + +function getMessage(self) { + return truncate(JSON.stringify(self.actual, replacer), 128) + ' ' + + self.operator + ' ' + + truncate(JSON.stringify(self.expected, replacer), 128); +} + +// At present only the three keys mentioned above are used and +// understood by the spec. Implementations or sub modules can pass +// other keys to the AssertionError's constructor - they will be +// ignored. + +// 3. All of the following functions must throw an AssertionError +// when a corresponding condition is not met, with a message that +// may be undefined if not provided. All assertion methods provide +// both the actual and expected values to the assertion error for +// display purposes. + +function fail(actual, expected, message, operator, stackStartFunction) { + throw new assert.AssertionError({ + message: message, + actual: actual, + expected: expected, + operator: operator, + stackStartFunction: stackStartFunction + }); +} + +// EXTENSION! allows for well behaved errors defined elsewhere. +assert.fail = fail; + +// 4. Pure assertion tests whether a value is truthy, as determined +// by !!guard. +// assert.ok(guard, message_opt); +// This statement is equivalent to assert.equal(true, !!guard, +// message_opt);. To test strictly for the value true, use +// assert.strictEqual(true, guard, message_opt);. + +function ok(value, message) { + if (!value) fail(value, true, message, '==', assert.ok); +} +assert.ok = ok; + +// 5. The equality assertion tests shallow, coercive equality with +// ==. +// assert.equal(actual, expected, message_opt); + +assert.equal = function equal(actual, expected, message) { + if (actual != expected) fail(actual, expected, message, '==', assert.equal); +}; + +// 6. 
The non-equality assertion tests for whether two objects are not equal +// with != assert.notEqual(actual, expected, message_opt); + +assert.notEqual = function notEqual(actual, expected, message) { + if (actual == expected) { + fail(actual, expected, message, '!=', assert.notEqual); + } +}; + +// 7. The equivalence assertion tests a deep equality relation. +// assert.deepEqual(actual, expected, message_opt); + +assert.deepEqual = function deepEqual(actual, expected, message) { + if (!_deepEqual(actual, expected)) { + fail(actual, expected, message, 'deepEqual', assert.deepEqual); + } +}; + +function _deepEqual(actual, expected) { + // 7.1. All identical values are equivalent, as determined by ===. + if (actual === expected) { + return true; + + } else if (util.isBuffer(actual) && util.isBuffer(expected)) { + if (actual.length != expected.length) return false; + + for (var i = 0; i < actual.length; i++) { + if (actual[i] !== expected[i]) return false; + } + + return true; + + // 7.2. If the expected value is a Date object, the actual value is + // equivalent if it is also a Date object that refers to the same time. + } else if (util.isDate(actual) && util.isDate(expected)) { + return actual.getTime() === expected.getTime(); + + // 7.3 If the expected value is a RegExp object, the actual value is + // equivalent if it is also a RegExp object with the same source and + // properties (`global`, `multiline`, `lastIndex`, `ignoreCase`). + } else if (util.isRegExp(actual) && util.isRegExp(expected)) { + return actual.source === expected.source && + actual.global === expected.global && + actual.multiline === expected.multiline && + actual.lastIndex === expected.lastIndex && + actual.ignoreCase === expected.ignoreCase; + + // 7.4. Other pairs that do not both pass typeof value == 'object', + // equivalence is determined by ==. + } else if (!util.isObject(actual) && !util.isObject(expected)) { + return actual == expected; + + // 7.5 For all other Object pairs, including Array objects, equivalence is + // determined by having the same number of owned properties (as verified + // with Object.prototype.hasOwnProperty.call), the same set of keys + // (although not necessarily the same order), equivalent values for every + // corresponding key, and an identical 'prototype' property. Note: this + // accounts for both named and indexed properties on Arrays. + } else { + return objEquiv(actual, expected); + } +} + +function isArguments(object) { + return Object.prototype.toString.call(object) == '[object Arguments]'; +} + +function objEquiv(a, b) { + if (util.isNullOrUndefined(a) || util.isNullOrUndefined(b)) + return false; + // an identical 'prototype' property. 
+ if (a.prototype !== b.prototype) return false; + // if one is a primitive, the other must be same + if (util.isPrimitive(a) || util.isPrimitive(b)) { + return a === b; + } + var aIsArgs = isArguments(a), + bIsArgs = isArguments(b); + if ((aIsArgs && !bIsArgs) || (!aIsArgs && bIsArgs)) + return false; + if (aIsArgs) { + a = pSlice.call(a); + b = pSlice.call(b); + return _deepEqual(a, b); + } + var ka = objectKeys(a), + kb = objectKeys(b), + key, i; + // having the same number of owned properties (keys incorporates + // hasOwnProperty) + if (ka.length != kb.length) + return false; + //the same set of keys (although not necessarily the same order), + ka.sort(); + kb.sort(); + //~~~cheap key test + for (i = ka.length - 1; i >= 0; i--) { + if (ka[i] != kb[i]) + return false; + } + //equivalent values for every corresponding key, and + //~~~possibly expensive deep test + for (i = ka.length - 1; i >= 0; i--) { + key = ka[i]; + if (!_deepEqual(a[key], b[key])) return false; + } + return true; +} + +// 8. The non-equivalence assertion tests for any deep inequality. +// assert.notDeepEqual(actual, expected, message_opt); + +assert.notDeepEqual = function notDeepEqual(actual, expected, message) { + if (_deepEqual(actual, expected)) { + fail(actual, expected, message, 'notDeepEqual', assert.notDeepEqual); + } +}; + +// 9. The strict equality assertion tests strict equality, as determined by ===. +// assert.strictEqual(actual, expected, message_opt); + +assert.strictEqual = function strictEqual(actual, expected, message) { + if (actual !== expected) { + fail(actual, expected, message, '===', assert.strictEqual); + } +}; + +// 10. The strict non-equality assertion tests for strict inequality, as +// determined by !==. assert.notStrictEqual(actual, expected, message_opt); + +assert.notStrictEqual = function notStrictEqual(actual, expected, message) { + if (actual === expected) { + fail(actual, expected, message, '!==', assert.notStrictEqual); + } +}; + +function expectedException(actual, expected) { + if (!actual || !expected) { + return false; + } + + if (Object.prototype.toString.call(expected) == '[object RegExp]') { + return expected.test(actual); + } else if (actual instanceof expected) { + return true; + } else if (expected.call({}, actual) === true) { + return true; + } + + return false; +} + +function _throws(shouldThrow, block, expected, message) { + var actual; + + if (util.isString(expected)) { + message = expected; + expected = null; + } + + try { + block(); + } catch (e) { + actual = e; + } + + message = (expected && expected.name ? ' (' + expected.name + ').' : '.') + + (message ? ' ' + message : '.'); + + if (shouldThrow && !actual) { + fail(actual, expected, 'Missing expected exception' + message); + } + + if (!shouldThrow && expectedException(actual, expected)) { + fail(actual, expected, 'Got unwanted exception' + message); + } + + if ((shouldThrow && actual && expected && + !expectedException(actual, expected)) || (!shouldThrow && actual)) { + throw actual; + } +} + +// 11. Expected to throw an error: +// assert.throws(block, Error_opt, message_opt); + +assert.throws = function(block, /*optional*/error, /*optional*/message) { + _throws.apply(this, [true].concat(pSlice.call(arguments))); +}; + +// EXTENSION! This is annoying to write outside this module. 
+assert.doesNotThrow = function(block, /*optional*/message) { + _throws.apply(this, [false].concat(pSlice.call(arguments))); +}; + +assert.ifError = function(err) { if (err) {throw err;}}; + +var objectKeys = Object.keys || function (obj) { + var keys = []; + for (var key in obj) { + if (hasOwn.call(obj, key)) keys.push(key); + } + return keys; +}; + +},{"util/":28}],10:[function(require,module,exports){ +'use strict'; +module.exports = balanced; +function balanced(a, b, str) { + if (a instanceof RegExp) a = maybeMatch(a, str); + if (b instanceof RegExp) b = maybeMatch(b, str); + + var r = range(a, b, str); + + return r && { + start: r[0], + end: r[1], + pre: str.slice(0, r[0]), + body: str.slice(r[0] + a.length, r[1]), + post: str.slice(r[1] + b.length) + }; +} + +function maybeMatch(reg, str) { + var m = str.match(reg); + return m ? m[0] : null; +} + +balanced.range = range; +function range(a, b, str) { + var begs, beg, left, right, result; + var ai = str.indexOf(a); + var bi = str.indexOf(b, ai + 1); + var i = ai; + + if (ai >= 0 && bi > 0) { + begs = []; + left = str.length; + + while (i >= 0 && !result) { + if (i == ai) { + begs.push(i); + ai = str.indexOf(a, i + 1); + } else if (begs.length == 1) { + result = [ begs.pop(), bi ]; + } else { + beg = begs.pop(); + if (beg < left) { + left = beg; + right = bi; + } + + bi = str.indexOf(b, i + 1); + } + + i = ai < bi && ai >= 0 ? ai : bi; + } + + if (begs.length) { + result = [ left, right ]; + } + } + + return result; +} + +},{}],11:[function(require,module,exports){ +var concatMap = require('concat-map'); +var balanced = require('balanced-match'); + +module.exports = expandTop; + +var escSlash = '\0SLASH'+Math.random()+'\0'; +var escOpen = '\0OPEN'+Math.random()+'\0'; +var escClose = '\0CLOSE'+Math.random()+'\0'; +var escComma = '\0COMMA'+Math.random()+'\0'; +var escPeriod = '\0PERIOD'+Math.random()+'\0'; + +function numeric(str) { + return parseInt(str, 10) == str + ? parseInt(str, 10) + : str.charCodeAt(0); +} + +function escapeBraces(str) { + return str.split('\\\\').join(escSlash) + .split('\\{').join(escOpen) + .split('\\}').join(escClose) + .split('\\,').join(escComma) + .split('\\.').join(escPeriod); +} + +function unescapeBraces(str) { + return str.split(escSlash).join('\\') + .split(escOpen).join('{') + .split(escClose).join('}') + .split(escComma).join(',') + .split(escPeriod).join('.'); +} + + +// Basically just str.split(","), but handling cases +// where we have nested braced sections, which should be +// treated as individual members, like {a,{b,c},d} +function parseCommaParts(str) { + if (!str) + return ['']; + + var parts = []; + var m = balanced('{', '}', str); + + if (!m) + return str.split(','); + + var pre = m.pre; + var body = m.body; + var post = m.post; + var p = pre.split(','); + + p[p.length-1] += '{' + body + '}'; + var postParts = parseCommaParts(post); + if (post.length) { + p[p.length-1] += postParts.shift(); + p.push.apply(p, postParts); + } + + parts.push.apply(parts, p); + + return parts; +} + +function expandTop(str) { + if (!str) + return []; + + // I don't know why Bash 4.3 does this, but it does. + // Anything starting with {} will have the first two bytes preserved + // but *only* at the top level, so {},a}b will not expand to anything, + // but a{},b}c will be expanded to [a}c,abc]. 
+ // One could argue that this is a bug in Bash, but since the goal of + // this module is to match Bash's rules, we escape a leading {} + if (str.substr(0, 2) === '{}') { + str = '\\{\\}' + str.substr(2); + } + + return expand(escapeBraces(str), true).map(unescapeBraces); +} + +function identity(e) { + return e; +} + +function embrace(str) { + return '{' + str + '}'; +} +function isPadded(el) { + return /^-?0\d/.test(el); +} + +function lte(i, y) { + return i <= y; +} +function gte(i, y) { + return i >= y; +} + +function expand(str, isTop) { + var expansions = []; + + var m = balanced('{', '}', str); + if (!m || /\$$/.test(m.pre)) return [str]; + + var isNumericSequence = /^-?\d+\.\.-?\d+(?:\.\.-?\d+)?$/.test(m.body); + var isAlphaSequence = /^[a-zA-Z]\.\.[a-zA-Z](?:\.\.-?\d+)?$/.test(m.body); + var isSequence = isNumericSequence || isAlphaSequence; + var isOptions = m.body.indexOf(',') >= 0; + if (!isSequence && !isOptions) { + // {a},b} + if (m.post.match(/,.*\}/)) { + str = m.pre + '{' + m.body + escClose + m.post; + return expand(str); + } + return [str]; + } + + var n; + if (isSequence) { + n = m.body.split(/\.\./); + } else { + n = parseCommaParts(m.body); + if (n.length === 1) { + // x{{a,b}}y ==> x{a}y x{b}y + n = expand(n[0], false).map(embrace); + if (n.length === 1) { + var post = m.post.length + ? expand(m.post, false) + : ['']; + return post.map(function(p) { + return m.pre + n[0] + p; + }); + } + } + } + + // at this point, n is the parts, and we know it's not a comma set + // with a single entry. + + // no need to expand pre, since it is guaranteed to be free of brace-sets + var pre = m.pre; + var post = m.post.length + ? expand(m.post, false) + : ['']; + + var N; + + if (isSequence) { + var x = numeric(n[0]); + var y = numeric(n[1]); + var width = Math.max(n[0].length, n[1].length) + var incr = n.length == 3 + ? Math.abs(numeric(n[2])) + : 1; + var test = lte; + var reverse = y < x; + if (reverse) { + incr *= -1; + test = gte; + } + var pad = n.some(isPadded); + + N = []; + + for (var i = x; test(i, y); i += incr) { + var c; + if (isAlphaSequence) { + c = String.fromCharCode(i); + if (c === '\\') + c = ''; + } else { + c = String(i); + if (pad) { + var need = width - c.length; + if (need > 0) { + var z = new Array(need + 1).join('0'); + if (i < 0) + c = '-' + z + c.slice(1); + else + c = z + c; + } + } + } + N.push(c); + } + } else { + N = concatMap(n, function(el) { return expand(el, false) }); + } + + for (var j = 0; j < N.length; j++) { + for (var k = 0; k < post.length; k++) { + var expansion = pre + N[j] + post[k]; + if (!isTop || isSequence || expansion) + expansions.push(expansion); + } + } + + return expansions; +} + + +},{"balanced-match":10,"concat-map":13}],12:[function(require,module,exports){ + +},{}],13:[function(require,module,exports){ +module.exports = function (xs, fn) { + var res = []; + for (var i = 0; i < xs.length; i++) { + var x = fn(xs[i], i); + if (isArray(x)) res.push.apply(res, x); + else res.push(x); + } + return res; +}; + +var isArray = Array.isArray || function (xs) { + return Object.prototype.toString.call(xs) === '[object Array]'; +}; + +},{}],14:[function(require,module,exports){ +// Copyright Joyent, Inc. and other Node contributors. 
+// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +function EventEmitter() { + this._events = this._events || {}; + this._maxListeners = this._maxListeners || undefined; +} +module.exports = EventEmitter; + +// Backwards-compat with node 0.10.x +EventEmitter.EventEmitter = EventEmitter; + +EventEmitter.prototype._events = undefined; +EventEmitter.prototype._maxListeners = undefined; + +// By default EventEmitters will print a warning if more than 10 listeners are +// added to it. This is a useful default which helps finding memory leaks. +EventEmitter.defaultMaxListeners = 10; + +// Obviously not all Emitters should be limited to 10. This function allows +// that to be increased. Set to zero for unlimited. +EventEmitter.prototype.setMaxListeners = function(n) { + if (!isNumber(n) || n < 0 || isNaN(n)) + throw TypeError('n must be a positive number'); + this._maxListeners = n; + return this; +}; + +EventEmitter.prototype.emit = function(type) { + var er, handler, len, args, i, listeners; + + if (!this._events) + this._events = {}; + + // If there is no 'error' event listener then throw. + if (type === 'error') { + if (!this._events.error || + (isObject(this._events.error) && !this._events.error.length)) { + er = arguments[1]; + if (er instanceof Error) { + throw er; // Unhandled 'error' event + } + throw TypeError('Uncaught, unspecified "error" event.'); + } + } + + handler = this._events[type]; + + if (isUndefined(handler)) + return false; + + if (isFunction(handler)) { + switch (arguments.length) { + // fast cases + case 1: + handler.call(this); + break; + case 2: + handler.call(this, arguments[1]); + break; + case 3: + handler.call(this, arguments[1], arguments[2]); + break; + // slower + default: + len = arguments.length; + args = new Array(len - 1); + for (i = 1; i < len; i++) + args[i - 1] = arguments[i]; + handler.apply(this, args); + } + } else if (isObject(handler)) { + len = arguments.length; + args = new Array(len - 1); + for (i = 1; i < len; i++) + args[i - 1] = arguments[i]; + + listeners = handler.slice(); + len = listeners.length; + for (i = 0; i < len; i++) + listeners[i].apply(this, args); + } + + return true; +}; + +EventEmitter.prototype.addListener = function(type, listener) { + var m; + + if (!isFunction(listener)) + throw TypeError('listener must be a function'); + + if (!this._events) + this._events = {}; + + // To avoid recursion in the case that type === "newListener"! Before + // adding it to the listeners, first emit "newListener". 
+ if (this._events.newListener) + this.emit('newListener', type, + isFunction(listener.listener) ? + listener.listener : listener); + + if (!this._events[type]) + // Optimize the case of one listener. Don't need the extra array object. + this._events[type] = listener; + else if (isObject(this._events[type])) + // If we've already got an array, just append. + this._events[type].push(listener); + else + // Adding the second element, need to change to array. + this._events[type] = [this._events[type], listener]; + + // Check for listener leak + if (isObject(this._events[type]) && !this._events[type].warned) { + var m; + if (!isUndefined(this._maxListeners)) { + m = this._maxListeners; + } else { + m = EventEmitter.defaultMaxListeners; + } + + if (m && m > 0 && this._events[type].length > m) { + this._events[type].warned = true; + console.error('(node) warning: possible EventEmitter memory ' + + 'leak detected. %d listeners added. ' + + 'Use emitter.setMaxListeners() to increase limit.', + this._events[type].length); + if (typeof console.trace === 'function') { + // not supported in IE 10 + console.trace(); + } + } + } + + return this; +}; + +EventEmitter.prototype.on = EventEmitter.prototype.addListener; + +EventEmitter.prototype.once = function(type, listener) { + if (!isFunction(listener)) + throw TypeError('listener must be a function'); + + var fired = false; + + function g() { + this.removeListener(type, g); + + if (!fired) { + fired = true; + listener.apply(this, arguments); + } + } + + g.listener = listener; + this.on(type, g); + + return this; +}; + +// emits a 'removeListener' event iff the listener was removed +EventEmitter.prototype.removeListener = function(type, listener) { + var list, position, length, i; + + if (!isFunction(listener)) + throw TypeError('listener must be a function'); + + if (!this._events || !this._events[type]) + return this; + + list = this._events[type]; + length = list.length; + position = -1; + + if (list === listener || + (isFunction(list.listener) && list.listener === listener)) { + delete this._events[type]; + if (this._events.removeListener) + this.emit('removeListener', type, listener); + + } else if (isObject(list)) { + for (i = length; i-- > 0;) { + if (list[i] === listener || + (list[i].listener && list[i].listener === listener)) { + position = i; + break; + } + } + + if (position < 0) + return this; + + if (list.length === 1) { + list.length = 0; + delete this._events[type]; + } else { + list.splice(position, 1); + } + + if (this._events.removeListener) + this.emit('removeListener', type, listener); + } + + return this; +}; + +EventEmitter.prototype.removeAllListeners = function(type) { + var key, listeners; + + if (!this._events) + return this; + + // not listening for removeListener, no need to emit + if (!this._events.removeListener) { + if (arguments.length === 0) + this._events = {}; + else if (this._events[type]) + delete this._events[type]; + return this; + } + + // emit removeListener for all listeners on all events + if (arguments.length === 0) { + for (key in this._events) { + if (key === 'removeListener') continue; + this.removeAllListeners(key); + } + this.removeAllListeners('removeListener'); + this._events = {}; + return this; + } + + listeners = this._events[type]; + + if (isFunction(listeners)) { + this.removeListener(type, listeners); + } else { + // LIFO order + while (listeners.length) + this.removeListener(type, listeners[listeners.length - 1]); + } + delete this._events[type]; + + return this; +}; + 
+EventEmitter.prototype.listeners = function(type) { + var ret; + if (!this._events || !this._events[type]) + ret = []; + else if (isFunction(this._events[type])) + ret = [this._events[type]]; + else + ret = this._events[type].slice(); + return ret; +}; + +EventEmitter.listenerCount = function(emitter, type) { + var ret; + if (!emitter._events || !emitter._events[type]) + ret = 0; + else if (isFunction(emitter._events[type])) + ret = 1; + else + ret = emitter._events[type].length; + return ret; +}; + +function isFunction(arg) { + return typeof arg === 'function'; +} + +function isNumber(arg) { + return typeof arg === 'number'; +} + +function isObject(arg) { + return typeof arg === 'object' && arg !== null; +} + +function isUndefined(arg) { + return arg === void 0; +} + +},{}],15:[function(require,module,exports){ +(function (process){ +exports.alphasort = alphasort +exports.alphasorti = alphasorti +exports.setopts = setopts +exports.ownProp = ownProp +exports.makeAbs = makeAbs +exports.finish = finish +exports.mark = mark +exports.isIgnored = isIgnored +exports.childrenIgnored = childrenIgnored + +function ownProp (obj, field) { + return Object.prototype.hasOwnProperty.call(obj, field) +} + +var path = require("path") +var minimatch = require("minimatch") +var isAbsolute = require("path-is-absolute") +var Minimatch = minimatch.Minimatch + +function alphasorti (a, b) { + return a.toLowerCase().localeCompare(b.toLowerCase()) +} + +function alphasort (a, b) { + return a.localeCompare(b) +} + +function setupIgnores (self, options) { + self.ignore = options.ignore || [] + + if (!Array.isArray(self.ignore)) + self.ignore = [self.ignore] + + if (self.ignore.length) { + self.ignore = self.ignore.map(ignoreMap) + } +} + +function ignoreMap (pattern) { + var gmatcher = null + if (pattern.slice(-3) === '/**') { + var gpattern = pattern.replace(/(\/\*\*)+$/, '') + gmatcher = new Minimatch(gpattern) + } + + return { + matcher: new Minimatch(pattern), + gmatcher: gmatcher + } +} + +function setopts (self, pattern, options) { + if (!options) + options = {} + + // base-matching: just use globstar for that. 
+ if (options.matchBase && -1 === pattern.indexOf("/")) { + if (options.noglobstar) { + throw new Error("base matching requires globstar") + } + pattern = "**/" + pattern + } + + self.silent = !!options.silent + self.pattern = pattern + self.strict = options.strict !== false + self.realpath = !!options.realpath + self.realpathCache = options.realpathCache || Object.create(null) + self.follow = !!options.follow + self.dot = !!options.dot + self.mark = !!options.mark + self.nodir = !!options.nodir + if (self.nodir) + self.mark = true + self.sync = !!options.sync + self.nounique = !!options.nounique + self.nonull = !!options.nonull + self.nosort = !!options.nosort + self.nocase = !!options.nocase + self.stat = !!options.stat + self.noprocess = !!options.noprocess + + self.maxLength = options.maxLength || Infinity + self.cache = options.cache || Object.create(null) + self.statCache = options.statCache || Object.create(null) + self.symlinks = options.symlinks || Object.create(null) + + setupIgnores(self, options) + + self.changedCwd = false + var cwd = process.cwd() + if (!ownProp(options, "cwd")) + self.cwd = cwd + else { + self.cwd = options.cwd + self.changedCwd = path.resolve(options.cwd) !== cwd + } + + self.root = options.root || path.resolve(self.cwd, "/") + self.root = path.resolve(self.root) + if (process.platform === "win32") + self.root = self.root.replace(/\\/g, "/") + + self.nomount = !!options.nomount + + // disable comments and negation unless the user explicitly + // passes in false as the option. + options.nonegate = options.nonegate === false ? false : true + options.nocomment = options.nocomment === false ? false : true + deprecationWarning(options) + + self.minimatch = new Minimatch(pattern, options) + self.options = self.minimatch.options +} + +// TODO(isaacs): remove entirely in v6 +// exported to reset in tests +exports.deprecationWarned +function deprecationWarning(options) { + if (!options.nonegate || !options.nocomment) { + if (process.noDeprecation !== true && !exports.deprecationWarned) { + var msg = 'glob WARNING: comments and negation will be disabled in v6' + if (process.throwDeprecation) + throw new Error(msg) + else if (process.traceDeprecation) + console.trace(msg) + else + console.error(msg) + + exports.deprecationWarned = true + } + } +} + +function finish (self) { + var nou = self.nounique + var all = nou ? [] : Object.create(null) + + for (var i = 0, l = self.matches.length; i < l; i ++) { + var matches = self.matches[i] + if (!matches || Object.keys(matches).length === 0) { + if (self.nonull) { + // do like the shell, and spit out the literal glob + var literal = self.minimatch.globSet[i] + if (nou) + all.push(literal) + else + all[literal] = true + } + } else { + // had matches + var m = Object.keys(matches) + if (nou) + all.push.apply(all, m) + else + m.forEach(function (m) { + all[m] = true + }) + } + } + + if (!nou) + all = Object.keys(all) + + if (!self.nosort) + all = all.sort(self.nocase ? 
alphasorti : alphasort) + + // at *some* point we statted all of these + if (self.mark) { + for (var i = 0; i < all.length; i++) { + all[i] = self._mark(all[i]) + } + if (self.nodir) { + all = all.filter(function (e) { + return !(/\/$/.test(e)) + }) + } + } + + if (self.ignore.length) + all = all.filter(function(m) { + return !isIgnored(self, m) + }) + + self.found = all +} + +function mark (self, p) { + var abs = makeAbs(self, p) + var c = self.cache[abs] + var m = p + if (c) { + var isDir = c === 'DIR' || Array.isArray(c) + var slash = p.slice(-1) === '/' + + if (isDir && !slash) + m += '/' + else if (!isDir && slash) + m = m.slice(0, -1) + + if (m !== p) { + var mabs = makeAbs(self, m) + self.statCache[mabs] = self.statCache[abs] + self.cache[mabs] = self.cache[abs] + } + } + + return m +} + +// lotta situps... +function makeAbs (self, f) { + var abs = f + if (f.charAt(0) === '/') { + abs = path.join(self.root, f) + } else if (isAbsolute(f) || f === '') { + abs = f + } else if (self.changedCwd) { + abs = path.resolve(self.cwd, f) + } else { + abs = path.resolve(f) + } + return abs +} + + +// Return true, if pattern ends with globstar '**', for the accompanying parent directory. +// Ex:- If node_modules/** is the pattern, add 'node_modules' to ignore list along with it's contents +function isIgnored (self, path) { + if (!self.ignore.length) + return false + + return self.ignore.some(function(item) { + return item.matcher.match(path) || !!(item.gmatcher && item.gmatcher.match(path)) + }) +} + +function childrenIgnored (self, path) { + if (!self.ignore.length) + return false + + return self.ignore.some(function(item) { + return !!(item.gmatcher && item.gmatcher.match(path)) + }) +} + +}).call(this,require('_process')) +},{"_process":24,"minimatch":20,"path":22,"path-is-absolute":23}],16:[function(require,module,exports){ +(function (process){ +// Approach: +// +// 1. Get the minimatch set +// 2. For each pattern in the set, PROCESS(pattern, false) +// 3. Store matches per-set, then uniq them +// +// PROCESS(pattern, inGlobStar) +// Get the first [n] items from pattern that are all strings +// Join these together. This is PREFIX. +// If there is no more remaining, then stat(PREFIX) and +// add to matches if it succeeds. END. +// +// If inGlobStar and PREFIX is symlink and points to dir +// set ENTRIES = [] +// else readdir(PREFIX) as ENTRIES +// If fail, END +// +// with ENTRIES +// If pattern[n] is GLOBSTAR +// // handle the case where the globstar match is empty +// // by pruning it out, and testing the resulting pattern +// PROCESS(pattern[0..n] + pattern[n+1 .. $], false) +// // handle other cases. +// for ENTRY in ENTRIES (not dotfiles) +// // attach globstar + tail onto the entry +// // Mark that this entry is a globstar match +// PROCESS(pattern[0..n] + ENTRY + pattern[n .. $], true) +// +// else // not globstar +// for ENTRY in ENTRIES (not dotfiles, unless pattern[n] is dot) +// Test ENTRY against pattern[n] +// If fails, continue +// If passes, PROCESS(pattern[0..n] + item + pattern[n+1 .. $]) +// +// Caveat: +// Cache all stats and readdirs results to minimize syscall. Since all +// we ever care about is existence and directory-ness, we can just keep +// `true` for files, and [children,...] for directories, or `false` for +// things that don't exist. 
+ +module.exports = glob + +var fs = require('fs') +var minimatch = require('minimatch') +var Minimatch = minimatch.Minimatch +var inherits = require('inherits') +var EE = require('events').EventEmitter +var path = require('path') +var assert = require('assert') +var isAbsolute = require('path-is-absolute') +var globSync = require('./sync.js') +var common = require('./common.js') +var alphasort = common.alphasort +var alphasorti = common.alphasorti +var setopts = common.setopts +var ownProp = common.ownProp +var inflight = require('inflight') +var util = require('util') +var childrenIgnored = common.childrenIgnored +var isIgnored = common.isIgnored + +var once = require('once') + +function glob (pattern, options, cb) { + if (typeof options === 'function') cb = options, options = {} + if (!options) options = {} + + if (options.sync) { + if (cb) + throw new TypeError('callback provided to sync glob') + return globSync(pattern, options) + } + + return new Glob(pattern, options, cb) +} + +glob.sync = globSync +var GlobSync = glob.GlobSync = globSync.GlobSync + +// old api surface +glob.glob = glob + +glob.hasMagic = function (pattern, options_) { + var options = util._extend({}, options_) + options.noprocess = true + + var g = new Glob(pattern, options) + var set = g.minimatch.set + if (set.length > 1) + return true + + for (var j = 0; j < set[0].length; j++) { + if (typeof set[0][j] !== 'string') + return true + } + + return false +} + +glob.Glob = Glob +inherits(Glob, EE) +function Glob (pattern, options, cb) { + if (typeof options === 'function') { + cb = options + options = null + } + + if (options && options.sync) { + if (cb) + throw new TypeError('callback provided to sync glob') + return new GlobSync(pattern, options) + } + + if (!(this instanceof Glob)) + return new Glob(pattern, options, cb) + + setopts(this, pattern, options) + this._didRealPath = false + + // process each pattern in the minimatch set + var n = this.minimatch.set.length + + // The matches are stored as {: true,...} so that + // duplicates are automagically pruned. + // Later, we do an Object.keys() on these. + // Keep them as a list so we can fill in when nonull is set. 
+ this.matches = new Array(n) + + if (typeof cb === 'function') { + cb = once(cb) + this.on('error', cb) + this.on('end', function (matches) { + cb(null, matches) + }) + } + + var self = this + var n = this.minimatch.set.length + this._processing = 0 + this.matches = new Array(n) + + this._emitQueue = [] + this._processQueue = [] + this.paused = false + + if (this.noprocess) + return this + + if (n === 0) + return done() + + for (var i = 0; i < n; i ++) { + this._process(this.minimatch.set[i], i, false, done) + } + + function done () { + --self._processing + if (self._processing <= 0) + self._finish() + } +} + +Glob.prototype._finish = function () { + assert(this instanceof Glob) + if (this.aborted) + return + + if (this.realpath && !this._didRealpath) + return this._realpath() + + common.finish(this) + this.emit('end', this.found) +} + +Glob.prototype._realpath = function () { + if (this._didRealpath) + return + + this._didRealpath = true + + var n = this.matches.length + if (n === 0) + return this._finish() + + var self = this + for (var i = 0; i < this.matches.length; i++) + this._realpathSet(i, next) + + function next () { + if (--n === 0) + self._finish() + } +} + +Glob.prototype._realpathSet = function (index, cb) { + var matchset = this.matches[index] + if (!matchset) + return cb() + + var found = Object.keys(matchset) + var self = this + var n = found.length + + if (n === 0) + return cb() + + var set = this.matches[index] = Object.create(null) + found.forEach(function (p, i) { + // If there's a problem with the stat, then it means that + // one or more of the links in the realpath couldn't be + // resolved. just return the abs value in that case. + p = self._makeAbs(p) + fs.realpath(p, self.realpathCache, function (er, real) { + if (!er) + set[real] = true + else if (er.syscall === 'stat') + set[p] = true + else + self.emit('error', er) // srsly wtf right here + + if (--n === 0) { + self.matches[index] = set + cb() + } + }) + }) +} + +Glob.prototype._mark = function (p) { + return common.mark(this, p) +} + +Glob.prototype._makeAbs = function (f) { + return common.makeAbs(this, f) +} + +Glob.prototype.abort = function () { + this.aborted = true + this.emit('abort') +} + +Glob.prototype.pause = function () { + if (!this.paused) { + this.paused = true + this.emit('pause') + } +} + +Glob.prototype.resume = function () { + if (this.paused) { + this.emit('resume') + this.paused = false + if (this._emitQueue.length) { + var eq = this._emitQueue.slice(0) + this._emitQueue.length = 0 + for (var i = 0; i < eq.length; i ++) { + var e = eq[i] + this._emitMatch(e[0], e[1]) + } + } + if (this._processQueue.length) { + var pq = this._processQueue.slice(0) + this._processQueue.length = 0 + for (var i = 0; i < pq.length; i ++) { + var p = pq[i] + this._processing-- + this._process(p[0], p[1], p[2], p[3]) + } + } + } +} + +Glob.prototype._process = function (pattern, index, inGlobStar, cb) { + assert(this instanceof Glob) + assert(typeof cb === 'function') + + if (this.aborted) + return + + this._processing++ + if (this.paused) { + this._processQueue.push([pattern, index, inGlobStar, cb]) + return + } + + //console.error('PROCESS %d', this._processing, pattern) + + // Get the first [n] parts of pattern that are all strings. + var n = 0 + while (typeof pattern[n] === 'string') { + n ++ + } + // now n is the index of the first one that is *not* a string. 
+ + // see if there's anything else + var prefix + switch (n) { + // if not, then this is rather simple + case pattern.length: + this._processSimple(pattern.join('/'), index, cb) + return + + case 0: + // pattern *starts* with some non-trivial item. + // going to readdir(cwd), but not include the prefix in matches. + prefix = null + break + + default: + // pattern has some string bits in the front. + // whatever it starts with, whether that's 'absolute' like /foo/bar, + // or 'relative' like '../baz' + prefix = pattern.slice(0, n).join('/') + break + } + + var remain = pattern.slice(n) + + // get the list of entries. + var read + if (prefix === null) + read = '.' + else if (isAbsolute(prefix) || isAbsolute(pattern.join('/'))) { + if (!prefix || !isAbsolute(prefix)) + prefix = '/' + prefix + read = prefix + } else + read = prefix + + var abs = this._makeAbs(read) + + //if ignored, skip _processing + if (childrenIgnored(this, read)) + return cb() + + var isGlobStar = remain[0] === minimatch.GLOBSTAR + if (isGlobStar) + this._processGlobStar(prefix, read, abs, remain, index, inGlobStar, cb) + else + this._processReaddir(prefix, read, abs, remain, index, inGlobStar, cb) +} + +Glob.prototype._processReaddir = function (prefix, read, abs, remain, index, inGlobStar, cb) { + var self = this + this._readdir(abs, inGlobStar, function (er, entries) { + return self._processReaddir2(prefix, read, abs, remain, index, inGlobStar, entries, cb) + }) +} + +Glob.prototype._processReaddir2 = function (prefix, read, abs, remain, index, inGlobStar, entries, cb) { + + // if the abs isn't a dir, then nothing can match! + if (!entries) + return cb() + + // It will only match dot entries if it starts with a dot, or if + // dot is set. Stuff like @(.foo|.bar) isn't allowed. + var pn = remain[0] + var negate = !!this.minimatch.negate + var rawGlob = pn._glob + var dotOk = this.dot || rawGlob.charAt(0) === '.' + + var matchedEntries = [] + for (var i = 0; i < entries.length; i++) { + var e = entries[i] + if (e.charAt(0) !== '.' || dotOk) { + var m + if (negate && !prefix) { + m = !e.match(pn) + } else { + m = e.match(pn) + } + if (m) + matchedEntries.push(e) + } + } + + //console.error('prd2', prefix, entries, remain[0]._glob, matchedEntries) + + var len = matchedEntries.length + // If there are no matched entries, then nothing matches. + if (len === 0) + return cb() + + // if this is the last remaining pattern bit, then no need for + // an additional stat *unless* the user has specified mark or + // stat explicitly. We know they exist, since readdir returned + // them. + + if (remain.length === 1 && !this.mark && !this.stat) { + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + if (prefix) { + if (prefix !== '/') + e = prefix + '/' + e + else + e = prefix + e + } + + if (e.charAt(0) === '/' && !this.nomount) { + e = path.join(this.root, e) + } + this._emitMatch(index, e) + } + // This was the last one, and no stats were needed + return cb() + } + + // now test all matched entries as stand-ins for that part + // of the pattern. 
+ remain.shift() + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + var newPattern + if (prefix) { + if (prefix !== '/') + e = prefix + '/' + e + else + e = prefix + e + } + this._process([e].concat(remain), index, inGlobStar, cb) + } + cb() +} + +Glob.prototype._emitMatch = function (index, e) { + if (this.aborted) + return + + if (this.matches[index][e]) + return + + if (isIgnored(this, e)) + return + + if (this.paused) { + this._emitQueue.push([index, e]) + return + } + + var abs = this._makeAbs(e) + + if (this.nodir) { + var c = this.cache[abs] + if (c === 'DIR' || Array.isArray(c)) + return + } + + if (this.mark) + e = this._mark(e) + + this.matches[index][e] = true + + var st = this.statCache[abs] + if (st) + this.emit('stat', e, st) + + this.emit('match', e) +} + +Glob.prototype._readdirInGlobStar = function (abs, cb) { + if (this.aborted) + return + + // follow all symlinked directories forever + // just proceed as if this is a non-globstar situation + if (this.follow) + return this._readdir(abs, false, cb) + + var lstatkey = 'lstat\0' + abs + var self = this + var lstatcb = inflight(lstatkey, lstatcb_) + + if (lstatcb) + fs.lstat(abs, lstatcb) + + function lstatcb_ (er, lstat) { + if (er) + return cb() + + var isSym = lstat.isSymbolicLink() + self.symlinks[abs] = isSym + + // If it's not a symlink or a dir, then it's definitely a regular file. + // don't bother doing a readdir in that case. + if (!isSym && !lstat.isDirectory()) { + self.cache[abs] = 'FILE' + cb() + } else + self._readdir(abs, false, cb) + } +} + +Glob.prototype._readdir = function (abs, inGlobStar, cb) { + if (this.aborted) + return + + cb = inflight('readdir\0'+abs+'\0'+inGlobStar, cb) + if (!cb) + return + + //console.error('RD %j %j', +inGlobStar, abs) + if (inGlobStar && !ownProp(this.symlinks, abs)) + return this._readdirInGlobStar(abs, cb) + + if (ownProp(this.cache, abs)) { + var c = this.cache[abs] + if (!c || c === 'FILE') + return cb() + + if (Array.isArray(c)) + return cb(null, c) + } + + var self = this + fs.readdir(abs, readdirCb(this, abs, cb)) +} + +function readdirCb (self, abs, cb) { + return function (er, entries) { + if (er) + self._readdirError(abs, er, cb) + else + self._readdirEntries(abs, entries, cb) + } +} + +Glob.prototype._readdirEntries = function (abs, entries, cb) { + if (this.aborted) + return + + // if we haven't asked to stat everything, then just + // assume that everything in there exists, so we can avoid + // having to stat it a second time. + if (!this.mark && !this.stat) { + for (var i = 0; i < entries.length; i ++) { + var e = entries[i] + if (abs === '/') + e = abs + e + else + e = abs + '/' + e + this.cache[e] = true + } + } + + this.cache[abs] = entries + return cb(null, entries) +} + +Glob.prototype._readdirError = function (f, er, cb) { + if (this.aborted) + return + + // handle errors, and cache the information + switch (er.code) { + case 'ENOTSUP': // https://github.com/isaacs/node-glob/issues/205 + case 'ENOTDIR': // totally normal. means it *does* exist. + this.cache[this._makeAbs(f)] = 'FILE' + break + + case 'ENOENT': // not terribly unusual + case 'ELOOP': + case 'ENAMETOOLONG': + case 'UNKNOWN': + this.cache[this._makeAbs(f)] = false + break + + default: // some unusual error. Treat as failure. 
+ this.cache[this._makeAbs(f)] = false + if (this.strict) { + this.emit('error', er) + // If the error is handled, then we abort + // if not, we threw out of here + this.abort() + } + if (!this.silent) + console.error('glob error', er) + break + } + + return cb() +} + +Glob.prototype._processGlobStar = function (prefix, read, abs, remain, index, inGlobStar, cb) { + var self = this + this._readdir(abs, inGlobStar, function (er, entries) { + self._processGlobStar2(prefix, read, abs, remain, index, inGlobStar, entries, cb) + }) +} + + +Glob.prototype._processGlobStar2 = function (prefix, read, abs, remain, index, inGlobStar, entries, cb) { + //console.error('pgs2', prefix, remain[0], entries) + + // no entries means not a dir, so it can never have matches + // foo.txt/** doesn't match foo.txt + if (!entries) + return cb() + + // test without the globstar, and with every child both below + // and replacing the globstar. + var remainWithoutGlobStar = remain.slice(1) + var gspref = prefix ? [ prefix ] : [] + var noGlobStar = gspref.concat(remainWithoutGlobStar) + + // the noGlobStar pattern exits the inGlobStar state + this._process(noGlobStar, index, false, cb) + + var isSym = this.symlinks[abs] + var len = entries.length + + // If it's a symlink, and we're in a globstar, then stop + if (isSym && inGlobStar) + return cb() + + for (var i = 0; i < len; i++) { + var e = entries[i] + if (e.charAt(0) === '.' && !this.dot) + continue + + // these two cases enter the inGlobStar state + var instead = gspref.concat(entries[i], remainWithoutGlobStar) + this._process(instead, index, true, cb) + + var below = gspref.concat(entries[i], remain) + this._process(below, index, true, cb) + } + + cb() +} + +Glob.prototype._processSimple = function (prefix, index, cb) { + // XXX review this. Shouldn't it be doing the mounting etc + // before doing stat? kinda weird? + var self = this + this._stat(prefix, function (er, exists) { + self._processSimple2(prefix, index, er, exists, cb) + }) +} +Glob.prototype._processSimple2 = function (prefix, index, er, exists, cb) { + + //console.error('ps2', prefix, exists) + + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + // If it doesn't exist, then just mark the lack of results + if (!exists) + return cb() + + if (prefix && isAbsolute(prefix) && !this.nomount) { + var trail = /[\/\\]$/.test(prefix) + if (prefix.charAt(0) === '/') { + prefix = path.join(this.root, prefix) + } else { + prefix = path.resolve(this.root, prefix) + if (trail) + prefix += '/' + } + } + + if (process.platform === 'win32') + prefix = prefix.replace(/\\/g, '/') + + // Mark this as a match + this._emitMatch(index, prefix) + cb() +} + +// Returns either 'DIR', 'FILE', or false +Glob.prototype._stat = function (f, cb) { + var abs = this._makeAbs(f) + var needDir = f.slice(-1) === '/' + + if (f.length > this.maxLength) + return cb() + + if (!this.stat && ownProp(this.cache, abs)) { + var c = this.cache[abs] + + if (Array.isArray(c)) + c = 'DIR' + + // It exists, but maybe not how we need it + if (!needDir || c === 'DIR') + return cb(null, c) + + if (needDir && c === 'FILE') + return cb() + + // otherwise we have to stat, because maybe c=true + // if we know it exists, but not what it is. + } + + var exists + var stat = this.statCache[abs] + if (stat !== undefined) { + if (stat === false) + return cb(null, stat) + else { + var type = stat.isDirectory() ? 
'DIR' : 'FILE' + if (needDir && type === 'FILE') + return cb() + else + return cb(null, type, stat) + } + } + + var self = this + var statcb = inflight('stat\0' + abs, lstatcb_) + if (statcb) + fs.lstat(abs, statcb) + + function lstatcb_ (er, lstat) { + if (lstat && lstat.isSymbolicLink()) { + // If it's a symlink, then treat it as the target, unless + // the target does not exist, then treat it as a file. + return fs.stat(abs, function (er, stat) { + if (er) + self._stat2(f, abs, null, lstat, cb) + else + self._stat2(f, abs, er, stat, cb) + }) + } else { + self._stat2(f, abs, er, lstat, cb) + } + } +} + +Glob.prototype._stat2 = function (f, abs, er, stat, cb) { + if (er) { + this.statCache[abs] = false + return cb() + } + + var needDir = f.slice(-1) === '/' + this.statCache[abs] = stat + + if (abs.slice(-1) === '/' && !stat.isDirectory()) + return cb(null, false, stat) + + var c = stat.isDirectory() ? 'DIR' : 'FILE' + this.cache[abs] = this.cache[abs] || c + + if (needDir && c !== 'DIR') + return cb() + + return cb(null, c, stat) +} + +}).call(this,require('_process')) +},{"./common.js":15,"./sync.js":17,"_process":24,"assert":9,"events":14,"fs":12,"inflight":18,"inherits":19,"minimatch":20,"once":21,"path":22,"path-is-absolute":23,"util":28}],17:[function(require,module,exports){ +(function (process){ +module.exports = globSync +globSync.GlobSync = GlobSync + +var fs = require('fs') +var minimatch = require('minimatch') +var Minimatch = minimatch.Minimatch +var Glob = require('./glob.js').Glob +var util = require('util') +var path = require('path') +var assert = require('assert') +var isAbsolute = require('path-is-absolute') +var common = require('./common.js') +var alphasort = common.alphasort +var alphasorti = common.alphasorti +var setopts = common.setopts +var ownProp = common.ownProp +var childrenIgnored = common.childrenIgnored + +function globSync (pattern, options) { + if (typeof options === 'function' || arguments.length === 3) + throw new TypeError('callback provided to sync glob\n'+ + 'See: https://github.com/isaacs/node-glob/issues/167') + + return new GlobSync(pattern, options).found +} + +function GlobSync (pattern, options) { + if (!pattern) + throw new Error('must provide pattern') + + if (typeof options === 'function' || arguments.length === 3) + throw new TypeError('callback provided to sync glob\n'+ + 'See: https://github.com/isaacs/node-glob/issues/167') + + if (!(this instanceof GlobSync)) + return new GlobSync(pattern, options) + + setopts(this, pattern, options) + + if (this.noprocess) + return this + + var n = this.minimatch.set.length + this.matches = new Array(n) + for (var i = 0; i < n; i ++) { + this._process(this.minimatch.set[i], i, false) + } + this._finish() +} + +GlobSync.prototype._finish = function () { + assert(this instanceof GlobSync) + if (this.realpath) { + var self = this + this.matches.forEach(function (matchset, index) { + var set = self.matches[index] = Object.create(null) + for (var p in matchset) { + try { + p = self._makeAbs(p) + var real = fs.realpathSync(p, self.realpathCache) + set[real] = true + } catch (er) { + if (er.syscall === 'stat') + set[self._makeAbs(p)] = true + else + throw er + } + } + }) + } + common.finish(this) +} + + +GlobSync.prototype._process = function (pattern, index, inGlobStar) { + assert(this instanceof GlobSync) + + // Get the first [n] parts of pattern that are all strings. + var n = 0 + while (typeof pattern[n] === 'string') { + n ++ + } + // now n is the index of the first one that is *not* a string. 
+ + // See if there's anything else + var prefix + switch (n) { + // if not, then this is rather simple + case pattern.length: + this._processSimple(pattern.join('/'), index) + return + + case 0: + // pattern *starts* with some non-trivial item. + // going to readdir(cwd), but not include the prefix in matches. + prefix = null + break + + default: + // pattern has some string bits in the front. + // whatever it starts with, whether that's 'absolute' like /foo/bar, + // or 'relative' like '../baz' + prefix = pattern.slice(0, n).join('/') + break + } + + var remain = pattern.slice(n) + + // get the list of entries. + var read + if (prefix === null) + read = '.' + else if (isAbsolute(prefix) || isAbsolute(pattern.join('/'))) { + if (!prefix || !isAbsolute(prefix)) + prefix = '/' + prefix + read = prefix + } else + read = prefix + + var abs = this._makeAbs(read) + + //if ignored, skip processing + if (childrenIgnored(this, read)) + return + + var isGlobStar = remain[0] === minimatch.GLOBSTAR + if (isGlobStar) + this._processGlobStar(prefix, read, abs, remain, index, inGlobStar) + else + this._processReaddir(prefix, read, abs, remain, index, inGlobStar) +} + + +GlobSync.prototype._processReaddir = function (prefix, read, abs, remain, index, inGlobStar) { + var entries = this._readdir(abs, inGlobStar) + + // if the abs isn't a dir, then nothing can match! + if (!entries) + return + + // It will only match dot entries if it starts with a dot, or if + // dot is set. Stuff like @(.foo|.bar) isn't allowed. + var pn = remain[0] + var negate = !!this.minimatch.negate + var rawGlob = pn._glob + var dotOk = this.dot || rawGlob.charAt(0) === '.' + + var matchedEntries = [] + for (var i = 0; i < entries.length; i++) { + var e = entries[i] + if (e.charAt(0) !== '.' || dotOk) { + var m + if (negate && !prefix) { + m = !e.match(pn) + } else { + m = e.match(pn) + } + if (m) + matchedEntries.push(e) + } + } + + var len = matchedEntries.length + // If there are no matched entries, then nothing matches. + if (len === 0) + return + + // if this is the last remaining pattern bit, then no need for + // an additional stat *unless* the user has specified mark or + // stat explicitly. We know they exist, since readdir returned + // them. + + if (remain.length === 1 && !this.mark && !this.stat) { + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + if (prefix) { + if (prefix.slice(-1) !== '/') + e = prefix + '/' + e + else + e = prefix + e + } + + if (e.charAt(0) === '/' && !this.nomount) { + e = path.join(this.root, e) + } + this.matches[index][e] = true + } + // This was the last one, and no stats were needed + return + } + + // now test all matched entries as stand-ins for that part + // of the pattern. 
+ remain.shift() + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + var newPattern + if (prefix) + newPattern = [prefix, e] + else + newPattern = [e] + this._process(newPattern.concat(remain), index, inGlobStar) + } +} + + +GlobSync.prototype._emitMatch = function (index, e) { + var abs = this._makeAbs(e) + if (this.mark) + e = this._mark(e) + + if (this.matches[index][e]) + return + + if (this.nodir) { + var c = this.cache[this._makeAbs(e)] + if (c === 'DIR' || Array.isArray(c)) + return + } + + this.matches[index][e] = true + if (this.stat) + this._stat(e) +} + + +GlobSync.prototype._readdirInGlobStar = function (abs) { + // follow all symlinked directories forever + // just proceed as if this is a non-globstar situation + if (this.follow) + return this._readdir(abs, false) + + var entries + var lstat + var stat + try { + lstat = fs.lstatSync(abs) + } catch (er) { + // lstat failed, doesn't exist + return null + } + + var isSym = lstat.isSymbolicLink() + this.symlinks[abs] = isSym + + // If it's not a symlink or a dir, then it's definitely a regular file. + // don't bother doing a readdir in that case. + if (!isSym && !lstat.isDirectory()) + this.cache[abs] = 'FILE' + else + entries = this._readdir(abs, false) + + return entries +} + +GlobSync.prototype._readdir = function (abs, inGlobStar) { + var entries + + if (inGlobStar && !ownProp(this.symlinks, abs)) + return this._readdirInGlobStar(abs) + + if (ownProp(this.cache, abs)) { + var c = this.cache[abs] + if (!c || c === 'FILE') + return null + + if (Array.isArray(c)) + return c + } + + try { + return this._readdirEntries(abs, fs.readdirSync(abs)) + } catch (er) { + this._readdirError(abs, er) + return null + } +} + +GlobSync.prototype._readdirEntries = function (abs, entries) { + // if we haven't asked to stat everything, then just + // assume that everything in there exists, so we can avoid + // having to stat it a second time. + if (!this.mark && !this.stat) { + for (var i = 0; i < entries.length; i ++) { + var e = entries[i] + if (abs === '/') + e = abs + e + else + e = abs + '/' + e + this.cache[e] = true + } + } + + this.cache[abs] = entries + + // mark and cache dir-ness + return entries +} + +GlobSync.prototype._readdirError = function (f, er) { + // handle errors, and cache the information + switch (er.code) { + case 'ENOTSUP': // https://github.com/isaacs/node-glob/issues/205 + case 'ENOTDIR': // totally normal. means it *does* exist. + this.cache[this._makeAbs(f)] = 'FILE' + break + + case 'ENOENT': // not terribly unusual + case 'ELOOP': + case 'ENAMETOOLONG': + case 'UNKNOWN': + this.cache[this._makeAbs(f)] = false + break + + default: // some unusual error. Treat as failure. + this.cache[this._makeAbs(f)] = false + if (this.strict) + throw er + if (!this.silent) + console.error('glob error', er) + break + } +} + +GlobSync.prototype._processGlobStar = function (prefix, read, abs, remain, index, inGlobStar) { + + var entries = this._readdir(abs, inGlobStar) + + // no entries means not a dir, so it can never have matches + // foo.txt/** doesn't match foo.txt + if (!entries) + return + + // test without the globstar, and with every child both below + // and replacing the globstar. + var remainWithoutGlobStar = remain.slice(1) + var gspref = prefix ? 
[ prefix ] : [] + var noGlobStar = gspref.concat(remainWithoutGlobStar) + + // the noGlobStar pattern exits the inGlobStar state + this._process(noGlobStar, index, false) + + var len = entries.length + var isSym = this.symlinks[abs] + + // If it's a symlink, and we're in a globstar, then stop + if (isSym && inGlobStar) + return + + for (var i = 0; i < len; i++) { + var e = entries[i] + if (e.charAt(0) === '.' && !this.dot) + continue + + // these two cases enter the inGlobStar state + var instead = gspref.concat(entries[i], remainWithoutGlobStar) + this._process(instead, index, true) + + var below = gspref.concat(entries[i], remain) + this._process(below, index, true) + } +} + +GlobSync.prototype._processSimple = function (prefix, index) { + // XXX review this. Shouldn't it be doing the mounting etc + // before doing stat? kinda weird? + var exists = this._stat(prefix) + + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + // If it doesn't exist, then just mark the lack of results + if (!exists) + return + + if (prefix && isAbsolute(prefix) && !this.nomount) { + var trail = /[\/\\]$/.test(prefix) + if (prefix.charAt(0) === '/') { + prefix = path.join(this.root, prefix) + } else { + prefix = path.resolve(this.root, prefix) + if (trail) + prefix += '/' + } + } + + if (process.platform === 'win32') + prefix = prefix.replace(/\\/g, '/') + + // Mark this as a match + this.matches[index][prefix] = true +} + +// Returns either 'DIR', 'FILE', or false +GlobSync.prototype._stat = function (f) { + var abs = this._makeAbs(f) + var needDir = f.slice(-1) === '/' + + if (f.length > this.maxLength) + return false + + if (!this.stat && ownProp(this.cache, abs)) { + var c = this.cache[abs] + + if (Array.isArray(c)) + c = 'DIR' + + // It exists, but maybe not how we need it + if (!needDir || c === 'DIR') + return c + + if (needDir && c === 'FILE') + return false + + // otherwise we have to stat, because maybe c=true + // if we know it exists, but not what it is. + } + + var exists + var stat = this.statCache[abs] + if (!stat) { + var lstat + try { + lstat = fs.lstatSync(abs) + } catch (er) { + return false + } + + if (lstat.isSymbolicLink()) { + try { + stat = fs.statSync(abs) + } catch (er) { + stat = lstat + } + } else { + stat = lstat + } + } + + this.statCache[abs] = stat + + var c = stat.isDirectory() ? 'DIR' : 'FILE' + this.cache[abs] = this.cache[abs] || c + + if (needDir && c !== 'DIR') + return false + + return c +} + +GlobSync.prototype._mark = function (p) { + return common.mark(this, p) +} + +GlobSync.prototype._makeAbs = function (f) { + return common.makeAbs(this, f) +} + +}).call(this,require('_process')) +},{"./common.js":15,"./glob.js":16,"_process":24,"assert":9,"fs":12,"minimatch":20,"path":22,"path-is-absolute":23,"util":28}],18:[function(require,module,exports){ +(function (process){ +var wrappy = require('wrappy') +var reqs = Object.create(null) +var once = require('once') + +module.exports = wrappy(inflight) + +function inflight (key, cb) { + if (reqs[key]) { + reqs[key].push(cb) + return null + } else { + reqs[key] = [cb] + return makeres(key) + } +} + +function makeres (key) { + return once(function RES () { + var cbs = reqs[key] + var len = cbs.length + var args = slice(arguments) + + // XXX It's somewhat ambiguous whether a new callback added in this + // pass should be queued for later execution if something in the + // list of callbacks throws, or if it should just be discarded. 
+ // However, it's such an edge case that it hardly matters, and either + // choice is likely as surprising as the other. + // As it happens, we do go ahead and schedule it for later execution. + try { + for (var i = 0; i < len; i++) { + cbs[i].apply(null, args) + } + } finally { + if (cbs.length > len) { + // added more in the interim. + // de-zalgo, just in case, but don't call again. + cbs.splice(0, len) + process.nextTick(function () { + RES.apply(null, args) + }) + } else { + delete reqs[key] + } + } + }) +} + +function slice (args) { + var length = args.length + var array = [] + + for (var i = 0; i < length; i++) array[i] = args[i] + return array +} + +}).call(this,require('_process')) +},{"_process":24,"once":21,"wrappy":29}],19:[function(require,module,exports){ +if (typeof Object.create === 'function') { + // implementation from standard node.js 'util' module + module.exports = function inherits(ctor, superCtor) { + ctor.super_ = superCtor + ctor.prototype = Object.create(superCtor.prototype, { + constructor: { + value: ctor, + enumerable: false, + writable: true, + configurable: true + } + }); + }; +} else { + // old school shim for old browsers + module.exports = function inherits(ctor, superCtor) { + ctor.super_ = superCtor + var TempCtor = function () {} + TempCtor.prototype = superCtor.prototype + ctor.prototype = new TempCtor() + ctor.prototype.constructor = ctor + } +} + +},{}],20:[function(require,module,exports){ +module.exports = minimatch +minimatch.Minimatch = Minimatch + +var path = { sep: '/' } +try { + path = require('path') +} catch (er) {} + +var GLOBSTAR = minimatch.GLOBSTAR = Minimatch.GLOBSTAR = {} +var expand = require('brace-expansion') + +var plTypes = { + '!': { open: '(?:(?!(?:', close: '))[^/]*?)'}, + '?': { open: '(?:', close: ')?' }, + '+': { open: '(?:', close: ')+' }, + '*': { open: '(?:', close: ')*' }, + '@': { open: '(?:', close: ')' } +} + +// any single thing other than / +// don't need to escape / when using new RegExp() +var qmark = '[^/]' + +// * => any number of characters +var star = qmark + '*?' + +// ** when dots are allowed. Anything goes, except .. and . +// not (^ or / followed by one or two dots followed by $ or /), +// followed by anything, any number of times. +var twoStarDot = '(?:(?!(?:\\\/|^)(?:\\.{1,2})($|\\\/)).)*?' + +// not a ^ or / followed by a dot, +// followed by anything, any number of times. +var twoStarNoDot = '(?:(?!(?:\\\/|^)\\.).)*?' + +// characters that need to be escaped in RegExp. +var reSpecials = charSet('().*{}+?[]^$\\!') + +// "abc" -> { a:true, b:true, c:true } +function charSet (s) { + return s.split('').reduce(function (set, c) { + set[c] = true + return set + }, {}) +} + +// normalizes slashes. 
+var slashSplit = /\/+/ + +minimatch.filter = filter +function filter (pattern, options) { + options = options || {} + return function (p, i, list) { + return minimatch(p, pattern, options) + } +} + +function ext (a, b) { + a = a || {} + b = b || {} + var t = {} + Object.keys(b).forEach(function (k) { + t[k] = b[k] + }) + Object.keys(a).forEach(function (k) { + t[k] = a[k] + }) + return t +} + +minimatch.defaults = function (def) { + if (!def || !Object.keys(def).length) return minimatch + + var orig = minimatch + + var m = function minimatch (p, pattern, options) { + return orig.minimatch(p, pattern, ext(def, options)) + } + + m.Minimatch = function Minimatch (pattern, options) { + return new orig.Minimatch(pattern, ext(def, options)) + } + + return m +} + +Minimatch.defaults = function (def) { + if (!def || !Object.keys(def).length) return Minimatch + return minimatch.defaults(def).Minimatch +} + +function minimatch (p, pattern, options) { + if (typeof pattern !== 'string') { + throw new TypeError('glob pattern string required') + } + + if (!options) options = {} + + // shortcut: comments match nothing. + if (!options.nocomment && pattern.charAt(0) === '#') { + return false + } + + // "" only matches "" + if (pattern.trim() === '') return p === '' + + return new Minimatch(pattern, options).match(p) +} + +function Minimatch (pattern, options) { + if (!(this instanceof Minimatch)) { + return new Minimatch(pattern, options) + } + + if (typeof pattern !== 'string') { + throw new TypeError('glob pattern string required') + } + + if (!options) options = {} + pattern = pattern.trim() + + // windows support: need to use /, not \ + if (path.sep !== '/') { + pattern = pattern.split(path.sep).join('/') + } + + this.options = options + this.set = [] + this.pattern = pattern + this.regexp = null + this.negate = false + this.comment = false + this.empty = false + + // make the set of regexps etc. + this.make() +} + +Minimatch.prototype.debug = function () {} + +Minimatch.prototype.make = make +function make () { + // don't do it more than once. + if (this._made) return + + var pattern = this.pattern + var options = this.options + + // empty patterns and comments match nothing. + if (!options.nocomment && pattern.charAt(0) === '#') { + this.comment = true + return + } + if (!pattern) { + this.empty = true + return + } + + // step 1: figure out negation, etc. + this.parseNegate() + + // step 2: expand braces + var set = this.globSet = this.braceExpand() + + if (options.debug) this.debug = console.error + + this.debug(this.pattern, set) + + // step 3: now we have a set, so turn each one into a series of path-portion + // matching patterns. + // These will be regexps, except in the case of "**", which is + // set to the GLOBSTAR object for globstar behavior, + // and will not contain any / characters + set = this.globParts = set.map(function (s) { + return s.split(slashSplit) + }) + + this.debug(this.pattern, set) + + // glob --> regexps + set = set.map(function (s, si, set) { + return s.map(this.parse, this) + }, this) + + this.debug(this.pattern, set) + + // filter out everything that didn't compile properly. 
+ set = set.filter(function (s) { + return s.indexOf(false) === -1 + }) + + this.debug(this.pattern, set) + + this.set = set +} + +Minimatch.prototype.parseNegate = parseNegate +function parseNegate () { + var pattern = this.pattern + var negate = false + var options = this.options + var negateOffset = 0 + + if (options.nonegate) return + + for (var i = 0, l = pattern.length + ; i < l && pattern.charAt(i) === '!' + ; i++) { + negate = !negate + negateOffset++ + } + + if (negateOffset) this.pattern = pattern.substr(negateOffset) + this.negate = negate +} + +// Brace expansion: +// a{b,c}d -> abd acd +// a{b,}c -> abc ac +// a{0..3}d -> a0d a1d a2d a3d +// a{b,c{d,e}f}g -> abg acdfg acefg +// a{b,c}d{e,f}g -> abdeg acdeg abdeg abdfg +// +// Invalid sets are not expanded. +// a{2..}b -> a{2..}b +// a{b}c -> a{b}c +minimatch.braceExpand = function (pattern, options) { + return braceExpand(pattern, options) +} + +Minimatch.prototype.braceExpand = braceExpand + +function braceExpand (pattern, options) { + if (!options) { + if (this instanceof Minimatch) { + options = this.options + } else { + options = {} + } + } + + pattern = typeof pattern === 'undefined' + ? this.pattern : pattern + + if (typeof pattern === 'undefined') { + throw new TypeError('undefined pattern') + } + + if (options.nobrace || + !pattern.match(/\{.*\}/)) { + // shortcut. no need to expand. + return [pattern] + } + + return expand(pattern) +} + +// parse a component of the expanded set. +// At this point, no pattern may contain "/" in it +// so we're going to return a 2d array, where each entry is the full +// pattern, split on '/', and then turned into a regular expression. +// A regexp is made at the end which joins each array with an +// escaped /, and another full one which joins each regexp with |. +// +// Following the lead of Bash 4.1, note that "**" only has special meaning +// when it is the *only* thing in a path portion. Otherwise, any series +// of * is equivalent to a single *. Globstar behavior is enabled by +// default, and can be disabled by setting options.noglobstar. +Minimatch.prototype.parse = parse +var SUBPARSE = {} +function parse (pattern, isSub) { + if (pattern.length > 1024 * 64) { + throw new TypeError('pattern is too long') + } + + var options = this.options + + // shortcuts + if (!options.noglobstar && pattern === '**') return GLOBSTAR + if (pattern === '') return '' + + var re = '' + var hasMagic = !!options.nocase + var escaping = false + // ? => one single character + var patternListStack = [] + var negativeLists = [] + var stateChar + var inClass = false + var reClassStart = -1 + var classStart = -1 + // . and .. never match anything that doesn't start with ., + // even when options.dot is set. + var patternStart = pattern.charAt(0) === '.' ? '' // anything + // not (start or / followed by . or .. followed by / or end) + : options.dot ? '(?!(?:^|\\\/)\\.{1,2}(?:$|\\\/))' + : '(?!\\.)' + var self = this + + function clearStateChar () { + if (stateChar) { + // we had some state-tracking character + // that wasn't consumed by this pass. + switch (stateChar) { + case '*': + re += star + hasMagic = true + break + case '?': + re += qmark + hasMagic = true + break + default: + re += '\\' + stateChar + break + } + self.debug('clearStateChar %j %j', stateChar, re) + stateChar = false + } + } + + for (var i = 0, len = pattern.length, c + ; (i < len) && (c = pattern.charAt(i)) + ; i++) { + this.debug('%s\t%s %s %j', pattern, i, re, c) + + // skip over any that are escaped. 
+ if (escaping && reSpecials[c]) { + re += '\\' + c + escaping = false + continue + } + + switch (c) { + case '/': + // completely not allowed, even escaped. + // Should already be path-split by now. + return false + + case '\\': + clearStateChar() + escaping = true + continue + + // the various stateChar values + // for the "extglob" stuff. + case '?': + case '*': + case '+': + case '@': + case '!': + this.debug('%s\t%s %s %j <-- stateChar', pattern, i, re, c) + + // all of those are literals inside a class, except that + // the glob [!a] means [^a] in regexp + if (inClass) { + this.debug(' in class') + if (c === '!' && i === classStart + 1) c = '^' + re += c + continue + } + + // if we already have a stateChar, then it means + // that there was something like ** or +? in there. + // Handle the stateChar, then proceed with this one. + self.debug('call clearStateChar %j', stateChar) + clearStateChar() + stateChar = c + // if extglob is disabled, then +(asdf|foo) isn't a thing. + // just clear the statechar *now*, rather than even diving into + // the patternList stuff. + if (options.noext) clearStateChar() + continue + + case '(': + if (inClass) { + re += '(' + continue + } + + if (!stateChar) { + re += '\\(' + continue + } + + patternListStack.push({ + type: stateChar, + start: i - 1, + reStart: re.length, + open: plTypes[stateChar].open, + close: plTypes[stateChar].close + }) + // negation is (?:(?!js)[^/]*) + re += stateChar === '!' ? '(?:(?!(?:' : '(?:' + this.debug('plType %j %j', stateChar, re) + stateChar = false + continue + + case ')': + if (inClass || !patternListStack.length) { + re += '\\)' + continue + } + + clearStateChar() + hasMagic = true + var pl = patternListStack.pop() + // negation is (?:(?!js)[^/]*) + // The others are (?:) + re += pl.close + if (pl.type === '!') { + negativeLists.push(pl) + } + pl.reEnd = re.length + continue + + case '|': + if (inClass || !patternListStack.length || escaping) { + re += '\\|' + escaping = false + continue + } + + clearStateChar() + re += '|' + continue + + // these are mostly the same in regexp and glob + case '[': + // swallow any state-tracking char before the [ + clearStateChar() + + if (inClass) { + re += '\\' + c + continue + } + + inClass = true + classStart = i + reClassStart = re.length + re += c + continue + + case ']': + // a right bracket shall lose its special + // meaning and represent itself in + // a bracket expression if it occurs + // first in the list. -- POSIX.2 2.8.3.2 + if (i === classStart + 1 || !inClass) { + re += '\\' + c + escaping = false + continue + } + + // handle the case where we left a class open. + // "[z-a]" is valid, equivalent to "\[z-a\]" + if (inClass) { + // split where the last [ was, make sure we don't have + // an invalid re. if so, re-walk the contents of the + // would-be class to re-translate any characters that + // were passed through as-is + // TODO: It would probably be faster to determine this + // without a try/catch and a new RegExp, but it's tricky + // to do safely. For now, this is safe and works. + var cs = pattern.substring(classStart + 1, i) + try { + RegExp('[' + cs + ']') + } catch (er) { + // not a valid class! + var sp = this.parse(cs, SUBPARSE) + re = re.substr(0, reClassStart) + '\\[' + sp[0] + '\\]' + hasMagic = hasMagic || sp[1] + inClass = false + continue + } + } + + // finish up the class. 
+ hasMagic = true + inClass = false + re += c + continue + + default: + // swallow any state char that wasn't consumed + clearStateChar() + + if (escaping) { + // no need + escaping = false + } else if (reSpecials[c] + && !(c === '^' && inClass)) { + re += '\\' + } + + re += c + + } // switch + } // for + + // handle the case where we left a class open. + // "[abc" is valid, equivalent to "\[abc" + if (inClass) { + // split where the last [ was, and escape it + // this is a huge pita. We now have to re-walk + // the contents of the would-be class to re-translate + // any characters that were passed through as-is + cs = pattern.substr(classStart + 1) + sp = this.parse(cs, SUBPARSE) + re = re.substr(0, reClassStart) + '\\[' + sp[0] + hasMagic = hasMagic || sp[1] + } + + // handle the case where we had a +( thing at the *end* + // of the pattern. + // each pattern list stack adds 3 chars, and we need to go through + // and escape any | chars that were passed through as-is for the regexp. + // Go through and escape them, taking care not to double-escape any + // | chars that were already escaped. + for (pl = patternListStack.pop(); pl; pl = patternListStack.pop()) { + var tail = re.slice(pl.reStart + pl.open.length) + this.debug('setting tail', re, pl) + // maybe some even number of \, then maybe 1 \, followed by a | + tail = tail.replace(/((?:\\{2}){0,64})(\\?)\|/g, function (_, $1, $2) { + if (!$2) { + // the | isn't already escaped, so escape it. + $2 = '\\' + } + + // need to escape all those slashes *again*, without escaping the + // one that we need for escaping the | character. As it works out, + // escaping an even number of slashes can be done by simply repeating + // it exactly after itself. That's why this trick works. + // + // I am sorry that you have to see this. + return $1 + $1 + $2 + '|' + }) + + this.debug('tail=%j\n %s', tail, tail, pl, re) + var t = pl.type === '*' ? star + : pl.type === '?' ? qmark + : '\\' + pl.type + + hasMagic = true + re = re.slice(0, pl.reStart) + t + '\\(' + tail + } + + // handle trailing things that only matter at the very end. + clearStateChar() + if (escaping) { + // trailing \\ + re += '\\\\' + } + + // only need to apply the nodot start if the re starts with + // something that could conceivably capture a dot + var addPatternStart = false + switch (re.charAt(0)) { + case '.': + case '[': + case '(': addPatternStart = true + } + + // Hack to work around lack of negative lookbehind in JS + // A pattern like: *.!(x).!(y|z) needs to ensure that a name + // like 'a.xyz.yz' doesn't match. So, the first negative + // lookahead, has to look ALL the way ahead, to the end of + // the pattern. + for (var n = negativeLists.length - 1; n > -1; n--) { + var nl = negativeLists[n] + + var nlBefore = re.slice(0, nl.reStart) + var nlFirst = re.slice(nl.reStart, nl.reEnd - 8) + var nlLast = re.slice(nl.reEnd - 8, nl.reEnd) + var nlAfter = re.slice(nl.reEnd) + + nlLast += nlAfter + + // Handle nested stuff like *(*.js|!(*.json)), where open parens + // mean that we should *not* include the ) in the bit that is considered + // "after" the negated section. 
+ var openParensBefore = nlBefore.split('(').length - 1 + var cleanAfter = nlAfter + for (i = 0; i < openParensBefore; i++) { + cleanAfter = cleanAfter.replace(/\)[+*?]?/, '') + } + nlAfter = cleanAfter + + var dollar = '' + if (nlAfter === '' && isSub !== SUBPARSE) { + dollar = '$' + } + var newRe = nlBefore + nlFirst + nlAfter + dollar + nlLast + re = newRe + } + + // if the re is not "" at this point, then we need to make sure + // it doesn't match against an empty path part. + // Otherwise a/* will match a/, which it should not. + if (re !== '' && hasMagic) { + re = '(?=.)' + re + } + + if (addPatternStart) { + re = patternStart + re + } + + // parsing just a piece of a larger pattern. + if (isSub === SUBPARSE) { + return [re, hasMagic] + } + + // skip the regexp for non-magical patterns + // unescape anything in it, though, so that it'll be + // an exact match against a file etc. + if (!hasMagic) { + return globUnescape(pattern) + } + + var flags = options.nocase ? 'i' : '' + try { + var regExp = new RegExp('^' + re + '$', flags) + } catch (er) { + // If it was an invalid regular expression, then it can't match + // anything. This trick looks for a character after the end of + // the string, which is of course impossible, except in multi-line + // mode, but it's not a /m regex. + return new RegExp('$.') + } + + regExp._glob = pattern + regExp._src = re + + return regExp +} + +minimatch.makeRe = function (pattern, options) { + return new Minimatch(pattern, options || {}).makeRe() +} + +Minimatch.prototype.makeRe = makeRe +function makeRe () { + if (this.regexp || this.regexp === false) return this.regexp + + // at this point, this.set is a 2d array of partial + // pattern strings, or "**". + // + // It's better to use .match(). This function shouldn't + // be used, really, but it's pretty convenient sometimes, + // when you just want to work with a regex. + var set = this.set + + if (!set.length) { + this.regexp = false + return this.regexp + } + var options = this.options + + var twoStar = options.noglobstar ? star + : options.dot ? twoStarDot + : twoStarNoDot + var flags = options.nocase ? 'i' : '' + + var re = set.map(function (pattern) { + return pattern.map(function (p) { + return (p === GLOBSTAR) ? twoStar + : (typeof p === 'string') ? regExpEscape(p) + : p._src + }).join('\\\/') + }).join('|') + + // must match entire pattern + // ending in a * or ** will make it less strict. + re = '^(?:' + re + ')$' + + // can match anything, as long as it's not this. + if (this.negate) re = '^(?!' + re + ').*$' + + try { + this.regexp = new RegExp(re, flags) + } catch (ex) { + this.regexp = false + } + return this.regexp +} + +minimatch.match = function (list, pattern, options) { + options = options || {} + var mm = new Minimatch(pattern, options) + list = list.filter(function (f) { + return mm.match(f) + }) + if (mm.options.nonull && !list.length) { + list.push(pattern) + } + return list +} + +Minimatch.prototype.match = match +function match (f, partial) { + this.debug('match', f, this.pattern) + // short-circuit in the case of busted things. + // comments, etc. + if (this.comment) return false + if (this.empty) return f === '' + + if (f === '/' && partial) return true + + var options = this.options + + // windows: need to use /, not \ + if (path.sep !== '/') { + f = f.split(path.sep).join('/') + } + + // treat the test path as a set of pathparts. 
+ f = f.split(slashSplit) + this.debug(this.pattern, 'split', f) + + // just ONE of the pattern sets in this.set needs to match + // in order for it to be valid. If negating, then just one + // match means that we have failed. + // Either way, return on the first hit. + + var set = this.set + this.debug(this.pattern, 'set', set) + + // Find the basename of the path by looking for the last non-empty segment + var filename + var i + for (i = f.length - 1; i >= 0; i--) { + filename = f[i] + if (filename) break + } + + for (i = 0; i < set.length; i++) { + var pattern = set[i] + var file = f + if (options.matchBase && pattern.length === 1) { + file = [filename] + } + var hit = this.matchOne(file, pattern, partial) + if (hit) { + if (options.flipNegate) return true + return !this.negate + } + } + + // didn't get any hits. this is success if it's a negative + // pattern, failure otherwise. + if (options.flipNegate) return false + return this.negate +} + +// set partial to true to test if, for example, +// "/a/b" matches the start of "/*/b/*/d" +// Partial means, if you run out of file before you run +// out of pattern, then that's fine, as long as all +// the parts match. +Minimatch.prototype.matchOne = function (file, pattern, partial) { + var options = this.options + + this.debug('matchOne', + { 'this': this, file: file, pattern: pattern }) + + this.debug('matchOne', file.length, pattern.length) + + for (var fi = 0, + pi = 0, + fl = file.length, + pl = pattern.length + ; (fi < fl) && (pi < pl) + ; fi++, pi++) { + this.debug('matchOne loop') + var p = pattern[pi] + var f = file[fi] + + this.debug(pattern, p, f) + + // should be impossible. + // some invalid regexp stuff in the set. + if (p === false) return false + + if (p === GLOBSTAR) { + this.debug('GLOBSTAR', [pattern, p, f]) + + // "**" + // a/**/b/**/c would match the following: + // a/b/x/y/z/c + // a/x/y/z/b/c + // a/b/x/b/x/c + // a/b/c + // To do this, take the rest of the pattern after + // the **, and see if it would match the file remainder. + // If so, return success. + // If not, the ** "swallows" a segment, and try again. + // This is recursively awful. + // + // a/**/b/**/c matching a/b/x/y/z/c + // - a matches a + // - doublestar + // - matchOne(b/x/y/z/c, b/**/c) + // - b matches b + // - doublestar + // - matchOne(x/y/z/c, c) -> no + // - matchOne(y/z/c, c) -> no + // - matchOne(z/c, c) -> no + // - matchOne(c, c) yes, hit + var fr = fi + var pr = pi + 1 + if (pr === pl) { + this.debug('** at the end') + // a ** at the end will just swallow the rest. + // We have found a match. + // however, it will not swallow /.x, unless + // options.dot is set. + // . and .. are *never* matched by **, for explosively + // exponential reasons. + for (; fi < fl; fi++) { + if (file[fi] === '.' || file[fi] === '..' || + (!options.dot && file[fi].charAt(0) === '.')) return false + } + return true + } + + // ok, let's see if we can swallow whatever we can. + while (fr < fl) { + var swallowee = file[fr] + + this.debug('\nglobstar while', file, fr, pattern, pr, swallowee) + + // XXX remove this slice. Just pass the start index. + if (this.matchOne(file.slice(fr), pattern.slice(pr), partial)) { + this.debug('globstar found match!', fr, fl, swallowee) + // found a match. + return true + } else { + // can't swallow "." or ".." ever. + // can only swallow ".foo" when explicitly asked. + if (swallowee === '.' || swallowee === '..' 
|| + (!options.dot && swallowee.charAt(0) === '.')) { + this.debug('dot detected!', file, fr, pattern, pr) + break + } + + // ** swallows a segment, and continue. + this.debug('globstar swallow a segment, and continue') + fr++ + } + } + + // no match was found. + // However, in partial mode, we can't say this is necessarily over. + // If there's more *pattern* left, then + if (partial) { + // ran out of file + this.debug('\n>>> no match, partial?', file, fr, pattern, pr) + if (fr === fl) return true + } + return false + } + + // something other than ** + // non-magic patterns just have to match exactly + // patterns with magic have been turned into regexps. + var hit + if (typeof p === 'string') { + if (options.nocase) { + hit = f.toLowerCase() === p.toLowerCase() + } else { + hit = f === p + } + this.debug('string match', p, f, hit) + } else { + hit = f.match(p) + this.debug('pattern match', p, f, hit) + } + + if (!hit) return false + } + + // Note: ending in / means that we'll get a final "" + // at the end of the pattern. This can only match a + // corresponding "" at the end of the file. + // If the file ends in /, then it can only match a + // a pattern that ends in /, unless the pattern just + // doesn't have any more for it. But, a/b/ should *not* + // match "a/b/*", even though "" matches against the + // [^/]*? pattern, except in partial mode, where it might + // simply not be reached yet. + // However, a/b/ should still satisfy a/* + + // now either we fell off the end of the pattern, or we're done. + if (fi === fl && pi === pl) { + // ran out of pattern and filename at the same time. + // an exact hit! + return true + } else if (fi === fl) { + // ran out of file, but still had pattern left. + // this is ok if we're doing the match as part of + // a glob fs traversal. + return partial + } else if (pi === pl) { + // ran out of pattern, still have file left. + // this is only acceptable if we're on the very last + // empty segment of a file with a trailing slash. + // a/* should match a/b/ + var emptyFileEnd = (fi === fl - 1) && (file[fi] === '') + return emptyFileEnd + } + + // should be unreachable. + throw new Error('wtf?') +} + +// replace stuff like \* with * +function globUnescape (s) { + return s.replace(/\\(.)/g, '$1') +} + +function regExpEscape (s) { + return s.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&') +} + +},{"brace-expansion":11,"path":22}],21:[function(require,module,exports){ +var wrappy = require('wrappy') +module.exports = wrappy(once) +module.exports.strict = wrappy(onceStrict) + +once.proto = once(function () { + Object.defineProperty(Function.prototype, 'once', { + value: function () { + return once(this) + }, + configurable: true + }) + + Object.defineProperty(Function.prototype, 'onceStrict', { + value: function () { + return onceStrict(this) + }, + configurable: true + }) +}) + +function once (fn) { + var f = function () { + if (f.called) return f.value + f.called = true + return f.value = fn.apply(this, arguments) + } + f.called = false + return f +} + +function onceStrict (fn) { + var f = function () { + if (f.called) + throw new Error(f.onceError) + f.called = true + return f.value = fn.apply(this, arguments) + } + var name = fn.name || 'Function wrapped with `once`' + f.onceError = name + " shouldn't be called more than once" + f.called = false + return f +} + +},{"wrappy":29}],22:[function(require,module,exports){ +(function (process){ +// Copyright Joyent, Inc. and other Node contributors. 
+// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +// resolves . and .. elements in a path array with directory names there +// must be no slashes, empty elements, or device names (c:\) in the array +// (so also no leading and trailing slashes - it does not distinguish +// relative and absolute paths) +function normalizeArray(parts, allowAboveRoot) { + // if the path tries to go above the root, `up` ends up > 0 + var up = 0; + for (var i = parts.length - 1; i >= 0; i--) { + var last = parts[i]; + if (last === '.') { + parts.splice(i, 1); + } else if (last === '..') { + parts.splice(i, 1); + up++; + } else if (up) { + parts.splice(i, 1); + up--; + } + } + + // if the path is allowed to go above the root, restore leading ..s + if (allowAboveRoot) { + for (; up--; up) { + parts.unshift('..'); + } + } + + return parts; +} + +// Split a filename into [root, dir, basename, ext], unix version +// 'root' is just a slash, or nothing. +var splitPathRe = + /^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$/; +var splitPath = function(filename) { + return splitPathRe.exec(filename).slice(1); +}; + +// path.resolve([from ...], to) +// posix version +exports.resolve = function() { + var resolvedPath = '', + resolvedAbsolute = false; + + for (var i = arguments.length - 1; i >= -1 && !resolvedAbsolute; i--) { + var path = (i >= 0) ? arguments[i] : process.cwd(); + + // Skip empty and invalid entries + if (typeof path !== 'string') { + throw new TypeError('Arguments to path.resolve must be strings'); + } else if (!path) { + continue; + } + + resolvedPath = path + '/' + resolvedPath; + resolvedAbsolute = path.charAt(0) === '/'; + } + + // At this point the path should be resolved to a full absolute path, but + // handle relative paths to be safe (might happen when process.cwd() fails) + + // Normalize the path + resolvedPath = normalizeArray(filter(resolvedPath.split('/'), function(p) { + return !!p; + }), !resolvedAbsolute).join('/'); + + return ((resolvedAbsolute ? '/' : '') + resolvedPath) || '.'; +}; + +// path.normalize(path) +// posix version +exports.normalize = function(path) { + var isAbsolute = exports.isAbsolute(path), + trailingSlash = substr(path, -1) === '/'; + + // Normalize the path + path = normalizeArray(filter(path.split('/'), function(p) { + return !!p; + }), !isAbsolute).join('/'); + + if (!path && !isAbsolute) { + path = '.'; + } + if (path && trailingSlash) { + path += '/'; + } + + return (isAbsolute ? 
'/' : '') + path; +}; + +// posix version +exports.isAbsolute = function(path) { + return path.charAt(0) === '/'; +}; + +// posix version +exports.join = function() { + var paths = Array.prototype.slice.call(arguments, 0); + return exports.normalize(filter(paths, function(p, index) { + if (typeof p !== 'string') { + throw new TypeError('Arguments to path.join must be strings'); + } + return p; + }).join('/')); +}; + + +// path.relative(from, to) +// posix version +exports.relative = function(from, to) { + from = exports.resolve(from).substr(1); + to = exports.resolve(to).substr(1); + + function trim(arr) { + var start = 0; + for (; start < arr.length; start++) { + if (arr[start] !== '') break; + } + + var end = arr.length - 1; + for (; end >= 0; end--) { + if (arr[end] !== '') break; + } + + if (start > end) return []; + return arr.slice(start, end - start + 1); + } + + var fromParts = trim(from.split('/')); + var toParts = trim(to.split('/')); + + var length = Math.min(fromParts.length, toParts.length); + var samePartsLength = length; + for (var i = 0; i < length; i++) { + if (fromParts[i] !== toParts[i]) { + samePartsLength = i; + break; + } + } + + var outputParts = []; + for (var i = samePartsLength; i < fromParts.length; i++) { + outputParts.push('..'); + } + + outputParts = outputParts.concat(toParts.slice(samePartsLength)); + + return outputParts.join('/'); +}; + +exports.sep = '/'; +exports.delimiter = ':'; + +exports.dirname = function(path) { + var result = splitPath(path), + root = result[0], + dir = result[1]; + + if (!root && !dir) { + // No dirname whatsoever + return '.'; + } + + if (dir) { + // It has a dirname, strip trailing slash + dir = dir.substr(0, dir.length - 1); + } + + return root + dir; +}; + + +exports.basename = function(path, ext) { + var f = splitPath(path)[2]; + // TODO: make this comparison case-insensitive on windows? + if (ext && f.substr(-1 * ext.length) === ext) { + f = f.substr(0, f.length - ext.length); + } + return f; +}; + + +exports.extname = function(path) { + return splitPath(path)[3]; +}; + +function filter (xs, f) { + if (xs.filter) return xs.filter(f); + var res = []; + for (var i = 0; i < xs.length; i++) { + if (f(xs[i], i, xs)) res.push(xs[i]); + } + return res; +} + +// String.prototype.substr - negative index don't work in IE8 +var substr = 'ab'.substr(-1) === 'b' + ? function (str, start, len) { return str.substr(start, len) } + : function (str, start, len) { + if (start < 0) start = str.length + start; + return str.substr(start, len); + } +; + +}).call(this,require('_process')) +},{"_process":24}],23:[function(require,module,exports){ +(function (process){ +'use strict'; + +function posix(path) { + return path.charAt(0) === '/'; +} + +function win32(path) { + // https://github.com/nodejs/node/blob/b3fcc245fb25539909ef1d5eaa01dbf92e168633/lib/path.js#L56 + var splitDeviceRe = /^([a-zA-Z]:|[\\\/]{2}[^\\\/]+[\\\/]+[^\\\/]+)?([\\\/])?([\s\S]*?)$/; + var result = splitDeviceRe.exec(path); + var device = result[1] || ''; + var isUnc = Boolean(device && device.charAt(1) !== ':'); + + // UNC paths are always absolute + return Boolean(result[2] || isUnc); +} + +module.exports = process.platform === 'win32' ? 
win32 : posix; +module.exports.posix = posix; +module.exports.win32 = win32; + +}).call(this,require('_process')) +},{"_process":24}],24:[function(require,module,exports){ +// shim for using process in browser +var process = module.exports = {}; + +// cached from whatever global is present so that test runners that stub it +// don't break things. But we need to wrap it in a try catch in case it is +// wrapped in strict mode code which doesn't define any globals. It's inside a +// function because try/catches deoptimize in certain engines. + +var cachedSetTimeout; +var cachedClearTimeout; + +function defaultSetTimout() { + throw new Error('setTimeout has not been defined'); +} +function defaultClearTimeout () { + throw new Error('clearTimeout has not been defined'); +} +(function () { + try { + if (typeof setTimeout === 'function') { + cachedSetTimeout = setTimeout; + } else { + cachedSetTimeout = defaultSetTimout; + } + } catch (e) { + cachedSetTimeout = defaultSetTimout; + } + try { + if (typeof clearTimeout === 'function') { + cachedClearTimeout = clearTimeout; + } else { + cachedClearTimeout = defaultClearTimeout; + } + } catch (e) { + cachedClearTimeout = defaultClearTimeout; + } +} ()) +function runTimeout(fun) { + if (cachedSetTimeout === setTimeout) { + //normal enviroments in sane situations + return setTimeout(fun, 0); + } + // if setTimeout wasn't available but was latter defined + if ((cachedSetTimeout === defaultSetTimout || !cachedSetTimeout) && setTimeout) { + cachedSetTimeout = setTimeout; + return setTimeout(fun, 0); + } + try { + // when when somebody has screwed with setTimeout but no I.E. maddness + return cachedSetTimeout(fun, 0); + } catch(e){ + try { + // When we are in I.E. but the script has been evaled so I.E. doesn't trust the global object when called normally + return cachedSetTimeout.call(null, fun, 0); + } catch(e){ + // same as above but when it's a version of I.E. that must have the global object for 'this', hopfully our context correct otherwise it will throw a global error + return cachedSetTimeout.call(this, fun, 0); + } + } + + +} +function runClearTimeout(marker) { + if (cachedClearTimeout === clearTimeout) { + //normal enviroments in sane situations + return clearTimeout(marker); + } + // if clearTimeout wasn't available but was latter defined + if ((cachedClearTimeout === defaultClearTimeout || !cachedClearTimeout) && clearTimeout) { + cachedClearTimeout = clearTimeout; + return clearTimeout(marker); + } + try { + // when when somebody has screwed with setTimeout but no I.E. maddness + return cachedClearTimeout(marker); + } catch (e){ + try { + // When we are in I.E. but the script has been evaled so I.E. doesn't trust the global object when called normally + return cachedClearTimeout.call(null, marker); + } catch (e){ + // same as above but when it's a version of I.E. that must have the global object for 'this', hopfully our context correct otherwise it will throw a global error. + // Some versions of I.E. 
have different rules for clearTimeout vs setTimeout + return cachedClearTimeout.call(this, marker); + } + } + + + +} +var queue = []; +var draining = false; +var currentQueue; +var queueIndex = -1; + +function cleanUpNextTick() { + if (!draining || !currentQueue) { + return; + } + draining = false; + if (currentQueue.length) { + queue = currentQueue.concat(queue); + } else { + queueIndex = -1; + } + if (queue.length) { + drainQueue(); + } +} + +function drainQueue() { + if (draining) { + return; + } + var timeout = runTimeout(cleanUpNextTick); + draining = true; + + var len = queue.length; + while(len) { + currentQueue = queue; + queue = []; + while (++queueIndex < len) { + if (currentQueue) { + currentQueue[queueIndex].run(); + } + } + queueIndex = -1; + len = queue.length; + } + currentQueue = null; + draining = false; + runClearTimeout(timeout); +} + +process.nextTick = function (fun) { + var args = new Array(arguments.length - 1); + if (arguments.length > 1) { + for (var i = 1; i < arguments.length; i++) { + args[i - 1] = arguments[i]; + } + } + queue.push(new Item(fun, args)); + if (queue.length === 1 && !draining) { + runTimeout(drainQueue); + } +}; + +// v8 likes predictible objects +function Item(fun, array) { + this.fun = fun; + this.array = array; +} +Item.prototype.run = function () { + this.fun.apply(null, this.array); +}; +process.title = 'browser'; +process.browser = true; +process.env = {}; +process.argv = []; +process.version = ''; // empty string to avoid regexp issues +process.versions = {}; + +function noop() {} + +process.on = noop; +process.addListener = noop; +process.once = noop; +process.off = noop; +process.removeListener = noop; +process.removeAllListeners = noop; +process.emit = noop; +process.prependListener = noop; +process.prependOnceListener = noop; + +process.listeners = function (name) { return [] } + +process.binding = function (name) { + throw new Error('process.binding is not supported'); +}; + +process.cwd = function () { return '/' }; +process.chdir = function (dir) { + throw new Error('process.chdir is not supported'); +}; +process.umask = function() { return 0; }; + +},{}],25:[function(require,module,exports){ +// Underscore.js 1.8.3 +// http://underscorejs.org +// (c) 2009-2015 Jeremy Ashkenas, DocumentCloud and Investigative Reporters & Editors +// Underscore may be freely distributed under the MIT license. + +(function() { + + // Baseline setup + // -------------- + + // Establish the root object, `window` in the browser, or `exports` on the server. + var root = this; + + // Save the previous value of the `_` variable. + var previousUnderscore = root._; + + // Save bytes in the minified (but not gzipped) version: + var ArrayProto = Array.prototype, ObjProto = Object.prototype, FuncProto = Function.prototype; + + // Create quick reference variables for speed access to core prototypes. + var + push = ArrayProto.push, + slice = ArrayProto.slice, + toString = ObjProto.toString, + hasOwnProperty = ObjProto.hasOwnProperty; + + // All **ECMAScript 5** native function implementations that we hope to use + // are declared here. + var + nativeIsArray = Array.isArray, + nativeKeys = Object.keys, + nativeBind = FuncProto.bind, + nativeCreate = Object.create; + + // Naked function reference for surrogate-prototype-swapping. + var Ctor = function(){}; + + // Create a safe reference to the Underscore object for use below. 
+ var _ = function(obj) { + if (obj instanceof _) return obj; + if (!(this instanceof _)) return new _(obj); + this._wrapped = obj; + }; + + // Export the Underscore object for **Node.js**, with + // backwards-compatibility for the old `require()` API. If we're in + // the browser, add `_` as a global object. + if (typeof exports !== 'undefined') { + if (typeof module !== 'undefined' && module.exports) { + exports = module.exports = _; + } + exports._ = _; + } else { + root._ = _; + } + + // Current version. + _.VERSION = '1.8.3'; + + // Internal function that returns an efficient (for current engines) version + // of the passed-in callback, to be repeatedly applied in other Underscore + // functions. + var optimizeCb = function(func, context, argCount) { + if (context === void 0) return func; + switch (argCount == null ? 3 : argCount) { + case 1: return function(value) { + return func.call(context, value); + }; + case 2: return function(value, other) { + return func.call(context, value, other); + }; + case 3: return function(value, index, collection) { + return func.call(context, value, index, collection); + }; + case 4: return function(accumulator, value, index, collection) { + return func.call(context, accumulator, value, index, collection); + }; + } + return function() { + return func.apply(context, arguments); + }; + }; + + // A mostly-internal function to generate callbacks that can be applied + // to each element in a collection, returning the desired result — either + // identity, an arbitrary callback, a property matcher, or a property accessor. + var cb = function(value, context, argCount) { + if (value == null) return _.identity; + if (_.isFunction(value)) return optimizeCb(value, context, argCount); + if (_.isObject(value)) return _.matcher(value); + return _.property(value); + }; + _.iteratee = function(value, context) { + return cb(value, context, Infinity); + }; + + // An internal function for creating assigner functions. + var createAssigner = function(keysFunc, undefinedOnly) { + return function(obj) { + var length = arguments.length; + if (length < 2 || obj == null) return obj; + for (var index = 1; index < length; index++) { + var source = arguments[index], + keys = keysFunc(source), + l = keys.length; + for (var i = 0; i < l; i++) { + var key = keys[i]; + if (!undefinedOnly || obj[key] === void 0) obj[key] = source[key]; + } + } + return obj; + }; + }; + + // An internal function for creating a new object that inherits from another. + var baseCreate = function(prototype) { + if (!_.isObject(prototype)) return {}; + if (nativeCreate) return nativeCreate(prototype); + Ctor.prototype = prototype; + var result = new Ctor; + Ctor.prototype = null; + return result; + }; + + var property = function(key) { + return function(obj) { + return obj == null ? void 0 : obj[key]; + }; + }; + + // Helper for collection methods to determine whether a collection + // should be iterated as an array or as an object + // Related: http://people.mozilla.org/~jorendorff/es6-draft.html#sec-tolength + // Avoids a very nasty iOS 8 JIT bug on ARM-64. #2094 + var MAX_ARRAY_INDEX = Math.pow(2, 53) - 1; + var getLength = property('length'); + var isArrayLike = function(collection) { + var length = getLength(collection); + return typeof length == 'number' && length >= 0 && length <= MAX_ARRAY_INDEX; + }; + + // Collection Functions + // -------------------- + + // The cornerstone, an `each` implementation, aka `forEach`. + // Handles raw objects in addition to array-likes. 
Treats all + // sparse array-likes as if they were dense. + _.each = _.forEach = function(obj, iteratee, context) { + iteratee = optimizeCb(iteratee, context); + var i, length; + if (isArrayLike(obj)) { + for (i = 0, length = obj.length; i < length; i++) { + iteratee(obj[i], i, obj); + } + } else { + var keys = _.keys(obj); + for (i = 0, length = keys.length; i < length; i++) { + iteratee(obj[keys[i]], keys[i], obj); + } + } + return obj; + }; + + // Return the results of applying the iteratee to each element. + _.map = _.collect = function(obj, iteratee, context) { + iteratee = cb(iteratee, context); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length, + results = Array(length); + for (var index = 0; index < length; index++) { + var currentKey = keys ? keys[index] : index; + results[index] = iteratee(obj[currentKey], currentKey, obj); + } + return results; + }; + + // Create a reducing function iterating left or right. + function createReduce(dir) { + // Optimized iterator function as using arguments.length + // in the main function will deoptimize the, see #1991. + function iterator(obj, iteratee, memo, keys, index, length) { + for (; index >= 0 && index < length; index += dir) { + var currentKey = keys ? keys[index] : index; + memo = iteratee(memo, obj[currentKey], currentKey, obj); + } + return memo; + } + + return function(obj, iteratee, memo, context) { + iteratee = optimizeCb(iteratee, context, 4); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length, + index = dir > 0 ? 0 : length - 1; + // Determine the initial value if none is provided. + if (arguments.length < 3) { + memo = obj[keys ? keys[index] : index]; + index += dir; + } + return iterator(obj, iteratee, memo, keys, index, length); + }; + } + + // **Reduce** builds up a single result from a list of values, aka `inject`, + // or `foldl`. + _.reduce = _.foldl = _.inject = createReduce(1); + + // The right-associative version of reduce, also known as `foldr`. + _.reduceRight = _.foldr = createReduce(-1); + + // Return the first value which passes a truth test. Aliased as `detect`. + _.find = _.detect = function(obj, predicate, context) { + var key; + if (isArrayLike(obj)) { + key = _.findIndex(obj, predicate, context); + } else { + key = _.findKey(obj, predicate, context); + } + if (key !== void 0 && key !== -1) return obj[key]; + }; + + // Return all the elements that pass a truth test. + // Aliased as `select`. + _.filter = _.select = function(obj, predicate, context) { + var results = []; + predicate = cb(predicate, context); + _.each(obj, function(value, index, list) { + if (predicate(value, index, list)) results.push(value); + }); + return results; + }; + + // Return all the elements for which a truth test fails. + _.reject = function(obj, predicate, context) { + return _.filter(obj, _.negate(cb(predicate)), context); + }; + + // Determine whether all of the elements match a truth test. + // Aliased as `all`. + _.every = _.all = function(obj, predicate, context) { + predicate = cb(predicate, context); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length; + for (var index = 0; index < length; index++) { + var currentKey = keys ? keys[index] : index; + if (!predicate(obj[currentKey], currentKey, obj)) return false; + } + return true; + }; + + // Determine if at least one element in the object matches a truth test. + // Aliased as `any`. 
+ _.some = _.any = function(obj, predicate, context) { + predicate = cb(predicate, context); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length; + for (var index = 0; index < length; index++) { + var currentKey = keys ? keys[index] : index; + if (predicate(obj[currentKey], currentKey, obj)) return true; + } + return false; + }; + + // Determine if the array or object contains a given item (using `===`). + // Aliased as `includes` and `include`. + _.contains = _.includes = _.include = function(obj, item, fromIndex, guard) { + if (!isArrayLike(obj)) obj = _.values(obj); + if (typeof fromIndex != 'number' || guard) fromIndex = 0; + return _.indexOf(obj, item, fromIndex) >= 0; + }; + + // Invoke a method (with arguments) on every item in a collection. + _.invoke = function(obj, method) { + var args = slice.call(arguments, 2); + var isFunc = _.isFunction(method); + return _.map(obj, function(value) { + var func = isFunc ? method : value[method]; + return func == null ? func : func.apply(value, args); + }); + }; + + // Convenience version of a common use case of `map`: fetching a property. + _.pluck = function(obj, key) { + return _.map(obj, _.property(key)); + }; + + // Convenience version of a common use case of `filter`: selecting only objects + // containing specific `key:value` pairs. + _.where = function(obj, attrs) { + return _.filter(obj, _.matcher(attrs)); + }; + + // Convenience version of a common use case of `find`: getting the first object + // containing specific `key:value` pairs. + _.findWhere = function(obj, attrs) { + return _.find(obj, _.matcher(attrs)); + }; + + // Return the maximum element (or element-based computation). + _.max = function(obj, iteratee, context) { + var result = -Infinity, lastComputed = -Infinity, + value, computed; + if (iteratee == null && obj != null) { + obj = isArrayLike(obj) ? obj : _.values(obj); + for (var i = 0, length = obj.length; i < length; i++) { + value = obj[i]; + if (value > result) { + result = value; + } + } + } else { + iteratee = cb(iteratee, context); + _.each(obj, function(value, index, list) { + computed = iteratee(value, index, list); + if (computed > lastComputed || computed === -Infinity && result === -Infinity) { + result = value; + lastComputed = computed; + } + }); + } + return result; + }; + + // Return the minimum element (or element-based computation). + _.min = function(obj, iteratee, context) { + var result = Infinity, lastComputed = Infinity, + value, computed; + if (iteratee == null && obj != null) { + obj = isArrayLike(obj) ? obj : _.values(obj); + for (var i = 0, length = obj.length; i < length; i++) { + value = obj[i]; + if (value < result) { + result = value; + } + } + } else { + iteratee = cb(iteratee, context); + _.each(obj, function(value, index, list) { + computed = iteratee(value, index, list); + if (computed < lastComputed || computed === Infinity && result === Infinity) { + result = value; + lastComputed = computed; + } + }); + } + return result; + }; + + // Shuffle a collection, using the modern version of the + // [Fisher-Yates shuffle](http://en.wikipedia.org/wiki/Fisher–Yates_shuffle). + _.shuffle = function(obj) { + var set = isArrayLike(obj) ? obj : _.values(obj); + var length = set.length; + var shuffled = Array(length); + for (var index = 0, rand; index < length; index++) { + rand = _.random(0, index); + if (rand !== index) shuffled[index] = shuffled[rand]; + shuffled[rand] = set[index]; + } + return shuffled; + }; + + // Sample **n** random values from a collection. 
+ // If **n** is not specified, returns a single random element. + // The internal `guard` argument allows it to work with `map`. + _.sample = function(obj, n, guard) { + if (n == null || guard) { + if (!isArrayLike(obj)) obj = _.values(obj); + return obj[_.random(obj.length - 1)]; + } + return _.shuffle(obj).slice(0, Math.max(0, n)); + }; + + // Sort the object's values by a criterion produced by an iteratee. + _.sortBy = function(obj, iteratee, context) { + iteratee = cb(iteratee, context); + return _.pluck(_.map(obj, function(value, index, list) { + return { + value: value, + index: index, + criteria: iteratee(value, index, list) + }; + }).sort(function(left, right) { + var a = left.criteria; + var b = right.criteria; + if (a !== b) { + if (a > b || a === void 0) return 1; + if (a < b || b === void 0) return -1; + } + return left.index - right.index; + }), 'value'); + }; + + // An internal function used for aggregate "group by" operations. + var group = function(behavior) { + return function(obj, iteratee, context) { + var result = {}; + iteratee = cb(iteratee, context); + _.each(obj, function(value, index) { + var key = iteratee(value, index, obj); + behavior(result, value, key); + }); + return result; + }; + }; + + // Groups the object's values by a criterion. Pass either a string attribute + // to group by, or a function that returns the criterion. + _.groupBy = group(function(result, value, key) { + if (_.has(result, key)) result[key].push(value); else result[key] = [value]; + }); + + // Indexes the object's values by a criterion, similar to `groupBy`, but for + // when you know that your index values will be unique. + _.indexBy = group(function(result, value, key) { + result[key] = value; + }); + + // Counts instances of an object that group by a certain criterion. Pass + // either a string attribute to count by, or a function that returns the + // criterion. + _.countBy = group(function(result, value, key) { + if (_.has(result, key)) result[key]++; else result[key] = 1; + }); + + // Safely create a real, live array from anything iterable. + _.toArray = function(obj) { + if (!obj) return []; + if (_.isArray(obj)) return slice.call(obj); + if (isArrayLike(obj)) return _.map(obj, _.identity); + return _.values(obj); + }; + + // Return the number of elements in an object. + _.size = function(obj) { + if (obj == null) return 0; + return isArrayLike(obj) ? obj.length : _.keys(obj).length; + }; + + // Split a collection into two arrays: one whose elements all satisfy the given + // predicate, and one whose elements all do not satisfy the predicate. + _.partition = function(obj, predicate, context) { + predicate = cb(predicate, context); + var pass = [], fail = []; + _.each(obj, function(value, key, obj) { + (predicate(value, key, obj) ? pass : fail).push(value); + }); + return [pass, fail]; + }; + + // Array Functions + // --------------- + + // Get the first element of an array. Passing **n** will return the first N + // values in the array. Aliased as `head` and `take`. The **guard** check + // allows it to work with `_.map`. + _.first = _.head = _.take = function(array, n, guard) { + if (array == null) return void 0; + if (n == null || guard) return array[0]; + return _.initial(array, array.length - n); + }; + + // Returns everything but the last entry of the array. Especially useful on + // the arguments object. Passing **n** will return all the values in + // the array, excluding the last N. 
+ _.initial = function(array, n, guard) { + return slice.call(array, 0, Math.max(0, array.length - (n == null || guard ? 1 : n))); + }; + + // Get the last element of an array. Passing **n** will return the last N + // values in the array. + _.last = function(array, n, guard) { + if (array == null) return void 0; + if (n == null || guard) return array[array.length - 1]; + return _.rest(array, Math.max(0, array.length - n)); + }; + + // Returns everything but the first entry of the array. Aliased as `tail` and `drop`. + // Especially useful on the arguments object. Passing an **n** will return + // the rest N values in the array. + _.rest = _.tail = _.drop = function(array, n, guard) { + return slice.call(array, n == null || guard ? 1 : n); + }; + + // Trim out all falsy values from an array. + _.compact = function(array) { + return _.filter(array, _.identity); + }; + + // Internal implementation of a recursive `flatten` function. + var flatten = function(input, shallow, strict, startIndex) { + var output = [], idx = 0; + for (var i = startIndex || 0, length = getLength(input); i < length; i++) { + var value = input[i]; + if (isArrayLike(value) && (_.isArray(value) || _.isArguments(value))) { + //flatten current level of array or arguments object + if (!shallow) value = flatten(value, shallow, strict); + var j = 0, len = value.length; + output.length += len; + while (j < len) { + output[idx++] = value[j++]; + } + } else if (!strict) { + output[idx++] = value; + } + } + return output; + }; + + // Flatten out an array, either recursively (by default), or just one level. + _.flatten = function(array, shallow) { + return flatten(array, shallow, false); + }; + + // Return a version of the array that does not contain the specified value(s). + _.without = function(array) { + return _.difference(array, slice.call(arguments, 1)); + }; + + // Produce a duplicate-free version of the array. If the array has already + // been sorted, you have the option of using a faster algorithm. + // Aliased as `unique`. + _.uniq = _.unique = function(array, isSorted, iteratee, context) { + if (!_.isBoolean(isSorted)) { + context = iteratee; + iteratee = isSorted; + isSorted = false; + } + if (iteratee != null) iteratee = cb(iteratee, context); + var result = []; + var seen = []; + for (var i = 0, length = getLength(array); i < length; i++) { + var value = array[i], + computed = iteratee ? iteratee(value, i, array) : value; + if (isSorted) { + if (!i || seen !== computed) result.push(value); + seen = computed; + } else if (iteratee) { + if (!_.contains(seen, computed)) { + seen.push(computed); + result.push(value); + } + } else if (!_.contains(result, value)) { + result.push(value); + } + } + return result; + }; + + // Produce an array that contains the union: each distinct element from all of + // the passed-in arrays. + _.union = function() { + return _.uniq(flatten(arguments, true, true)); + }; + + // Produce an array that contains every item shared between all the + // passed-in arrays. + _.intersection = function(array) { + var result = []; + var argsLength = arguments.length; + for (var i = 0, length = getLength(array); i < length; i++) { + var item = array[i]; + if (_.contains(result, item)) continue; + for (var j = 1; j < argsLength; j++) { + if (!_.contains(arguments[j], item)) break; + } + if (j === argsLength) result.push(item); + } + return result; + }; + + // Take the difference between one array and a number of other arrays. + // Only the elements present in just the first array will remain. 
+ _.difference = function(array) { + var rest = flatten(arguments, true, true, 1); + return _.filter(array, function(value){ + return !_.contains(rest, value); + }); + }; + + // Zip together multiple lists into a single array -- elements that share + // an index go together. + _.zip = function() { + return _.unzip(arguments); + }; + + // Complement of _.zip. Unzip accepts an array of arrays and groups + // each array's elements on shared indices + _.unzip = function(array) { + var length = array && _.max(array, getLength).length || 0; + var result = Array(length); + + for (var index = 0; index < length; index++) { + result[index] = _.pluck(array, index); + } + return result; + }; + + // Converts lists into objects. Pass either a single array of `[key, value]` + // pairs, or two parallel arrays of the same length -- one of keys, and one of + // the corresponding values. + _.object = function(list, values) { + var result = {}; + for (var i = 0, length = getLength(list); i < length; i++) { + if (values) { + result[list[i]] = values[i]; + } else { + result[list[i][0]] = list[i][1]; + } + } + return result; + }; + + // Generator function to create the findIndex and findLastIndex functions + function createPredicateIndexFinder(dir) { + return function(array, predicate, context) { + predicate = cb(predicate, context); + var length = getLength(array); + var index = dir > 0 ? 0 : length - 1; + for (; index >= 0 && index < length; index += dir) { + if (predicate(array[index], index, array)) return index; + } + return -1; + }; + } + + // Returns the first index on an array-like that passes a predicate test + _.findIndex = createPredicateIndexFinder(1); + _.findLastIndex = createPredicateIndexFinder(-1); + + // Use a comparator function to figure out the smallest index at which + // an object should be inserted so as to maintain order. Uses binary search. + _.sortedIndex = function(array, obj, iteratee, context) { + iteratee = cb(iteratee, context, 1); + var value = iteratee(obj); + var low = 0, high = getLength(array); + while (low < high) { + var mid = Math.floor((low + high) / 2); + if (iteratee(array[mid]) < value) low = mid + 1; else high = mid; + } + return low; + }; + + // Generator function to create the indexOf and lastIndexOf functions + function createIndexFinder(dir, predicateFind, sortedIndex) { + return function(array, item, idx) { + var i = 0, length = getLength(array); + if (typeof idx == 'number') { + if (dir > 0) { + i = idx >= 0 ? idx : Math.max(idx + length, i); + } else { + length = idx >= 0 ? Math.min(idx + 1, length) : idx + length + 1; + } + } else if (sortedIndex && idx && length) { + idx = sortedIndex(array, item); + return array[idx] === item ? idx : -1; + } + if (item !== item) { + idx = predicateFind(slice.call(array, i, length), _.isNaN); + return idx >= 0 ? idx + i : -1; + } + for (idx = dir > 0 ? i : length - 1; idx >= 0 && idx < length; idx += dir) { + if (array[idx] === item) return idx; + } + return -1; + }; + } + + // Return the position of the first occurrence of an item in an array, + // or -1 if the item is not included in the array. + // If the array is large and already in sort order, pass `true` + // for **isSorted** to use binary search. + _.indexOf = createIndexFinder(1, _.findIndex, _.sortedIndex); + _.lastIndexOf = createIndexFinder(-1, _.findLastIndex); + + // Generate an integer Array containing an arithmetic progression. A port of + // the native Python `range()` function. 
See + // [the Python documentation](http://docs.python.org/library/functions.html#range). + _.range = function(start, stop, step) { + if (stop == null) { + stop = start || 0; + start = 0; + } + step = step || 1; + + var length = Math.max(Math.ceil((stop - start) / step), 0); + var range = Array(length); + + for (var idx = 0; idx < length; idx++, start += step) { + range[idx] = start; + } + + return range; + }; + + // Function (ahem) Functions + // ------------------ + + // Determines whether to execute a function as a constructor + // or a normal function with the provided arguments + var executeBound = function(sourceFunc, boundFunc, context, callingContext, args) { + if (!(callingContext instanceof boundFunc)) return sourceFunc.apply(context, args); + var self = baseCreate(sourceFunc.prototype); + var result = sourceFunc.apply(self, args); + if (_.isObject(result)) return result; + return self; + }; + + // Create a function bound to a given object (assigning `this`, and arguments, + // optionally). Delegates to **ECMAScript 5**'s native `Function.bind` if + // available. + _.bind = function(func, context) { + if (nativeBind && func.bind === nativeBind) return nativeBind.apply(func, slice.call(arguments, 1)); + if (!_.isFunction(func)) throw new TypeError('Bind must be called on a function'); + var args = slice.call(arguments, 2); + var bound = function() { + return executeBound(func, bound, context, this, args.concat(slice.call(arguments))); + }; + return bound; + }; + + // Partially apply a function by creating a version that has had some of its + // arguments pre-filled, without changing its dynamic `this` context. _ acts + // as a placeholder, allowing any combination of arguments to be pre-filled. + _.partial = function(func) { + var boundArgs = slice.call(arguments, 1); + var bound = function() { + var position = 0, length = boundArgs.length; + var args = Array(length); + for (var i = 0; i < length; i++) { + args[i] = boundArgs[i] === _ ? arguments[position++] : boundArgs[i]; + } + while (position < arguments.length) args.push(arguments[position++]); + return executeBound(func, bound, this, this, args); + }; + return bound; + }; + + // Bind a number of an object's methods to that object. Remaining arguments + // are the method names to be bound. Useful for ensuring that all callbacks + // defined on an object belong to it. + _.bindAll = function(obj) { + var i, length = arguments.length, key; + if (length <= 1) throw new Error('bindAll must be passed function names'); + for (i = 1; i < length; i++) { + key = arguments[i]; + obj[key] = _.bind(obj[key], obj); + } + return obj; + }; + + // Memoize an expensive function by storing its results. + _.memoize = function(func, hasher) { + var memoize = function(key) { + var cache = memoize.cache; + var address = '' + (hasher ? hasher.apply(this, arguments) : key); + if (!_.has(cache, address)) cache[address] = func.apply(this, arguments); + return cache[address]; + }; + memoize.cache = {}; + return memoize; + }; + + // Delays a function for the given number of milliseconds, and then calls + // it with the arguments supplied. + _.delay = function(func, wait) { + var args = slice.call(arguments, 2); + return setTimeout(function(){ + return func.apply(null, args); + }, wait); + }; + + // Defers a function, scheduling it to run after the current call stack has + // cleared. + _.defer = _.partial(_.delay, _, 1); + + // Returns a function, that, when invoked, will only be triggered at most once + // during a given window of time. 
Normally, the throttled function will run + // as much as it can, without ever going more than once per `wait` duration; + // but if you'd like to disable the execution on the leading edge, pass + // `{leading: false}`. To disable execution on the trailing edge, ditto. + _.throttle = function(func, wait, options) { + var context, args, result; + var timeout = null; + var previous = 0; + if (!options) options = {}; + var later = function() { + previous = options.leading === false ? 0 : _.now(); + timeout = null; + result = func.apply(context, args); + if (!timeout) context = args = null; + }; + return function() { + var now = _.now(); + if (!previous && options.leading === false) previous = now; + var remaining = wait - (now - previous); + context = this; + args = arguments; + if (remaining <= 0 || remaining > wait) { + if (timeout) { + clearTimeout(timeout); + timeout = null; + } + previous = now; + result = func.apply(context, args); + if (!timeout) context = args = null; + } else if (!timeout && options.trailing !== false) { + timeout = setTimeout(later, remaining); + } + return result; + }; + }; + + // Returns a function, that, as long as it continues to be invoked, will not + // be triggered. The function will be called after it stops being called for + // N milliseconds. If `immediate` is passed, trigger the function on the + // leading edge, instead of the trailing. + _.debounce = function(func, wait, immediate) { + var timeout, args, context, timestamp, result; + + var later = function() { + var last = _.now() - timestamp; + + if (last < wait && last >= 0) { + timeout = setTimeout(later, wait - last); + } else { + timeout = null; + if (!immediate) { + result = func.apply(context, args); + if (!timeout) context = args = null; + } + } + }; + + return function() { + context = this; + args = arguments; + timestamp = _.now(); + var callNow = immediate && !timeout; + if (!timeout) timeout = setTimeout(later, wait); + if (callNow) { + result = func.apply(context, args); + context = args = null; + } + + return result; + }; + }; + + // Returns the first function passed as an argument to the second, + // allowing you to adjust arguments, run code before and after, and + // conditionally execute the original function. + _.wrap = function(func, wrapper) { + return _.partial(wrapper, func); + }; + + // Returns a negated version of the passed-in predicate. + _.negate = function(predicate) { + return function() { + return !predicate.apply(this, arguments); + }; + }; + + // Returns a function that is the composition of a list of functions, each + // consuming the return value of the function that follows. + _.compose = function() { + var args = arguments; + var start = args.length - 1; + return function() { + var i = start; + var result = args[start].apply(this, arguments); + while (i--) result = args[i].call(this, result); + return result; + }; + }; + + // Returns a function that will only be executed on and after the Nth call. + _.after = function(times, func) { + return function() { + if (--times < 1) { + return func.apply(this, arguments); + } + }; + }; + + // Returns a function that will only be executed up to (but not including) the Nth call. + _.before = function(times, func) { + var memo; + return function() { + if (--times > 0) { + memo = func.apply(this, arguments); + } + if (times <= 1) func = null; + return memo; + }; + }; + + // Returns a function that will be executed at most one time, no matter how + // often you call it. Useful for lazy initialization. 
+ _.once = _.partial(_.before, 2); + + // Object Functions + // ---------------- + + // Keys in IE < 9 that won't be iterated by `for key in ...` and thus missed. + var hasEnumBug = !{toString: null}.propertyIsEnumerable('toString'); + var nonEnumerableProps = ['valueOf', 'isPrototypeOf', 'toString', + 'propertyIsEnumerable', 'hasOwnProperty', 'toLocaleString']; + + function collectNonEnumProps(obj, keys) { + var nonEnumIdx = nonEnumerableProps.length; + var constructor = obj.constructor; + var proto = (_.isFunction(constructor) && constructor.prototype) || ObjProto; + + // Constructor is a special case. + var prop = 'constructor'; + if (_.has(obj, prop) && !_.contains(keys, prop)) keys.push(prop); + + while (nonEnumIdx--) { + prop = nonEnumerableProps[nonEnumIdx]; + if (prop in obj && obj[prop] !== proto[prop] && !_.contains(keys, prop)) { + keys.push(prop); + } + } + } + + // Retrieve the names of an object's own properties. + // Delegates to **ECMAScript 5**'s native `Object.keys` + _.keys = function(obj) { + if (!_.isObject(obj)) return []; + if (nativeKeys) return nativeKeys(obj); + var keys = []; + for (var key in obj) if (_.has(obj, key)) keys.push(key); + // Ahem, IE < 9. + if (hasEnumBug) collectNonEnumProps(obj, keys); + return keys; + }; + + // Retrieve all the property names of an object. + _.allKeys = function(obj) { + if (!_.isObject(obj)) return []; + var keys = []; + for (var key in obj) keys.push(key); + // Ahem, IE < 9. + if (hasEnumBug) collectNonEnumProps(obj, keys); + return keys; + }; + + // Retrieve the values of an object's properties. + _.values = function(obj) { + var keys = _.keys(obj); + var length = keys.length; + var values = Array(length); + for (var i = 0; i < length; i++) { + values[i] = obj[keys[i]]; + } + return values; + }; + + // Returns the results of applying the iteratee to each element of the object + // In contrast to _.map it returns an object + _.mapObject = function(obj, iteratee, context) { + iteratee = cb(iteratee, context); + var keys = _.keys(obj), + length = keys.length, + results = {}, + currentKey; + for (var index = 0; index < length; index++) { + currentKey = keys[index]; + results[currentKey] = iteratee(obj[currentKey], currentKey, obj); + } + return results; + }; + + // Convert an object into a list of `[key, value]` pairs. + _.pairs = function(obj) { + var keys = _.keys(obj); + var length = keys.length; + var pairs = Array(length); + for (var i = 0; i < length; i++) { + pairs[i] = [keys[i], obj[keys[i]]]; + } + return pairs; + }; + + // Invert the keys and values of an object. The values must be serializable. + _.invert = function(obj) { + var result = {}; + var keys = _.keys(obj); + for (var i = 0, length = keys.length; i < length; i++) { + result[obj[keys[i]]] = keys[i]; + } + return result; + }; + + // Return a sorted list of the function names available on the object. + // Aliased as `methods` + _.functions = _.methods = function(obj) { + var names = []; + for (var key in obj) { + if (_.isFunction(obj[key])) names.push(key); + } + return names.sort(); + }; + + // Extend a given object with all the properties in passed-in object(s). 
+ _.extend = createAssigner(_.allKeys); + + // Assigns a given object with all the own properties in the passed-in object(s) + // (https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) + _.extendOwn = _.assign = createAssigner(_.keys); + + // Returns the first key on an object that passes a predicate test + _.findKey = function(obj, predicate, context) { + predicate = cb(predicate, context); + var keys = _.keys(obj), key; + for (var i = 0, length = keys.length; i < length; i++) { + key = keys[i]; + if (predicate(obj[key], key, obj)) return key; + } + }; + + // Return a copy of the object only containing the whitelisted properties. + _.pick = function(object, oiteratee, context) { + var result = {}, obj = object, iteratee, keys; + if (obj == null) return result; + if (_.isFunction(oiteratee)) { + keys = _.allKeys(obj); + iteratee = optimizeCb(oiteratee, context); + } else { + keys = flatten(arguments, false, false, 1); + iteratee = function(value, key, obj) { return key in obj; }; + obj = Object(obj); + } + for (var i = 0, length = keys.length; i < length; i++) { + var key = keys[i]; + var value = obj[key]; + if (iteratee(value, key, obj)) result[key] = value; + } + return result; + }; + + // Return a copy of the object without the blacklisted properties. + _.omit = function(obj, iteratee, context) { + if (_.isFunction(iteratee)) { + iteratee = _.negate(iteratee); + } else { + var keys = _.map(flatten(arguments, false, false, 1), String); + iteratee = function(value, key) { + return !_.contains(keys, key); + }; + } + return _.pick(obj, iteratee, context); + }; + + // Fill in a given object with default properties. + _.defaults = createAssigner(_.allKeys, true); + + // Creates an object that inherits from the given prototype object. + // If additional properties are provided then they will be added to the + // created object. + _.create = function(prototype, props) { + var result = baseCreate(prototype); + if (props) _.extendOwn(result, props); + return result; + }; + + // Create a (shallow-cloned) duplicate of an object. + _.clone = function(obj) { + if (!_.isObject(obj)) return obj; + return _.isArray(obj) ? obj.slice() : _.extend({}, obj); + }; + + // Invokes interceptor with the obj, and then returns obj. + // The primary purpose of this method is to "tap into" a method chain, in + // order to perform operations on intermediate results within the chain. + _.tap = function(obj, interceptor) { + interceptor(obj); + return obj; + }; + + // Returns whether an object has a given set of `key:value` pairs. + _.isMatch = function(object, attrs) { + var keys = _.keys(attrs), length = keys.length; + if (object == null) return !length; + var obj = Object(object); + for (var i = 0; i < length; i++) { + var key = keys[i]; + if (attrs[key] !== obj[key] || !(key in obj)) return false; + } + return true; + }; + + + // Internal recursive comparison function for `isEqual`. + var eq = function(a, b, aStack, bStack) { + // Identical objects are equal. `0 === -0`, but they aren't identical. + // See the [Harmony `egal` proposal](http://wiki.ecmascript.org/doku.php?id=harmony:egal). + if (a === b) return a !== 0 || 1 / a === 1 / b; + // A strict comparison is necessary because `null == undefined`. + if (a == null || b == null) return a === b; + // Unwrap any wrapped objects. + if (a instanceof _) a = a._wrapped; + if (b instanceof _) b = b._wrapped; + // Compare `[[Class]]` names. 
+ var className = toString.call(a); + if (className !== toString.call(b)) return false; + switch (className) { + // Strings, numbers, regular expressions, dates, and booleans are compared by value. + case '[object RegExp]': + // RegExps are coerced to strings for comparison (Note: '' + /a/i === '/a/i') + case '[object String]': + // Primitives and their corresponding object wrappers are equivalent; thus, `"5"` is + // equivalent to `new String("5")`. + return '' + a === '' + b; + case '[object Number]': + // `NaN`s are equivalent, but non-reflexive. + // Object(NaN) is equivalent to NaN + if (+a !== +a) return +b !== +b; + // An `egal` comparison is performed for other numeric values. + return +a === 0 ? 1 / +a === 1 / b : +a === +b; + case '[object Date]': + case '[object Boolean]': + // Coerce dates and booleans to numeric primitive values. Dates are compared by their + // millisecond representations. Note that invalid dates with millisecond representations + // of `NaN` are not equivalent. + return +a === +b; + } + + var areArrays = className === '[object Array]'; + if (!areArrays) { + if (typeof a != 'object' || typeof b != 'object') return false; + + // Objects with different constructors are not equivalent, but `Object`s or `Array`s + // from different frames are. + var aCtor = a.constructor, bCtor = b.constructor; + if (aCtor !== bCtor && !(_.isFunction(aCtor) && aCtor instanceof aCtor && + _.isFunction(bCtor) && bCtor instanceof bCtor) + && ('constructor' in a && 'constructor' in b)) { + return false; + } + } + // Assume equality for cyclic structures. The algorithm for detecting cyclic + // structures is adapted from ES 5.1 section 15.12.3, abstract operation `JO`. + + // Initializing stack of traversed objects. + // It's done here since we only need them for objects and arrays comparison. + aStack = aStack || []; + bStack = bStack || []; + var length = aStack.length; + while (length--) { + // Linear search. Performance is inversely proportional to the number of + // unique nested structures. + if (aStack[length] === a) return bStack[length] === b; + } + + // Add the first object to the stack of traversed objects. + aStack.push(a); + bStack.push(b); + + // Recursively compare objects and arrays. + if (areArrays) { + // Compare array lengths to determine if a deep comparison is necessary. + length = a.length; + if (length !== b.length) return false; + // Deep compare the contents, ignoring non-numeric properties. + while (length--) { + if (!eq(a[length], b[length], aStack, bStack)) return false; + } + } else { + // Deep compare objects. + var keys = _.keys(a), key; + length = keys.length; + // Ensure that both objects contain the same number of properties before comparing deep equality. + if (_.keys(b).length !== length) return false; + while (length--) { + // Deep compare each member + key = keys[length]; + if (!(_.has(b, key) && eq(a[key], b[key], aStack, bStack))) return false; + } + } + // Remove the first object from the stack of traversed objects. + aStack.pop(); + bStack.pop(); + return true; + }; + + // Perform a deep comparison to check if two objects are equal. + _.isEqual = function(a, b) { + return eq(a, b); + }; + + // Is a given array, string, or object empty? + // An "empty" object has no enumerable own-properties. + _.isEmpty = function(obj) { + if (obj == null) return true; + if (isArrayLike(obj) && (_.isArray(obj) || _.isString(obj) || _.isArguments(obj))) return obj.length === 0; + return _.keys(obj).length === 0; + }; + + // Is a given value a DOM element? 
+ _.isElement = function(obj) { + return !!(obj && obj.nodeType === 1); + }; + + // Is a given value an array? + // Delegates to ECMA5's native Array.isArray + _.isArray = nativeIsArray || function(obj) { + return toString.call(obj) === '[object Array]'; + }; + + // Is a given variable an object? + _.isObject = function(obj) { + var type = typeof obj; + return type === 'function' || type === 'object' && !!obj; + }; + + // Add some isType methods: isArguments, isFunction, isString, isNumber, isDate, isRegExp, isError. + _.each(['Arguments', 'Function', 'String', 'Number', 'Date', 'RegExp', 'Error'], function(name) { + _['is' + name] = function(obj) { + return toString.call(obj) === '[object ' + name + ']'; + }; + }); + + // Define a fallback version of the method in browsers (ahem, IE < 9), where + // there isn't any inspectable "Arguments" type. + if (!_.isArguments(arguments)) { + _.isArguments = function(obj) { + return _.has(obj, 'callee'); + }; + } + + // Optimize `isFunction` if appropriate. Work around some typeof bugs in old v8, + // IE 11 (#1621), and in Safari 8 (#1929). + if (typeof /./ != 'function' && typeof Int8Array != 'object') { + _.isFunction = function(obj) { + return typeof obj == 'function' || false; + }; + } + + // Is a given object a finite number? + _.isFinite = function(obj) { + return isFinite(obj) && !isNaN(parseFloat(obj)); + }; + + // Is the given value `NaN`? (NaN is the only number which does not equal itself). + _.isNaN = function(obj) { + return _.isNumber(obj) && obj !== +obj; + }; + + // Is a given value a boolean? + _.isBoolean = function(obj) { + return obj === true || obj === false || toString.call(obj) === '[object Boolean]'; + }; + + // Is a given value equal to null? + _.isNull = function(obj) { + return obj === null; + }; + + // Is a given variable undefined? + _.isUndefined = function(obj) { + return obj === void 0; + }; + + // Shortcut function for checking if an object has a given property directly + // on itself (in other words, not on a prototype). + _.has = function(obj, key) { + return obj != null && hasOwnProperty.call(obj, key); + }; + + // Utility Functions + // ----------------- + + // Run Underscore.js in *noConflict* mode, returning the `_` variable to its + // previous owner. Returns a reference to the Underscore object. + _.noConflict = function() { + root._ = previousUnderscore; + return this; + }; + + // Keep the identity function around for default iteratees. + _.identity = function(value) { + return value; + }; + + // Predicate-generating functions. Often useful outside of Underscore. + _.constant = function(value) { + return function() { + return value; + }; + }; + + _.noop = function(){}; + + _.property = property; + + // Generates a function for a given object that returns a given property. + _.propertyOf = function(obj) { + return obj == null ? function(){} : function(key) { + return obj[key]; + }; + }; + + // Returns a predicate for checking whether an object has a given set of + // `key:value` pairs. + _.matcher = _.matches = function(attrs) { + attrs = _.extendOwn({}, attrs); + return function(obj) { + return _.isMatch(obj, attrs); + }; + }; + + // Run a function **n** times. + _.times = function(n, iteratee, context) { + var accum = Array(Math.max(0, n)); + iteratee = optimizeCb(iteratee, context, 1); + for (var i = 0; i < n; i++) accum[i] = iteratee(i); + return accum; + }; + + // Return a random integer between min and max (inclusive). 
+ _.random = function(min, max) { + if (max == null) { + max = min; + min = 0; + } + return min + Math.floor(Math.random() * (max - min + 1)); + }; + + // A (possibly faster) way to get the current timestamp as an integer. + _.now = Date.now || function() { + return new Date().getTime(); + }; + + // List of HTML entities for escaping. + var escapeMap = { + '&': '&', + '<': '<', + '>': '>', + '"': '"', + "'": ''', + '`': '`' + }; + var unescapeMap = _.invert(escapeMap); + + // Functions for escaping and unescaping strings to/from HTML interpolation. + var createEscaper = function(map) { + var escaper = function(match) { + return map[match]; + }; + // Regexes for identifying a key that needs to be escaped + var source = '(?:' + _.keys(map).join('|') + ')'; + var testRegexp = RegExp(source); + var replaceRegexp = RegExp(source, 'g'); + return function(string) { + string = string == null ? '' : '' + string; + return testRegexp.test(string) ? string.replace(replaceRegexp, escaper) : string; + }; + }; + _.escape = createEscaper(escapeMap); + _.unescape = createEscaper(unescapeMap); + + // If the value of the named `property` is a function then invoke it with the + // `object` as context; otherwise, return it. + _.result = function(object, property, fallback) { + var value = object == null ? void 0 : object[property]; + if (value === void 0) { + value = fallback; + } + return _.isFunction(value) ? value.call(object) : value; + }; + + // Generate a unique integer id (unique within the entire client session). + // Useful for temporary DOM ids. + var idCounter = 0; + _.uniqueId = function(prefix) { + var id = ++idCounter + ''; + return prefix ? prefix + id : id; + }; + + // By default, Underscore uses ERB-style template delimiters, change the + // following template settings to use alternative delimiters. + _.templateSettings = { + evaluate : /<%([\s\S]+?)%>/g, + interpolate : /<%=([\s\S]+?)%>/g, + escape : /<%-([\s\S]+?)%>/g + }; + + // When customizing `templateSettings`, if you don't want to define an + // interpolation, evaluation or escaping regex, we need one that is + // guaranteed not to match. + var noMatch = /(.)^/; + + // Certain characters need to be escaped so that they can be put into a + // string literal. + var escapes = { + "'": "'", + '\\': '\\', + '\r': 'r', + '\n': 'n', + '\u2028': 'u2028', + '\u2029': 'u2029' + }; + + var escaper = /\\|'|\r|\n|\u2028|\u2029/g; + + var escapeChar = function(match) { + return '\\' + escapes[match]; + }; + + // JavaScript micro-templating, similar to John Resig's implementation. + // Underscore templating handles arbitrary delimiters, preserves whitespace, + // and correctly escapes quotes within interpolated code. + // NB: `oldSettings` only exists for backwards compatibility. + _.template = function(text, settings, oldSettings) { + if (!settings && oldSettings) settings = oldSettings; + settings = _.defaults({}, settings, _.templateSettings); + + // Combine delimiters into one regular expression via alternation. + var matcher = RegExp([ + (settings.escape || noMatch).source, + (settings.interpolate || noMatch).source, + (settings.evaluate || noMatch).source + ].join('|') + '|$', 'g'); + + // Compile the template source, escaping string literals appropriately. 
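+    // Illustrative use of the ERB-style delimiters configured above (assumed example,
+    // not part of the upstream source):
+    //   _.template("hello <%= who %>")({who: "world"})   // => "hello world"
+    // <%- ... %> interpolates through _.escape, and <% ... %> evaluates raw JavaScript.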
+ var index = 0; + var source = "__p+='"; + text.replace(matcher, function(match, escape, interpolate, evaluate, offset) { + source += text.slice(index, offset).replace(escaper, escapeChar); + index = offset + match.length; + + if (escape) { + source += "'+\n((__t=(" + escape + "))==null?'':_.escape(__t))+\n'"; + } else if (interpolate) { + source += "'+\n((__t=(" + interpolate + "))==null?'':__t)+\n'"; + } else if (evaluate) { + source += "';\n" + evaluate + "\n__p+='"; + } + + // Adobe VMs need the match returned to produce the correct offest. + return match; + }); + source += "';\n"; + + // If a variable is not specified, place data values in local scope. + if (!settings.variable) source = 'with(obj||{}){\n' + source + '}\n'; + + source = "var __t,__p='',__j=Array.prototype.join," + + "print=function(){__p+=__j.call(arguments,'');};\n" + + source + 'return __p;\n'; + + try { + var render = new Function(settings.variable || 'obj', '_', source); + } catch (e) { + e.source = source; + throw e; + } + + var template = function(data) { + return render.call(this, data, _); + }; + + // Provide the compiled source as a convenience for precompilation. + var argument = settings.variable || 'obj'; + template.source = 'function(' + argument + '){\n' + source + '}'; + + return template; + }; + + // Add a "chain" function. Start chaining a wrapped Underscore object. + _.chain = function(obj) { + var instance = _(obj); + instance._chain = true; + return instance; + }; + + // OOP + // --------------- + // If Underscore is called as a function, it returns a wrapped object that + // can be used OO-style. This wrapper holds altered versions of all the + // underscore functions. Wrapped objects may be chained. + + // Helper function to continue chaining intermediate results. + var result = function(instance, obj) { + return instance._chain ? _(obj).chain() : obj; + }; + + // Add your own custom functions to the Underscore object. + _.mixin = function(obj) { + _.each(_.functions(obj), function(name) { + var func = _[name] = obj[name]; + _.prototype[name] = function() { + var args = [this._wrapped]; + push.apply(args, arguments); + return result(this, func.apply(_, args)); + }; + }); + }; + + // Add all of the Underscore functions to the wrapper object. + _.mixin(_); + + // Add all mutator Array functions to the wrapper. + _.each(['pop', 'push', 'reverse', 'shift', 'sort', 'splice', 'unshift'], function(name) { + var method = ArrayProto[name]; + _.prototype[name] = function() { + var obj = this._wrapped; + method.apply(obj, arguments); + if ((name === 'shift' || name === 'splice') && obj.length === 0) delete obj[0]; + return result(this, obj); + }; + }); + + // Add all accessor Array functions to the wrapper. + _.each(['concat', 'join', 'slice'], function(name) { + var method = ArrayProto[name]; + _.prototype[name] = function() { + return result(this, method.apply(this._wrapped, arguments)); + }; + }); + + // Extracts the result from a wrapped and chained object. + _.prototype.value = function() { + return this._wrapped; + }; + + // Provide unwrapping proxy for some methods used in engine operations + // such as arithmetic and JSON stringification. + _.prototype.valueOf = _.prototype.toJSON = _.prototype.value; + + _.prototype.toString = function() { + return '' + this._wrapped; + }; + + // AMD registration happens at the end for compatibility with AMD loaders + // that may not enforce next-turn semantics on modules. 
Even though general + // practice for AMD registration is to be anonymous, underscore registers + // as a named module because, like jQuery, it is a base library that is + // popular enough to be bundled in a third party lib, but not be part of + // an AMD load request. Those cases could generate an error when an + // anonymous define() is called outside of a loader request. + if (typeof define === 'function' && define.amd) { + define('underscore', [], function() { + return _; + }); + } +}.call(this)); + +},{}],26:[function(require,module,exports){ +arguments[4][19][0].apply(exports,arguments) +},{"dup":19}],27:[function(require,module,exports){ +module.exports = function isBuffer(arg) { + return arg && typeof arg === 'object' + && typeof arg.copy === 'function' + && typeof arg.fill === 'function' + && typeof arg.readUInt8 === 'function'; +} +},{}],28:[function(require,module,exports){ +(function (process,global){ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +var formatRegExp = /%[sdj%]/g; +exports.format = function(f) { + if (!isString(f)) { + var objects = []; + for (var i = 0; i < arguments.length; i++) { + objects.push(inspect(arguments[i])); + } + return objects.join(' '); + } + + var i = 1; + var args = arguments; + var len = args.length; + var str = String(f).replace(formatRegExp, function(x) { + if (x === '%%') return '%'; + if (i >= len) return x; + switch (x) { + case '%s': return String(args[i++]); + case '%d': return Number(args[i++]); + case '%j': + try { + return JSON.stringify(args[i++]); + } catch (_) { + return '[Circular]'; + } + default: + return x; + } + }); + for (var x = args[i]; i < len; x = args[++i]) { + if (isNull(x) || !isObject(x)) { + str += ' ' + x; + } else { + str += ' ' + inspect(x); + } + } + return str; +}; + + +// Mark that a method should not be used. +// Returns a modified function which warns once by default. +// If --no-deprecation is set, then it is a no-op. +exports.deprecate = function(fn, msg) { + // Allow for deprecating things in the process of starting up. 
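+  // Illustrative behaviour (assumed example, not part of the upstream source):
+  //   var oldFn = exports.deprecate(fn, 'fn is deprecated');
+  // The wrapper prints the message once (via console.error by default) on the first
+  // call and then simply forwards every call to fn.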
+ if (isUndefined(global.process)) { + return function() { + return exports.deprecate(fn, msg).apply(this, arguments); + }; + } + + if (process.noDeprecation === true) { + return fn; + } + + var warned = false; + function deprecated() { + if (!warned) { + if (process.throwDeprecation) { + throw new Error(msg); + } else if (process.traceDeprecation) { + console.trace(msg); + } else { + console.error(msg); + } + warned = true; + } + return fn.apply(this, arguments); + } + + return deprecated; +}; + + +var debugs = {}; +var debugEnviron; +exports.debuglog = function(set) { + if (isUndefined(debugEnviron)) + debugEnviron = process.env.NODE_DEBUG || ''; + set = set.toUpperCase(); + if (!debugs[set]) { + if (new RegExp('\\b' + set + '\\b', 'i').test(debugEnviron)) { + var pid = process.pid; + debugs[set] = function() { + var msg = exports.format.apply(exports, arguments); + console.error('%s %d: %s', set, pid, msg); + }; + } else { + debugs[set] = function() {}; + } + } + return debugs[set]; +}; + + +/** + * Echos the value of a value. Trys to print the value out + * in the best way possible given the different types. + * + * @param {Object} obj The object to print out. + * @param {Object} opts Optional options object that alters the output. + */ +/* legacy: obj, showHidden, depth, colors*/ +function inspect(obj, opts) { + // default options + var ctx = { + seen: [], + stylize: stylizeNoColor + }; + // legacy... + if (arguments.length >= 3) ctx.depth = arguments[2]; + if (arguments.length >= 4) ctx.colors = arguments[3]; + if (isBoolean(opts)) { + // legacy... + ctx.showHidden = opts; + } else if (opts) { + // got an "options" object + exports._extend(ctx, opts); + } + // set default options + if (isUndefined(ctx.showHidden)) ctx.showHidden = false; + if (isUndefined(ctx.depth)) ctx.depth = 2; + if (isUndefined(ctx.colors)) ctx.colors = false; + if (isUndefined(ctx.customInspect)) ctx.customInspect = true; + if (ctx.colors) ctx.stylize = stylizeWithColor; + return formatValue(ctx, obj, ctx.depth); +} +exports.inspect = inspect; + + +// http://en.wikipedia.org/wiki/ANSI_escape_code#graphics +inspect.colors = { + 'bold' : [1, 22], + 'italic' : [3, 23], + 'underline' : [4, 24], + 'inverse' : [7, 27], + 'white' : [37, 39], + 'grey' : [90, 39], + 'black' : [30, 39], + 'blue' : [34, 39], + 'cyan' : [36, 39], + 'green' : [32, 39], + 'magenta' : [35, 39], + 'red' : [31, 39], + 'yellow' : [33, 39] +}; + +// Don't use 'blue' not visible on cmd.exe +inspect.styles = { + 'special': 'cyan', + 'number': 'yellow', + 'boolean': 'yellow', + 'undefined': 'grey', + 'null': 'bold', + 'string': 'green', + 'date': 'magenta', + // "name": intentionally not styling + 'regexp': 'red' +}; + + +function stylizeWithColor(str, styleType) { + var style = inspect.styles[styleType]; + + if (style) { + return '\u001b[' + inspect.colors[style][0] + 'm' + str + + '\u001b[' + inspect.colors[style][1] + 'm'; + } else { + return str; + } +} + + +function stylizeNoColor(str, styleType) { + return str; +} + + +function arrayToHash(array) { + var hash = {}; + + array.forEach(function(val, idx) { + hash[val] = true; + }); + + return hash; +} + + +function formatValue(ctx, value, recurseTimes) { + // Provide a hook for user-specified inspect functions. 
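+  // (Illustrative, assumed example: an object can supply its own `inspect(depth, opts)`
+  //  method, and the string it returns is used in place of the default formatting below.)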
+ // Check that value is an object with an inspect function on it + if (ctx.customInspect && + value && + isFunction(value.inspect) && + // Filter out the util module, it's inspect function is special + value.inspect !== exports.inspect && + // Also filter out any prototype objects using the circular check. + !(value.constructor && value.constructor.prototype === value)) { + var ret = value.inspect(recurseTimes, ctx); + if (!isString(ret)) { + ret = formatValue(ctx, ret, recurseTimes); + } + return ret; + } + + // Primitive types cannot have properties + var primitive = formatPrimitive(ctx, value); + if (primitive) { + return primitive; + } + + // Look up the keys of the object. + var keys = Object.keys(value); + var visibleKeys = arrayToHash(keys); + + if (ctx.showHidden) { + keys = Object.getOwnPropertyNames(value); + } + + // IE doesn't make error fields non-enumerable + // http://msdn.microsoft.com/en-us/library/ie/dww52sbt(v=vs.94).aspx + if (isError(value) + && (keys.indexOf('message') >= 0 || keys.indexOf('description') >= 0)) { + return formatError(value); + } + + // Some type of object without properties can be shortcutted. + if (keys.length === 0) { + if (isFunction(value)) { + var name = value.name ? ': ' + value.name : ''; + return ctx.stylize('[Function' + name + ']', 'special'); + } + if (isRegExp(value)) { + return ctx.stylize(RegExp.prototype.toString.call(value), 'regexp'); + } + if (isDate(value)) { + return ctx.stylize(Date.prototype.toString.call(value), 'date'); + } + if (isError(value)) { + return formatError(value); + } + } + + var base = '', array = false, braces = ['{', '}']; + + // Make Array say that they are Array + if (isArray(value)) { + array = true; + braces = ['[', ']']; + } + + // Make functions say that they are functions + if (isFunction(value)) { + var n = value.name ? ': ' + value.name : ''; + base = ' [Function' + n + ']'; + } + + // Make RegExps say that they are RegExps + if (isRegExp(value)) { + base = ' ' + RegExp.prototype.toString.call(value); + } + + // Make dates with properties first say the date + if (isDate(value)) { + base = ' ' + Date.prototype.toUTCString.call(value); + } + + // Make error with message first say the error + if (isError(value)) { + base = ' ' + formatError(value); + } + + if (keys.length === 0 && (!array || value.length == 0)) { + return braces[0] + base + braces[1]; + } + + if (recurseTimes < 0) { + if (isRegExp(value)) { + return ctx.stylize(RegExp.prototype.toString.call(value), 'regexp'); + } else { + return ctx.stylize('[Object]', 'special'); + } + } + + ctx.seen.push(value); + + var output; + if (array) { + output = formatArray(ctx, value, recurseTimes, visibleKeys, keys); + } else { + output = keys.map(function(key) { + return formatProperty(ctx, value, recurseTimes, visibleKeys, key, array); + }); + } + + ctx.seen.pop(); + + return reduceToSingleString(output, base, braces); +} + + +function formatPrimitive(ctx, value) { + if (isUndefined(value)) + return ctx.stylize('undefined', 'undefined'); + if (isString(value)) { + var simple = '\'' + JSON.stringify(value).replace(/^"|"$/g, '') + .replace(/'/g, "\\'") + .replace(/\\"/g, '"') + '\''; + return ctx.stylize(simple, 'string'); + } + if (isNumber(value)) + return ctx.stylize('' + value, 'number'); + if (isBoolean(value)) + return ctx.stylize('' + value, 'boolean'); + // For some reason typeof null is "object", so special case here. 
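+  // (Illustrative, assumed example: `typeof null` evaluates to the string "object",
+  //  so the explicit isNull() branch below is needed to print it as 'null'.)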
+ if (isNull(value)) + return ctx.stylize('null', 'null'); +} + + +function formatError(value) { + return '[' + Error.prototype.toString.call(value) + ']'; +} + + +function formatArray(ctx, value, recurseTimes, visibleKeys, keys) { + var output = []; + for (var i = 0, l = value.length; i < l; ++i) { + if (hasOwnProperty(value, String(i))) { + output.push(formatProperty(ctx, value, recurseTimes, visibleKeys, + String(i), true)); + } else { + output.push(''); + } + } + keys.forEach(function(key) { + if (!key.match(/^\d+$/)) { + output.push(formatProperty(ctx, value, recurseTimes, visibleKeys, + key, true)); + } + }); + return output; +} + + +function formatProperty(ctx, value, recurseTimes, visibleKeys, key, array) { + var name, str, desc; + desc = Object.getOwnPropertyDescriptor(value, key) || { value: value[key] }; + if (desc.get) { + if (desc.set) { + str = ctx.stylize('[Getter/Setter]', 'special'); + } else { + str = ctx.stylize('[Getter]', 'special'); + } + } else { + if (desc.set) { + str = ctx.stylize('[Setter]', 'special'); + } + } + if (!hasOwnProperty(visibleKeys, key)) { + name = '[' + key + ']'; + } + if (!str) { + if (ctx.seen.indexOf(desc.value) < 0) { + if (isNull(recurseTimes)) { + str = formatValue(ctx, desc.value, null); + } else { + str = formatValue(ctx, desc.value, recurseTimes - 1); + } + if (str.indexOf('\n') > -1) { + if (array) { + str = str.split('\n').map(function(line) { + return ' ' + line; + }).join('\n').substr(2); + } else { + str = '\n' + str.split('\n').map(function(line) { + return ' ' + line; + }).join('\n'); + } + } + } else { + str = ctx.stylize('[Circular]', 'special'); + } + } + if (isUndefined(name)) { + if (array && key.match(/^\d+$/)) { + return str; + } + name = JSON.stringify('' + key); + if (name.match(/^"([a-zA-Z_][a-zA-Z_0-9]*)"$/)) { + name = name.substr(1, name.length - 2); + name = ctx.stylize(name, 'name'); + } else { + name = name.replace(/'/g, "\\'") + .replace(/\\"/g, '"') + .replace(/(^"|"$)/g, "'"); + name = ctx.stylize(name, 'string'); + } + } + + return name + ': ' + str; +} + + +function reduceToSingleString(output, base, braces) { + var numLinesEst = 0; + var length = output.reduce(function(prev, cur) { + numLinesEst++; + if (cur.indexOf('\n') >= 0) numLinesEst++; + return prev + cur.replace(/\u001b\[\d\d?m/g, '').length + 1; + }, 0); + + if (length > 60) { + return braces[0] + + (base === '' ? '' : base + '\n ') + + ' ' + + output.join(',\n ') + + ' ' + + braces[1]; + } + + return braces[0] + base + ' ' + output.join(', ') + ' ' + braces[1]; +} + + +// NOTE: These type checking functions intentionally don't use `instanceof` +// because it is fragile and can be easily faked with `Object.create()`. 
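+// Illustrative sketch of that fragility (assumed example, not part of the upstream
+// source): an object created with Object.create(Array.prototype) passes
+// `instanceof Array` even though it is not a real array, while Array.isArray
+// (used by isArray below) correctly reports false for it.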
+function isArray(ar) { + return Array.isArray(ar); +} +exports.isArray = isArray; + +function isBoolean(arg) { + return typeof arg === 'boolean'; +} +exports.isBoolean = isBoolean; + +function isNull(arg) { + return arg === null; +} +exports.isNull = isNull; + +function isNullOrUndefined(arg) { + return arg == null; +} +exports.isNullOrUndefined = isNullOrUndefined; + +function isNumber(arg) { + return typeof arg === 'number'; +} +exports.isNumber = isNumber; + +function isString(arg) { + return typeof arg === 'string'; +} +exports.isString = isString; + +function isSymbol(arg) { + return typeof arg === 'symbol'; +} +exports.isSymbol = isSymbol; + +function isUndefined(arg) { + return arg === void 0; +} +exports.isUndefined = isUndefined; + +function isRegExp(re) { + return isObject(re) && objectToString(re) === '[object RegExp]'; +} +exports.isRegExp = isRegExp; + +function isObject(arg) { + return typeof arg === 'object' && arg !== null; +} +exports.isObject = isObject; + +function isDate(d) { + return isObject(d) && objectToString(d) === '[object Date]'; +} +exports.isDate = isDate; + +function isError(e) { + return isObject(e) && + (objectToString(e) === '[object Error]' || e instanceof Error); +} +exports.isError = isError; + +function isFunction(arg) { + return typeof arg === 'function'; +} +exports.isFunction = isFunction; + +function isPrimitive(arg) { + return arg === null || + typeof arg === 'boolean' || + typeof arg === 'number' || + typeof arg === 'string' || + typeof arg === 'symbol' || // ES6 symbol + typeof arg === 'undefined'; +} +exports.isPrimitive = isPrimitive; + +exports.isBuffer = require('./support/isBuffer'); + +function objectToString(o) { + return Object.prototype.toString.call(o); +} + + +function pad(n) { + return n < 10 ? '0' + n.toString(10) : n.toString(10); +} + + +var months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', + 'Oct', 'Nov', 'Dec']; + +// 26 Feb 16:19:34 +function timestamp() { + var d = new Date(); + var time = [pad(d.getHours()), + pad(d.getMinutes()), + pad(d.getSeconds())].join(':'); + return [d.getDate(), months[d.getMonth()], time].join(' '); +} + + +// log is just a thin wrapper to console.log that prepends a timestamp +exports.log = function() { + console.log('%s - %s', timestamp(), exports.format.apply(exports, arguments)); +}; + + +/** + * Inherit the prototype methods from one constructor into another. + * + * The Function.prototype.inherits from lang.js rewritten as a standalone + * function (not on Function.prototype). NOTE: If this file is to be loaded + * during bootstrapping this function needs to be rewritten using some native + * functions as prototype setup using normal JavaScript does not work as + * expected during bootstrapping (see mirror.js in r114903). + * + * @param {function} ctor Constructor function which needs to inherit the + * prototype. + * @param {function} superCtor Constructor function to inherit prototype from. + */ +exports.inherits = require('inherits'); + +exports._extend = function(origin, add) { + // Don't do anything if add isn't an object + if (!add || !isObject(add)) return origin; + + var keys = Object.keys(add); + var i = keys.length; + while (i--) { + origin[keys[i]] = add[keys[i]]; + } + return origin; +}; + +function hasOwnProperty(obj, prop) { + return Object.prototype.hasOwnProperty.call(obj, prop); +} + +}).call(this,require('_process'),typeof global !== "undefined" ? global : typeof self !== "undefined" ? self : typeof window !== "undefined" ? 
window : {}) +},{"./support/isBuffer":27,"_process":24,"inherits":26}],29:[function(require,module,exports){ +// Returns a wrapper function that returns a wrapped callback +// The wrapper function should do some stuff, and return a +// presumably different callback function. +// This makes sure that own properties are retained, so that +// decorations and such are not lost along the way. +module.exports = wrappy +function wrappy (fn, cb) { + if (fn && cb) return wrappy(fn)(cb) + + if (typeof fn !== 'function') + throw new TypeError('need wrapper function') + + Object.keys(fn).forEach(function (k) { + wrapper[k] = fn[k] + }) + + return wrapper + + function wrapper() { + var args = new Array(arguments.length) + for (var i = 0; i < args.length; i++) { + args[i] = arguments[i] + } + var ret = fn.apply(this, args) + var cb = args[args.length-1] + if (typeof ret === 'function' && ret !== cb) { + Object.keys(cb).forEach(function (k) { + ret[k] = cb[k] + }) + } + return ret + } +} + +},{}]},{},[7])(7) +}); \ No newline at end of file diff --git a/assets/javascripts/workers/search.5e67fbfe.min.js b/assets/javascripts/workers/search.5e67fbfe.min.js new file mode 100644 index 00000000..07b7f39b --- /dev/null +++ b/assets/javascripts/workers/search.5e67fbfe.min.js @@ -0,0 +1,48 @@ +(()=>{var ge=Object.create;var U=Object.defineProperty,ye=Object.defineProperties,me=Object.getOwnPropertyDescriptor,ve=Object.getOwnPropertyDescriptors,xe=Object.getOwnPropertyNames,G=Object.getOwnPropertySymbols,Se=Object.getPrototypeOf,X=Object.prototype.hasOwnProperty,Qe=Object.prototype.propertyIsEnumerable;var J=(t,e,r)=>e in t?U(t,e,{enumerable:!0,configurable:!0,writable:!0,value:r}):t[e]=r,M=(t,e)=>{for(var r in e||(e={}))X.call(e,r)&&J(t,r,e[r]);if(G)for(var r of G(e))Qe.call(e,r)&&J(t,r,e[r]);return t},Z=(t,e)=>ye(t,ve(e));var K=(t,e)=>()=>(e||t((e={exports:{}}).exports,e),e.exports);var be=(t,e,r,n)=>{if(e&&typeof e=="object"||typeof e=="function")for(let i of xe(e))!X.call(t,i)&&i!==r&&U(t,i,{get:()=>e[i],enumerable:!(n=me(e,i))||n.enumerable});return t};var W=(t,e,r)=>(r=t!=null?ge(Se(t)):{},be(e||!t||!t.__esModule?U(r,"default",{value:t,enumerable:!0}):r,t));var z=(t,e,r)=>new Promise((n,i)=>{var s=u=>{try{a(r.next(u))}catch(c){i(c)}},o=u=>{try{a(r.throw(u))}catch(c){i(c)}},a=u=>u.done?n(u.value):Promise.resolve(u.value).then(s,o);a((r=r.apply(t,e)).next())});var re=K((ee,te)=>{/** + * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9 + * Copyright (C) 2020 Oliver Nightingale + * @license MIT + */(function(){var t=function(e){var r=new t.Builder;return r.pipeline.add(t.trimmer,t.stopWordFilter,t.stemmer),r.searchPipeline.add(t.stemmer),e.call(r,r),r.build()};t.version="2.3.9";/*! + * lunr.utils + * Copyright (C) 2020 Oliver Nightingale + */t.utils={},t.utils.warn=function(e){return function(r){e.console&&console.warn&&console.warn(r)}}(this),t.utils.asString=function(e){return e==null?"":e.toString()},t.utils.clone=function(e){if(e==null)return e;for(var r=Object.create(null),n=Object.keys(e),i=0;i0){var h=t.utils.clone(r)||{};h.position=[a,c],h.index=s.length,s.push(new t.Token(n.slice(a,o),h))}a=o+1}}return s},t.tokenizer.separator=/[\s\-]+/;/*! 
+ * lunr.Pipeline + * Copyright (C) 2020 Oliver Nightingale + */t.Pipeline=function(){this._stack=[]},t.Pipeline.registeredFunctions=Object.create(null),t.Pipeline.registerFunction=function(e,r){r in this.registeredFunctions&&t.utils.warn("Overwriting existing registered function: "+r),e.label=r,t.Pipeline.registeredFunctions[e.label]=e},t.Pipeline.warnIfFunctionNotRegistered=function(e){var r=e.label&&e.label in this.registeredFunctions;r||t.utils.warn(`Function is not registered with pipeline. This may cause problems when serialising the index. +`,e)},t.Pipeline.load=function(e){var r=new t.Pipeline;return e.forEach(function(n){var i=t.Pipeline.registeredFunctions[n];if(i)r.add(i);else throw new Error("Cannot load unregistered function: "+n)}),r},t.Pipeline.prototype.add=function(){var e=Array.prototype.slice.call(arguments);e.forEach(function(r){t.Pipeline.warnIfFunctionNotRegistered(r),this._stack.push(r)},this)},t.Pipeline.prototype.after=function(e,r){t.Pipeline.warnIfFunctionNotRegistered(r);var n=this._stack.indexOf(e);if(n==-1)throw new Error("Cannot find existingFn");n=n+1,this._stack.splice(n,0,r)},t.Pipeline.prototype.before=function(e,r){t.Pipeline.warnIfFunctionNotRegistered(r);var n=this._stack.indexOf(e);if(n==-1)throw new Error("Cannot find existingFn");this._stack.splice(n,0,r)},t.Pipeline.prototype.remove=function(e){var r=this._stack.indexOf(e);r!=-1&&this._stack.splice(r,1)},t.Pipeline.prototype.run=function(e){for(var r=this._stack.length,n=0;n1&&(oe&&(n=s),o!=e);)i=n-r,s=r+Math.floor(i/2),o=this.elements[s*2];if(o==e||o>e)return s*2;if(ou?h+=2:a==u&&(r+=n[c+1]*i[h+1],c+=2,h+=2);return r},t.Vector.prototype.similarity=function(e){return this.dot(e)/this.magnitude()||0},t.Vector.prototype.toArray=function(){for(var e=new Array(this.elements.length/2),r=1,n=0;r0){var o=s.str.charAt(0),a;o in s.node.edges?a=s.node.edges[o]:(a=new t.TokenSet,s.node.edges[o]=a),s.str.length==1&&(a.final=!0),i.push({node:a,editsRemaining:s.editsRemaining,str:s.str.slice(1)})}if(s.editsRemaining!=0){if("*"in s.node.edges)var u=s.node.edges["*"];else{var u=new t.TokenSet;s.node.edges["*"]=u}if(s.str.length==0&&(u.final=!0),i.push({node:u,editsRemaining:s.editsRemaining-1,str:s.str}),s.str.length>1&&i.push({node:s.node,editsRemaining:s.editsRemaining-1,str:s.str.slice(1)}),s.str.length==1&&(s.node.final=!0),s.str.length>=1){if("*"in s.node.edges)var c=s.node.edges["*"];else{var c=new t.TokenSet;s.node.edges["*"]=c}s.str.length==1&&(c.final=!0),i.push({node:c,editsRemaining:s.editsRemaining-1,str:s.str.slice(1)})}if(s.str.length>1){var h=s.str.charAt(0),y=s.str.charAt(1),g;y in s.node.edges?g=s.node.edges[y]:(g=new t.TokenSet,s.node.edges[y]=g),s.str.length==1&&(g.final=!0),i.push({node:g,editsRemaining:s.editsRemaining-1,str:h+s.str.slice(2)})}}}return n},t.TokenSet.fromString=function(e){for(var r=new t.TokenSet,n=r,i=0,s=e.length;i=e;r--){var n=this.uncheckedNodes[r],i=n.child.toString();i in this.minimizedNodes?n.parent.edges[n.char]=this.minimizedNodes[i]:(n.child._str=i,this.minimizedNodes[i]=n.child),this.uncheckedNodes.pop()}};/*! 
+ * lunr.Index + * Copyright (C) 2020 Oliver Nightingale + */t.Index=function(e){this.invertedIndex=e.invertedIndex,this.fieldVectors=e.fieldVectors,this.tokenSet=e.tokenSet,this.fields=e.fields,this.pipeline=e.pipeline},t.Index.prototype.search=function(e){return this.query(function(r){var n=new t.QueryParser(e,r);n.parse()})},t.Index.prototype.query=function(e){for(var r=new t.Query(this.fields),n=Object.create(null),i=Object.create(null),s=Object.create(null),o=Object.create(null),a=Object.create(null),u=0;u1?this._b=1:this._b=e},t.Builder.prototype.k1=function(e){this._k1=e},t.Builder.prototype.add=function(e,r){var n=e[this._ref],i=Object.keys(this._fields);this._documents[n]=r||{},this.documentCount+=1;for(var s=0;s=this.length)return t.QueryLexer.EOS;var e=this.str.charAt(this.pos);return this.pos+=1,e},t.QueryLexer.prototype.width=function(){return this.pos-this.start},t.QueryLexer.prototype.ignore=function(){this.start==this.pos&&(this.pos+=1),this.start=this.pos},t.QueryLexer.prototype.backup=function(){this.pos-=1},t.QueryLexer.prototype.acceptDigitRun=function(){var e,r;do e=this.next(),r=e.charCodeAt(0);while(r>47&&r<58);e!=t.QueryLexer.EOS&&this.backup()},t.QueryLexer.prototype.more=function(){return this.pos1&&(e.backup(),e.emit(t.QueryLexer.TERM)),e.ignore(),e.more())return t.QueryLexer.lexText},t.QueryLexer.lexEditDistance=function(e){return e.ignore(),e.acceptDigitRun(),e.emit(t.QueryLexer.EDIT_DISTANCE),t.QueryLexer.lexText},t.QueryLexer.lexBoost=function(e){return e.ignore(),e.acceptDigitRun(),e.emit(t.QueryLexer.BOOST),t.QueryLexer.lexText},t.QueryLexer.lexEOS=function(e){e.width()>0&&e.emit(t.QueryLexer.TERM)},t.QueryLexer.termSeparator=t.tokenizer.separator,t.QueryLexer.lexText=function(e){for(;;){var r=e.next();if(r==t.QueryLexer.EOS)return t.QueryLexer.lexEOS;if(r.charCodeAt(0)==92){e.escapeCharacter();continue}if(r==":")return t.QueryLexer.lexField;if(r=="~")return e.backup(),e.width()>0&&e.emit(t.QueryLexer.TERM),t.QueryLexer.lexEditDistance;if(r=="^")return e.backup(),e.width()>0&&e.emit(t.QueryLexer.TERM),t.QueryLexer.lexBoost;if(r=="+"&&e.width()===1||r=="-"&&e.width()===1)return e.emit(t.QueryLexer.PRESENCE),t.QueryLexer.lexText;if(r.match(t.QueryLexer.termSeparator))return t.QueryLexer.lexTerm}},t.QueryParser=function(e,r){this.lexer=new t.QueryLexer(e),this.query=r,this.currentClause={},this.lexemeIdx=0},t.QueryParser.prototype.parse=function(){this.lexer.run(),this.lexemes=this.lexer.lexemes;for(var e=t.QueryParser.parseClause;e;)e=e(this);return this.query},t.QueryParser.prototype.peekLexeme=function(){return this.lexemes[this.lexemeIdx]},t.QueryParser.prototype.consumeLexeme=function(){var e=this.peekLexeme();return this.lexemeIdx+=1,e},t.QueryParser.prototype.nextClause=function(){var e=this.currentClause;this.query.clause(e),this.currentClause={}},t.QueryParser.parseClause=function(e){var r=e.peekLexeme();if(r!=null)switch(r.type){case t.QueryLexer.PRESENCE:return t.QueryParser.parsePresence;case t.QueryLexer.FIELD:return t.QueryParser.parseField;case t.QueryLexer.TERM:return t.QueryParser.parseTerm;default:var n="expected either a field or a term, found "+r.type;throw r.str.length>=1&&(n+=" with value '"+r.str+"'"),new t.QueryParseError(n,r.start,r.end)}},t.QueryParser.parsePresence=function(e){var r=e.consumeLexeme();if(r!=null){switch(r.str){case"-":e.currentClause.presence=t.Query.presence.PROHIBITED;break;case"+":e.currentClause.presence=t.Query.presence.REQUIRED;break;default:var n="unrecognised presence operator'"+r.str+"'";throw new 
t.QueryParseError(n,r.start,r.end)}var i=e.peekLexeme();if(i==null){var n="expecting term or field, found nothing";throw new t.QueryParseError(n,r.start,r.end)}switch(i.type){case t.QueryLexer.FIELD:return t.QueryParser.parseField;case t.QueryLexer.TERM:return t.QueryParser.parseTerm;default:var n="expecting term or field, found '"+i.type+"'";throw new t.QueryParseError(n,i.start,i.end)}}},t.QueryParser.parseField=function(e){var r=e.consumeLexeme();if(r!=null){if(e.query.allFields.indexOf(r.str)==-1){var n=e.query.allFields.map(function(o){return"'"+o+"'"}).join(", "),i="unrecognised field '"+r.str+"', possible fields: "+n;throw new t.QueryParseError(i,r.start,r.end)}e.currentClause.fields=[r.str];var s=e.peekLexeme();if(s==null){var i="expecting term, found nothing";throw new t.QueryParseError(i,r.start,r.end)}switch(s.type){case t.QueryLexer.TERM:return t.QueryParser.parseTerm;default:var i="expecting term, found '"+s.type+"'";throw new t.QueryParseError(i,s.start,s.end)}}},t.QueryParser.parseTerm=function(e){var r=e.consumeLexeme();if(r!=null){e.currentClause.term=r.str.toLowerCase(),r.str.indexOf("*")!=-1&&(e.currentClause.usePipeline=!1);var n=e.peekLexeme();if(n==null){e.nextClause();return}switch(n.type){case t.QueryLexer.TERM:return e.nextClause(),t.QueryParser.parseTerm;case t.QueryLexer.FIELD:return e.nextClause(),t.QueryParser.parseField;case t.QueryLexer.EDIT_DISTANCE:return t.QueryParser.parseEditDistance;case t.QueryLexer.BOOST:return t.QueryParser.parseBoost;case t.QueryLexer.PRESENCE:return e.nextClause(),t.QueryParser.parsePresence;default:var i="Unexpected lexeme type '"+n.type+"'";throw new t.QueryParseError(i,n.start,n.end)}}},t.QueryParser.parseEditDistance=function(e){var r=e.consumeLexeme();if(r!=null){var n=parseInt(r.str,10);if(isNaN(n)){var i="edit distance must be numeric";throw new t.QueryParseError(i,r.start,r.end)}e.currentClause.editDistance=n;var s=e.peekLexeme();if(s==null){e.nextClause();return}switch(s.type){case t.QueryLexer.TERM:return e.nextClause(),t.QueryParser.parseTerm;case t.QueryLexer.FIELD:return e.nextClause(),t.QueryParser.parseField;case t.QueryLexer.EDIT_DISTANCE:return t.QueryParser.parseEditDistance;case t.QueryLexer.BOOST:return t.QueryParser.parseBoost;case t.QueryLexer.PRESENCE:return e.nextClause(),t.QueryParser.parsePresence;default:var i="Unexpected lexeme type '"+s.type+"'";throw new t.QueryParseError(i,s.start,s.end)}}},t.QueryParser.parseBoost=function(e){var r=e.consumeLexeme();if(r!=null){var n=parseInt(r.str,10);if(isNaN(n)){var i="boost must be numeric";throw new t.QueryParseError(i,r.start,r.end)}e.currentClause.boost=n;var s=e.peekLexeme();if(s==null){e.nextClause();return}switch(s.type){case t.QueryLexer.TERM:return e.nextClause(),t.QueryParser.parseTerm;case t.QueryLexer.FIELD:return e.nextClause(),t.QueryParser.parseField;case t.QueryLexer.EDIT_DISTANCE:return t.QueryParser.parseEditDistance;case t.QueryLexer.BOOST:return t.QueryParser.parseBoost;case t.QueryLexer.PRESENCE:return e.nextClause(),t.QueryParser.parsePresence;default:var i="Unexpected lexeme type '"+s.type+"'";throw new t.QueryParseError(i,s.start,s.end)}}},function(e,r){typeof define=="function"&&define.amd?define(r):typeof ee=="object"?te.exports=r():e.lunr=r()}(this,function(){return t})})()});var H=K((Re,ne)=>{"use strict";/*! 
+ * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var Le=/["'&<>]/;ne.exports=we;function we(t){var e=""+t,r=Le.exec(e);if(!r)return e;var n,i="",s=0,o=0;for(s=r.index;s=0;r--){let n=t[r];typeof n!="object"?n=document.createTextNode(n):n.parentNode&&n.parentNode.removeChild(n),r?e.insertBefore(this.previousSibling,n):e.replaceChild(n,this)}}}));var ie=W(H());function se(t){let e=new Map,r=new Set;for(let n of t){let[i,s]=n.location.split("#"),o=n.location,a=n.title,u=n.tags,c=(0,ie.default)(n.text).replace(/\s+(?=[,.:;!?])/g,"").replace(/\s+/g," ");if(s){let h=e.get(i);r.has(h)?e.set(o,{location:o,title:a,text:c,parent:h}):(h.title=n.title,h.text=c,r.add(h))}else e.set(o,M({location:o,title:a,text:c},u&&{tags:u}))}return e}var oe=W(H());function ae(t,e){let r=new RegExp(t.separator,"img"),n=(i,s,o)=>`${s}${o}`;return i=>{i=i.replace(/[\s*+\-:~^]+/g," ").trim();let s=new RegExp(`(^|${t.separator})(${i.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return o=>(e?(0,oe.default)(o):o).replace(s,n).replace(/<\/mark>(\s+)]*>/img,"$1")}}function ue(t){let e=new lunr.Query(["title","text"]);return new lunr.QueryParser(t,e).parse(),e.clauses}function ce(t,e){var i;let r=new Set(t),n={};for(let s=0;s!n.has(i)))]}var q=class{constructor({config:e,docs:r,options:n}){this.options=n,this.documents=se(r),this.highlight=ae(e,!1),lunr.tokenizer.separator=new RegExp(e.separator),this.index=lunr(function(){e.lang.length===1&&e.lang[0]!=="en"?this.use(lunr[e.lang[0]]):e.lang.length>1&&this.use(lunr.multiLanguage(...e.lang));let i=Ee(["trimmer","stopWordFilter","stemmer"],n.pipeline);for(let s of e.lang.map(o=>o==="en"?lunr:lunr[o]))for(let o of i)this.pipeline.remove(s[o]),this.searchPipeline.remove(s[o]);this.ref("location"),this.field("title",{boost:1e3}),this.field("text"),this.field("tags",{boost:1e6});for(let s of r)this.add(s)})}search(e){if(e)try{let r=this.highlight(e),n=ue(e).filter(o=>o.presence!==lunr.Query.presence.PROHIBITED),i=this.index.search(`${e}*`).reduce((o,{ref:a,score:u,matchData:c})=>{let h=this.documents.get(a);if(typeof h!="undefined"){let{location:y,title:g,text:b,tags:m,parent:Q}=h,p=ce(n,Object.keys(c.metadata)),d=+!Q+ +Object.values(p).every(w=>w);o.push(Z(M({location:y,title:r(g),text:r(b)},m&&{tags:m.map(r)}),{score:u*(1+d),terms:p}))}return o},[]).sort((o,a)=>a.score-o.score).reduce((o,a)=>{let u=this.documents.get(a.location);if(typeof u!="undefined"){let c="parent"in u?u.parent.location:u.location;o.set(c,[...o.get(c)||[],a])}return o},new Map),s;if(this.options.suggestions){let o=this.index.query(a=>{for(let u of n)a.term(u.term,{fields:["title"],presence:lunr.Query.presence.REQUIRED,wildcard:lunr.Query.wildcard.TRAILING})});s=o.length?Object.keys(o[0].matchData.metadata):[]}return M({items:[...i.values()]},typeof s!="undefined"&&{suggestions:s})}catch(r){console.warn(`Invalid query: ${e} \u2013 see https://bit.ly/2s3ChXG`)}return{items:[]}}};var Y;function ke(t){return z(this,null,function*(){let e="../lunr";if(typeof parent!="undefined"&&"IFrameWorker"in parent){let n=document.querySelector("script[src]"),[i]=n.src.split("/worker");e=e.replace("..",i)}let r=[];for(let n of t.lang){switch(n){case"ja":r.push(`${e}/tinyseg.js`);break;case"hi":case"th":r.push(`${e}/wordcut.js`);break}n!=="en"&&r.push(`${e}/min/lunr.${n}.min.js`)}t.lang.length>1&&r.push(`${e}/min/lunr.multi.min.js`),r.length&&(yield 
importScripts(`${e}/min/lunr.stemmer.support.min.js`,...r))})}function Te(t){return z(this,null,function*(){switch(t.type){case 0:return yield ke(t.data.config),Y=new q(t.data),{type:1};case 2:return{type:3,data:Y?Y.search(t.data):{items:[]}};default:throw new TypeError("Invalid message type")}})}self.lunr=le.default;addEventListener("message",t=>z(void 0,null,function*(){postMessage(yield Te(t.data))}));})(); +//# sourceMappingURL=search.5e67fbfe.min.js.map + diff --git a/assets/javascripts/workers/search.5e67fbfe.min.js.map b/assets/javascripts/workers/search.5e67fbfe.min.js.map new file mode 100644 index 00000000..06d43304 --- /dev/null +++ b/assets/javascripts/workers/search.5e67fbfe.min.js.map @@ -0,0 +1,8 @@ +{ + "version": 3, + "sources": ["node_modules/lunr/lunr.js", "node_modules/escape-html/index.js", "src/assets/javascripts/integrations/search/worker/main/index.ts", "src/assets/javascripts/polyfills/index.ts", "src/assets/javascripts/integrations/search/document/index.ts", "src/assets/javascripts/integrations/search/highlighter/index.ts", "src/assets/javascripts/integrations/search/query/_/index.ts", "src/assets/javascripts/integrations/search/_/index.ts"], + "sourceRoot": "../../../..", + "sourcesContent": ["/**\n * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9\n * Copyright (C) 2020 Oliver Nightingale\n * @license MIT\n */\n\n;(function(){\n\n/**\n * A convenience function for configuring and constructing\n * a new lunr Index.\n *\n * A lunr.Builder instance is created and the pipeline setup\n * with a trimmer, stop word filter and stemmer.\n *\n * This builder object is yielded to the configuration function\n * that is passed as a parameter, allowing the list of fields\n * and other builder parameters to be customised.\n *\n * All documents _must_ be added within the passed config function.\n *\n * @example\n * var idx = lunr(function () {\n * this.field('title')\n * this.field('body')\n * this.ref('id')\n *\n * documents.forEach(function (doc) {\n * this.add(doc)\n * }, this)\n * })\n *\n * @see {@link lunr.Builder}\n * @see {@link lunr.Pipeline}\n * @see {@link lunr.trimmer}\n * @see {@link lunr.stopWordFilter}\n * @see {@link lunr.stemmer}\n * @namespace {function} lunr\n */\nvar lunr = function (config) {\n var builder = new lunr.Builder\n\n builder.pipeline.add(\n lunr.trimmer,\n lunr.stopWordFilter,\n lunr.stemmer\n )\n\n builder.searchPipeline.add(\n lunr.stemmer\n )\n\n config.call(builder, builder)\n return builder.build()\n}\n\nlunr.version = \"2.3.9\"\n/*!\n * lunr.utils\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A namespace containing utils for the rest of the lunr library\n * @namespace lunr.utils\n */\nlunr.utils = {}\n\n/**\n * Print a warning message to the console.\n *\n * @param {String} message The message to be printed.\n * @memberOf lunr.utils\n * @function\n */\nlunr.utils.warn = (function (global) {\n /* eslint-disable no-console */\n return function (message) {\n if (global.console && console.warn) {\n console.warn(message)\n }\n }\n /* eslint-enable no-console */\n})(this)\n\n/**\n * Convert an object to a string.\n *\n * In the case of `null` and `undefined` the function returns\n * the empty string, in all other cases the result of calling\n * `toString` on the passed object is returned.\n *\n * @param {Any} obj The object to convert to a string.\n * @return {String} string representation of the passed object.\n * @memberOf lunr.utils\n */\nlunr.utils.asString = function (obj) {\n if (obj === 
void 0 || obj === null) {\n return \"\"\n } else {\n return obj.toString()\n }\n}\n\n/**\n * Clones an object.\n *\n * Will create a copy of an existing object such that any mutations\n * on the copy cannot affect the original.\n *\n * Only shallow objects are supported, passing a nested object to this\n * function will cause a TypeError.\n *\n * Objects with primitives, and arrays of primitives are supported.\n *\n * @param {Object} obj The object to clone.\n * @return {Object} a clone of the passed object.\n * @throws {TypeError} when a nested object is passed.\n * @memberOf Utils\n */\nlunr.utils.clone = function (obj) {\n if (obj === null || obj === undefined) {\n return obj\n }\n\n var clone = Object.create(null),\n keys = Object.keys(obj)\n\n for (var i = 0; i < keys.length; i++) {\n var key = keys[i],\n val = obj[key]\n\n if (Array.isArray(val)) {\n clone[key] = val.slice()\n continue\n }\n\n if (typeof val === 'string' ||\n typeof val === 'number' ||\n typeof val === 'boolean') {\n clone[key] = val\n continue\n }\n\n throw new TypeError(\"clone is not deep and does not support nested objects\")\n }\n\n return clone\n}\nlunr.FieldRef = function (docRef, fieldName, stringValue) {\n this.docRef = docRef\n this.fieldName = fieldName\n this._stringValue = stringValue\n}\n\nlunr.FieldRef.joiner = \"/\"\n\nlunr.FieldRef.fromString = function (s) {\n var n = s.indexOf(lunr.FieldRef.joiner)\n\n if (n === -1) {\n throw \"malformed field ref string\"\n }\n\n var fieldRef = s.slice(0, n),\n docRef = s.slice(n + 1)\n\n return new lunr.FieldRef (docRef, fieldRef, s)\n}\n\nlunr.FieldRef.prototype.toString = function () {\n if (this._stringValue == undefined) {\n this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef\n }\n\n return this._stringValue\n}\n/*!\n * lunr.Set\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A lunr set.\n *\n * @constructor\n */\nlunr.Set = function (elements) {\n this.elements = Object.create(null)\n\n if (elements) {\n this.length = elements.length\n\n for (var i = 0; i < this.length; i++) {\n this.elements[elements[i]] = true\n }\n } else {\n this.length = 0\n }\n}\n\n/**\n * A complete set that contains all elements.\n *\n * @static\n * @readonly\n * @type {lunr.Set}\n */\nlunr.Set.complete = {\n intersect: function (other) {\n return other\n },\n\n union: function () {\n return this\n },\n\n contains: function () {\n return true\n }\n}\n\n/**\n * An empty set that contains no elements.\n *\n * @static\n * @readonly\n * @type {lunr.Set}\n */\nlunr.Set.empty = {\n intersect: function () {\n return this\n },\n\n union: function (other) {\n return other\n },\n\n contains: function () {\n return false\n }\n}\n\n/**\n * Returns true if this set contains the specified object.\n *\n * @param {object} object - Object whose presence in this set is to be tested.\n * @returns {boolean} - True if this set contains the specified object.\n */\nlunr.Set.prototype.contains = function (object) {\n return !!this.elements[object]\n}\n\n/**\n * Returns a new set containing only the elements that are present in both\n * this set and the specified set.\n *\n * @param {lunr.Set} other - set to intersect with this set.\n * @returns {lunr.Set} a new set that is the intersection of this and the specified set.\n */\n\nlunr.Set.prototype.intersect = function (other) {\n var a, b, elements, intersection = []\n\n if (other === lunr.Set.complete) {\n return this\n }\n\n if (other === lunr.Set.empty) {\n return other\n }\n\n if (this.length < other.length) {\n a = this\n b 
= other\n } else {\n a = other\n b = this\n }\n\n elements = Object.keys(a.elements)\n\n for (var i = 0; i < elements.length; i++) {\n var element = elements[i]\n if (element in b.elements) {\n intersection.push(element)\n }\n }\n\n return new lunr.Set (intersection)\n}\n\n/**\n * Returns a new set combining the elements of this and the specified set.\n *\n * @param {lunr.Set} other - set to union with this set.\n * @return {lunr.Set} a new set that is the union of this and the specified set.\n */\n\nlunr.Set.prototype.union = function (other) {\n if (other === lunr.Set.complete) {\n return lunr.Set.complete\n }\n\n if (other === lunr.Set.empty) {\n return this\n }\n\n return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements)))\n}\n/**\n * A function to calculate the inverse document frequency for\n * a posting. This is shared between the builder and the index\n *\n * @private\n * @param {object} posting - The posting for a given term\n * @param {number} documentCount - The total number of documents.\n */\nlunr.idf = function (posting, documentCount) {\n var documentsWithTerm = 0\n\n for (var fieldName in posting) {\n if (fieldName == '_index') continue // Ignore the term index, its not a field\n documentsWithTerm += Object.keys(posting[fieldName]).length\n }\n\n var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5)\n\n return Math.log(1 + Math.abs(x))\n}\n\n/**\n * A token wraps a string representation of a token\n * as it is passed through the text processing pipeline.\n *\n * @constructor\n * @param {string} [str=''] - The string token being wrapped.\n * @param {object} [metadata={}] - Metadata associated with this token.\n */\nlunr.Token = function (str, metadata) {\n this.str = str || \"\"\n this.metadata = metadata || {}\n}\n\n/**\n * Returns the token string that is being wrapped by this object.\n *\n * @returns {string}\n */\nlunr.Token.prototype.toString = function () {\n return this.str\n}\n\n/**\n * A token update function is used when updating or optionally\n * when cloning a token.\n *\n * @callback lunr.Token~updateFunction\n * @param {string} str - The string representation of the token.\n * @param {Object} metadata - All metadata associated with this token.\n */\n\n/**\n * Applies the given function to the wrapped string token.\n *\n * @example\n * token.update(function (str, metadata) {\n * return str.toUpperCase()\n * })\n *\n * @param {lunr.Token~updateFunction} fn - A function to apply to the token string.\n * @returns {lunr.Token}\n */\nlunr.Token.prototype.update = function (fn) {\n this.str = fn(this.str, this.metadata)\n return this\n}\n\n/**\n * Creates a clone of this token. Optionally a function can be\n * applied to the cloned token.\n *\n * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token.\n * @returns {lunr.Token}\n */\nlunr.Token.prototype.clone = function (fn) {\n fn = fn || function (s) { return s }\n return new lunr.Token (fn(this.str, this.metadata), this.metadata)\n}\n/*!\n * lunr.tokenizer\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A function for splitting a string into tokens ready to be inserted into\n * the search index. 
Uses `lunr.tokenizer.separator` to split strings, change\n * the value of this property to change how strings are split into tokens.\n *\n * This tokenizer will convert its parameter to a string by calling `toString` and\n * then will split this string on the character in `lunr.tokenizer.separator`.\n * Arrays will have their elements converted to strings and wrapped in a lunr.Token.\n *\n * Optional metadata can be passed to the tokenizer, this metadata will be cloned and\n * added as metadata to every token that is created from the object to be tokenized.\n *\n * @static\n * @param {?(string|object|object[])} obj - The object to convert into tokens\n * @param {?object} metadata - Optional metadata to associate with every token\n * @returns {lunr.Token[]}\n * @see {@link lunr.Pipeline}\n */\nlunr.tokenizer = function (obj, metadata) {\n if (obj == null || obj == undefined) {\n return []\n }\n\n if (Array.isArray(obj)) {\n return obj.map(function (t) {\n return new lunr.Token(\n lunr.utils.asString(t).toLowerCase(),\n lunr.utils.clone(metadata)\n )\n })\n }\n\n var str = obj.toString().toLowerCase(),\n len = str.length,\n tokens = []\n\n for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) {\n var char = str.charAt(sliceEnd),\n sliceLength = sliceEnd - sliceStart\n\n if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) {\n\n if (sliceLength > 0) {\n var tokenMetadata = lunr.utils.clone(metadata) || {}\n tokenMetadata[\"position\"] = [sliceStart, sliceLength]\n tokenMetadata[\"index\"] = tokens.length\n\n tokens.push(\n new lunr.Token (\n str.slice(sliceStart, sliceEnd),\n tokenMetadata\n )\n )\n }\n\n sliceStart = sliceEnd + 1\n }\n\n }\n\n return tokens\n}\n\n/**\n * The separator used to split a string into tokens. Override this property to change the behaviour of\n * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens.\n *\n * @static\n * @see lunr.tokenizer\n */\nlunr.tokenizer.separator = /[\\s\\-]+/\n/*!\n * lunr.Pipeline\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.Pipelines maintain an ordered list of functions to be applied to all\n * tokens in documents entering the search index and queries being ran against\n * the index.\n *\n * An instance of lunr.Index created with the lunr shortcut will contain a\n * pipeline with a stop word filter and an English language stemmer. Extra\n * functions can be added before or after either of these functions or these\n * default functions can be removed.\n *\n * When run the pipeline will call each function in turn, passing a token, the\n * index of that token in the original list of all tokens and finally a list of\n * all the original tokens.\n *\n * The output of functions in the pipeline will be passed to the next function\n * in the pipeline. To exclude a token from entering the index the function\n * should return undefined, the rest of the pipeline will not be called with\n * this token.\n *\n * For serialisation of pipelines to work, all functions used in an instance of\n * a pipeline should be registered with lunr.Pipeline. Registered functions can\n * then be loaded. 
If trying to load a serialised pipeline that uses functions\n * that are not registered an error will be thrown.\n *\n * If not planning on serialising the pipeline then registering pipeline functions\n * is not necessary.\n *\n * @constructor\n */\nlunr.Pipeline = function () {\n this._stack = []\n}\n\nlunr.Pipeline.registeredFunctions = Object.create(null)\n\n/**\n * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token\n * string as well as all known metadata. A pipeline function can mutate the token string\n * or mutate (or add) metadata for a given token.\n *\n * A pipeline function can indicate that the passed token should be discarded by returning\n * null, undefined or an empty string. This token will not be passed to any downstream pipeline\n * functions and will not be added to the index.\n *\n * Multiple tokens can be returned by returning an array of tokens. Each token will be passed\n * to any downstream pipeline functions and all will returned tokens will be added to the index.\n *\n * Any number of pipeline functions may be chained together using a lunr.Pipeline.\n *\n * @interface lunr.PipelineFunction\n * @param {lunr.Token} token - A token from the document being processed.\n * @param {number} i - The index of this token in the complete list of tokens for this document/field.\n * @param {lunr.Token[]} tokens - All tokens for this document/field.\n * @returns {(?lunr.Token|lunr.Token[])}\n */\n\n/**\n * Register a function with the pipeline.\n *\n * Functions that are used in the pipeline should be registered if the pipeline\n * needs to be serialised, or a serialised pipeline needs to be loaded.\n *\n * Registering a function does not add it to a pipeline, functions must still be\n * added to instances of the pipeline for them to be used when running a pipeline.\n *\n * @param {lunr.PipelineFunction} fn - The function to check for.\n * @param {String} label - The label to register this function with\n */\nlunr.Pipeline.registerFunction = function (fn, label) {\n if (label in this.registeredFunctions) {\n lunr.utils.warn('Overwriting existing registered function: ' + label)\n }\n\n fn.label = label\n lunr.Pipeline.registeredFunctions[fn.label] = fn\n}\n\n/**\n * Warns if the function is not registered as a Pipeline function.\n *\n * @param {lunr.PipelineFunction} fn - The function to check for.\n * @private\n */\nlunr.Pipeline.warnIfFunctionNotRegistered = function (fn) {\n var isRegistered = fn.label && (fn.label in this.registeredFunctions)\n\n if (!isRegistered) {\n lunr.utils.warn('Function is not registered with pipeline. 
This may cause problems when serialising the index.\\n', fn)\n }\n}\n\n/**\n * Loads a previously serialised pipeline.\n *\n * All functions to be loaded must already be registered with lunr.Pipeline.\n * If any function from the serialised data has not been registered then an\n * error will be thrown.\n *\n * @param {Object} serialised - The serialised pipeline to load.\n * @returns {lunr.Pipeline}\n */\nlunr.Pipeline.load = function (serialised) {\n var pipeline = new lunr.Pipeline\n\n serialised.forEach(function (fnName) {\n var fn = lunr.Pipeline.registeredFunctions[fnName]\n\n if (fn) {\n pipeline.add(fn)\n } else {\n throw new Error('Cannot load unregistered function: ' + fnName)\n }\n })\n\n return pipeline\n}\n\n/**\n * Adds new functions to the end of the pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline.\n */\nlunr.Pipeline.prototype.add = function () {\n var fns = Array.prototype.slice.call(arguments)\n\n fns.forEach(function (fn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(fn)\n this._stack.push(fn)\n }, this)\n}\n\n/**\n * Adds a single function after a function that already exists in the\n * pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.\n * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.\n */\nlunr.Pipeline.prototype.after = function (existingFn, newFn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(newFn)\n\n var pos = this._stack.indexOf(existingFn)\n if (pos == -1) {\n throw new Error('Cannot find existingFn')\n }\n\n pos = pos + 1\n this._stack.splice(pos, 0, newFn)\n}\n\n/**\n * Adds a single function before a function that already exists in the\n * pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.\n * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.\n */\nlunr.Pipeline.prototype.before = function (existingFn, newFn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(newFn)\n\n var pos = this._stack.indexOf(existingFn)\n if (pos == -1) {\n throw new Error('Cannot find existingFn')\n }\n\n this._stack.splice(pos, 0, newFn)\n}\n\n/**\n * Removes a function from the pipeline.\n *\n * @param {lunr.PipelineFunction} fn The function to remove from the pipeline.\n */\nlunr.Pipeline.prototype.remove = function (fn) {\n var pos = this._stack.indexOf(fn)\n if (pos == -1) {\n return\n }\n\n this._stack.splice(pos, 1)\n}\n\n/**\n * Runs the current list of functions that make up the pipeline against the\n * passed tokens.\n *\n * @param {Array} tokens The tokens to run through the pipeline.\n * @returns {Array}\n */\nlunr.Pipeline.prototype.run = function (tokens) {\n var stackLength = this._stack.length\n\n for (var i = 0; i < stackLength; i++) {\n var fn = this._stack[i]\n var memo = []\n\n for (var j = 0; j < tokens.length; j++) {\n var result = fn(tokens[j], j, tokens)\n\n if (result === null || result === void 0 || result === '') continue\n\n if (Array.isArray(result)) {\n for (var k = 0; k < result.length; k++) {\n memo.push(result[k])\n }\n } else {\n memo.push(result)\n }\n }\n\n tokens = memo\n }\n\n return tokens\n}\n\n/**\n * Convenience method for passing a string through a pipeline and getting\n * strings out. 
This method takes care of wrapping the passed string in a\n * token and mapping the resulting tokens back to strings.\n *\n * @param {string} str - The string to pass through the pipeline.\n * @param {?object} metadata - Optional metadata to associate with the token\n * passed to the pipeline.\n * @returns {string[]}\n */\nlunr.Pipeline.prototype.runString = function (str, metadata) {\n var token = new lunr.Token (str, metadata)\n\n return this.run([token]).map(function (t) {\n return t.toString()\n })\n}\n\n/**\n * Resets the pipeline by removing any existing processors.\n *\n */\nlunr.Pipeline.prototype.reset = function () {\n this._stack = []\n}\n\n/**\n * Returns a representation of the pipeline ready for serialisation.\n *\n * Logs a warning if the function has not been registered.\n *\n * @returns {Array}\n */\nlunr.Pipeline.prototype.toJSON = function () {\n return this._stack.map(function (fn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(fn)\n\n return fn.label\n })\n}\n/*!\n * lunr.Vector\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A vector is used to construct the vector space of documents and queries. These\n * vectors support operations to determine the similarity between two documents or\n * a document and a query.\n *\n * Normally no parameters are required for initializing a vector, but in the case of\n * loading a previously dumped vector the raw elements can be provided to the constructor.\n *\n * For performance reasons vectors are implemented with a flat array, where an elements\n * index is immediately followed by its value. E.g. [index, value, index, value]. This\n * allows the underlying array to be as sparse as possible and still offer decent\n * performance when being used for vector calculations.\n *\n * @constructor\n * @param {Number[]} [elements] - The flat list of element index and element value pairs.\n */\nlunr.Vector = function (elements) {\n this._magnitude = 0\n this.elements = elements || []\n}\n\n\n/**\n * Calculates the position within the vector to insert a given index.\n *\n * This is used internally by insert and upsert. 
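// Illustrative sketch of the flat [index, value, index, value, ...] layout
// described above; vectors are normally created internally by the builder
// rather than by hand, so this is for orientation only.
var vecA = new lunr.Vector
var vecB = new lunr.Vector

vecA.insert(0, 3)
vecA.insert(2, 4)   // vecA.elements is now [0, 3, 2, 4]
vecB.insert(0, 3)

vecA.magnitude()    // sqrt(3*3 + 4*4) = 5
vecA.dot(vecB)      // only index 0 is shared, so 3 * 3 = 9
vecA.similarity(vecB) // dot product divided by vecA's magnitude: 9 / 5 = 1.8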
If there are duplicate indexes then\n * the position is returned as if the value for that index were to be updated, but it\n * is the callers responsibility to check whether there is a duplicate at that index\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @returns {Number}\n */\nlunr.Vector.prototype.positionForIndex = function (index) {\n // For an empty vector the tuple can be inserted at the beginning\n if (this.elements.length == 0) {\n return 0\n }\n\n var start = 0,\n end = this.elements.length / 2,\n sliceLength = end - start,\n pivotPoint = Math.floor(sliceLength / 2),\n pivotIndex = this.elements[pivotPoint * 2]\n\n while (sliceLength > 1) {\n if (pivotIndex < index) {\n start = pivotPoint\n }\n\n if (pivotIndex > index) {\n end = pivotPoint\n }\n\n if (pivotIndex == index) {\n break\n }\n\n sliceLength = end - start\n pivotPoint = start + Math.floor(sliceLength / 2)\n pivotIndex = this.elements[pivotPoint * 2]\n }\n\n if (pivotIndex == index) {\n return pivotPoint * 2\n }\n\n if (pivotIndex > index) {\n return pivotPoint * 2\n }\n\n if (pivotIndex < index) {\n return (pivotPoint + 1) * 2\n }\n}\n\n/**\n * Inserts an element at an index within the vector.\n *\n * Does not allow duplicates, will throw an error if there is already an entry\n * for this index.\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @param {Number} val - The value to be inserted into the vector.\n */\nlunr.Vector.prototype.insert = function (insertIdx, val) {\n this.upsert(insertIdx, val, function () {\n throw \"duplicate index\"\n })\n}\n\n/**\n * Inserts or updates an existing index within the vector.\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @param {Number} val - The value to be inserted into the vector.\n * @param {function} fn - A function that is called for updates, the existing value and the\n * requested value are passed as arguments\n */\nlunr.Vector.prototype.upsert = function (insertIdx, val, fn) {\n this._magnitude = 0\n var position = this.positionForIndex(insertIdx)\n\n if (this.elements[position] == insertIdx) {\n this.elements[position + 1] = fn(this.elements[position + 1], val)\n } else {\n this.elements.splice(position, 0, insertIdx, val)\n }\n}\n\n/**\n * Calculates the magnitude of this vector.\n *\n * @returns {Number}\n */\nlunr.Vector.prototype.magnitude = function () {\n if (this._magnitude) return this._magnitude\n\n var sumOfSquares = 0,\n elementsLength = this.elements.length\n\n for (var i = 1; i < elementsLength; i += 2) {\n var val = this.elements[i]\n sumOfSquares += val * val\n }\n\n return this._magnitude = Math.sqrt(sumOfSquares)\n}\n\n/**\n * Calculates the dot product of this vector and another vector.\n *\n * @param {lunr.Vector} otherVector - The vector to compute the dot product with.\n * @returns {Number}\n */\nlunr.Vector.prototype.dot = function (otherVector) {\n var dotProduct = 0,\n a = this.elements, b = otherVector.elements,\n aLen = a.length, bLen = b.length,\n aVal = 0, bVal = 0,\n i = 0, j = 0\n\n while (i < aLen && j < bLen) {\n aVal = a[i], bVal = b[j]\n if (aVal < bVal) {\n i += 2\n } else if (aVal > bVal) {\n j += 2\n } else if (aVal == bVal) {\n dotProduct += a[i + 1] * b[j + 1]\n i += 2\n j += 2\n }\n }\n\n return dotProduct\n}\n\n/**\n * Calculates the similarity between this vector and another vector.\n *\n * @param {lunr.Vector} otherVector - The other vector to calculate the\n * similarity with.\n * @returns 
{Number}\n */\nlunr.Vector.prototype.similarity = function (otherVector) {\n return this.dot(otherVector) / this.magnitude() || 0\n}\n\n/**\n * Converts the vector to an array of the elements within the vector.\n *\n * @returns {Number[]}\n */\nlunr.Vector.prototype.toArray = function () {\n var output = new Array (this.elements.length / 2)\n\n for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) {\n output[j] = this.elements[i]\n }\n\n return output\n}\n\n/**\n * A JSON serializable representation of the vector.\n *\n * @returns {Number[]}\n */\nlunr.Vector.prototype.toJSON = function () {\n return this.elements\n}\n/* eslint-disable */\n/*!\n * lunr.stemmer\n * Copyright (C) 2020 Oliver Nightingale\n * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt\n */\n\n/**\n * lunr.stemmer is an english language stemmer, this is a JavaScript\n * implementation of the PorterStemmer taken from http://tartarus.org/~martin\n *\n * @static\n * @implements {lunr.PipelineFunction}\n * @param {lunr.Token} token - The string to stem\n * @returns {lunr.Token}\n * @see {@link lunr.Pipeline}\n * @function\n */\nlunr.stemmer = (function(){\n var step2list = {\n \"ational\" : \"ate\",\n \"tional\" : \"tion\",\n \"enci\" : \"ence\",\n \"anci\" : \"ance\",\n \"izer\" : \"ize\",\n \"bli\" : \"ble\",\n \"alli\" : \"al\",\n \"entli\" : \"ent\",\n \"eli\" : \"e\",\n \"ousli\" : \"ous\",\n \"ization\" : \"ize\",\n \"ation\" : \"ate\",\n \"ator\" : \"ate\",\n \"alism\" : \"al\",\n \"iveness\" : \"ive\",\n \"fulness\" : \"ful\",\n \"ousness\" : \"ous\",\n \"aliti\" : \"al\",\n \"iviti\" : \"ive\",\n \"biliti\" : \"ble\",\n \"logi\" : \"log\"\n },\n\n step3list = {\n \"icate\" : \"ic\",\n \"ative\" : \"\",\n \"alize\" : \"al\",\n \"iciti\" : \"ic\",\n \"ical\" : \"ic\",\n \"ful\" : \"\",\n \"ness\" : \"\"\n },\n\n c = \"[^aeiou]\", // consonant\n v = \"[aeiouy]\", // vowel\n C = c + \"[^aeiouy]*\", // consonant sequence\n V = v + \"[aeiou]*\", // vowel sequence\n\n mgr0 = \"^(\" + C + \")?\" + V + C, // [C]VC... is m>0\n meq1 = \"^(\" + C + \")?\" + V + C + \"(\" + V + \")?$\", // [C]VC[V] is m=1\n mgr1 = \"^(\" + C + \")?\" + V + C + V + C, // [C]VCVC... 
is m>1\n s_v = \"^(\" + C + \")?\" + v; // vowel in stem\n\n var re_mgr0 = new RegExp(mgr0);\n var re_mgr1 = new RegExp(mgr1);\n var re_meq1 = new RegExp(meq1);\n var re_s_v = new RegExp(s_v);\n\n var re_1a = /^(.+?)(ss|i)es$/;\n var re2_1a = /^(.+?)([^s])s$/;\n var re_1b = /^(.+?)eed$/;\n var re2_1b = /^(.+?)(ed|ing)$/;\n var re_1b_2 = /.$/;\n var re2_1b_2 = /(at|bl|iz)$/;\n var re3_1b_2 = new RegExp(\"([^aeiouylsz])\\\\1$\");\n var re4_1b_2 = new RegExp(\"^\" + C + v + \"[^aeiouwxy]$\");\n\n var re_1c = /^(.+?[^aeiou])y$/;\n var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;\n\n var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;\n\n var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;\n var re2_4 = /^(.+?)(s|t)(ion)$/;\n\n var re_5 = /^(.+?)e$/;\n var re_5_1 = /ll$/;\n var re3_5 = new RegExp(\"^\" + C + v + \"[^aeiouwxy]$\");\n\n var porterStemmer = function porterStemmer(w) {\n var stem,\n suffix,\n firstch,\n re,\n re2,\n re3,\n re4;\n\n if (w.length < 3) { return w; }\n\n firstch = w.substr(0,1);\n if (firstch == \"y\") {\n w = firstch.toUpperCase() + w.substr(1);\n }\n\n // Step 1a\n re = re_1a\n re2 = re2_1a;\n\n if (re.test(w)) { w = w.replace(re,\"$1$2\"); }\n else if (re2.test(w)) { w = w.replace(re2,\"$1$2\"); }\n\n // Step 1b\n re = re_1b;\n re2 = re2_1b;\n if (re.test(w)) {\n var fp = re.exec(w);\n re = re_mgr0;\n if (re.test(fp[1])) {\n re = re_1b_2;\n w = w.replace(re,\"\");\n }\n } else if (re2.test(w)) {\n var fp = re2.exec(w);\n stem = fp[1];\n re2 = re_s_v;\n if (re2.test(stem)) {\n w = stem;\n re2 = re2_1b_2;\n re3 = re3_1b_2;\n re4 = re4_1b_2;\n if (re2.test(w)) { w = w + \"e\"; }\n else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,\"\"); }\n else if (re4.test(w)) { w = w + \"e\"; }\n }\n }\n\n // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say)\n re = re_1c;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n w = stem + \"i\";\n }\n\n // Step 2\n re = re_2;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n suffix = fp[2];\n re = re_mgr0;\n if (re.test(stem)) {\n w = stem + step2list[suffix];\n }\n }\n\n // Step 3\n re = re_3;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n suffix = fp[2];\n re = re_mgr0;\n if (re.test(stem)) {\n w = stem + step3list[suffix];\n }\n }\n\n // Step 4\n re = re_4;\n re2 = re2_4;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n re = re_mgr1;\n if (re.test(stem)) {\n w = stem;\n }\n } else if (re2.test(w)) {\n var fp = re2.exec(w);\n stem = fp[1] + fp[2];\n re2 = re_mgr1;\n if (re2.test(stem)) {\n w = stem;\n }\n }\n\n // Step 5\n re = re_5;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n re = re_mgr1;\n re2 = re_meq1;\n re3 = re3_5;\n if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) {\n w = stem;\n }\n }\n\n re = re_5_1;\n re2 = re_mgr1;\n if (re.test(w) && re2.test(w)) {\n re = re_1b_2;\n w = w.replace(re,\"\");\n }\n\n // and turn initial Y back to y\n\n if (firstch == \"y\") {\n w = firstch.toLowerCase() + w.substr(1);\n }\n\n return w;\n };\n\n return function (token) {\n return token.update(porterStemmer);\n }\n})();\n\nlunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer')\n/*!\n * lunr.stopWordFilter\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.generateStopWordFilter builds a stopWordFilter 
function from the provided\n * list of stop words.\n *\n * The built in lunr.stopWordFilter is built using this generator and can be used\n * to generate custom stopWordFilters for applications or non English languages.\n *\n * @function\n * @param {Array} token The token to pass through the filter\n * @returns {lunr.PipelineFunction}\n * @see lunr.Pipeline\n * @see lunr.stopWordFilter\n */\nlunr.generateStopWordFilter = function (stopWords) {\n var words = stopWords.reduce(function (memo, stopWord) {\n memo[stopWord] = stopWord\n return memo\n }, {})\n\n return function (token) {\n if (token && words[token.toString()] !== token.toString()) return token\n }\n}\n\n/**\n * lunr.stopWordFilter is an English language stop word list filter, any words\n * contained in the list will not be passed through the filter.\n *\n * This is intended to be used in the Pipeline. If the token does not pass the\n * filter then undefined will be returned.\n *\n * @function\n * @implements {lunr.PipelineFunction}\n * @params {lunr.Token} token - A token to check for being a stop word.\n * @returns {lunr.Token}\n * @see {@link lunr.Pipeline}\n */\nlunr.stopWordFilter = lunr.generateStopWordFilter([\n 'a',\n 'able',\n 'about',\n 'across',\n 'after',\n 'all',\n 'almost',\n 'also',\n 'am',\n 'among',\n 'an',\n 'and',\n 'any',\n 'are',\n 'as',\n 'at',\n 'be',\n 'because',\n 'been',\n 'but',\n 'by',\n 'can',\n 'cannot',\n 'could',\n 'dear',\n 'did',\n 'do',\n 'does',\n 'either',\n 'else',\n 'ever',\n 'every',\n 'for',\n 'from',\n 'get',\n 'got',\n 'had',\n 'has',\n 'have',\n 'he',\n 'her',\n 'hers',\n 'him',\n 'his',\n 'how',\n 'however',\n 'i',\n 'if',\n 'in',\n 'into',\n 'is',\n 'it',\n 'its',\n 'just',\n 'least',\n 'let',\n 'like',\n 'likely',\n 'may',\n 'me',\n 'might',\n 'most',\n 'must',\n 'my',\n 'neither',\n 'no',\n 'nor',\n 'not',\n 'of',\n 'off',\n 'often',\n 'on',\n 'only',\n 'or',\n 'other',\n 'our',\n 'own',\n 'rather',\n 'said',\n 'say',\n 'says',\n 'she',\n 'should',\n 'since',\n 'so',\n 'some',\n 'than',\n 'that',\n 'the',\n 'their',\n 'them',\n 'then',\n 'there',\n 'these',\n 'they',\n 'this',\n 'tis',\n 'to',\n 'too',\n 'twas',\n 'us',\n 'wants',\n 'was',\n 'we',\n 'were',\n 'what',\n 'when',\n 'where',\n 'which',\n 'while',\n 'who',\n 'whom',\n 'why',\n 'will',\n 'with',\n 'would',\n 'yet',\n 'you',\n 'your'\n])\n\nlunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter')\n/*!\n * lunr.trimmer\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.trimmer is a pipeline function for trimming non word\n * characters from the beginning and end of tokens before they\n * enter the index.\n *\n * This implementation may not work correctly for non latin\n * characters and should either be removed or adapted for use\n * with languages with non-latin characters.\n *\n * @static\n * @implements {lunr.PipelineFunction}\n * @param {lunr.Token} token The token to pass through the filter\n * @returns {lunr.Token}\n * @see lunr.Pipeline\n */\nlunr.trimmer = function (token) {\n return token.update(function (s) {\n return s.replace(/^\\W+/, '').replace(/\\W+$/, '')\n })\n}\n\nlunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer')\n/*!\n * lunr.TokenSet\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A token set is used to store the unique list of all tokens\n * within an index. 
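// Illustrative sketch: generating a custom stop word filter from a word list
// and registering it, per the generator documented above. The word list and
// the label 'customStopWordFilter' are hypothetical.
var customStopWordFilter = lunr.generateStopWordFilter(['foo', 'bar'])
lunr.Pipeline.registerFunction(customStopWordFilter, 'customStopWordFilter')

// The built-in trimmer strips leading and trailing non-word characters, so a
// token whose text is '"hello,"' becomes 'hello'.
lunr.trimmer(new lunr.Token('"hello,"'))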
Token sets are also used to represent an\n * incoming query to the index, this query token set and index\n * token set are then intersected to find which tokens to look\n * up in the inverted index.\n *\n * A token set can hold multiple tokens, as in the case of the\n * index token set, or it can hold a single token as in the\n * case of a simple query token set.\n *\n * Additionally token sets are used to perform wildcard matching.\n * Leading, contained and trailing wildcards are supported, and\n * from this edit distance matching can also be provided.\n *\n * Token sets are implemented as a minimal finite state automata,\n * where both common prefixes and suffixes are shared between tokens.\n * This helps to reduce the space used for storing the token set.\n *\n * @constructor\n */\nlunr.TokenSet = function () {\n this.final = false\n this.edges = {}\n this.id = lunr.TokenSet._nextId\n lunr.TokenSet._nextId += 1\n}\n\n/**\n * Keeps track of the next, auto increment, identifier to assign\n * to a new tokenSet.\n *\n * TokenSets require a unique identifier to be correctly minimised.\n *\n * @private\n */\nlunr.TokenSet._nextId = 1\n\n/**\n * Creates a TokenSet instance from the given sorted array of words.\n *\n * @param {String[]} arr - A sorted array of strings to create the set from.\n * @returns {lunr.TokenSet}\n * @throws Will throw an error if the input array is not sorted.\n */\nlunr.TokenSet.fromArray = function (arr) {\n var builder = new lunr.TokenSet.Builder\n\n for (var i = 0, len = arr.length; i < len; i++) {\n builder.insert(arr[i])\n }\n\n builder.finish()\n return builder.root\n}\n\n/**\n * Creates a token set from a query clause.\n *\n * @private\n * @param {Object} clause - A single clause from lunr.Query.\n * @param {string} clause.term - The query clause term.\n * @param {number} [clause.editDistance] - The optional edit distance for the term.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.fromClause = function (clause) {\n if ('editDistance' in clause) {\n return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance)\n } else {\n return lunr.TokenSet.fromString(clause.term)\n }\n}\n\n/**\n * Creates a token set representing a single string with a specified\n * edit distance.\n *\n * Insertions, deletions, substitutions and transpositions are each\n * treated as an edit distance of 1.\n *\n * Increasing the allowed edit distance will have a dramatic impact\n * on the performance of both creating and intersecting these TokenSets.\n * It is advised to keep the edit distance less than 3.\n *\n * @param {string} str - The string to create the token set from.\n * @param {number} editDistance - The allowed edit distance to match.\n * @returns {lunr.Vector}\n */\nlunr.TokenSet.fromFuzzyString = function (str, editDistance) {\n var root = new lunr.TokenSet\n\n var stack = [{\n node: root,\n editsRemaining: editDistance,\n str: str\n }]\n\n while (stack.length) {\n var frame = stack.pop()\n\n // no edit\n if (frame.str.length > 0) {\n var char = frame.str.charAt(0),\n noEditNode\n\n if (char in frame.node.edges) {\n noEditNode = frame.node.edges[char]\n } else {\n noEditNode = new lunr.TokenSet\n frame.node.edges[char] = noEditNode\n }\n\n if (frame.str.length == 1) {\n noEditNode.final = true\n }\n\n stack.push({\n node: noEditNode,\n editsRemaining: frame.editsRemaining,\n str: frame.str.slice(1)\n })\n }\n\n if (frame.editsRemaining == 0) {\n continue\n }\n\n // insertion\n if (\"*\" in frame.node.edges) {\n var insertionNode = frame.node.edges[\"*\"]\n } else {\n 
var insertionNode = new lunr.TokenSet\n frame.node.edges[\"*\"] = insertionNode\n }\n\n if (frame.str.length == 0) {\n insertionNode.final = true\n }\n\n stack.push({\n node: insertionNode,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str\n })\n\n // deletion\n // can only do a deletion if we have enough edits remaining\n // and if there are characters left to delete in the string\n if (frame.str.length > 1) {\n stack.push({\n node: frame.node,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str.slice(1)\n })\n }\n\n // deletion\n // just removing the last character from the str\n if (frame.str.length == 1) {\n frame.node.final = true\n }\n\n // substitution\n // can only do a substitution if we have enough edits remaining\n // and if there are characters left to substitute\n if (frame.str.length >= 1) {\n if (\"*\" in frame.node.edges) {\n var substitutionNode = frame.node.edges[\"*\"]\n } else {\n var substitutionNode = new lunr.TokenSet\n frame.node.edges[\"*\"] = substitutionNode\n }\n\n if (frame.str.length == 1) {\n substitutionNode.final = true\n }\n\n stack.push({\n node: substitutionNode,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str.slice(1)\n })\n }\n\n // transposition\n // can only do a transposition if there are edits remaining\n // and there are enough characters to transpose\n if (frame.str.length > 1) {\n var charA = frame.str.charAt(0),\n charB = frame.str.charAt(1),\n transposeNode\n\n if (charB in frame.node.edges) {\n transposeNode = frame.node.edges[charB]\n } else {\n transposeNode = new lunr.TokenSet\n frame.node.edges[charB] = transposeNode\n }\n\n if (frame.str.length == 1) {\n transposeNode.final = true\n }\n\n stack.push({\n node: transposeNode,\n editsRemaining: frame.editsRemaining - 1,\n str: charA + frame.str.slice(2)\n })\n }\n }\n\n return root\n}\n\n/**\n * Creates a TokenSet from a string.\n *\n * The string may contain one or more wildcard characters (*)\n * that will allow wildcard matching when intersecting with\n * another TokenSet.\n *\n * @param {string} str - The string to create a TokenSet from.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.fromString = function (str) {\n var node = new lunr.TokenSet,\n root = node\n\n /*\n * Iterates through all characters within the passed string\n * appending a node for each character.\n *\n * When a wildcard character is found then a self\n * referencing edge is introduced to continually match\n * any number of any characters.\n */\n for (var i = 0, len = str.length; i < len; i++) {\n var char = str[i],\n final = (i == len - 1)\n\n if (char == \"*\") {\n node.edges[char] = node\n node.final = final\n\n } else {\n var next = new lunr.TokenSet\n next.final = final\n\n node.edges[char] = next\n node = next\n }\n }\n\n return root\n}\n\n/**\n * Converts this TokenSet into an array of strings\n * contained within the TokenSet.\n *\n * This is not intended to be used on a TokenSet that\n * contains wildcards, in these cases the results are\n * undefined and are likely to cause an infinite loop.\n *\n * @returns {string[]}\n */\nlunr.TokenSet.prototype.toArray = function () {\n var words = []\n\n var stack = [{\n prefix: \"\",\n node: this\n }]\n\n while (stack.length) {\n var frame = stack.pop(),\n edges = Object.keys(frame.node.edges),\n len = edges.length\n\n if (frame.node.final) {\n /* In Safari, at this point the prefix is sometimes corrupted, see:\n * https://github.com/olivernn/lunr.js/issues/279 Calling any\n * String.prototype method forces Safari to \"cast\" this 
string to what\n * it's supposed to be, fixing the bug. */\n frame.prefix.charAt(0)\n words.push(frame.prefix)\n }\n\n for (var i = 0; i < len; i++) {\n var edge = edges[i]\n\n stack.push({\n prefix: frame.prefix.concat(edge),\n node: frame.node.edges[edge]\n })\n }\n }\n\n return words\n}\n\n/**\n * Generates a string representation of a TokenSet.\n *\n * This is intended to allow TokenSets to be used as keys\n * in objects, largely to aid the construction and minimisation\n * of a TokenSet. As such it is not designed to be a human\n * friendly representation of the TokenSet.\n *\n * @returns {string}\n */\nlunr.TokenSet.prototype.toString = function () {\n // NOTE: Using Object.keys here as this.edges is very likely\n // to enter 'hash-mode' with many keys being added\n //\n // avoiding a for-in loop here as it leads to the function\n // being de-optimised (at least in V8). From some simple\n // benchmarks the performance is comparable, but allowing\n // V8 to optimize may mean easy performance wins in the future.\n\n if (this._str) {\n return this._str\n }\n\n var str = this.final ? '1' : '0',\n labels = Object.keys(this.edges).sort(),\n len = labels.length\n\n for (var i = 0; i < len; i++) {\n var label = labels[i],\n node = this.edges[label]\n\n str = str + label + node.id\n }\n\n return str\n}\n\n/**\n * Returns a new TokenSet that is the intersection of\n * this TokenSet and the passed TokenSet.\n *\n * This intersection will take into account any wildcards\n * contained within the TokenSet.\n *\n * @param {lunr.TokenSet} b - An other TokenSet to intersect with.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.prototype.intersect = function (b) {\n var output = new lunr.TokenSet,\n frame = undefined\n\n var stack = [{\n qNode: b,\n output: output,\n node: this\n }]\n\n while (stack.length) {\n frame = stack.pop()\n\n // NOTE: As with the #toString method, we are using\n // Object.keys and a for loop instead of a for-in loop\n // as both of these objects enter 'hash' mode, causing\n // the function to be de-optimised in V8\n var qEdges = Object.keys(frame.qNode.edges),\n qLen = qEdges.length,\n nEdges = Object.keys(frame.node.edges),\n nLen = nEdges.length\n\n for (var q = 0; q < qLen; q++) {\n var qEdge = qEdges[q]\n\n for (var n = 0; n < nLen; n++) {\n var nEdge = nEdges[n]\n\n if (nEdge == qEdge || qEdge == '*') {\n var node = frame.node.edges[nEdge],\n qNode = frame.qNode.edges[qEdge],\n final = node.final && qNode.final,\n next = undefined\n\n if (nEdge in frame.output.edges) {\n // an edge already exists for this character\n // no need to create a new node, just set the finality\n // bit unless this node is already final\n next = frame.output.edges[nEdge]\n next.final = next.final || final\n\n } else {\n // no edge exists yet, must create one\n // set the finality bit and insert it\n // into the output\n next = new lunr.TokenSet\n next.final = final\n frame.output.edges[nEdge] = next\n }\n\n stack.push({\n qNode: qNode,\n output: next,\n node: node\n })\n }\n }\n }\n }\n\n return output\n}\nlunr.TokenSet.Builder = function () {\n this.previousWord = \"\"\n this.root = new lunr.TokenSet\n this.uncheckedNodes = []\n this.minimizedNodes = {}\n}\n\nlunr.TokenSet.Builder.prototype.insert = function (word) {\n var node,\n commonPrefix = 0\n\n if (word < this.previousWord) {\n throw new Error (\"Out of order word insertion\")\n }\n\n for (var i = 0; i < word.length && i < this.previousWord.length; i++) {\n if (word[i] != this.previousWord[i]) break\n commonPrefix++\n }\n\n 
this.minimize(commonPrefix)\n\n if (this.uncheckedNodes.length == 0) {\n node = this.root\n } else {\n node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child\n }\n\n for (var i = commonPrefix; i < word.length; i++) {\n var nextNode = new lunr.TokenSet,\n char = word[i]\n\n node.edges[char] = nextNode\n\n this.uncheckedNodes.push({\n parent: node,\n char: char,\n child: nextNode\n })\n\n node = nextNode\n }\n\n node.final = true\n this.previousWord = word\n}\n\nlunr.TokenSet.Builder.prototype.finish = function () {\n this.minimize(0)\n}\n\nlunr.TokenSet.Builder.prototype.minimize = function (downTo) {\n for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) {\n var node = this.uncheckedNodes[i],\n childKey = node.child.toString()\n\n if (childKey in this.minimizedNodes) {\n node.parent.edges[node.char] = this.minimizedNodes[childKey]\n } else {\n // Cache the key for this node since\n // we know it can't change anymore\n node.child._str = childKey\n\n this.minimizedNodes[childKey] = node.child\n }\n\n this.uncheckedNodes.pop()\n }\n}\n/*!\n * lunr.Index\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * An index contains the built index of all documents and provides a query interface\n * to the index.\n *\n * Usually instances of lunr.Index will not be created using this constructor, instead\n * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be\n * used to load previously built and serialized indexes.\n *\n * @constructor\n * @param {Object} attrs - The attributes of the built search index.\n * @param {Object} attrs.invertedIndex - An index of term/field to document reference.\n * @param {Object} attrs.fieldVectors - Field vectors\n * @param {lunr.TokenSet} attrs.tokenSet - An set of all corpus tokens.\n * @param {string[]} attrs.fields - The names of indexed document fields.\n * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms.\n */\nlunr.Index = function (attrs) {\n this.invertedIndex = attrs.invertedIndex\n this.fieldVectors = attrs.fieldVectors\n this.tokenSet = attrs.tokenSet\n this.fields = attrs.fields\n this.pipeline = attrs.pipeline\n}\n\n/**\n * A result contains details of a document matching a search query.\n * @typedef {Object} lunr.Index~Result\n * @property {string} ref - The reference of the document this result represents.\n * @property {number} score - A number between 0 and 1 representing how similar this document is to the query.\n * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match.\n */\n\n/**\n * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple\n * query language which itself is parsed into an instance of lunr.Query.\n *\n * For programmatically building queries it is advised to directly use lunr.Query, the query language\n * is best used for human entered text rather than program generated text.\n *\n * At its simplest queries can just be a single term, e.g. `hello`, multiple terms are also supported\n * and will be combined with OR, e.g `hello world` will match documents that contain either 'hello'\n * or 'world', though those that contain both will rank higher in the results.\n *\n * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can\n * be inserted anywhere within the term, and more than one wildcard can exist in a single term. 
Adding\n * wildcards will increase the number of documents that will be found but can also have a negative\n * impact on query performance, especially with wildcards at the beginning of a term.\n *\n * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term\n * hello in the title field will match this query. Using a field not present in the index will lead\n * to an error being thrown.\n *\n * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term\n * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported\n * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2.\n * Avoid large values for edit distance to improve query performance.\n *\n * Each term also supports a presence modifier. By default a term's presence in document is optional, however\n * this can be changed to either required or prohibited. For a term's presence to be required in a document the\n * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and\n * optionally contain 'bar'. Conversely a leading '-' sets the terms presence to prohibited, i.e. it must not\n * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'.\n *\n * To escape special characters the backslash character '\\' can be used, this allows searches to include\n * characters that would normally be considered modifiers, e.g. `foo\\~2` will search for a term \"foo~2\" instead\n * of attempting to apply a boost of 2 to the search term \"foo\".\n *\n * @typedef {string} lunr.Index~QueryString\n * @example Simple single term query\n * hello\n * @example Multiple term query\n * hello world\n * @example term scoped to a field\n * title:hello\n * @example term with a boost of 10\n * hello^10\n * @example term with an edit distance of 2\n * hello~2\n * @example terms with presence modifiers\n * -foo +bar baz\n */\n\n/**\n * Performs a search against the index using lunr query syntax.\n *\n * Results will be returned sorted by their score, the most relevant results\n * will be returned first. 
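// Illustrative sketch of the query syntax described above, assuming an
// existing index "idx" with 'title' and 'body' fields (both hypothetical).
idx.search('hello world')     // either term may match; documents with both rank higher
idx.search('title:hello')     // term scoped to the title field
idx.search('hello^10')        // boost the term's contribution to the score
idx.search('hello~2')         // fuzzy match with an edit distance of 2
idx.search('+foo bar -baz')   // foo required, bar optional, baz prohibited
idx.search('foo\\~2')         // escaped: searches for the literal term "foo~2"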
For details on how the score is calculated, please see\n * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}.\n *\n * For more programmatic querying use lunr.Index#query.\n *\n * @param {lunr.Index~QueryString} queryString - A string containing a lunr query.\n * @throws {lunr.QueryParseError} If the passed query string cannot be parsed.\n * @returns {lunr.Index~Result[]}\n */\nlunr.Index.prototype.search = function (queryString) {\n return this.query(function (query) {\n var parser = new lunr.QueryParser(queryString, query)\n parser.parse()\n })\n}\n\n/**\n * A query builder callback provides a query object to be used to express\n * the query to perform on the index.\n *\n * @callback lunr.Index~queryBuilder\n * @param {lunr.Query} query - The query object to build up.\n * @this lunr.Query\n */\n\n/**\n * Performs a query against the index using the yielded lunr.Query object.\n *\n * If performing programmatic queries against the index, this method is preferred\n * over lunr.Index#search so as to avoid the additional query parsing overhead.\n *\n * A query object is yielded to the supplied function which should be used to\n * express the query to be run against the index.\n *\n * Note that although this function takes a callback parameter it is _not_ an\n * asynchronous operation, the callback is just yielded a query object to be\n * customized.\n *\n * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query.\n * @returns {lunr.Index~Result[]}\n */\nlunr.Index.prototype.query = function (fn) {\n // for each query clause\n // * process terms\n // * expand terms from token set\n // * find matching documents and metadata\n // * get document vectors\n // * score documents\n\n var query = new lunr.Query(this.fields),\n matchingFields = Object.create(null),\n queryVectors = Object.create(null),\n termFieldCache = Object.create(null),\n requiredMatches = Object.create(null),\n prohibitedMatches = Object.create(null)\n\n /*\n * To support field level boosts a query vector is created per\n * field. An empty vector is eagerly created to support negated\n * queries.\n */\n for (var i = 0; i < this.fields.length; i++) {\n queryVectors[this.fields[i]] = new lunr.Vector\n }\n\n fn.call(query, query)\n\n for (var i = 0; i < query.clauses.length; i++) {\n /*\n * Unless the pipeline has been disabled for this term, which is\n * the case for terms with wildcards, we need to pass the clause\n * term through the search pipeline. A pipeline returns an array\n * of processed terms. Pipeline functions may expand the passed\n * term, which means we may end up performing multiple index lookups\n * for a single query term.\n */\n var clause = query.clauses[i],\n terms = null,\n clauseMatches = lunr.Set.empty\n\n if (clause.usePipeline) {\n terms = this.pipeline.runString(clause.term, {\n fields: clause.fields\n })\n } else {\n terms = [clause.term]\n }\n\n for (var m = 0; m < terms.length; m++) {\n var term = terms[m]\n\n /*\n * Each term returned from the pipeline needs to use the same query\n * clause object, e.g. the same boost and or edit distance. 
The\n * simplest way to do this is to re-use the clause object but mutate\n * its term property.\n */\n clause.term = term\n\n /*\n * From the term in the clause we create a token set which will then\n * be used to intersect the indexes token set to get a list of terms\n * to lookup in the inverted index\n */\n var termTokenSet = lunr.TokenSet.fromClause(clause),\n expandedTerms = this.tokenSet.intersect(termTokenSet).toArray()\n\n /*\n * If a term marked as required does not exist in the tokenSet it is\n * impossible for the search to return any matches. We set all the field\n * scoped required matches set to empty and stop examining any further\n * clauses.\n */\n if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) {\n for (var k = 0; k < clause.fields.length; k++) {\n var field = clause.fields[k]\n requiredMatches[field] = lunr.Set.empty\n }\n\n break\n }\n\n for (var j = 0; j < expandedTerms.length; j++) {\n /*\n * For each term get the posting and termIndex, this is required for\n * building the query vector.\n */\n var expandedTerm = expandedTerms[j],\n posting = this.invertedIndex[expandedTerm],\n termIndex = posting._index\n\n for (var k = 0; k < clause.fields.length; k++) {\n /*\n * For each field that this query term is scoped by (by default\n * all fields are in scope) we need to get all the document refs\n * that have this term in that field.\n *\n * The posting is the entry in the invertedIndex for the matching\n * term from above.\n */\n var field = clause.fields[k],\n fieldPosting = posting[field],\n matchingDocumentRefs = Object.keys(fieldPosting),\n termField = expandedTerm + \"/\" + field,\n matchingDocumentsSet = new lunr.Set(matchingDocumentRefs)\n\n /*\n * if the presence of this term is required ensure that the matching\n * documents are added to the set of required matches for this clause.\n *\n */\n if (clause.presence == lunr.Query.presence.REQUIRED) {\n clauseMatches = clauseMatches.union(matchingDocumentsSet)\n\n if (requiredMatches[field] === undefined) {\n requiredMatches[field] = lunr.Set.complete\n }\n }\n\n /*\n * if the presence of this term is prohibited ensure that the matching\n * documents are added to the set of prohibited matches for this field,\n * creating that set if it does not yet exist.\n */\n if (clause.presence == lunr.Query.presence.PROHIBITED) {\n if (prohibitedMatches[field] === undefined) {\n prohibitedMatches[field] = lunr.Set.empty\n }\n\n prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet)\n\n /*\n * Prohibited matches should not be part of the query vector used for\n * similarity scoring and no metadata should be extracted so we continue\n * to the next field\n */\n continue\n }\n\n /*\n * The query field vector is populated using the termIndex found for\n * the term and a unit value with the appropriate boost applied.\n * Using upsert because there could already be an entry in the vector\n * for the term we are working with. 
In that case we just add the scores\n * together.\n */\n queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b })\n\n /**\n * If we've already seen this term, field combo then we've already collected\n * the matching documents and metadata, no need to go through all that again\n */\n if (termFieldCache[termField]) {\n continue\n }\n\n for (var l = 0; l < matchingDocumentRefs.length; l++) {\n /*\n * All metadata for this term/field/document triple\n * are then extracted and collected into an instance\n * of lunr.MatchData ready to be returned in the query\n * results\n */\n var matchingDocumentRef = matchingDocumentRefs[l],\n matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field),\n metadata = fieldPosting[matchingDocumentRef],\n fieldMatch\n\n if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) {\n matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata)\n } else {\n fieldMatch.add(expandedTerm, field, metadata)\n }\n\n }\n\n termFieldCache[termField] = true\n }\n }\n }\n\n /**\n * If the presence was required we need to update the requiredMatches field sets.\n * We do this after all fields for the term have collected their matches because\n * the clause terms presence is required in _any_ of the fields not _all_ of the\n * fields.\n */\n if (clause.presence === lunr.Query.presence.REQUIRED) {\n for (var k = 0; k < clause.fields.length; k++) {\n var field = clause.fields[k]\n requiredMatches[field] = requiredMatches[field].intersect(clauseMatches)\n }\n }\n }\n\n /**\n * Need to combine the field scoped required and prohibited\n * matching documents into a global set of required and prohibited\n * matches\n */\n var allRequiredMatches = lunr.Set.complete,\n allProhibitedMatches = lunr.Set.empty\n\n for (var i = 0; i < this.fields.length; i++) {\n var field = this.fields[i]\n\n if (requiredMatches[field]) {\n allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field])\n }\n\n if (prohibitedMatches[field]) {\n allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field])\n }\n }\n\n var matchingFieldRefs = Object.keys(matchingFields),\n results = [],\n matches = Object.create(null)\n\n /*\n * If the query is negated (contains only prohibited terms)\n * we need to get _all_ fieldRefs currently existing in the\n * index. This is only done when we know that the query is\n * entirely prohibited terms to avoid any cost of getting all\n * fieldRefs unnecessarily.\n *\n * Additionally, blank MatchData must be created to correctly\n * populate the results.\n */\n if (query.isNegated()) {\n matchingFieldRefs = Object.keys(this.fieldVectors)\n\n for (var i = 0; i < matchingFieldRefs.length; i++) {\n var matchingFieldRef = matchingFieldRefs[i]\n var fieldRef = lunr.FieldRef.fromString(matchingFieldRef)\n matchingFields[matchingFieldRef] = new lunr.MatchData\n }\n }\n\n for (var i = 0; i < matchingFieldRefs.length; i++) {\n /*\n * Currently we have document fields that match the query, but we\n * need to return documents. 
The matchData and scores are combined\n * from multiple fields belonging to the same document.\n *\n * Scores are calculated by field, using the query vectors created\n * above, and combined into a final document score using addition.\n */\n var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]),\n docRef = fieldRef.docRef\n\n if (!allRequiredMatches.contains(docRef)) {\n continue\n }\n\n if (allProhibitedMatches.contains(docRef)) {\n continue\n }\n\n var fieldVector = this.fieldVectors[fieldRef],\n score = queryVectors[fieldRef.fieldName].similarity(fieldVector),\n docMatch\n\n if ((docMatch = matches[docRef]) !== undefined) {\n docMatch.score += score\n docMatch.matchData.combine(matchingFields[fieldRef])\n } else {\n var match = {\n ref: docRef,\n score: score,\n matchData: matchingFields[fieldRef]\n }\n matches[docRef] = match\n results.push(match)\n }\n }\n\n /*\n * Sort the results objects by score, highest first.\n */\n return results.sort(function (a, b) {\n return b.score - a.score\n })\n}\n\n/**\n * Prepares the index for JSON serialization.\n *\n * The schema for this JSON blob will be described in a\n * separate JSON schema file.\n *\n * @returns {Object}\n */\nlunr.Index.prototype.toJSON = function () {\n var invertedIndex = Object.keys(this.invertedIndex)\n .sort()\n .map(function (term) {\n return [term, this.invertedIndex[term]]\n }, this)\n\n var fieldVectors = Object.keys(this.fieldVectors)\n .map(function (ref) {\n return [ref, this.fieldVectors[ref].toJSON()]\n }, this)\n\n return {\n version: lunr.version,\n fields: this.fields,\n fieldVectors: fieldVectors,\n invertedIndex: invertedIndex,\n pipeline: this.pipeline.toJSON()\n }\n}\n\n/**\n * Loads a previously serialized lunr.Index\n *\n * @param {Object} serializedIndex - A previously serialized lunr.Index\n * @returns {lunr.Index}\n */\nlunr.Index.load = function (serializedIndex) {\n var attrs = {},\n fieldVectors = {},\n serializedVectors = serializedIndex.fieldVectors,\n invertedIndex = Object.create(null),\n serializedInvertedIndex = serializedIndex.invertedIndex,\n tokenSetBuilder = new lunr.TokenSet.Builder,\n pipeline = lunr.Pipeline.load(serializedIndex.pipeline)\n\n if (serializedIndex.version != lunr.version) {\n lunr.utils.warn(\"Version mismatch when loading serialised index. 
Current version of lunr '\" + lunr.version + \"' does not match serialized index '\" + serializedIndex.version + \"'\")\n }\n\n for (var i = 0; i < serializedVectors.length; i++) {\n var tuple = serializedVectors[i],\n ref = tuple[0],\n elements = tuple[1]\n\n fieldVectors[ref] = new lunr.Vector(elements)\n }\n\n for (var i = 0; i < serializedInvertedIndex.length; i++) {\n var tuple = serializedInvertedIndex[i],\n term = tuple[0],\n posting = tuple[1]\n\n tokenSetBuilder.insert(term)\n invertedIndex[term] = posting\n }\n\n tokenSetBuilder.finish()\n\n attrs.fields = serializedIndex.fields\n\n attrs.fieldVectors = fieldVectors\n attrs.invertedIndex = invertedIndex\n attrs.tokenSet = tokenSetBuilder.root\n attrs.pipeline = pipeline\n\n return new lunr.Index(attrs)\n}\n/*!\n * lunr.Builder\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.Builder performs indexing on a set of documents and\n * returns instances of lunr.Index ready for querying.\n *\n * All configuration of the index is done via the builder, the\n * fields to index, the document reference, the text processing\n * pipeline and document scoring parameters are all set on the\n * builder before indexing.\n *\n * @constructor\n * @property {string} _ref - Internal reference to the document reference field.\n * @property {string[]} _fields - Internal reference to the document fields to index.\n * @property {object} invertedIndex - The inverted index maps terms to document fields.\n * @property {object} documentTermFrequencies - Keeps track of document term frequencies.\n * @property {object} documentLengths - Keeps track of the length of documents added to the index.\n * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing.\n * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing.\n * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index.\n * @property {number} documentCount - Keeps track of the total number of documents indexed.\n * @property {number} _b - A parameter to control field length normalization, setting this to 0 disabled normalization, 1 fully normalizes field lengths, the default value is 0.75.\n * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2.\n * @property {number} termIndex - A counter incremented for each unique term, used to identify a terms position in the vector space.\n * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index.\n */\nlunr.Builder = function () {\n this._ref = \"id\"\n this._fields = Object.create(null)\n this._documents = Object.create(null)\n this.invertedIndex = Object.create(null)\n this.fieldTermFrequencies = {}\n this.fieldLengths = {}\n this.tokenizer = lunr.tokenizer\n this.pipeline = new lunr.Pipeline\n this.searchPipeline = new lunr.Pipeline\n this.documentCount = 0\n this._b = 0.75\n this._k1 = 1.2\n this.termIndex = 0\n this.metadataWhitelist = []\n}\n\n/**\n * Sets the document field used as the document reference. Every document must have this field.\n * The type of this field in the document should be a string, if it is not a string it will be\n * coerced into a string by calling toString.\n *\n * The default ref is 'id'.\n *\n * The ref should _not_ be changed during indexing, it should be set before any documents are\n * added to the index. 
Changing it during indexing can lead to inconsistent results.\n *\n * @param {string} ref - The name of the reference field in the document.\n */\nlunr.Builder.prototype.ref = function (ref) {\n this._ref = ref\n}\n\n/**\n * A function that is used to extract a field from a document.\n *\n * Lunr expects a field to be at the top level of a document, if however the field\n * is deeply nested within a document an extractor function can be used to extract\n * the right field for indexing.\n *\n * @callback fieldExtractor\n * @param {object} doc - The document being added to the index.\n * @returns {?(string|object|object[])} obj - The object that will be indexed for this field.\n * @example Extracting a nested field\n * function (doc) { return doc.nested.field }\n */\n\n/**\n * Adds a field to the list of document fields that will be indexed. Every document being\n * indexed should have this field. Null values for this field in indexed documents will\n * not cause errors but will limit the chance of that document being retrieved by searches.\n *\n * All fields should be added before adding documents to the index. Adding fields after\n * a document has been indexed will have no effect on already indexed documents.\n *\n * Fields can be boosted at build time. This allows terms within that field to have more\n * importance when ranking search results. Use a field boost to specify that matches within\n * one field are more important than other fields.\n *\n * @param {string} fieldName - The name of a field to index in all documents.\n * @param {object} attributes - Optional attributes associated with this field.\n * @param {number} [attributes.boost=1] - Boost applied to all terms within this field.\n * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document.\n * @throws {RangeError} fieldName cannot contain unsupported characters '/'\n */\nlunr.Builder.prototype.field = function (fieldName, attributes) {\n if (/\\//.test(fieldName)) {\n throw new RangeError (\"Field '\" + fieldName + \"' contains illegal character '/'\")\n }\n\n this._fields[fieldName] = attributes || {}\n}\n\n/**\n * A parameter to tune the amount of field length normalisation that is applied when\n * calculating relevance scores. A value of 0 will completely disable any normalisation\n * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b\n * will be clamped to the range 0 - 1.\n *\n * @param {number} number - The value to set for this tuning parameter.\n */\nlunr.Builder.prototype.b = function (number) {\n if (number < 0) {\n this._b = 0\n } else if (number > 1) {\n this._b = 1\n } else {\n this._b = number\n }\n}\n\n/**\n * A parameter that controls the speed at which a rise in term frequency results in term\n * frequency saturation. The default value is 1.2. 
Setting this to a higher value will give\n * slower saturation levels, a lower value will result in quicker saturation.\n *\n * @param {number} number - The value to set for this tuning parameter.\n */\nlunr.Builder.prototype.k1 = function (number) {\n this._k1 = number\n}\n\n/**\n * Adds a document to the index.\n *\n * Before adding fields to the index the index should have been fully setup, with the document\n * ref and all fields to index already having been specified.\n *\n * The document must have a field name as specified by the ref (by default this is 'id') and\n * it should have all fields defined for indexing, though null or undefined values will not\n * cause errors.\n *\n * Entire documents can be boosted at build time. Applying a boost to a document indicates that\n * this document should rank higher in search results than other documents.\n *\n * @param {object} doc - The document to add to the index.\n * @param {object} attributes - Optional attributes associated with this document.\n * @param {number} [attributes.boost=1] - Boost applied to all terms within this document.\n */\nlunr.Builder.prototype.add = function (doc, attributes) {\n var docRef = doc[this._ref],\n fields = Object.keys(this._fields)\n\n this._documents[docRef] = attributes || {}\n this.documentCount += 1\n\n for (var i = 0; i < fields.length; i++) {\n var fieldName = fields[i],\n extractor = this._fields[fieldName].extractor,\n field = extractor ? extractor(doc) : doc[fieldName],\n tokens = this.tokenizer(field, {\n fields: [fieldName]\n }),\n terms = this.pipeline.run(tokens),\n fieldRef = new lunr.FieldRef (docRef, fieldName),\n fieldTerms = Object.create(null)\n\n this.fieldTermFrequencies[fieldRef] = fieldTerms\n this.fieldLengths[fieldRef] = 0\n\n // store the length of this field for this document\n this.fieldLengths[fieldRef] += terms.length\n\n // calculate term frequencies for this field\n for (var j = 0; j < terms.length; j++) {\n var term = terms[j]\n\n if (fieldTerms[term] == undefined) {\n fieldTerms[term] = 0\n }\n\n fieldTerms[term] += 1\n\n // add to inverted index\n // create an initial posting if one doesn't exist\n if (this.invertedIndex[term] == undefined) {\n var posting = Object.create(null)\n posting[\"_index\"] = this.termIndex\n this.termIndex += 1\n\n for (var k = 0; k < fields.length; k++) {\n posting[fields[k]] = Object.create(null)\n }\n\n this.invertedIndex[term] = posting\n }\n\n // add an entry for this term/fieldName/docRef to the invertedIndex\n if (this.invertedIndex[term][fieldName][docRef] == undefined) {\n this.invertedIndex[term][fieldName][docRef] = Object.create(null)\n }\n\n // store all whitelisted metadata about this token in the\n // inverted index\n for (var l = 0; l < this.metadataWhitelist.length; l++) {\n var metadataKey = this.metadataWhitelist[l],\n metadata = term.metadata[metadataKey]\n\n if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) {\n this.invertedIndex[term][fieldName][docRef][metadataKey] = []\n }\n\n this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata)\n }\n }\n\n }\n}\n\n/**\n * Calculates the average document length for this index\n *\n * @private\n */\nlunr.Builder.prototype.calculateAverageFieldLengths = function () {\n\n var fieldRefs = Object.keys(this.fieldLengths),\n numberOfFields = fieldRefs.length,\n accumulator = {},\n documentsWithField = {}\n\n for (var i = 0; i < numberOfFields; i++) {\n var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),\n field = fieldRef.fieldName\n\n 
documentsWithField[field] || (documentsWithField[field] = 0)\n documentsWithField[field] += 1\n\n accumulator[field] || (accumulator[field] = 0)\n accumulator[field] += this.fieldLengths[fieldRef]\n }\n\n var fields = Object.keys(this._fields)\n\n for (var i = 0; i < fields.length; i++) {\n var fieldName = fields[i]\n accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName]\n }\n\n this.averageFieldLength = accumulator\n}\n\n/**\n * Builds a vector space model of every document using lunr.Vector\n *\n * @private\n */\nlunr.Builder.prototype.createFieldVectors = function () {\n var fieldVectors = {},\n fieldRefs = Object.keys(this.fieldTermFrequencies),\n fieldRefsLength = fieldRefs.length,\n termIdfCache = Object.create(null)\n\n for (var i = 0; i < fieldRefsLength; i++) {\n var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),\n fieldName = fieldRef.fieldName,\n fieldLength = this.fieldLengths[fieldRef],\n fieldVector = new lunr.Vector,\n termFrequencies = this.fieldTermFrequencies[fieldRef],\n terms = Object.keys(termFrequencies),\n termsLength = terms.length\n\n\n var fieldBoost = this._fields[fieldName].boost || 1,\n docBoost = this._documents[fieldRef.docRef].boost || 1\n\n for (var j = 0; j < termsLength; j++) {\n var term = terms[j],\n tf = termFrequencies[term],\n termIndex = this.invertedIndex[term]._index,\n idf, score, scoreWithPrecision\n\n if (termIdfCache[term] === undefined) {\n idf = lunr.idf(this.invertedIndex[term], this.documentCount)\n termIdfCache[term] = idf\n } else {\n idf = termIdfCache[term]\n }\n\n score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf)\n score *= fieldBoost\n score *= docBoost\n scoreWithPrecision = Math.round(score * 1000) / 1000\n // Converts 1.23456789 to 1.234.\n // Reducing the precision so that the vectors take up less\n // space when serialised. Doing it now so that they behave\n // the same before and after serialisation. Also, this is\n // the fastest approach to reducing a number's precision in\n // JavaScript.\n\n fieldVector.insert(termIndex, scoreWithPrecision)\n }\n\n fieldVectors[fieldRef] = fieldVector\n }\n\n this.fieldVectors = fieldVectors\n}\n\n/**\n * Creates a token set of all tokens in the index using lunr.TokenSet\n *\n * @private\n */\nlunr.Builder.prototype.createTokenSet = function () {\n this.tokenSet = lunr.TokenSet.fromArray(\n Object.keys(this.invertedIndex).sort()\n )\n}\n\n/**\n * Builds the index, creating an instance of lunr.Index.\n *\n * This completes the indexing process and should only be called\n * once all documents have been added to the index.\n *\n * @returns {lunr.Index}\n */\nlunr.Builder.prototype.build = function () {\n this.calculateAverageFieldLengths()\n this.createFieldVectors()\n this.createTokenSet()\n\n return new lunr.Index({\n invertedIndex: this.invertedIndex,\n fieldVectors: this.fieldVectors,\n tokenSet: this.tokenSet,\n fields: Object.keys(this._fields),\n pipeline: this.searchPipeline\n })\n}\n\n/**\n * Applies a plugin to the index builder.\n *\n * A plugin is a function that is called with the index builder as its context.\n * Plugins can be used to customise or extend the behaviour of the index\n * in some way. A plugin is just a function, that encapsulated the custom\n * behaviour that should be applied when building the index.\n *\n * The plugin function will be called with the index builder as its argument, additional\n * arguments can also be passed when calling use. 
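// Illustrative sketch of a plugin applied with Builder#use as described here;
// the plugin function, its options and the sample document are hypothetical.
var myPlugin = function (builder, options) {
  // the builder is both 'this' and the first argument
  builder.field('body', { boost: options.bodyBoost })
}

var pluginBuilder = new lunr.Builder
pluginBuilder.ref('id')
pluginBuilder.use(myPlugin, { bodyBoost: 2 })
pluginBuilder.add({ id: '1', body: 'hello world' })
var pluginIndex = pluginBuilder.build()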
The function will be called\n * with the index builder as its context.\n *\n * @param {Function} plugin The plugin to apply.\n */\nlunr.Builder.prototype.use = function (fn) {\n var args = Array.prototype.slice.call(arguments, 1)\n args.unshift(this)\n fn.apply(this, args)\n}\n/**\n * Contains and collects metadata about a matching document.\n * A single instance of lunr.MatchData is returned as part of every\n * lunr.Index~Result.\n *\n * @constructor\n * @param {string} term - The term this match data is associated with\n * @param {string} field - The field in which the term was found\n * @param {object} metadata - The metadata recorded about this term in this field\n * @property {object} metadata - A cloned collection of metadata associated with this document.\n * @see {@link lunr.Index~Result}\n */\nlunr.MatchData = function (term, field, metadata) {\n var clonedMetadata = Object.create(null),\n metadataKeys = Object.keys(metadata || {})\n\n // Cloning the metadata to prevent the original\n // being mutated during match data combination.\n // Metadata is kept in an array within the inverted\n // index so cloning the data can be done with\n // Array#slice\n for (var i = 0; i < metadataKeys.length; i++) {\n var key = metadataKeys[i]\n clonedMetadata[key] = metadata[key].slice()\n }\n\n this.metadata = Object.create(null)\n\n if (term !== undefined) {\n this.metadata[term] = Object.create(null)\n this.metadata[term][field] = clonedMetadata\n }\n}\n\n/**\n * An instance of lunr.MatchData will be created for every term that matches a\n * document. However only one instance is required in a lunr.Index~Result. This\n * method combines metadata from another instance of lunr.MatchData with this\n * objects metadata.\n *\n * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one.\n * @see {@link lunr.Index~Result}\n */\nlunr.MatchData.prototype.combine = function (otherMatchData) {\n var terms = Object.keys(otherMatchData.metadata)\n\n for (var i = 0; i < terms.length; i++) {\n var term = terms[i],\n fields = Object.keys(otherMatchData.metadata[term])\n\n if (this.metadata[term] == undefined) {\n this.metadata[term] = Object.create(null)\n }\n\n for (var j = 0; j < fields.length; j++) {\n var field = fields[j],\n keys = Object.keys(otherMatchData.metadata[term][field])\n\n if (this.metadata[term][field] == undefined) {\n this.metadata[term][field] = Object.create(null)\n }\n\n for (var k = 0; k < keys.length; k++) {\n var key = keys[k]\n\n if (this.metadata[term][field][key] == undefined) {\n this.metadata[term][field][key] = otherMatchData.metadata[term][field][key]\n } else {\n this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key])\n }\n\n }\n }\n }\n}\n\n/**\n * Add metadata for a term/field pair to this instance of match data.\n *\n * @param {string} term - The term this match data is associated with\n * @param {string} field - The field in which the term was found\n * @param {object} metadata - The metadata recorded about this term in this field\n */\nlunr.MatchData.prototype.add = function (term, field, metadata) {\n if (!(term in this.metadata)) {\n this.metadata[term] = Object.create(null)\n this.metadata[term][field] = metadata\n return\n }\n\n if (!(field in this.metadata[term])) {\n this.metadata[term][field] = metadata\n return\n }\n\n var metadataKeys = Object.keys(metadata)\n\n for (var i = 0; i < metadataKeys.length; i++) {\n var key = metadataKeys[i]\n\n if (key in 
this.metadata[term][field]) {\n this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key])\n } else {\n this.metadata[term][field][key] = metadata[key]\n }\n }\n}\n/**\n * A lunr.Query provides a programmatic way of defining queries to be performed\n * against a {@link lunr.Index}.\n *\n * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method\n * so the query object is pre-initialized with the right index fields.\n *\n * @constructor\n * @property {lunr.Query~Clause[]} clauses - An array of query clauses.\n * @property {string[]} allFields - An array of all available fields in a lunr.Index.\n */\nlunr.Query = function (allFields) {\n this.clauses = []\n this.allFields = allFields\n}\n\n/**\n * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause.\n *\n * This allows wildcards to be added to the beginning and end of a term without having to manually do any string\n * concatenation.\n *\n * The wildcard constants can be bitwise combined to select both leading and trailing wildcards.\n *\n * @constant\n * @default\n * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour\n * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists\n * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists\n * @see lunr.Query~Clause\n * @see lunr.Query#clause\n * @see lunr.Query#term\n * @example query term with trailing wildcard\n * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING })\n * @example query term with leading and trailing wildcard\n * query.term('foo', {\n * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING\n * })\n */\n\nlunr.Query.wildcard = new String (\"*\")\nlunr.Query.wildcard.NONE = 0\nlunr.Query.wildcard.LEADING = 1\nlunr.Query.wildcard.TRAILING = 2\n\n/**\n * Constants for indicating what kind of presence a term must have in matching documents.\n *\n * @constant\n * @enum {number}\n * @see lunr.Query~Clause\n * @see lunr.Query#clause\n * @see lunr.Query#term\n * @example query term with required presence\n * query.term('foo', { presence: lunr.Query.presence.REQUIRED })\n */\nlunr.Query.presence = {\n /**\n * Term's presence in a document is optional, this is the default value.\n */\n OPTIONAL: 1,\n\n /**\n * Term's presence in a document is required, documents that do not contain\n * this term will not be returned.\n */\n REQUIRED: 2,\n\n /**\n * Term's presence in a document is prohibited, documents that do contain\n * this term will not be returned.\n */\n PROHIBITED: 3\n}\n\n/**\n * A single clause in a {@link lunr.Query} contains a term and details on how to\n * match that term against a {@link lunr.Index}.\n *\n * @typedef {Object} lunr.Query~Clause\n * @property {string[]} fields - The fields in an index this clause should be matched against.\n * @property {number} [boost=1] - Any boost that should be applied when matching this clause.\n * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be.\n * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline.\n * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended.\n * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The terms presence in any matching 
documents.\n */\n\n/**\n * Adds a {@link lunr.Query~Clause} to this query.\n *\n * Unless the clause contains the fields to be matched all fields will be matched. In addition\n * a default boost of 1 is applied to the clause.\n *\n * @param {lunr.Query~Clause} clause - The clause to add to this query.\n * @see lunr.Query~Clause\n * @returns {lunr.Query}\n */\nlunr.Query.prototype.clause = function (clause) {\n if (!('fields' in clause)) {\n clause.fields = this.allFields\n }\n\n if (!('boost' in clause)) {\n clause.boost = 1\n }\n\n if (!('usePipeline' in clause)) {\n clause.usePipeline = true\n }\n\n if (!('wildcard' in clause)) {\n clause.wildcard = lunr.Query.wildcard.NONE\n }\n\n if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) {\n clause.term = \"*\" + clause.term\n }\n\n if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) {\n clause.term = \"\" + clause.term + \"*\"\n }\n\n if (!('presence' in clause)) {\n clause.presence = lunr.Query.presence.OPTIONAL\n }\n\n this.clauses.push(clause)\n\n return this\n}\n\n/**\n * A negated query is one in which every clause has a presence of\n * prohibited. These queries require some special processing to return\n * the expected results.\n *\n * @returns boolean\n */\nlunr.Query.prototype.isNegated = function () {\n for (var i = 0; i < this.clauses.length; i++) {\n if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) {\n return false\n }\n }\n\n return true\n}\n\n/**\n * Adds a term to the current query, under the covers this will create a {@link lunr.Query~Clause}\n * to the list of clauses that make up this query.\n *\n * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion\n * to a token or token-like string should be done before calling this method.\n *\n * The term will be converted to a string by calling `toString`. 
Multiple terms can be passed as an\n * array, each term in the array will share the same options.\n *\n * @param {object|object[]} term - The term(s) to add to the query.\n * @param {object} [options] - Any additional properties to add to the query clause.\n * @returns {lunr.Query}\n * @see lunr.Query#clause\n * @see lunr.Query~Clause\n * @example adding a single term to a query\n * query.term(\"foo\")\n * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard\n * query.term(\"foo\", {\n * fields: [\"title\"],\n * boost: 10,\n * wildcard: lunr.Query.wildcard.TRAILING\n * })\n * @example using lunr.tokenizer to convert a string to tokens before using them as terms\n * query.term(lunr.tokenizer(\"foo bar\"))\n */\nlunr.Query.prototype.term = function (term, options) {\n if (Array.isArray(term)) {\n term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this)\n return this\n }\n\n var clause = options || {}\n clause.term = term.toString()\n\n this.clause(clause)\n\n return this\n}\nlunr.QueryParseError = function (message, start, end) {\n this.name = \"QueryParseError\"\n this.message = message\n this.start = start\n this.end = end\n}\n\nlunr.QueryParseError.prototype = new Error\nlunr.QueryLexer = function (str) {\n this.lexemes = []\n this.str = str\n this.length = str.length\n this.pos = 0\n this.start = 0\n this.escapeCharPositions = []\n}\n\nlunr.QueryLexer.prototype.run = function () {\n var state = lunr.QueryLexer.lexText\n\n while (state) {\n state = state(this)\n }\n}\n\nlunr.QueryLexer.prototype.sliceString = function () {\n var subSlices = [],\n sliceStart = this.start,\n sliceEnd = this.pos\n\n for (var i = 0; i < this.escapeCharPositions.length; i++) {\n sliceEnd = this.escapeCharPositions[i]\n subSlices.push(this.str.slice(sliceStart, sliceEnd))\n sliceStart = sliceEnd + 1\n }\n\n subSlices.push(this.str.slice(sliceStart, this.pos))\n this.escapeCharPositions.length = 0\n\n return subSlices.join('')\n}\n\nlunr.QueryLexer.prototype.emit = function (type) {\n this.lexemes.push({\n type: type,\n str: this.sliceString(),\n start: this.start,\n end: this.pos\n })\n\n this.start = this.pos\n}\n\nlunr.QueryLexer.prototype.escapeCharacter = function () {\n this.escapeCharPositions.push(this.pos - 1)\n this.pos += 1\n}\n\nlunr.QueryLexer.prototype.next = function () {\n if (this.pos >= this.length) {\n return lunr.QueryLexer.EOS\n }\n\n var char = this.str.charAt(this.pos)\n this.pos += 1\n return char\n}\n\nlunr.QueryLexer.prototype.width = function () {\n return this.pos - this.start\n}\n\nlunr.QueryLexer.prototype.ignore = function () {\n if (this.start == this.pos) {\n this.pos += 1\n }\n\n this.start = this.pos\n}\n\nlunr.QueryLexer.prototype.backup = function () {\n this.pos -= 1\n}\n\nlunr.QueryLexer.prototype.acceptDigitRun = function () {\n var char, charCode\n\n do {\n char = this.next()\n charCode = char.charCodeAt(0)\n } while (charCode > 47 && charCode < 58)\n\n if (char != lunr.QueryLexer.EOS) {\n this.backup()\n }\n}\n\nlunr.QueryLexer.prototype.more = function () {\n return this.pos < this.length\n}\n\nlunr.QueryLexer.EOS = 'EOS'\nlunr.QueryLexer.FIELD = 'FIELD'\nlunr.QueryLexer.TERM = 'TERM'\nlunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE'\nlunr.QueryLexer.BOOST = 'BOOST'\nlunr.QueryLexer.PRESENCE = 'PRESENCE'\n\nlunr.QueryLexer.lexField = function (lexer) {\n lexer.backup()\n lexer.emit(lunr.QueryLexer.FIELD)\n lexer.ignore()\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexTerm = 
function (lexer) {\n if (lexer.width() > 1) {\n lexer.backup()\n lexer.emit(lunr.QueryLexer.TERM)\n }\n\n lexer.ignore()\n\n if (lexer.more()) {\n return lunr.QueryLexer.lexText\n }\n}\n\nlunr.QueryLexer.lexEditDistance = function (lexer) {\n lexer.ignore()\n lexer.acceptDigitRun()\n lexer.emit(lunr.QueryLexer.EDIT_DISTANCE)\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexBoost = function (lexer) {\n lexer.ignore()\n lexer.acceptDigitRun()\n lexer.emit(lunr.QueryLexer.BOOST)\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexEOS = function (lexer) {\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n}\n\n// This matches the separator used when tokenising fields\n// within a document. These should match otherwise it is\n// not possible to search for some tokens within a document.\n//\n// It is possible for the user to change the separator on the\n// tokenizer so it _might_ clash with any other of the special\n// characters already used within the search string, e.g. :.\n//\n// This means that it is possible to change the separator in\n// such a way that makes some words unsearchable using a search\n// string.\nlunr.QueryLexer.termSeparator = lunr.tokenizer.separator\n\nlunr.QueryLexer.lexText = function (lexer) {\n while (true) {\n var char = lexer.next()\n\n if (char == lunr.QueryLexer.EOS) {\n return lunr.QueryLexer.lexEOS\n }\n\n // Escape character is '\\'\n if (char.charCodeAt(0) == 92) {\n lexer.escapeCharacter()\n continue\n }\n\n if (char == \":\") {\n return lunr.QueryLexer.lexField\n }\n\n if (char == \"~\") {\n lexer.backup()\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n return lunr.QueryLexer.lexEditDistance\n }\n\n if (char == \"^\") {\n lexer.backup()\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n return lunr.QueryLexer.lexBoost\n }\n\n // \"+\" indicates term presence is required\n // checking for length to ensure that only\n // leading \"+\" are considered\n if (char == \"+\" && lexer.width() === 1) {\n lexer.emit(lunr.QueryLexer.PRESENCE)\n return lunr.QueryLexer.lexText\n }\n\n // \"-\" indicates term presence is prohibited\n // checking for length to ensure that only\n // leading \"-\" are considered\n if (char == \"-\" && lexer.width() === 1) {\n lexer.emit(lunr.QueryLexer.PRESENCE)\n return lunr.QueryLexer.lexText\n }\n\n if (char.match(lunr.QueryLexer.termSeparator)) {\n return lunr.QueryLexer.lexTerm\n }\n }\n}\n\nlunr.QueryParser = function (str, query) {\n this.lexer = new lunr.QueryLexer (str)\n this.query = query\n this.currentClause = {}\n this.lexemeIdx = 0\n}\n\nlunr.QueryParser.prototype.parse = function () {\n this.lexer.run()\n this.lexemes = this.lexer.lexemes\n\n var state = lunr.QueryParser.parseClause\n\n while (state) {\n state = state(this)\n }\n\n return this.query\n}\n\nlunr.QueryParser.prototype.peekLexeme = function () {\n return this.lexemes[this.lexemeIdx]\n}\n\nlunr.QueryParser.prototype.consumeLexeme = function () {\n var lexeme = this.peekLexeme()\n this.lexemeIdx += 1\n return lexeme\n}\n\nlunr.QueryParser.prototype.nextClause = function () {\n var completedClause = this.currentClause\n this.query.clause(completedClause)\n this.currentClause = {}\n}\n\nlunr.QueryParser.parseClause = function (parser) {\n var lexeme = parser.peekLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n switch (lexeme.type) {\n case lunr.QueryLexer.PRESENCE:\n return lunr.QueryParser.parsePresence\n case lunr.QueryLexer.FIELD:\n return lunr.QueryParser.parseField\n case 
lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expected either a field or a term, found \" + lexeme.type\n\n if (lexeme.str.length >= 1) {\n errorMessage += \" with value '\" + lexeme.str + \"'\"\n }\n\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n}\n\nlunr.QueryParser.parsePresence = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n switch (lexeme.str) {\n case \"-\":\n parser.currentClause.presence = lunr.Query.presence.PROHIBITED\n break\n case \"+\":\n parser.currentClause.presence = lunr.Query.presence.REQUIRED\n break\n default:\n var errorMessage = \"unrecognised presence operator'\" + lexeme.str + \"'\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n var errorMessage = \"expecting term or field, found nothing\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.FIELD:\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expecting term or field, found '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseField = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n if (parser.query.allFields.indexOf(lexeme.str) == -1) {\n var possibleFields = parser.query.allFields.map(function (f) { return \"'\" + f + \"'\" }).join(', '),\n errorMessage = \"unrecognised field '\" + lexeme.str + \"', possible fields: \" + possibleFields\n\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.fields = [lexeme.str]\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n var errorMessage = \"expecting term, found nothing\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expecting term, found '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseTerm = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n parser.currentClause.term = lexeme.str.toLowerCase()\n\n if (lexeme.str.indexOf(\"*\") != -1) {\n parser.currentClause.usePipeline = false\n }\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseEditDistance = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n 
return\n }\n\n var editDistance = parseInt(lexeme.str, 10)\n\n if (isNaN(editDistance)) {\n var errorMessage = \"edit distance must be numeric\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.editDistance = editDistance\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseBoost = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n var boost = parseInt(lexeme.str, 10)\n\n if (isNaN(boost)) {\n var errorMessage = \"boost must be numeric\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.boost = boost\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\n /**\n * export the module via AMD, CommonJS or as a browser global\n * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js\n */\n ;(function (root, factory) {\n if (typeof define === 'function' && define.amd) {\n // AMD. Register as an anonymous module.\n define(factory)\n } else if (typeof exports === 'object') {\n /**\n * Node. 
Does not work with strict CommonJS, but\n * only CommonJS-like enviroments that support module.exports,\n * like Node.\n */\n module.exports = factory()\n } else {\n // Browser globals (root is window)\n root.lunr = factory()\n }\n }(this, function () {\n /**\n * Just return a value to define the module export.\n * This example returns an object, but the module\n * can return a function as the exported value.\n */\n return lunr\n }))\n})();\n", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport lunr from \"lunr\"\n\nimport \"~/polyfills\"\n\nimport { Search, SearchIndexConfig } from \"../../_\"\nimport {\n SearchMessage,\n SearchMessageType\n} from \"../message\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Add support for usage with `iframe-worker` polyfill\n *\n * While `importScripts` is synchronous when executed inside of a web worker,\n * it's not possible to provide a synchronous polyfilled implementation. 
The\n * cool thing is that awaiting a non-Promise is a noop, so extending the type\n * definition to return a `Promise` shouldn't break anything.\n *\n * @see https://bit.ly/2PjDnXi - GitHub comment\n */\ndeclare global {\n function importScripts(...urls: string[]): Promise | void\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index\n */\nlet index: Search\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch (= import) multi-language support through `lunr-languages`\n *\n * This function automatically imports the stemmers necessary to process the\n * languages, which are defined through the search index configuration.\n *\n * If the worker runs inside of an `iframe` (when using `iframe-worker` as\n * a shim), the base URL for the stemmers to be loaded must be determined by\n * searching for the first `script` element with a `src` attribute, which will\n * contain the contents of this script.\n *\n * @param config - Search index configuration\n *\n * @returns Promise resolving with no result\n */\nasync function setupSearchLanguages(\n config: SearchIndexConfig\n): Promise {\n let base = \"../lunr\"\n\n /* Detect `iframe-worker` and fix base URL */\n if (typeof parent !== \"undefined\" && \"IFrameWorker\" in parent) {\n const worker = document.querySelector(\"script[src]\")!\n const [path] = worker.src.split(\"/worker\")\n\n /* Prefix base with path */\n base = base.replace(\"..\", path)\n }\n\n /* Add scripts for languages */\n const scripts = []\n for (const lang of config.lang) {\n switch (lang) {\n\n /* Add segmenter for Japanese */\n case \"ja\":\n scripts.push(`${base}/tinyseg.js`)\n break\n\n /* Add segmenter for Hindi and Thai */\n case \"hi\":\n case \"th\":\n scripts.push(`${base}/wordcut.js`)\n break\n }\n\n /* Add language support */\n if (lang !== \"en\")\n scripts.push(`${base}/min/lunr.${lang}.min.js`)\n }\n\n /* Add multi-language support */\n if (config.lang.length > 1)\n scripts.push(`${base}/min/lunr.multi.min.js`)\n\n /* Load scripts synchronously */\n if (scripts.length)\n await importScripts(\n `${base}/min/lunr.stemmer.support.min.js`,\n ...scripts\n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Message handler\n *\n * @param message - Source message\n *\n * @returns Target message\n */\nexport async function handler(\n message: SearchMessage\n): Promise {\n switch (message.type) {\n\n /* Search setup message */\n case SearchMessageType.SETUP:\n await setupSearchLanguages(message.data.config)\n index = new Search(message.data)\n return {\n type: SearchMessageType.READY\n }\n\n /* Search query message */\n case SearchMessageType.QUERY:\n return {\n type: SearchMessageType.RESULT,\n data: index ? 
index.search(message.data) : { items: [] }\n }\n\n /* All other messages */\n default:\n throw new TypeError(\"Invalid message type\")\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Worker\n * ------------------------------------------------------------------------- */\n\n/* @ts-expect-error - expose Lunr.js in global scope, or stemmers won't work */\nself.lunr = lunr\n\n/* Handle messages */\naddEventListener(\"message\", async ev => {\n postMessage(await handler(ev.data))\n})\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Polyfills\n * ------------------------------------------------------------------------- */\n\n/* Polyfill `Object.entries` */\nif (!Object.entries)\n Object.entries = function (obj: object) {\n const data: [string, string][] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push([key, obj[key]])\n\n /* Return entries */\n return data\n }\n\n/* Polyfill `Object.values` */\nif (!Object.values)\n Object.values = function (obj: object) {\n const data: string[] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push(obj[key])\n\n /* Return values */\n return data\n }\n\n/* ------------------------------------------------------------------------- */\n\n/* Polyfills for `Element` */\nif (typeof Element !== \"undefined\") {\n\n /* Polyfill `Element.scrollTo` */\n if (!Element.prototype.scrollTo)\n Element.prototype.scrollTo = function (\n x?: ScrollToOptions | number, y?: number\n ): void {\n if (typeof x === \"object\") {\n this.scrollLeft = x.left!\n this.scrollTop = x.top!\n } else {\n this.scrollLeft = x!\n this.scrollTop = y!\n }\n }\n\n /* Polyfill `Element.replaceWith` */\n if (!Element.prototype.replaceWith)\n Element.prototype.replaceWith = function (\n ...nodes: Array\n ): void {\n const parent = this.parentNode\n if (parent) {\n if (nodes.length === 0)\n parent.removeChild(this)\n\n /* Replace children and create text nodes */\n for (let i = nodes.length - 1; i >= 0; i--) {\n let node = nodes[i]\n if (typeof node !== \"object\")\n node = document.createTextNode(node)\n else if (node.parentNode)\n node.parentNode.removeChild(node)\n\n /* Replace child or insert before previous sibling */\n if (!i)\n parent.replaceChild(node, this)\n else\n 
parent.insertBefore(this.previousSibling!, node)\n }\n }\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexDocument } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search document\n */\nexport interface SearchDocument extends SearchIndexDocument {\n parent?: SearchIndexDocument /* Parent article */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search document mapping\n */\nexport type SearchDocumentMap = Map\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search document mapping\n *\n * @param docs - Search index documents\n *\n * @returns Search document map\n */\nexport function setupSearchDocumentMap(\n docs: SearchIndexDocument[]\n): SearchDocumentMap {\n const documents = new Map()\n const parents = new Set()\n for (const doc of docs) {\n const [path, hash] = doc.location.split(\"#\")\n\n /* Extract location, title and tags */\n const location = doc.location\n const title = doc.title\n const tags = doc.tags\n\n /* Escape and cleanup text */\n const text = escapeHTML(doc.text)\n .replace(/\\s+(?=[,.:;!?])/g, \"\")\n .replace(/\\s+/g, \" \")\n\n /* Handle section */\n if (hash) {\n const parent = documents.get(path)!\n\n /* Ignore first section, override article */\n if (!parents.has(parent)) {\n parent.title = doc.title\n parent.text = text\n\n /* Remember that we processed the article */\n parents.add(parent)\n\n /* Add subsequent section */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n parent\n })\n }\n\n /* Add article */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n ...tags && { tags }\n })\n }\n }\n return documents\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit 
persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexConfig } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlight function\n *\n * @param value - Value\n *\n * @returns Highlighted value\n */\nexport type SearchHighlightFn = (value: string) => string\n\n/**\n * Search highlight factory function\n *\n * @param query - Query value\n *\n * @returns Search highlight function\n */\nexport type SearchHighlightFactoryFn = (query: string) => SearchHighlightFn\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search highlighter\n *\n * @param config - Search index configuration\n * @param escape - Whether to escape HTML\n *\n * @returns Search highlight factory function\n */\nexport function setupSearchHighlighter(\n config: SearchIndexConfig, escape: boolean\n): SearchHighlightFactoryFn {\n const separator = new RegExp(config.separator, \"img\")\n const highlight = (_: unknown, data: string, term: string) => {\n return `${data}${term}`\n }\n\n /* Return factory function */\n return (query: string) => {\n query = query\n .replace(/[\\s*+\\-:~^]+/g, \" \")\n .trim()\n\n /* Create search term match expression */\n const match = new RegExp(`(^|${config.separator})(${\n query\n .replace(/[|\\\\{}()[\\]^$+*?.-]/g, \"\\\\$&\")\n .replace(separator, \"|\")\n })`, \"img\")\n\n /* Highlight string value */\n return value => (\n escape\n ? escapeHTML(value)\n : value\n )\n .replace(match, highlight)\n .replace(/<\\/mark>(\\s+)]*>/img, \"$1\")\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search query clause\n */\nexport interface SearchQueryClause {\n presence: lunr.Query.presence /* Clause presence */\n term: string /* Clause term */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search query terms\n */\nexport type SearchQueryTerms = Record\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Parse a search query for analysis\n *\n * @param value - Query value\n *\n * @returns Search query clauses\n */\nexport function parseSearchQuery(\n value: string\n): SearchQueryClause[] {\n const query = new (lunr as any).Query([\"title\", \"text\"])\n const parser = new (lunr as any).QueryParser(value, query)\n\n /* Parse and return query clauses */\n parser.parse()\n return query.clauses\n}\n\n/**\n * Analyze the search query clauses in regard to the search terms found\n *\n * @param query - Search query clauses\n * @param terms - Search terms\n *\n * @returns Search query terms\n */\nexport function getSearchQueryTerms(\n query: SearchQueryClause[], terms: string[]\n): SearchQueryTerms {\n const clauses = new Set(query)\n\n /* Match query clauses against terms */\n const result: SearchQueryTerms = {}\n for (let t = 0; t < terms.length; t++)\n for (const clause of clauses)\n if (terms[t].startsWith(clause.term)) {\n result[clause.term] = true\n clauses.delete(clause)\n }\n\n /* Annotate unmatched non-stopword query clauses */\n for (const clause of clauses)\n if (lunr.stopWordFilter?.(clause.term as any))\n result[clause.term] = false\n\n /* Return query terms */\n return result\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n SearchDocument,\n SearchDocumentMap,\n setupSearchDocumentMap\n} from \"../document\"\nimport {\n SearchHighlightFactoryFn,\n setupSearchHighlighter\n} from \"../highlighter\"\nimport { SearchOptions } from \"../options\"\nimport {\n SearchQueryTerms,\n getSearchQueryTerms,\n parseSearchQuery\n} from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index configuration\n */\nexport interface SearchIndexConfig {\n lang: string[] /* Search languages */\n separator: string /* Search separator */\n}\n\n/**\n * Search index document\n */\nexport interface SearchIndexDocument {\n location: string /* Document location */\n title: string /* Document title */\n text: string /* Document text */\n tags?: string[] /* Document tags */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search index\n *\n * This interfaces describes the format of the `search_index.json` file which\n * is automatically built by the MkDocs search plugin.\n */\nexport interface SearchIndex {\n config: SearchIndexConfig /* Search index configuration */\n docs: SearchIndexDocument[] /* Search index documents */\n options: SearchOptions /* Search options */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search metadata\n */\nexport interface SearchMetadata {\n score: number /* Score (relevance) */\n terms: SearchQueryTerms /* Search query terms */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search result document\n */\nexport type SearchResultDocument = SearchDocument & SearchMetadata\n\n/**\n * Search result item\n */\nexport type SearchResultItem = SearchResultDocument[]\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search result\n */\nexport interface SearchResult {\n items: SearchResultItem[] /* Search result items */\n suggestions?: string[] /* Search suggestions */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Compute the difference of two lists of strings\n *\n * @param a - 1st list of strings\n * @param b - 2nd list of strings\n *\n * @returns Difference\n */\nfunction difference(a: string[], b: string[]): string[] {\n const [x, y] = [new Set(a), new Set(b)]\n return [\n ...new Set([...x].filter(value => !y.has(value)))\n ]\n}\n\n/* ----------------------------------------------------------------------------\n * Class\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index\n */\nexport class Search {\n\n /**\n * Search document mapping\n *\n * A mapping of URLs (including hash fragments) to the actual articles and\n * sections of the documentation. 
The search document mapping must be created\n * regardless of whether the index was prebuilt or not, as Lunr.js itself\n * only stores the actual index.\n */\n protected documents: SearchDocumentMap\n\n /**\n * Search highlight factory function\n */\n protected highlight: SearchHighlightFactoryFn\n\n /**\n * The underlying Lunr.js search index\n */\n protected index: lunr.Index\n\n /**\n * Search options\n */\n protected options: SearchOptions\n\n /**\n * Create the search integration\n *\n * @param data - Search index\n */\n public constructor({ config, docs, options }: SearchIndex) {\n this.options = options\n\n /* Set up document map and highlighter factory */\n this.documents = setupSearchDocumentMap(docs)\n this.highlight = setupSearchHighlighter(config, false)\n\n /* Set separator for tokenizer */\n lunr.tokenizer.separator = new RegExp(config.separator)\n\n /* Create search index */\n this.index = lunr(function () {\n\n /* Set up multi-language support */\n if (config.lang.length === 1 && config.lang[0] !== \"en\") {\n this.use((lunr as any)[config.lang[0]])\n } else if (config.lang.length > 1) {\n this.use((lunr as any).multiLanguage(...config.lang))\n }\n\n /* Compute functions to be removed from the pipeline */\n const fns = difference([\n \"trimmer\", \"stopWordFilter\", \"stemmer\"\n ], options.pipeline)\n\n /* Remove functions from the pipeline for registered languages */\n for (const lang of config.lang.map(language => (\n language === \"en\" ? lunr : (lunr as any)[language]\n ))) {\n for (const fn of fns) {\n this.pipeline.remove(lang[fn])\n this.searchPipeline.remove(lang[fn])\n }\n }\n\n /* Set up reference */\n this.ref(\"location\")\n\n /* Set up fields */\n this.field(\"title\", { boost: 1e3 })\n this.field(\"text\")\n this.field(\"tags\", { boost: 1e6 })\n\n /* Index documents */\n for (const doc of docs)\n this.add(doc)\n })\n }\n\n /**\n * Search for matching documents\n *\n * The search index which MkDocs provides is divided up into articles, which\n * contain the whole content of the individual pages, and sections, which only\n * contain the contents of the subsections obtained by breaking the individual\n * pages up at `h1` ... `h6`. As there may be many sections on different pages\n * with identical titles (for example within this very project, e.g. \"Usage\"\n * or \"Installation\"), they need to be put into the context of the containing\n * page. 
For this reason, section results are grouped within their respective\n * articles which are the top-level results that are returned.\n *\n * @param query - Query value\n *\n * @returns Search results\n */\n public search(query: string): SearchResult {\n if (query) {\n try {\n const highlight = this.highlight(query)\n\n /* Parse query to extract clauses for analysis */\n const clauses = parseSearchQuery(query)\n .filter(clause => (\n clause.presence !== lunr.Query.presence.PROHIBITED\n ))\n\n /* Perform search and post-process results */\n const groups = this.index.search(`${query}*`)\n\n /* Apply post-query boosts based on title and search query terms */\n .reduce((item, { ref, score, matchData }) => {\n const document = this.documents.get(ref)\n if (typeof document !== \"undefined\") {\n const { location, title, text, tags, parent } = document\n\n /* Compute and analyze search query terms */\n const terms = getSearchQueryTerms(\n clauses,\n Object.keys(matchData.metadata)\n )\n\n /* Highlight title and text and apply post-query boosts */\n const boost = +!parent + +Object.values(terms).every(t => t)\n item.push({\n location,\n title: highlight(title),\n text: highlight(text),\n ...tags && { tags: tags.map(highlight) },\n score: score * (1 + boost),\n terms\n })\n }\n return item\n }, [])\n\n /* Sort search results again after applying boosts */\n .sort((a, b) => b.score - a.score)\n\n /* Group search results by page */\n .reduce((items, result) => {\n const document = this.documents.get(result.location)\n if (typeof document !== \"undefined\") {\n const ref = \"parent\" in document\n ? document.parent!.location\n : document.location\n items.set(ref, [...items.get(ref) || [], result])\n }\n return items\n }, new Map())\n\n /* Generate search suggestions, if desired */\n let suggestions: string[] | undefined\n if (this.options.suggestions) {\n const titles = this.index.query(builder => {\n for (const clause of clauses)\n builder.term(clause.term, {\n fields: [\"title\"],\n presence: lunr.Query.presence.REQUIRED,\n wildcard: lunr.Query.wildcard.TRAILING\n })\n })\n\n /* Retrieve suggestions for best match */\n suggestions = titles.length\n ? 
Object.keys(titles[0].matchData.metadata)\n : []\n }\n\n /* Return items and suggestions */\n return {\n items: [...groups.values()],\n ...typeof suggestions !== \"undefined\" && { suggestions }\n }\n\n /* Log errors to console (for now) */\n } catch {\n console.warn(`Invalid query: ${query} \u2013 see https://bit.ly/2s3ChXG`)\n }\n }\n\n /* Return nothing in case of error or empty query */\n return { items: [] }\n }\n}\n"], + "mappings": "mkCAAA;AAAA;AAAA;AAAA;AAAA,GAMC,AAAC,WAAU,CAiCZ,GAAI,GAAO,SAAU,EAAQ,CAC3B,GAAI,GAAU,GAAI,GAAK,QAEvB,SAAQ,SAAS,IACf,EAAK,QACL,EAAK,eACL,EAAK,OACP,EAEA,EAAQ,eAAe,IACrB,EAAK,OACP,EAEA,EAAO,KAAK,EAAS,CAAO,EACrB,EAAQ,MAAM,CACvB,EAEA,EAAK,QAAU,QACf;AAAA;AAAA;AAAA,GASA,EAAK,MAAQ,CAAC,EASd,EAAK,MAAM,KAAQ,SAAU,EAAQ,CAEnC,MAAO,UAAU,EAAS,CACxB,AAAI,EAAO,SAAW,QAAQ,MAC5B,QAAQ,KAAK,CAAO,CAExB,CAEF,EAAG,IAAI,EAaP,EAAK,MAAM,SAAW,SAAU,EAAK,CACnC,MAAI,AAAkB,IAAQ,KACrB,GAEA,EAAI,SAAS,CAExB,EAkBA,EAAK,MAAM,MAAQ,SAAU,EAAK,CAChC,GAAI,GAAQ,KACV,MAAO,GAMT,OAHI,GAAQ,OAAO,OAAO,IAAI,EAC1B,EAAO,OAAO,KAAK,CAAG,EAEjB,EAAI,EAAG,EAAI,EAAK,OAAQ,IAAK,CACpC,GAAI,GAAM,EAAK,GACX,EAAM,EAAI,GAEd,GAAI,MAAM,QAAQ,CAAG,EAAG,CACtB,EAAM,GAAO,EAAI,MAAM,EACvB,QACF,CAEA,GAAI,MAAO,IAAQ,UACf,MAAO,IAAQ,UACf,MAAO,IAAQ,UAAW,CAC5B,EAAM,GAAO,EACb,QACF,CAEA,KAAM,IAAI,WAAU,uDAAuD,CAC7E,CAEA,MAAO,EACT,EACA,EAAK,SAAW,SAAU,EAAQ,EAAW,EAAa,CACxD,KAAK,OAAS,EACd,KAAK,UAAY,EACjB,KAAK,aAAe,CACtB,EAEA,EAAK,SAAS,OAAS,IAEvB,EAAK,SAAS,WAAa,SAAU,EAAG,CACtC,GAAI,GAAI,EAAE,QAAQ,EAAK,SAAS,MAAM,EAEtC,GAAI,IAAM,GACR,KAAM,6BAGR,GAAI,GAAW,EAAE,MAAM,EAAG,CAAC,EACvB,EAAS,EAAE,MAAM,EAAI,CAAC,EAE1B,MAAO,IAAI,GAAK,SAAU,EAAQ,EAAU,CAAC,CAC/C,EAEA,EAAK,SAAS,UAAU,SAAW,UAAY,CAC7C,MAAI,MAAK,cAAgB,MACvB,MAAK,aAAe,KAAK,UAAY,EAAK,SAAS,OAAS,KAAK,QAG5D,KAAK,YACd,EACA;AAAA;AAAA;AAAA,GAUA,EAAK,IAAM,SAAU,EAAU,CAG7B,GAFA,KAAK,SAAW,OAAO,OAAO,IAAI,EAE9B,EAAU,CACZ,KAAK,OAAS,EAAS,OAEvB,OAAS,GAAI,EAAG,EAAI,KAAK,OAAQ,IAC/B,KAAK,SAAS,EAAS,IAAM,EAEjC,KACE,MAAK,OAAS,CAElB,EASA,EAAK,IAAI,SAAW,CAClB,UAAW,SAAU,EAAO,CAC1B,MAAO,EACT,EAEA,MAAO,UAAY,CACjB,MAAO,KACT,EAEA,SAAU,UAAY,CACpB,MAAO,EACT,CACF,EASA,EAAK,IAAI,MAAQ,CACf,UAAW,UAAY,CACrB,MAAO,KACT,EAEA,MAAO,SAAU,EAAO,CACtB,MAAO,EACT,EAEA,SAAU,UAAY,CACpB,MAAO,EACT,CACF,EAQA,EAAK,IAAI,UAAU,SAAW,SAAU,EAAQ,CAC9C,MAAO,CAAC,CAAC,KAAK,SAAS,EACzB,EAUA,EAAK,IAAI,UAAU,UAAY,SAAU,EAAO,CAC9C,GAAI,GAAG,EAAG,EAAU,EAAe,CAAC,EAEpC,GAAI,IAAU,EAAK,IAAI,SACrB,MAAO,MAGT,GAAI,IAAU,EAAK,IAAI,MACrB,MAAO,GAGT,AAAI,KAAK,OAAS,EAAM,OACtB,GAAI,KACJ,EAAI,GAEJ,GAAI,EACJ,EAAI,MAGN,EAAW,OAAO,KAAK,EAAE,QAAQ,EAEjC,OAAS,GAAI,EAAG,EAAI,EAAS,OAAQ,IAAK,CACxC,GAAI,GAAU,EAAS,GACvB,AAAI,IAAW,GAAE,UACf,EAAa,KAAK,CAAO,CAE7B,CAEA,MAAO,IAAI,GAAK,IAAK,CAAY,CACnC,EASA,EAAK,IAAI,UAAU,MAAQ,SAAU,EAAO,CAC1C,MAAI,KAAU,EAAK,IAAI,SACd,EAAK,IAAI,SAGd,IAAU,EAAK,IAAI,MACd,KAGF,GAAI,GAAK,IAAI,OAAO,KAAK,KAAK,QAAQ,EAAE,OAAO,OAAO,KAAK,EAAM,QAAQ,CAAC,CAAC,CACpF,EASA,EAAK,IAAM,SAAU,EAAS,EAAe,CAC3C,GAAI,GAAoB,EAExB,OAAS,KAAa,GACpB,AAAI,GAAa,UACjB,IAAqB,OAAO,KAAK,EAAQ,EAAU,EAAE,QAGvD,GAAI,GAAK,GAAgB,EAAoB,IAAQ,GAAoB,IAEzE,MAAO,MAAK,IAAI,EAAI,KAAK,IAAI,CAAC,CAAC,CACjC,EAUA,EAAK,MAAQ,SAAU,EAAK,EAAU,CACpC,KAAK,IAAM,GAAO,GAClB,KAAK,SAAW,GAAY,CAAC,CAC/B,EAOA,EAAK,MAAM,UAAU,SAAW,UAAY,CAC1C,MAAO,MAAK,GACd,EAsBA,EAAK,MAAM,UAAU,OAAS,SAAU,EAAI,CAC1C,YAAK,IAAM,EAAG,KAAK,IAAK,KAAK,QAAQ,EAC9B,IACT,EASA,EAAK,MAAM,UAAU,MAAQ,SAAU,EAAI,CACzC,SAAK,GAAM,SAAU,EAAG,CAAE,MAAO,EAAE,EAC5B,GAAI,GAAK,MAAO,EAAG,KAAK,IAAK,KAAK,QAAQ,EAAG,KAAK,QAAQ,CACnE,EACA;AAAA;AAAA;AAAA,GAuBA,EAAK,UAAY,SAAU,EAAK,EAAU,CACxC,GAAI,GAAO,MAAQ,GAAO,KACxB,MAAO,CAAC,EAGV,GAAI,MAAM,QAAQ,CAAG,EACnB,MAAO,GAAI,IAAI,SAAU,EAAG,CAC1B,M
AAO,IAAI,GAAK,MACd,EAAK,MAAM,SAAS,CAAC,EAAE,YAAY,EACnC,EAAK,MAAM,MAAM,CAAQ,CAC3B,CACF,CAAC,EAOH,OAJI,GAAM,EAAI,SAAS,EAAE,YAAY,EACjC,EAAM,EAAI,OACV,EAAS,CAAC,EAEL,EAAW,EAAG,EAAa,EAAG,GAAY,EAAK,IAAY,CAClE,GAAI,GAAO,EAAI,OAAO,CAAQ,EAC1B,EAAc,EAAW,EAE7B,GAAK,EAAK,MAAM,EAAK,UAAU,SAAS,GAAK,GAAY,EAAM,CAE7D,GAAI,EAAc,EAAG,CACnB,GAAI,GAAgB,EAAK,MAAM,MAAM,CAAQ,GAAK,CAAC,EACnD,EAAc,SAAc,CAAC,EAAY,CAAW,EACpD,EAAc,MAAW,EAAO,OAEhC,EAAO,KACL,GAAI,GAAK,MACP,EAAI,MAAM,EAAY,CAAQ,EAC9B,CACF,CACF,CACF,CAEA,EAAa,EAAW,CAC1B,CAEF,CAEA,MAAO,EACT,EASA,EAAK,UAAU,UAAY,UAC3B;AAAA;AAAA;AAAA,GAkCA,EAAK,SAAW,UAAY,CAC1B,KAAK,OAAS,CAAC,CACjB,EAEA,EAAK,SAAS,oBAAsB,OAAO,OAAO,IAAI,EAmCtD,EAAK,SAAS,iBAAmB,SAAU,EAAI,EAAO,CACpD,AAAI,IAAS,MAAK,qBAChB,EAAK,MAAM,KAAK,6CAA+C,CAAK,EAGtE,EAAG,MAAQ,EACX,EAAK,SAAS,oBAAoB,EAAG,OAAS,CAChD,EAQA,EAAK,SAAS,4BAA8B,SAAU,EAAI,CACxD,GAAI,GAAe,EAAG,OAAU,EAAG,QAAS,MAAK,oBAEjD,AAAK,GACH,EAAK,MAAM,KAAK;AAAA,EAAmG,CAAE,CAEzH,EAYA,EAAK,SAAS,KAAO,SAAU,EAAY,CACzC,GAAI,GAAW,GAAI,GAAK,SAExB,SAAW,QAAQ,SAAU,EAAQ,CACnC,GAAI,GAAK,EAAK,SAAS,oBAAoB,GAE3C,GAAI,EACF,EAAS,IAAI,CAAE,MAEf,MAAM,IAAI,OAAM,sCAAwC,CAAM,CAElE,CAAC,EAEM,CACT,EASA,EAAK,SAAS,UAAU,IAAM,UAAY,CACxC,GAAI,GAAM,MAAM,UAAU,MAAM,KAAK,SAAS,EAE9C,EAAI,QAAQ,SAAU,EAAI,CACxB,EAAK,SAAS,4BAA4B,CAAE,EAC5C,KAAK,OAAO,KAAK,CAAE,CACrB,EAAG,IAAI,CACT,EAWA,EAAK,SAAS,UAAU,MAAQ,SAAU,EAAY,EAAO,CAC3D,EAAK,SAAS,4BAA4B,CAAK,EAE/C,GAAI,GAAM,KAAK,OAAO,QAAQ,CAAU,EACxC,GAAI,GAAO,GACT,KAAM,IAAI,OAAM,wBAAwB,EAG1C,EAAM,EAAM,EACZ,KAAK,OAAO,OAAO,EAAK,EAAG,CAAK,CAClC,EAWA,EAAK,SAAS,UAAU,OAAS,SAAU,EAAY,EAAO,CAC5D,EAAK,SAAS,4BAA4B,CAAK,EAE/C,GAAI,GAAM,KAAK,OAAO,QAAQ,CAAU,EACxC,GAAI,GAAO,GACT,KAAM,IAAI,OAAM,wBAAwB,EAG1C,KAAK,OAAO,OAAO,EAAK,EAAG,CAAK,CAClC,EAOA,EAAK,SAAS,UAAU,OAAS,SAAU,EAAI,CAC7C,GAAI,GAAM,KAAK,OAAO,QAAQ,CAAE,EAChC,AAAI,GAAO,IAIX,KAAK,OAAO,OAAO,EAAK,CAAC,CAC3B,EASA,EAAK,SAAS,UAAU,IAAM,SAAU,EAAQ,CAG9C,OAFI,GAAc,KAAK,OAAO,OAErB,EAAI,EAAG,EAAI,EAAa,IAAK,CAIpC,OAHI,GAAK,KAAK,OAAO,GACjB,EAAO,CAAC,EAEH,EAAI,EAAG,EAAI,EAAO,OAAQ,IAAK,CACtC,GAAI,GAAS,EAAG,EAAO,GAAI,EAAG,CAAM,EAEpC,GAAI,KAAW,MAA6B,IAAW,IAEvD,GAAI,MAAM,QAAQ,CAAM,EACtB,OAAS,GAAI,EAAG,EAAI,EAAO,OAAQ,IACjC,EAAK,KAAK,EAAO,EAAE,MAGrB,GAAK,KAAK,CAAM,CAEpB,CAEA,EAAS,CACX,CAEA,MAAO,EACT,EAYA,EAAK,SAAS,UAAU,UAAY,SAAU,EAAK,EAAU,CAC3D,GAAI,GAAQ,GAAI,GAAK,MAAO,EAAK,CAAQ,EAEzC,MAAO,MAAK,IAAI,CAAC,CAAK,CAAC,EAAE,IAAI,SAAU,EAAG,CACxC,MAAO,GAAE,SAAS,CACpB,CAAC,CACH,EAMA,EAAK,SAAS,UAAU,MAAQ,UAAY,CAC1C,KAAK,OAAS,CAAC,CACjB,EASA,EAAK,SAAS,UAAU,OAAS,UAAY,CAC3C,MAAO,MAAK,OAAO,IAAI,SAAU,EAAI,CACnC,SAAK,SAAS,4BAA4B,CAAE,EAErC,EAAG,KACZ,CAAC,CACH,EACA;AAAA;AAAA;AAAA,GAqBA,EAAK,OAAS,SAAU,EAAU,CAChC,KAAK,WAAa,EAClB,KAAK,SAAW,GAAY,CAAC,CAC/B,EAaA,EAAK,OAAO,UAAU,iBAAmB,SAAU,EAAO,CAExD,GAAI,KAAK,SAAS,QAAU,EAC1B,MAAO,GAST,OANI,GAAQ,EACR,EAAM,KAAK,SAAS,OAAS,EAC7B,EAAc,EAAM,EACpB,EAAa,KAAK,MAAM,EAAc,CAAC,EACvC,EAAa,KAAK,SAAS,EAAa,GAErC,EAAc,GACf,GAAa,GACf,GAAQ,GAGN,EAAa,GACf,GAAM,GAGJ,GAAc,IAIlB,EAAc,EAAM,EACpB,EAAa,EAAQ,KAAK,MAAM,EAAc,CAAC,EAC/C,EAAa,KAAK,SAAS,EAAa,GAO1C,GAJI,GAAc,GAId,EAAa,EACf,MAAO,GAAa,EAGtB,GAAI,EAAa,EACf,MAAQ,GAAa,GAAK,CAE9B,EAWA,EAAK,OAAO,UAAU,OAAS,SAAU,EAAW,EAAK,CACvD,KAAK,OAAO,EAAW,EAAK,UAAY,CACtC,KAAM,iBACR,CAAC,CACH,EAUA,EAAK,OAAO,UAAU,OAAS,SAAU,EAAW,EAAK,EAAI,CAC3D,KAAK,WAAa,EAClB,GAAI,GAAW,KAAK,iBAAiB,CAAS,EAE9C,AAAI,KAAK,SAAS,IAAa,EAC7B,KAAK,SAAS,EAAW,GAAK,EAAG,KAAK,SAAS,EAAW,GAAI,CAAG,EAEjE,KAAK,SAAS,OAAO,EAAU,EAAG,EAAW,CAAG,CAEpD,EAOA,EAAK,OAAO,UAAU,UAAY,UAAY,CAC5C,GAAI,KAAK,WAAY,MAAO,MAAK,WAKjC,OAHI,GAAe,EACf,EAAiB,KAAK,SAAS,OAE1B,EAAI,EAAG,EAAI,EAAgB,GAAK,EAAG,CAC1C,GAAI,GAAM,KAAK,SAAS,GACxB,GAAgB,EAAM,CACx
B,CAEA,MAAO,MAAK,WAAa,KAAK,KAAK,CAAY,CACjD,EAQA,EAAK,OAAO,UAAU,IAAM,SAAU,EAAa,CAOjD,OANI,GAAa,EACb,EAAI,KAAK,SAAU,EAAI,EAAY,SACnC,EAAO,EAAE,OAAQ,EAAO,EAAE,OAC1B,EAAO,EAAG,EAAO,EACjB,EAAI,EAAG,EAAI,EAER,EAAI,GAAQ,EAAI,GACrB,EAAO,EAAE,GAAI,EAAO,EAAE,GACtB,AAAI,EAAO,EACT,GAAK,EACA,AAAI,EAAO,EAChB,GAAK,EACI,GAAQ,GACjB,IAAc,EAAE,EAAI,GAAK,EAAE,EAAI,GAC/B,GAAK,EACL,GAAK,GAIT,MAAO,EACT,EASA,EAAK,OAAO,UAAU,WAAa,SAAU,EAAa,CACxD,MAAO,MAAK,IAAI,CAAW,EAAI,KAAK,UAAU,GAAK,CACrD,EAOA,EAAK,OAAO,UAAU,QAAU,UAAY,CAG1C,OAFI,GAAS,GAAI,OAAO,KAAK,SAAS,OAAS,CAAC,EAEvC,EAAI,EAAG,EAAI,EAAG,EAAI,KAAK,SAAS,OAAQ,GAAK,EAAG,IACvD,EAAO,GAAK,KAAK,SAAS,GAG5B,MAAO,EACT,EAOA,EAAK,OAAO,UAAU,OAAS,UAAY,CACzC,MAAO,MAAK,QACd,EAEA;AAAA;AAAA;AAAA;AAAA,GAiBA,EAAK,QAAW,UAAU,CACxB,GAAI,GAAY,CACZ,QAAY,MACZ,OAAW,OACX,KAAS,OACT,KAAS,OACT,KAAS,MACT,IAAQ,MACR,KAAS,KACT,MAAU,MACV,IAAQ,IACR,MAAU,MACV,QAAY,MACZ,MAAU,MACV,KAAS,MACT,MAAU,KACV,QAAY,MACZ,QAAY,MACZ,QAAY,MACZ,MAAU,KACV,MAAU,MACV,OAAW,MACX,KAAS,KACX,EAEA,EAAY,CACV,MAAU,KACV,MAAU,GACV,MAAU,KACV,MAAU,KACV,KAAS,KACT,IAAQ,GACR,KAAS,EACX,EAEA,EAAI,WACJ,EAAI,WACJ,EAAI,EAAI,aACR,EAAI,EAAI,WAER,EAAO,KAAO,EAAI,KAAO,EAAI,EAC7B,EAAO,KAAO,EAAI,KAAO,EAAI,EAAI,IAAM,EAAI,MAC3C,EAAO,KAAO,EAAI,KAAO,EAAI,EAAI,EAAI,EACrC,EAAM,KAAO,EAAI,KAAO,EAEtB,EAAU,GAAI,QAAO,CAAI,EACzB,EAAU,GAAI,QAAO,CAAI,EACzB,EAAU,GAAI,QAAO,CAAI,EACzB,EAAS,GAAI,QAAO,CAAG,EAEvB,EAAQ,kBACR,EAAS,iBACT,EAAQ,aACR,EAAS,kBACT,EAAU,KACV,EAAW,cACX,EAAW,GAAI,QAAO,oBAAoB,EAC1C,EAAW,GAAI,QAAO,IAAM,EAAI,EAAI,cAAc,EAElD,EAAQ,mBACR,EAAO,2IAEP,EAAO,iDAEP,EAAO,sFACP,EAAQ,oBAER,EAAO,WACP,EAAS,MACT,EAAQ,GAAI,QAAO,IAAM,EAAI,EAAI,cAAc,EAE/C,EAAgB,SAAuB,EAAG,CAC5C,GAAI,GACF,EACA,EACA,EACA,EACA,EACA,EAEF,GAAI,EAAE,OAAS,EAAK,MAAO,GAiB3B,GAfA,EAAU,EAAE,OAAO,EAAE,CAAC,EAClB,GAAW,KACb,GAAI,EAAQ,YAAY,EAAI,EAAE,OAAO,CAAC,GAIxC,EAAK,EACL,EAAM,EAEN,AAAI,EAAG,KAAK,CAAC,EAAK,EAAI,EAAE,QAAQ,EAAG,MAAM,EAChC,EAAI,KAAK,CAAC,GAAK,GAAI,EAAE,QAAQ,EAAI,MAAM,GAGhD,EAAK,EACL,EAAM,EACF,EAAG,KAAK,CAAC,EAAG,CACd,GAAI,GAAK,EAAG,KAAK,CAAC,EAClB,EAAK,EACD,EAAG,KAAK,EAAG,EAAE,GACf,GAAK,EACL,EAAI,EAAE,QAAQ,EAAG,EAAE,EAEvB,SAAW,EAAI,KAAK,CAAC,EAAG,CACtB,GAAI,GAAK,EAAI,KAAK,CAAC,EACnB,EAAO,EAAG,GACV,EAAM,EACF,EAAI,KAAK,CAAI,GACf,GAAI,EACJ,EAAM,EACN,EAAM,EACN,EAAM,EACN,AAAI,EAAI,KAAK,CAAC,EAAK,EAAI,EAAI,IACtB,AAAI,EAAI,KAAK,CAAC,EAAK,GAAK,EAAS,EAAI,EAAE,QAAQ,EAAG,EAAE,GAChD,EAAI,KAAK,CAAC,GAAK,GAAI,EAAI,KAEpC,CAIA,GADA,EAAK,EACD,EAAG,KAAK,CAAC,EAAG,CACd,GAAI,GAAK,EAAG,KAAK,CAAC,EAClB,EAAO,EAAG,GACV,EAAI,EAAO,GACb,CAIA,GADA,EAAK,EACD,EAAG,KAAK,CAAC,EAAG,CACd,GAAI,GAAK,EAAG,KAAK,CAAC,EAClB,EAAO,EAAG,GACV,EAAS,EAAG,GACZ,EAAK,EACD,EAAG,KAAK,CAAI,GACd,GAAI,EAAO,EAAU,GAEzB,CAIA,GADA,EAAK,EACD,EAAG,KAAK,CAAC,EAAG,CACd,GAAI,GAAK,EAAG,KAAK,CAAC,EAClB,EAAO,EAAG,GACV,EAAS,EAAG,GACZ,EAAK,EACD,EAAG,KAAK,CAAI,GACd,GAAI,EAAO,EAAU,GAEzB,CAKA,GAFA,EAAK,EACL,EAAM,EACF,EAAG,KAAK,CAAC,EAAG,CACd,GAAI,GAAK,EAAG,KAAK,CAAC,EAClB,EAAO,EAAG,GACV,EAAK,EACD,EAAG,KAAK,CAAI,GACd,GAAI,EAER,SAAW,EAAI,KAAK,CAAC,EAAG,CACtB,GAAI,GAAK,EAAI,KAAK,CAAC,EACnB,EAAO,EAAG,GAAK,EAAG,GAClB,EAAM,EACF,EAAI,KAAK,CAAI,GACf,GAAI,EAER,CAIA,GADA,EAAK,EACD,EAAG,KAAK,CAAC,EAAG,CACd,GAAI,GAAK,EAAG,KAAK,CAAC,EAClB,EAAO,EAAG,GACV,EAAK,EACL,EAAM,EACN,EAAM,EACF,GAAG,KAAK,CAAI,GAAM,EAAI,KAAK,CAAI,GAAK,CAAE,EAAI,KAAK,CAAI,IACrD,GAAI,EAER,CAEA,SAAK,EACL,EAAM,EACF,EAAG,KAAK,CAAC,GAAK,EAAI,KAAK,CAAC,GAC1B,GAAK,EACL,EAAI,EAAE,QAAQ,EAAG,EAAE,GAKjB,GAAW,KACb,GAAI,EAAQ,YAAY,EAAI,EAAE,OAAO,CAAC,GAGjC,CACT,EAEA,MAAO,UAAU,EAAO,CACtB,MAAO,GAAM,OAAO,CAAa,CACnC,CACF,EAAG,EAEH,EAAK,SAAS,iBAAiB,EAAK,QAAS,SAAS,EACtD;AAAA;AAAA;AAAA,GAkBA,EAAK,uBAAyB,SAAU,EAAW,
CACjD,GAAI,GAAQ,EAAU,OAAO,SAAU,EAAM,EAAU,CACrD,SAAK,GAAY,EACV,CACT,EAAG,CAAC,CAAC,EAEL,MAAO,UAAU,EAAO,CACtB,GAAI,GAAS,EAAM,EAAM,SAAS,KAAO,EAAM,SAAS,EAAG,MAAO,EACpE,CACF,EAeA,EAAK,eAAiB,EAAK,uBAAuB,CAChD,IACA,OACA,QACA,SACA,QACA,MACA,SACA,OACA,KACA,QACA,KACA,MACA,MACA,MACA,KACA,KACA,KACA,UACA,OACA,MACA,KACA,MACA,SACA,QACA,OACA,MACA,KACA,OACA,SACA,OACA,OACA,QACA,MACA,OACA,MACA,MACA,MACA,MACA,OACA,KACA,MACA,OACA,MACA,MACA,MACA,UACA,IACA,KACA,KACA,OACA,KACA,KACA,MACA,OACA,QACA,MACA,OACA,SACA,MACA,KACA,QACA,OACA,OACA,KACA,UACA,KACA,MACA,MACA,KACA,MACA,QACA,KACA,OACA,KACA,QACA,MACA,MACA,SACA,OACA,MACA,OACA,MACA,SACA,QACA,KACA,OACA,OACA,OACA,MACA,QACA,OACA,OACA,QACA,QACA,OACA,OACA,MACA,KACA,MACA,OACA,KACA,QACA,MACA,KACA,OACA,OACA,OACA,QACA,QACA,QACA,MACA,OACA,MACA,OACA,OACA,QACA,MACA,MACA,MACF,CAAC,EAED,EAAK,SAAS,iBAAiB,EAAK,eAAgB,gBAAgB,EACpE;AAAA;AAAA;AAAA,GAoBA,EAAK,QAAU,SAAU,EAAO,CAC9B,MAAO,GAAM,OAAO,SAAU,EAAG,CAC/B,MAAO,GAAE,QAAQ,OAAQ,EAAE,EAAE,QAAQ,OAAQ,EAAE,CACjD,CAAC,CACH,EAEA,EAAK,SAAS,iBAAiB,EAAK,QAAS,SAAS,EACtD;AAAA;AAAA;AAAA,GA0BA,EAAK,SAAW,UAAY,CAC1B,KAAK,MAAQ,GACb,KAAK,MAAQ,CAAC,EACd,KAAK,GAAK,EAAK,SAAS,QACxB,EAAK,SAAS,SAAW,CAC3B,EAUA,EAAK,SAAS,QAAU,EASxB,EAAK,SAAS,UAAY,SAAU,EAAK,CAGvC,OAFI,GAAU,GAAI,GAAK,SAAS,QAEvB,EAAI,EAAG,EAAM,EAAI,OAAQ,EAAI,EAAK,IACzC,EAAQ,OAAO,EAAI,EAAE,EAGvB,SAAQ,OAAO,EACR,EAAQ,IACjB,EAWA,EAAK,SAAS,WAAa,SAAU,EAAQ,CAC3C,MAAI,gBAAkB,GACb,EAAK,SAAS,gBAAgB,EAAO,KAAM,EAAO,YAAY,EAE9D,EAAK,SAAS,WAAW,EAAO,IAAI,CAE/C,EAiBA,EAAK,SAAS,gBAAkB,SAAU,EAAK,EAAc,CAS3D,OARI,GAAO,GAAI,GAAK,SAEhB,EAAQ,CAAC,CACX,KAAM,EACN,eAAgB,EAChB,IAAK,CACP,CAAC,EAEM,EAAM,QAAQ,CACnB,GAAI,GAAQ,EAAM,IAAI,EAGtB,GAAI,EAAM,IAAI,OAAS,EAAG,CACxB,GAAI,GAAO,EAAM,IAAI,OAAO,CAAC,EACzB,EAEJ,AAAI,IAAQ,GAAM,KAAK,MACrB,EAAa,EAAM,KAAK,MAAM,GAE9B,GAAa,GAAI,GAAK,SACtB,EAAM,KAAK,MAAM,GAAQ,GAGvB,EAAM,IAAI,QAAU,GACtB,GAAW,MAAQ,IAGrB,EAAM,KAAK,CACT,KAAM,EACN,eAAgB,EAAM,eACtB,IAAK,EAAM,IAAI,MAAM,CAAC,CACxB,CAAC,CACH,CAEA,GAAI,EAAM,gBAAkB,EAK5B,IAAI,KAAO,GAAM,KAAK,MACpB,GAAI,GAAgB,EAAM,KAAK,MAAM,SAChC,CACL,GAAI,GAAgB,GAAI,GAAK,SAC7B,EAAM,KAAK,MAAM,KAAO,CAC1B,CAgCA,GA9BI,EAAM,IAAI,QAAU,GACtB,GAAc,MAAQ,IAGxB,EAAM,KAAK,CACT,KAAM,EACN,eAAgB,EAAM,eAAiB,EACvC,IAAK,EAAM,GACb,CAAC,EAKG,EAAM,IAAI,OAAS,GACrB,EAAM,KAAK,CACT,KAAM,EAAM,KACZ,eAAgB,EAAM,eAAiB,EACvC,IAAK,EAAM,IAAI,MAAM,CAAC,CACxB,CAAC,EAKC,EAAM,IAAI,QAAU,GACtB,GAAM,KAAK,MAAQ,IAMjB,EAAM,IAAI,QAAU,EAAG,CACzB,GAAI,KAAO,GAAM,KAAK,MACpB,GAAI,GAAmB,EAAM,KAAK,MAAM,SACnC,CACL,GAAI,GAAmB,GAAI,GAAK,SAChC,EAAM,KAAK,MAAM,KAAO,CAC1B,CAEA,AAAI,EAAM,IAAI,QAAU,GACtB,GAAiB,MAAQ,IAG3B,EAAM,KAAK,CACT,KAAM,EACN,eAAgB,EAAM,eAAiB,EACvC,IAAK,EAAM,IAAI,MAAM,CAAC,CACxB,CAAC,CACH,CAKA,GAAI,EAAM,IAAI,OAAS,EAAG,CACxB,GAAI,GAAQ,EAAM,IAAI,OAAO,CAAC,EAC1B,EAAQ,EAAM,IAAI,OAAO,CAAC,EAC1B,EAEJ,AAAI,IAAS,GAAM,KAAK,MACtB,EAAgB,EAAM,KAAK,MAAM,GAEjC,GAAgB,GAAI,GAAK,SACzB,EAAM,KAAK,MAAM,GAAS,GAGxB,EAAM,IAAI,QAAU,GACtB,GAAc,MAAQ,IAGxB,EAAM,KAAK,CACT,KAAM,EACN,eAAgB,EAAM,eAAiB,EACvC,IAAK,EAAQ,EAAM,IAAI,MAAM,CAAC,CAChC,CAAC,CACH,EACF,CAEA,MAAO,EACT,EAYA,EAAK,SAAS,WAAa,SAAU,EAAK,CAYxC,OAXI,GAAO,GAAI,GAAK,SAChB,EAAO,EAUF,EAAI,EAAG,EAAM,EAAI,OAAQ,EAAI,EAAK,IAAK,CAC9C,GAAI,GAAO,EAAI,GACX,EAAS,GAAK,EAAM,EAExB,GAAI,GAAQ,IACV,EAAK,MAAM,GAAQ,EACnB,EAAK,MAAQ,MAER,CACL,GAAI,GAAO,GAAI,GAAK,SACpB,EAAK,MAAQ,EAEb,EAAK,MAAM,GAAQ,EACnB,EAAO,CACT,CACF,CAEA,MAAO,EACT,EAYA,EAAK,SAAS,UAAU,QAAU,UAAY,CAQ5C,OAPI,GAAQ,CAAC,EAET,EAAQ,CAAC,CACX,OAAQ,GACR,KAAM,IACR,CAAC,EAEM,EAAM,QAAQ,CACnB,GAAI,GAAQ,EAAM,IAAI,EAClB,EAAQ,OAAO,KAAK,EAAM,KAAK,KAAK,EACpC,EAAM,EAAM,OAEhB,AAAI,EAAM,KAAK,OAKb,GAAM,OAAO,OAAO,CAAC,EACrB,EAAM,KAAK,EAAM,MAAM,GA
GzB,OAAS,GAAI,EAAG,EAAI,EAAK,IAAK,CAC5B,GAAI,GAAO,EAAM,GAEjB,EAAM,KAAK,CACT,OAAQ,EAAM,OAAO,OAAO,CAAI,EAChC,KAAM,EAAM,KAAK,MAAM,EACzB,CAAC,CACH,CACF,CAEA,MAAO,EACT,EAYA,EAAK,SAAS,UAAU,SAAW,UAAY,CAS7C,GAAI,KAAK,KACP,MAAO,MAAK,KAOd,OAJI,GAAM,KAAK,MAAQ,IAAM,IACzB,EAAS,OAAO,KAAK,KAAK,KAAK,EAAE,KAAK,EACtC,EAAM,EAAO,OAER,EAAI,EAAG,EAAI,EAAK,IAAK,CAC5B,GAAI,GAAQ,EAAO,GACf,EAAO,KAAK,MAAM,GAEtB,EAAM,EAAM,EAAQ,EAAK,EAC3B,CAEA,MAAO,EACT,EAYA,EAAK,SAAS,UAAU,UAAY,SAAU,EAAG,CAU/C,OATI,GAAS,GAAI,GAAK,SAClB,EAAQ,OAER,EAAQ,CAAC,CACX,MAAO,EACP,OAAQ,EACR,KAAM,IACR,CAAC,EAEM,EAAM,QAAQ,CACnB,EAAQ,EAAM,IAAI,EAWlB,OALI,GAAS,OAAO,KAAK,EAAM,MAAM,KAAK,EACtC,EAAO,EAAO,OACd,EAAS,OAAO,KAAK,EAAM,KAAK,KAAK,EACrC,EAAO,EAAO,OAET,EAAI,EAAG,EAAI,EAAM,IAGxB,OAFI,GAAQ,EAAO,GAEV,EAAI,EAAG,EAAI,EAAM,IAAK,CAC7B,GAAI,GAAQ,EAAO,GAEnB,GAAI,GAAS,GAAS,GAAS,IAAK,CAClC,GAAI,GAAO,EAAM,KAAK,MAAM,GACxB,EAAQ,EAAM,MAAM,MAAM,GAC1B,EAAQ,EAAK,OAAS,EAAM,MAC5B,EAAO,OAEX,AAAI,IAAS,GAAM,OAAO,MAIxB,GAAO,EAAM,OAAO,MAAM,GAC1B,EAAK,MAAQ,EAAK,OAAS,GAM3B,GAAO,GAAI,GAAK,SAChB,EAAK,MAAQ,EACb,EAAM,OAAO,MAAM,GAAS,GAG9B,EAAM,KAAK,CACT,MAAO,EACP,OAAQ,EACR,KAAM,CACR,CAAC,CACH,CACF,CAEJ,CAEA,MAAO,EACT,EACA,EAAK,SAAS,QAAU,UAAY,CAClC,KAAK,aAAe,GACpB,KAAK,KAAO,GAAI,GAAK,SACrB,KAAK,eAAiB,CAAC,EACvB,KAAK,eAAiB,CAAC,CACzB,EAEA,EAAK,SAAS,QAAQ,UAAU,OAAS,SAAU,EAAM,CACvD,GAAI,GACA,EAAe,EAEnB,GAAI,EAAO,KAAK,aACd,KAAM,IAAI,OAAO,6BAA6B,EAGhD,OAAS,GAAI,EAAG,EAAI,EAAK,QAAU,EAAI,KAAK,aAAa,QACnD,EAAK,IAAM,KAAK,aAAa,GAD8B,IAE/D,IAGF,KAAK,SAAS,CAAY,EAE1B,AAAI,KAAK,eAAe,QAAU,EAChC,EAAO,KAAK,KAEZ,EAAO,KAAK,eAAe,KAAK,eAAe,OAAS,GAAG,MAG7D,OAAS,GAAI,EAAc,EAAI,EAAK,OAAQ,IAAK,CAC/C,GAAI,GAAW,GAAI,GAAK,SACpB,EAAO,EAAK,GAEhB,EAAK,MAAM,GAAQ,EAEnB,KAAK,eAAe,KAAK,CACvB,OAAQ,EACR,KAAM,EACN,MAAO,CACT,CAAC,EAED,EAAO,CACT,CAEA,EAAK,MAAQ,GACb,KAAK,aAAe,CACtB,EAEA,EAAK,SAAS,QAAQ,UAAU,OAAS,UAAY,CACnD,KAAK,SAAS,CAAC,CACjB,EAEA,EAAK,SAAS,QAAQ,UAAU,SAAW,SAAU,EAAQ,CAC3D,OAAS,GAAI,KAAK,eAAe,OAAS,EAAG,GAAK,EAAQ,IAAK,CAC7D,GAAI,GAAO,KAAK,eAAe,GAC3B,EAAW,EAAK,MAAM,SAAS,EAEnC,AAAI,IAAY,MAAK,eACnB,EAAK,OAAO,MAAM,EAAK,MAAQ,KAAK,eAAe,GAInD,GAAK,MAAM,KAAO,EAElB,KAAK,eAAe,GAAY,EAAK,OAGvC,KAAK,eAAe,IAAI,CAC1B,CACF,EACA;AAAA;AAAA;AAAA,GAqBA,EAAK,MAAQ,SAAU,EAAO,CAC5B,KAAK,cAAgB,EAAM,cAC3B,KAAK,aAAe,EAAM,aAC1B,KAAK,SAAW,EAAM,SACtB,KAAK,OAAS,EAAM,OACpB,KAAK,SAAW,EAAM,QACxB,EAyEA,EAAK,MAAM,UAAU,OAAS,SAAU,EAAa,CACnD,MAAO,MAAK,MAAM,SAAU,EAAO,CACjC,GAAI,GAAS,GAAI,GAAK,YAAY,EAAa,CAAK,EACpD,EAAO,MAAM,CACf,CAAC,CACH,EA2BA,EAAK,MAAM,UAAU,MAAQ,SAAU,EAAI,CAoBzC,OAZI,GAAQ,GAAI,GAAK,MAAM,KAAK,MAAM,EAClC,EAAiB,OAAO,OAAO,IAAI,EACnC,EAAe,OAAO,OAAO,IAAI,EACjC,EAAiB,OAAO,OAAO,IAAI,EACnC,EAAkB,OAAO,OAAO,IAAI,EACpC,EAAoB,OAAO,OAAO,IAAI,EAOjC,EAAI,EAAG,EAAI,KAAK,OAAO,OAAQ,IACtC,EAAa,KAAK,OAAO,IAAM,GAAI,GAAK,OAG1C,EAAG,KAAK,EAAO,CAAK,EAEpB,OAAS,GAAI,EAAG,EAAI,EAAM,QAAQ,OAAQ,IAAK,CAS7C,GAAI,GAAS,EAAM,QAAQ,GACvB,EAAQ,KACR,EAAgB,EAAK,IAAI,MAE7B,AAAI,EAAO,YACT,EAAQ,KAAK,SAAS,UAAU,EAAO,KAAM,CAC3C,OAAQ,EAAO,MACjB,CAAC,EAED,EAAQ,CAAC,EAAO,IAAI,EAGtB,OAAS,GAAI,EAAG,EAAI,EAAM,OAAQ,IAAK,CACrC,GAAI,GAAO,EAAM,GAQjB,EAAO,KAAO,EAOd,GAAI,GAAe,EAAK,SAAS,WAAW,CAAM,EAC9C,EAAgB,KAAK,SAAS,UAAU,CAAY,EAAE,QAAQ,EAQlE,GAAI,EAAc,SAAW,GAAK,EAAO,WAAa,EAAK,MAAM,SAAS,SAAU,CAClF,OAAS,GAAI,EAAG,EAAI,EAAO,OAAO,OAAQ,IAAK,CAC7C,GAAI,GAAQ,EAAO,OAAO,GAC1B,EAAgB,GAAS,EAAK,IAAI,KACpC,CAEA,KACF,CAEA,OAAS,GAAI,EAAG,EAAI,EAAc,OAAQ,IASxC,OAJI,GAAe,EAAc,GAC7B,EAAU,KAAK,cAAc,GAC7B,EAAY,EAAQ,OAEf,EAAI,EAAG,EAAI,EAAO,OAAO,OAAQ,IAAK,CAS7C,GAAI,GAAQ,EAAO,OAAO,GACtB,EAAe,EAAQ,GACvB,EAAuB,OAAO,KAAK,CAAY,EAC/C,EAAY,EAAe,IAAM,EACjC,EAAuB,GAAI,GAAK,IAAI,CAAoB,EAoB5D,GAbI,EAAO,UAAY,EAAK,MAAM,
SAAS,UACzC,GAAgB,EAAc,MAAM,CAAoB,EAEpD,EAAgB,KAAW,QAC7B,GAAgB,GAAS,EAAK,IAAI,WASlC,EAAO,UAAY,EAAK,MAAM,SAAS,WAAY,CACrD,AAAI,EAAkB,KAAW,QAC/B,GAAkB,GAAS,EAAK,IAAI,OAGtC,EAAkB,GAAS,EAAkB,GAAO,MAAM,CAAoB,EAO9E,QACF,CAeA,GANA,EAAa,GAAO,OAAO,EAAW,EAAO,MAAO,SAAU,GAAG,GAAG,CAAE,MAAO,IAAI,EAAE,CAAC,EAMhF,GAAe,GAInB,QAAS,GAAI,EAAG,EAAI,EAAqB,OAAQ,IAAK,CAOpD,GAAI,GAAsB,EAAqB,GAC3C,EAAmB,GAAI,GAAK,SAAU,EAAqB,CAAK,EAChE,EAAW,EAAa,GACxB,EAEJ,AAAK,GAAa,EAAe,MAAuB,OACtD,EAAe,GAAoB,GAAI,GAAK,UAAW,EAAc,EAAO,CAAQ,EAEpF,EAAW,IAAI,EAAc,EAAO,CAAQ,CAGhD,CAEA,EAAe,GAAa,GAC9B,CAEJ,CAQA,GAAI,EAAO,WAAa,EAAK,MAAM,SAAS,SAC1C,OAAS,GAAI,EAAG,EAAI,EAAO,OAAO,OAAQ,IAAK,CAC7C,GAAI,GAAQ,EAAO,OAAO,GAC1B,EAAgB,GAAS,EAAgB,GAAO,UAAU,CAAa,CACzE,CAEJ,CAUA,OAHI,GAAqB,EAAK,IAAI,SAC9B,EAAuB,EAAK,IAAI,MAE3B,EAAI,EAAG,EAAI,KAAK,OAAO,OAAQ,IAAK,CAC3C,GAAI,GAAQ,KAAK,OAAO,GAExB,AAAI,EAAgB,IAClB,GAAqB,EAAmB,UAAU,EAAgB,EAAM,GAGtE,EAAkB,IACpB,GAAuB,EAAqB,MAAM,EAAkB,EAAM,EAE9E,CAEA,GAAI,GAAoB,OAAO,KAAK,CAAc,EAC9C,EAAU,CAAC,EACX,EAAU,OAAO,OAAO,IAAI,EAYhC,GAAI,EAAM,UAAU,EAAG,CACrB,EAAoB,OAAO,KAAK,KAAK,YAAY,EAEjD,OAAS,GAAI,EAAG,EAAI,EAAkB,OAAQ,IAAK,CACjD,GAAI,GAAmB,EAAkB,GACrC,EAAW,EAAK,SAAS,WAAW,CAAgB,EACxD,EAAe,GAAoB,GAAI,GAAK,SAC9C,CACF,CAEA,OAAS,GAAI,EAAG,EAAI,EAAkB,OAAQ,IAAK,CASjD,GAAI,GAAW,EAAK,SAAS,WAAW,EAAkB,EAAE,EACxD,EAAS,EAAS,OAEtB,GAAI,EAAC,EAAmB,SAAS,CAAM,GAInC,GAAqB,SAAS,CAAM,EAIxC,IAAI,GAAc,KAAK,aAAa,GAChC,EAAQ,EAAa,EAAS,WAAW,WAAW,CAAW,EAC/D,EAEJ,GAAK,GAAW,EAAQ,MAAa,OACnC,EAAS,OAAS,EAClB,EAAS,UAAU,QAAQ,EAAe,EAAS,MAC9C,CACL,GAAI,GAAQ,CACV,IAAK,EACL,MAAO,EACP,UAAW,EAAe,EAC5B,EACA,EAAQ,GAAU,EAClB,EAAQ,KAAK,CAAK,CACpB,EACF,CAKA,MAAO,GAAQ,KAAK,SAAU,GAAG,GAAG,CAClC,MAAO,IAAE,MAAQ,GAAE,KACrB,CAAC,CACH,EAUA,EAAK,MAAM,UAAU,OAAS,UAAY,CACxC,GAAI,GAAgB,OAAO,KAAK,KAAK,aAAa,EAC/C,KAAK,EACL,IAAI,SAAU,EAAM,CACnB,MAAO,CAAC,EAAM,KAAK,cAAc,EAAK,CACxC,EAAG,IAAI,EAEL,EAAe,OAAO,KAAK,KAAK,YAAY,EAC7C,IAAI,SAAU,EAAK,CAClB,MAAO,CAAC,EAAK,KAAK,aAAa,GAAK,OAAO,CAAC,CAC9C,EAAG,IAAI,EAET,MAAO,CACL,QAAS,EAAK,QACd,OAAQ,KAAK,OACb,aAAc,EACd,cAAe,EACf,SAAU,KAAK,SAAS,OAAO,CACjC,CACF,EAQA,EAAK,MAAM,KAAO,SAAU,EAAiB,CAC3C,GAAI,GAAQ,CAAC,EACT,EAAe,CAAC,EAChB,EAAoB,EAAgB,aACpC,EAAgB,OAAO,OAAO,IAAI,EAClC,EAA0B,EAAgB,cAC1C,EAAkB,GAAI,GAAK,SAAS,QACpC,EAAW,EAAK,SAAS,KAAK,EAAgB,QAAQ,EAE1D,AAAI,EAAgB,SAAW,EAAK,SAClC,EAAK,MAAM,KAAK,4EAA8E,EAAK,QAAU,sCAAwC,EAAgB,QAAU,GAAG,EAGpL,OAAS,GAAI,EAAG,EAAI,EAAkB,OAAQ,IAAK,CACjD,GAAI,GAAQ,EAAkB,GAC1B,EAAM,EAAM,GACZ,EAAW,EAAM,GAErB,EAAa,GAAO,GAAI,GAAK,OAAO,CAAQ,CAC9C,CAEA,OAAS,GAAI,EAAG,EAAI,EAAwB,OAAQ,IAAK,CACvD,GAAI,GAAQ,EAAwB,GAChC,EAAO,EAAM,GACb,EAAU,EAAM,GAEpB,EAAgB,OAAO,CAAI,EAC3B,EAAc,GAAQ,CACxB,CAEA,SAAgB,OAAO,EAEvB,EAAM,OAAS,EAAgB,OAE/B,EAAM,aAAe,EACrB,EAAM,cAAgB,EACtB,EAAM,SAAW,EAAgB,KACjC,EAAM,SAAW,EAEV,GAAI,GAAK,MAAM,CAAK,CAC7B,EACA;AAAA;AAAA;AAAA,GA6BA,EAAK,QAAU,UAAY,CACzB,KAAK,KAAO,KACZ,KAAK,QAAU,OAAO,OAAO,IAAI,EACjC,KAAK,WAAa,OAAO,OAAO,IAAI,EACpC,KAAK,cAAgB,OAAO,OAAO,IAAI,EACvC,KAAK,qBAAuB,CAAC,EAC7B,KAAK,aAAe,CAAC,EACrB,KAAK,UAAY,EAAK,UACtB,KAAK,SAAW,GAAI,GAAK,SACzB,KAAK,eAAiB,GAAI,GAAK,SAC/B,KAAK,cAAgB,EACrB,KAAK,GAAK,IACV,KAAK,IAAM,IACX,KAAK,UAAY,EACjB,KAAK,kBAAoB,CAAC,CAC5B,EAcA,EAAK,QAAQ,UAAU,IAAM,SAAU,EAAK,CAC1C,KAAK,KAAO,CACd,EAkCA,EAAK,QAAQ,UAAU,MAAQ,SAAU,EAAW,EAAY,CAC9D,GAAI,KAAK,KAAK,CAAS,EACrB,KAAM,IAAI,YAAY,UAAY,EAAY,kCAAkC,EAGlF,KAAK,QAAQ,GAAa,GAAc,CAAC,CAC3C,EAUA,EAAK,QAAQ,UAAU,EAAI,SAAU,EAAQ,CAC3C,AAAI,EAAS,EACX,KAAK,GAAK,EACL,AAAI,EAAS,EAClB,KAAK,GAAK,EAEV,KAAK,GAAK,CAEd,EASA,EAAK,QAAQ,UAAU,GAAK,SAAU,EAAQ,CAC5C,KAAK,IAAM,CACb,EAmBA,EAAK,QAAQ,UAAU,IAAM,SAAU,EAAK,EAAY,CACtD,GAAI,GAAS,EAAI,KAAK,MAClB,EAAS,OAAO,KAA
K,KAAK,OAAO,EAErC,KAAK,WAAW,GAAU,GAAc,CAAC,EACzC,KAAK,eAAiB,EAEtB,OAAS,GAAI,EAAG,EAAI,EAAO,OAAQ,IAAK,CACtC,GAAI,GAAY,EAAO,GACnB,EAAY,KAAK,QAAQ,GAAW,UACpC,EAAQ,EAAY,EAAU,CAAG,EAAI,EAAI,GACzC,EAAS,KAAK,UAAU,EAAO,CAC7B,OAAQ,CAAC,CAAS,CACpB,CAAC,EACD,EAAQ,KAAK,SAAS,IAAI,CAAM,EAChC,EAAW,GAAI,GAAK,SAAU,EAAQ,CAAS,EAC/C,EAAa,OAAO,OAAO,IAAI,EAEnC,KAAK,qBAAqB,GAAY,EACtC,KAAK,aAAa,GAAY,EAG9B,KAAK,aAAa,IAAa,EAAM,OAGrC,OAAS,GAAI,EAAG,EAAI,EAAM,OAAQ,IAAK,CACrC,GAAI,GAAO,EAAM,GAUjB,GARI,EAAW,IAAS,MACtB,GAAW,GAAQ,GAGrB,EAAW,IAAS,EAIhB,KAAK,cAAc,IAAS,KAAW,CACzC,GAAI,GAAU,OAAO,OAAO,IAAI,EAChC,EAAQ,OAAY,KAAK,UACzB,KAAK,WAAa,EAElB,OAAS,GAAI,EAAG,EAAI,EAAO,OAAQ,IACjC,EAAQ,EAAO,IAAM,OAAO,OAAO,IAAI,EAGzC,KAAK,cAAc,GAAQ,CAC7B,CAGA,AAAI,KAAK,cAAc,GAAM,GAAW,IAAW,MACjD,MAAK,cAAc,GAAM,GAAW,GAAU,OAAO,OAAO,IAAI,GAKlE,OAAS,GAAI,EAAG,EAAI,KAAK,kBAAkB,OAAQ,IAAK,CACtD,GAAI,GAAc,KAAK,kBAAkB,GACrC,EAAW,EAAK,SAAS,GAE7B,AAAI,KAAK,cAAc,GAAM,GAAW,GAAQ,IAAgB,MAC9D,MAAK,cAAc,GAAM,GAAW,GAAQ,GAAe,CAAC,GAG9D,KAAK,cAAc,GAAM,GAAW,GAAQ,GAAa,KAAK,CAAQ,CACxE,CACF,CAEF,CACF,EAOA,EAAK,QAAQ,UAAU,6BAA+B,UAAY,CAOhE,OALI,GAAY,OAAO,KAAK,KAAK,YAAY,EACzC,EAAiB,EAAU,OAC3B,EAAc,CAAC,EACf,EAAqB,CAAC,EAEjB,EAAI,EAAG,EAAI,EAAgB,IAAK,CACvC,GAAI,GAAW,EAAK,SAAS,WAAW,EAAU,EAAE,EAChD,EAAQ,EAAS,UAErB,EAAmB,IAAW,GAAmB,GAAS,GAC1D,EAAmB,IAAU,EAE7B,EAAY,IAAW,GAAY,GAAS,GAC5C,EAAY,IAAU,KAAK,aAAa,EAC1C,CAIA,OAFI,GAAS,OAAO,KAAK,KAAK,OAAO,EAE5B,EAAI,EAAG,EAAI,EAAO,OAAQ,IAAK,CACtC,GAAI,GAAY,EAAO,GACvB,EAAY,GAAa,EAAY,GAAa,EAAmB,EACvE,CAEA,KAAK,mBAAqB,CAC5B,EAOA,EAAK,QAAQ,UAAU,mBAAqB,UAAY,CAMtD,OALI,GAAe,CAAC,EAChB,EAAY,OAAO,KAAK,KAAK,oBAAoB,EACjD,EAAkB,EAAU,OAC5B,EAAe,OAAO,OAAO,IAAI,EAE5B,EAAI,EAAG,EAAI,EAAiB,IAAK,CAaxC,OAZI,GAAW,EAAK,SAAS,WAAW,EAAU,EAAE,EAChD,EAAY,EAAS,UACrB,EAAc,KAAK,aAAa,GAChC,EAAc,GAAI,GAAK,OACvB,EAAkB,KAAK,qBAAqB,GAC5C,EAAQ,OAAO,KAAK,CAAe,EACnC,EAAc,EAAM,OAGpB,EAAa,KAAK,QAAQ,GAAW,OAAS,EAC9C,EAAW,KAAK,WAAW,EAAS,QAAQ,OAAS,EAEhD,EAAI,EAAG,EAAI,EAAa,IAAK,CACpC,GAAI,GAAO,EAAM,GACb,EAAK,EAAgB,GACrB,EAAY,KAAK,cAAc,GAAM,OACrC,EAAK,EAAO,EAEhB,AAAI,EAAa,KAAU,OACzB,GAAM,EAAK,IAAI,KAAK,cAAc,GAAO,KAAK,aAAa,EAC3D,EAAa,GAAQ,GAErB,EAAM,EAAa,GAGrB,EAAQ,EAAQ,OAAK,IAAM,GAAK,GAAO,MAAK,IAAO,GAAI,KAAK,GAAK,KAAK,GAAM,GAAc,KAAK,mBAAmB,KAAe,GACjI,GAAS,EACT,GAAS,EACT,EAAqB,KAAK,MAAM,EAAQ,GAAI,EAAI,IAQhD,EAAY,OAAO,EAAW,CAAkB,CAClD,CAEA,EAAa,GAAY,CAC3B,CAEA,KAAK,aAAe,CACtB,EAOA,EAAK,QAAQ,UAAU,eAAiB,UAAY,CAClD,KAAK,SAAW,EAAK,SAAS,UAC5B,OAAO,KAAK,KAAK,aAAa,EAAE,KAAK,CACvC,CACF,EAUA,EAAK,QAAQ,UAAU,MAAQ,UAAY,CACzC,YAAK,6BAA6B,EAClC,KAAK,mBAAmB,EACxB,KAAK,eAAe,EAEb,GAAI,GAAK,MAAM,CACpB,cAAe,KAAK,cACpB,aAAc,KAAK,aACnB,SAAU,KAAK,SACf,OAAQ,OAAO,KAAK,KAAK,OAAO,EAChC,SAAU,KAAK,cACjB,CAAC,CACH,EAgBA,EAAK,QAAQ,UAAU,IAAM,SAAU,EAAI,CACzC,GAAI,GAAO,MAAM,UAAU,MAAM,KAAK,UAAW,CAAC,EAClD,EAAK,QAAQ,IAAI,EACjB,EAAG,MAAM,KAAM,CAAI,CACrB,EAaA,EAAK,UAAY,SAAU,EAAM,EAAO,EAAU,CAShD,OARI,GAAiB,OAAO,OAAO,IAAI,EACnC,EAAe,OAAO,KAAK,GAAY,CAAC,CAAC,EAOpC,EAAI,EAAG,EAAI,EAAa,OAAQ,IAAK,CAC5C,GAAI,GAAM,EAAa,GACvB,EAAe,GAAO,EAAS,GAAK,MAAM,CAC5C,CAEA,KAAK,SAAW,OAAO,OAAO,IAAI,EAE9B,IAAS,QACX,MAAK,SAAS,GAAQ,OAAO,OAAO,IAAI,EACxC,KAAK,SAAS,GAAM,GAAS,EAEjC,EAWA,EAAK,UAAU,UAAU,QAAU,SAAU,EAAgB,CAG3D,OAFI,GAAQ,OAAO,KAAK,EAAe,QAAQ,EAEtC,EAAI,EAAG,EAAI,EAAM,OAAQ,IAAK,CACrC,GAAI,GAAO,EAAM,GACb,EAAS,OAAO,KAAK,EAAe,SAAS,EAAK,EAEtD,AAAI,KAAK,SAAS,IAAS,MACzB,MAAK,SAAS,GAAQ,OAAO,OAAO,IAAI,GAG1C,OAAS,GAAI,EAAG,EAAI,EAAO,OAAQ,IAAK,CACtC,GAAI,GAAQ,EAAO,GACf,EAAO,OAAO,KAAK,EAAe,SAAS,GAAM,EAAM,EAE3D,AAAI,KAAK,SAAS,GAAM,IAAU,MAChC,MAAK,SAAS,GAAM,GAAS,OAAO,OAAO,IAAI,GAGjD,OAAS,GAAI,EAAG,EAAI,EAAK,OAAQ,IAAK,CACpC,GAAI,GAAM,EAAK,GAEf,AAA
I,KAAK,SAAS,GAAM,GAAO,IAAQ,KACrC,KAAK,SAAS,GAAM,GAAO,GAAO,EAAe,SAAS,GAAM,GAAO,GAEvE,KAAK,SAAS,GAAM,GAAO,GAAO,KAAK,SAAS,GAAM,GAAO,GAAK,OAAO,EAAe,SAAS,GAAM,GAAO,EAAI,CAGtH,CACF,CACF,CACF,EASA,EAAK,UAAU,UAAU,IAAM,SAAU,EAAM,EAAO,EAAU,CAC9D,GAAI,CAAE,KAAQ,MAAK,UAAW,CAC5B,KAAK,SAAS,GAAQ,OAAO,OAAO,IAAI,EACxC,KAAK,SAAS,GAAM,GAAS,EAC7B,MACF,CAEA,GAAI,CAAE,KAAS,MAAK,SAAS,IAAQ,CACnC,KAAK,SAAS,GAAM,GAAS,EAC7B,MACF,CAIA,OAFI,GAAe,OAAO,KAAK,CAAQ,EAE9B,EAAI,EAAG,EAAI,EAAa,OAAQ,IAAK,CAC5C,GAAI,GAAM,EAAa,GAEvB,AAAI,IAAO,MAAK,SAAS,GAAM,GAC7B,KAAK,SAAS,GAAM,GAAO,GAAO,KAAK,SAAS,GAAM,GAAO,GAAK,OAAO,EAAS,EAAI,EAEtF,KAAK,SAAS,GAAM,GAAO,GAAO,EAAS,EAE/C,CACF,EAYA,EAAK,MAAQ,SAAU,EAAW,CAChC,KAAK,QAAU,CAAC,EAChB,KAAK,UAAY,CACnB,EA0BA,EAAK,MAAM,SAAW,GAAI,QAAQ,GAAG,EACrC,EAAK,MAAM,SAAS,KAAO,EAC3B,EAAK,MAAM,SAAS,QAAU,EAC9B,EAAK,MAAM,SAAS,SAAW,EAa/B,EAAK,MAAM,SAAW,CAIpB,SAAU,EAMV,SAAU,EAMV,WAAY,CACd,EAyBA,EAAK,MAAM,UAAU,OAAS,SAAU,EAAQ,CAC9C,MAAM,UAAY,IAChB,GAAO,OAAS,KAAK,WAGjB,SAAW,IACf,GAAO,MAAQ,GAGX,eAAiB,IACrB,GAAO,YAAc,IAGjB,YAAc,IAClB,GAAO,SAAW,EAAK,MAAM,SAAS,MAGnC,EAAO,SAAW,EAAK,MAAM,SAAS,SAAa,EAAO,KAAK,OAAO,CAAC,GAAK,EAAK,MAAM,UAC1F,GAAO,KAAO,IAAM,EAAO,MAGxB,EAAO,SAAW,EAAK,MAAM,SAAS,UAAc,EAAO,KAAK,MAAM,EAAE,GAAK,EAAK,MAAM,UAC3F,GAAO,KAAO,GAAK,EAAO,KAAO,KAG7B,YAAc,IAClB,GAAO,SAAW,EAAK,MAAM,SAAS,UAGxC,KAAK,QAAQ,KAAK,CAAM,EAEjB,IACT,EASA,EAAK,MAAM,UAAU,UAAY,UAAY,CAC3C,OAAS,GAAI,EAAG,EAAI,KAAK,QAAQ,OAAQ,IACvC,GAAI,KAAK,QAAQ,GAAG,UAAY,EAAK,MAAM,SAAS,WAClD,MAAO,GAIX,MAAO,EACT,EA4BA,EAAK,MAAM,UAAU,KAAO,SAAU,EAAM,EAAS,CACnD,GAAI,MAAM,QAAQ,CAAI,EACpB,SAAK,QAAQ,SAAU,EAAG,CAAE,KAAK,KAAK,EAAG,EAAK,MAAM,MAAM,CAAO,CAAC,CAAE,EAAG,IAAI,EACpE,KAGT,GAAI,GAAS,GAAW,CAAC,EACzB,SAAO,KAAO,EAAK,SAAS,EAE5B,KAAK,OAAO,CAAM,EAEX,IACT,EACA,EAAK,gBAAkB,SAAU,EAAS,EAAO,EAAK,CACpD,KAAK,KAAO,kBACZ,KAAK,QAAU,EACf,KAAK,MAAQ,EACb,KAAK,IAAM,CACb,EAEA,EAAK,gBAAgB,UAAY,GAAI,OACrC,EAAK,WAAa,SAAU,EAAK,CAC/B,KAAK,QAAU,CAAC,EAChB,KAAK,IAAM,EACX,KAAK,OAAS,EAAI,OAClB,KAAK,IAAM,EACX,KAAK,MAAQ,EACb,KAAK,oBAAsB,CAAC,CAC9B,EAEA,EAAK,WAAW,UAAU,IAAM,UAAY,CAG1C,OAFI,GAAQ,EAAK,WAAW,QAErB,GACL,EAAQ,EAAM,IAAI,CAEtB,EAEA,EAAK,WAAW,UAAU,YAAc,UAAY,CAKlD,OAJI,GAAY,CAAC,EACb,EAAa,KAAK,MAClB,EAAW,KAAK,IAEX,EAAI,EAAG,EAAI,KAAK,oBAAoB,OAAQ,IACnD,EAAW,KAAK,oBAAoB,GACpC,EAAU,KAAK,KAAK,IAAI,MAAM,EAAY,CAAQ,CAAC,EACnD,EAAa,EAAW,EAG1B,SAAU,KAAK,KAAK,IAAI,MAAM,EAAY,KAAK,GAAG,CAAC,EACnD,KAAK,oBAAoB,OAAS,EAE3B,EAAU,KAAK,EAAE,CAC1B,EAEA,EAAK,WAAW,UAAU,KAAO,SAAU,EAAM,CAC/C,KAAK,QAAQ,KAAK,CAChB,KAAM,EACN,IAAK,KAAK,YAAY,EACtB,MAAO,KAAK,MACZ,IAAK,KAAK,GACZ,CAAC,EAED,KAAK,MAAQ,KAAK,GACpB,EAEA,EAAK,WAAW,UAAU,gBAAkB,UAAY,CACtD,KAAK,oBAAoB,KAAK,KAAK,IAAM,CAAC,EAC1C,KAAK,KAAO,CACd,EAEA,EAAK,WAAW,UAAU,KAAO,UAAY,CAC3C,GAAI,KAAK,KAAO,KAAK,OACnB,MAAO,GAAK,WAAW,IAGzB,GAAI,GAAO,KAAK,IAAI,OAAO,KAAK,GAAG,EACnC,YAAK,KAAO,EACL,CACT,EAEA,EAAK,WAAW,UAAU,MAAQ,UAAY,CAC5C,MAAO,MAAK,IAAM,KAAK,KACzB,EAEA,EAAK,WAAW,UAAU,OAAS,UAAY,CAC7C,AAAI,KAAK,OAAS,KAAK,KACrB,MAAK,KAAO,GAGd,KAAK,MAAQ,KAAK,GACpB,EAEA,EAAK,WAAW,UAAU,OAAS,UAAY,CAC7C,KAAK,KAAO,CACd,EAEA,EAAK,WAAW,UAAU,eAAiB,UAAY,CACrD,GAAI,GAAM,EAEV,EACE,GAAO,KAAK,KAAK,EACjB,EAAW,EAAK,WAAW,CAAC,QACrB,EAAW,IAAM,EAAW,IAErC,AAAI,GAAQ,EAAK,WAAW,KAC1B,KAAK,OAAO,CAEhB,EAEA,EAAK,WAAW,UAAU,KAAO,UAAY,CAC3C,MAAO,MAAK,IAAM,KAAK,MACzB,EAEA,EAAK,WAAW,IAAM,MACtB,EAAK,WAAW,MAAQ,QACxB,EAAK,WAAW,KAAO,OACvB,EAAK,WAAW,cAAgB,gBAChC,EAAK,WAAW,MAAQ,QACxB,EAAK,WAAW,SAAW,WAE3B,EAAK,WAAW,SAAW,SAAU,EAAO,CAC1C,SAAM,OAAO,EACb,EAAM,KAAK,EAAK,WAAW,KAAK,EAChC,EAAM,OAAO,EACN,EAAK,WAAW,OACzB,EAEA,EAAK,WAAW,QAAU,SAAU,EAAO,CAQzC,GAPI,EAAM,MAAM,EAAI,GAClB,GAAM,OAAO,EACb,EAAM,KAAK,EAAK,WAAW,IAAI,GAGjC,
EAAM,OAAO,EAET,EAAM,KAAK,EACb,MAAO,GAAK,WAAW,OAE3B,EAEA,EAAK,WAAW,gBAAkB,SAAU,EAAO,CACjD,SAAM,OAAO,EACb,EAAM,eAAe,EACrB,EAAM,KAAK,EAAK,WAAW,aAAa,EACjC,EAAK,WAAW,OACzB,EAEA,EAAK,WAAW,SAAW,SAAU,EAAO,CAC1C,SAAM,OAAO,EACb,EAAM,eAAe,EACrB,EAAM,KAAK,EAAK,WAAW,KAAK,EACzB,EAAK,WAAW,OACzB,EAEA,EAAK,WAAW,OAAS,SAAU,EAAO,CACxC,AAAI,EAAM,MAAM,EAAI,GAClB,EAAM,KAAK,EAAK,WAAW,IAAI,CAEnC,EAaA,EAAK,WAAW,cAAgB,EAAK,UAAU,UAE/C,EAAK,WAAW,QAAU,SAAU,EAAO,CACzC,OAAa,CACX,GAAI,GAAO,EAAM,KAAK,EAEtB,GAAI,GAAQ,EAAK,WAAW,IAC1B,MAAO,GAAK,WAAW,OAIzB,GAAI,EAAK,WAAW,CAAC,GAAK,GAAI,CAC5B,EAAM,gBAAgB,EACtB,QACF,CAEA,GAAI,GAAQ,IACV,MAAO,GAAK,WAAW,SAGzB,GAAI,GAAQ,IACV,SAAM,OAAO,EACT,EAAM,MAAM,EAAI,GAClB,EAAM,KAAK,EAAK,WAAW,IAAI,EAE1B,EAAK,WAAW,gBAGzB,GAAI,GAAQ,IACV,SAAM,OAAO,EACT,EAAM,MAAM,EAAI,GAClB,EAAM,KAAK,EAAK,WAAW,IAAI,EAE1B,EAAK,WAAW,SAczB,GARI,GAAQ,KAAO,EAAM,MAAM,IAAM,GAQjC,GAAQ,KAAO,EAAM,MAAM,IAAM,EACnC,SAAM,KAAK,EAAK,WAAW,QAAQ,EAC5B,EAAK,WAAW,QAGzB,GAAI,EAAK,MAAM,EAAK,WAAW,aAAa,EAC1C,MAAO,GAAK,WAAW,OAE3B,CACF,EAEA,EAAK,YAAc,SAAU,EAAK,EAAO,CACvC,KAAK,MAAQ,GAAI,GAAK,WAAY,CAAG,EACrC,KAAK,MAAQ,EACb,KAAK,cAAgB,CAAC,EACtB,KAAK,UAAY,CACnB,EAEA,EAAK,YAAY,UAAU,MAAQ,UAAY,CAC7C,KAAK,MAAM,IAAI,EACf,KAAK,QAAU,KAAK,MAAM,QAI1B,OAFI,GAAQ,EAAK,YAAY,YAEtB,GACL,EAAQ,EAAM,IAAI,EAGpB,MAAO,MAAK,KACd,EAEA,EAAK,YAAY,UAAU,WAAa,UAAY,CAClD,MAAO,MAAK,QAAQ,KAAK,UAC3B,EAEA,EAAK,YAAY,UAAU,cAAgB,UAAY,CACrD,GAAI,GAAS,KAAK,WAAW,EAC7B,YAAK,WAAa,EACX,CACT,EAEA,EAAK,YAAY,UAAU,WAAa,UAAY,CAClD,GAAI,GAAkB,KAAK,cAC3B,KAAK,MAAM,OAAO,CAAe,EACjC,KAAK,cAAgB,CAAC,CACxB,EAEA,EAAK,YAAY,YAAc,SAAU,EAAQ,CAC/C,GAAI,GAAS,EAAO,WAAW,EAE/B,GAAI,GAAU,KAId,OAAQ,EAAO,UACR,GAAK,WAAW,SACnB,MAAO,GAAK,YAAY,kBACrB,GAAK,WAAW,MACnB,MAAO,GAAK,YAAY,eACrB,GAAK,WAAW,KACnB,MAAO,GAAK,YAAY,kBAExB,GAAI,GAAe,4CAA8C,EAAO,KAExE,KAAI,GAAO,IAAI,QAAU,GACvB,IAAgB,gBAAkB,EAAO,IAAM,KAG3C,GAAI,GAAK,gBAAiB,EAAc,EAAO,MAAO,EAAO,GAAG,EAE5E,EAEA,EAAK,YAAY,cAAgB,SAAU,EAAQ,CACjD,GAAI,GAAS,EAAO,cAAc,EAElC,GAAI,GAAU,KAId,QAAQ,EAAO,SACR,IACH,EAAO,cAAc,SAAW,EAAK,MAAM,SAAS,WACpD,UACG,IACH,EAAO,cAAc,SAAW,EAAK,MAAM,SAAS,SACpD,cAEA,GAAI,GAAe,kCAAoC,EAAO,IAAM,IACpE,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAO,MAAO,EAAO,GAAG,EAG1E,GAAI,GAAa,EAAO,WAAW,EAEnC,GAAI,GAAc,KAAW,CAC3B,GAAI,GAAe,yCACnB,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAO,MAAO,EAAO,GAAG,CACxE,CAEA,OAAQ,EAAW,UACZ,GAAK,WAAW,MACnB,MAAO,GAAK,YAAY,eACrB,GAAK,WAAW,KACnB,MAAO,GAAK,YAAY,kBAExB,GAAI,GAAe,mCAAqC,EAAW,KAAO,IAC1E,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAW,MAAO,EAAW,GAAG,GAEpF,EAEA,EAAK,YAAY,WAAa,SAAU,EAAQ,CAC9C,GAAI,GAAS,EAAO,cAAc,EAElC,GAAI,GAAU,KAId,IAAI,EAAO,MAAM,UAAU,QAAQ,EAAO,GAAG,GAAK,GAAI,CACpD,GAAI,GAAiB,EAAO,MAAM,UAAU,IAAI,SAAU,EAAG,CAAE,MAAO,IAAM,EAAI,GAAI,CAAC,EAAE,KAAK,IAAI,EAC5F,EAAe,uBAAyB,EAAO,IAAM,uBAAyB,EAElF,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAO,MAAO,EAAO,GAAG,CACxE,CAEA,EAAO,cAAc,OAAS,CAAC,EAAO,GAAG,EAEzC,GAAI,GAAa,EAAO,WAAW,EAEnC,GAAI,GAAc,KAAW,CAC3B,GAAI,GAAe,gCACnB,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAO,MAAO,EAAO,GAAG,CACxE,CAEA,OAAQ,EAAW,UACZ,GAAK,WAAW,KACnB,MAAO,GAAK,YAAY,kBAExB,GAAI,GAAe,0BAA4B,EAAW,KAAO,IACjE,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAW,MAAO,EAAW,GAAG,GAEpF,EAEA,EAAK,YAAY,UAAY,SAAU,EAAQ,CAC7C,GAAI,GAAS,EAAO,cAAc,EAElC,GAAI,GAAU,KAId,GAAO,cAAc,KAAO,EAAO,IAAI,YAAY,EAE/C,EAAO,IAAI,QAAQ,GAAG,GAAK,IAC7B,GAAO,cAAc,YAAc,IAGrC,GAAI,GAAa,EAAO,WAAW,EAEnC,GAAI,GAAc,KAAW,CAC3B,EAAO,WAAW,EAClB,MACF,CAEA,OAAQ,EAAW,UACZ,GAAK,WAAW,KACnB,SAAO,WAAW,EACX,EAAK,YAAY,cACrB,GAAK,WAAW,MACnB,SAAO,WAAW,EACX,EAAK,YAAY,eACrB,GAAK,WAAW,cACnB,MAAO,GAAK,YAAY,sBACrB,GAAK,WAAW,MACnB,MAAO,GAAK,YAAY,eACrB,GAAK,WAAW,SACnB,SAAO,WAAW,EACX,EAAK,YAAY,sBAExB,GAAI,GAAe,2BAA6B,EAAW,KAAO,IAClE,KAAM,IAAI,GAAK
,gBAAiB,EAAc,EAAW,MAAO,EAAW,GAAG,GAEpF,EAEA,EAAK,YAAY,kBAAoB,SAAU,EAAQ,CACrD,GAAI,GAAS,EAAO,cAAc,EAElC,GAAI,GAAU,KAId,IAAI,GAAe,SAAS,EAAO,IAAK,EAAE,EAE1C,GAAI,MAAM,CAAY,EAAG,CACvB,GAAI,GAAe,gCACnB,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAO,MAAO,EAAO,GAAG,CACxE,CAEA,EAAO,cAAc,aAAe,EAEpC,GAAI,GAAa,EAAO,WAAW,EAEnC,GAAI,GAAc,KAAW,CAC3B,EAAO,WAAW,EAClB,MACF,CAEA,OAAQ,EAAW,UACZ,GAAK,WAAW,KACnB,SAAO,WAAW,EACX,EAAK,YAAY,cACrB,GAAK,WAAW,MACnB,SAAO,WAAW,EACX,EAAK,YAAY,eACrB,GAAK,WAAW,cACnB,MAAO,GAAK,YAAY,sBACrB,GAAK,WAAW,MACnB,MAAO,GAAK,YAAY,eACrB,GAAK,WAAW,SACnB,SAAO,WAAW,EACX,EAAK,YAAY,sBAExB,GAAI,GAAe,2BAA6B,EAAW,KAAO,IAClE,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAW,MAAO,EAAW,GAAG,GAEpF,EAEA,EAAK,YAAY,WAAa,SAAU,EAAQ,CAC9C,GAAI,GAAS,EAAO,cAAc,EAElC,GAAI,GAAU,KAId,IAAI,GAAQ,SAAS,EAAO,IAAK,EAAE,EAEnC,GAAI,MAAM,CAAK,EAAG,CAChB,GAAI,GAAe,wBACnB,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAO,MAAO,EAAO,GAAG,CACxE,CAEA,EAAO,cAAc,MAAQ,EAE7B,GAAI,GAAa,EAAO,WAAW,EAEnC,GAAI,GAAc,KAAW,CAC3B,EAAO,WAAW,EAClB,MACF,CAEA,OAAQ,EAAW,UACZ,GAAK,WAAW,KACnB,SAAO,WAAW,EACX,EAAK,YAAY,cACrB,GAAK,WAAW,MACnB,SAAO,WAAW,EACX,EAAK,YAAY,eACrB,GAAK,WAAW,cACnB,MAAO,GAAK,YAAY,sBACrB,GAAK,WAAW,MACnB,MAAO,GAAK,YAAY,eACrB,GAAK,WAAW,SACnB,SAAO,WAAW,EACX,EAAK,YAAY,sBAExB,GAAI,GAAe,2BAA6B,EAAW,KAAO,IAClE,KAAM,IAAI,GAAK,gBAAiB,EAAc,EAAW,MAAO,EAAW,GAAG,GAEpF,EAMI,SAAU,EAAM,EAAS,CACzB,AAAI,MAAO,SAAW,YAAc,OAAO,IAEzC,OAAO,CAAO,EACT,AAAI,MAAO,KAAY,SAM5B,GAAO,QAAU,EAAQ,EAGzB,EAAK,KAAO,EAAQ,CAExB,EAAE,KAAM,UAAY,CAMlB,MAAO,EACT,CAAC,CACH,GAAG,ICl5GH;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,GAeA,GAAI,IAAkB,UAOtB,GAAO,QAAU,GAUjB,YAAoB,EAAQ,CAC1B,GAAI,GAAM,GAAK,EACX,EAAQ,GAAgB,KAAK,CAAG,EAEpC,GAAI,CAAC,EACH,MAAO,GAGT,GAAI,GACA,EAAO,GACP,EAAQ,EACR,EAAY,EAEhB,IAAK,EAAQ,EAAM,MAAO,EAAQ,EAAI,OAAQ,IAAS,CACrD,OAAQ,EAAI,WAAW,CAAK,OACrB,IACH,EAAS,SACT,UACG,IACH,EAAS,QACT,UACG,IACH,EAAS,QACT,UACG,IACH,EAAS,OACT,UACG,IACH,EAAS,OACT,cAEA,SAGJ,AAAI,IAAc,GAChB,IAAQ,EAAI,UAAU,EAAW,CAAK,GAGxC,EAAY,EAAQ,EACpB,GAAQ,CACV,CAEA,MAAO,KAAc,EACjB,EAAO,EAAI,UAAU,EAAW,CAAK,EACrC,CACN,ICvDA,OAAiB,QCKjB,AAAK,OAAO,SACV,QAAO,QAAU,SAAU,EAAa,CACtC,GAAM,GAA2B,CAAC,EAClC,OAAW,KAAO,QAAO,KAAK,CAAG,EAE/B,EAAK,KAAK,CAAC,EAAK,EAAI,EAAI,CAAC,EAG3B,MAAO,EACT,GAGF,AAAK,OAAO,QACV,QAAO,OAAS,SAAU,EAAa,CACrC,GAAM,GAAiB,CAAC,EACxB,OAAW,KAAO,QAAO,KAAK,CAAG,EAE/B,EAAK,KAAK,EAAI,EAAI,EAGpB,MAAO,EACT,GAKF,AAAI,MAAO,UAAY,aAGhB,SAAQ,UAAU,UACrB,SAAQ,UAAU,SAAW,SAC3B,EAA8B,EACxB,CACN,AAAI,MAAO,IAAM,SACf,MAAK,WAAa,EAAE,KACpB,KAAK,UAAY,EAAE,KAEnB,MAAK,WAAa,EAClB,KAAK,UAAY,EAErB,GAGG,QAAQ,UAAU,aACrB,SAAQ,UAAU,YAAc,YAC3B,EACG,CACN,GAAM,GAAS,KAAK,WACpB,GAAI,EAAQ,CACV,AAAI,EAAM,SAAW,GACnB,EAAO,YAAY,IAAI,EAGzB,OAAS,GAAI,EAAM,OAAS,EAAG,GAAK,EAAG,IAAK,CAC1C,GAAI,GAAO,EAAM,GACjB,AAAI,MAAO,IAAS,SAClB,EAAO,SAAS,eAAe,CAAI,EAC5B,EAAK,YACZ,EAAK,WAAW,YAAY,CAAI,EAGlC,AAAK,EAGH,EAAO,aAAa,KAAK,gBAAkB,CAAI,EAF/C,EAAO,aAAa,EAAM,IAAI,CAGlC,CACF,CACF,ICxEJ,OAAuB,OAiChB,YACL,EACmB,CACnB,GAAM,GAAY,GAAI,KAChB,EAAY,GAAI,KACtB,OAAW,KAAO,GAAM,CACtB,GAAM,CAAC,EAAM,GAAQ,EAAI,SAAS,MAAM,GAAG,EAGrC,EAAW,EAAI,SACf,EAAW,EAAI,MACf,EAAW,EAAI,KAGf,EAAO,eAAW,EAAI,IAAI,EAC7B,QAAQ,mBAAoB,EAAE,EAC9B,QAAQ,OAAQ,GAAG,EAGtB,GAAI,EAAM,CACR,GAAM,GAAS,EAAU,IAAI,CAAI,EAGjC,AAAK,EAAQ,IAAI,CAAM,EASrB,EAAU,IAAI,EAAU,CACtB,WACA,QACA,OACA,QACF,CAAC,EAbD,GAAO,MAAQ,EAAI,MACnB,EAAO,KAAQ,EAGf,EAAQ,IAAI,CAAM,EAatB,KACE,GAAU,IAAI,EAAU,GACtB,WACA,QACA,QACG,GAAQ,CAAE,MAAK,EACnB,CAEL,CACA,MAAO,EACT,CCpFA,OAAuB,OAsChB,YACL,EAA2B,EACD,CAC1B,GAAM,GAAY,GAAI,QAAO,EAAO,UAAW,KAAK,EAC9C,EAAY,CAAC,EAAY,EAAc,IACpC,GAAG,4BAA+B,WAI3C,MAAO,AAAC,IAAkB,CACxB,EAAQ,EACL,QAAQ,gBAAiB,GAAG,EAC5B,KAAK,EAGR,GAAM,GAAQ,GAAI,QAAO,MAAM,EAAO,
cACpC,EACG,QAAQ,uBAAwB,MAAM,EACtC,QAAQ,EAAW,GAAG,KACtB,KAAK,EAGV,MAAO,IACL,GACI,eAAW,CAAK,EAChB,GAED,QAAQ,EAAO,CAAS,EACxB,QAAQ,8BAA+B,IAAI,CAClD,CACF,CCtCO,YACL,EACqB,CACrB,GAAM,GAAS,GAAK,MAAa,MAAM,CAAC,QAAS,MAAM,CAAC,EAIxD,MAHe,IAAK,MAAa,YAAY,EAAO,CAAK,EAGlD,MAAM,EACN,EAAM,OACf,CAUO,YACL,EAA4B,EACV,CAzEpB,MA0EE,GAAM,GAAU,GAAI,KAAuB,CAAK,EAG1C,EAA2B,CAAC,EAClC,OAAS,GAAI,EAAG,EAAI,EAAM,OAAQ,IAChC,OAAW,KAAU,GACnB,AAAI,EAAM,GAAG,WAAW,EAAO,IAAI,GACjC,GAAO,EAAO,MAAQ,GACtB,EAAQ,OAAO,CAAM,GAI3B,OAAW,KAAU,GACnB,AAAI,QAAK,iBAAL,kBAAsB,EAAO,OAC/B,GAAO,EAAO,MAAQ,IAG1B,MAAO,EACT,CC0BA,YAAoB,EAAa,EAAuB,CACtD,GAAM,CAAC,EAAG,GAAK,CAAC,GAAI,KAAI,CAAC,EAAG,GAAI,KAAI,CAAC,CAAC,EACtC,MAAO,CACL,GAAG,GAAI,KAAI,CAAC,GAAG,CAAC,EAAE,OAAO,GAAS,CAAC,EAAE,IAAI,CAAK,CAAC,CAAC,CAClD,CACF,CASO,WAAa,CAgCX,YAAY,CAAE,SAAQ,OAAM,WAAwB,CACzD,KAAK,QAAU,EAGf,KAAK,UAAY,GAAuB,CAAI,EAC5C,KAAK,UAAY,GAAuB,EAAQ,EAAK,EAGrD,KAAK,UAAU,UAAY,GAAI,QAAO,EAAO,SAAS,EAGtD,KAAK,MAAQ,KAAK,UAAY,CAG5B,AAAI,EAAO,KAAK,SAAW,GAAK,EAAO,KAAK,KAAO,KACjD,KAAK,IAAK,KAAa,EAAO,KAAK,GAAG,EAC7B,EAAO,KAAK,OAAS,GAC9B,KAAK,IAAK,KAAa,cAAc,GAAG,EAAO,IAAI,CAAC,EAItD,GAAM,GAAM,GAAW,CACrB,UAAW,iBAAkB,SAC/B,EAAG,EAAQ,QAAQ,EAGnB,OAAW,KAAQ,GAAO,KAAK,IAAI,GACjC,IAAa,KAAO,KAAQ,KAAa,EAC1C,EACC,OAAW,KAAM,GACf,KAAK,SAAS,OAAO,EAAK,EAAG,EAC7B,KAAK,eAAe,OAAO,EAAK,EAAG,EAKvC,KAAK,IAAI,UAAU,EAGnB,KAAK,MAAM,QAAS,CAAE,MAAO,GAAI,CAAC,EAClC,KAAK,MAAM,MAAM,EACjB,KAAK,MAAM,OAAQ,CAAE,MAAO,GAAI,CAAC,EAGjC,OAAW,KAAO,GAChB,KAAK,IAAI,CAAG,CAChB,CAAC,CACH,CAkBO,OAAO,EAA6B,CACzC,GAAI,EACF,GAAI,CACF,GAAM,GAAY,KAAK,UAAU,CAAK,EAGhC,EAAU,GAAiB,CAAK,EACnC,OAAO,GACN,EAAO,WAAa,KAAK,MAAM,SAAS,UACzC,EAGG,EAAS,KAAK,MAAM,OAAO,GAAG,IAAQ,EAGzC,OAAyB,CAAC,EAAM,CAAE,MAAK,QAAO,eAAgB,CAC7D,GAAM,GAAW,KAAK,UAAU,IAAI,CAAG,EACvC,GAAI,MAAO,IAAa,YAAa,CACnC,GAAM,CAAE,WAAU,QAAO,OAAM,OAAM,UAAW,EAG1C,EAAQ,GACZ,EACA,OAAO,KAAK,EAAU,QAAQ,CAChC,EAGM,EAAQ,CAAC,CAAC,EAAS,EAAC,OAAO,OAAO,CAAK,EAAE,MAAM,GAAK,CAAC,EAC3D,EAAK,KAAK,KACR,WACA,MAAO,EAAU,CAAK,EACtB,KAAO,EAAU,CAAI,GAClB,GAAQ,CAAE,KAAM,EAAK,IAAI,CAAS,CAAE,GAJ/B,CAKR,MAAO,EAAS,GAAI,GACpB,OACF,EAAC,CACH,CACA,MAAO,EACT,EAAG,CAAC,CAAC,EAGJ,KAAK,CAAC,EAAG,IAAM,EAAE,MAAQ,EAAE,KAAK,EAGhC,OAAO,CAAC,EAAO,IAAW,CACzB,GAAM,GAAW,KAAK,UAAU,IAAI,EAAO,QAAQ,EACnD,GAAI,MAAO,IAAa,YAAa,CACnC,GAAM,GAAM,UAAY,GACpB,EAAS,OAAQ,SACjB,EAAS,SACb,EAAM,IAAI,EAAK,CAAC,GAAG,EAAM,IAAI,CAAG,GAAK,CAAC,EAAG,CAAM,CAAC,CAClD,CACA,MAAO,EACT,EAAG,GAAI,IAA+B,EAGpC,EACJ,GAAI,KAAK,QAAQ,YAAa,CAC5B,GAAM,GAAS,KAAK,MAAM,MAAM,GAAW,CACzC,OAAW,KAAU,GACnB,EAAQ,KAAK,EAAO,KAAM,CACxB,OAAQ,CAAC,OAAO,EAChB,SAAU,KAAK,MAAM,SAAS,SAC9B,SAAU,KAAK,MAAM,SAAS,QAChC,CAAC,CACL,CAAC,EAGD,EAAc,EAAO,OACjB,OAAO,KAAK,EAAO,GAAG,UAAU,QAAQ,EACxC,CAAC,CACP,CAGA,MAAO,IACL,MAAO,CAAC,GAAG,EAAO,OAAO,CAAC,GACvB,MAAO,IAAgB,aAAe,CAAE,aAAY,EAI3D,OAAQ,EAAN,CACA,QAAQ,KAAK,kBAAkB,qCAAoC,CACrE,CAIF,MAAO,CAAE,MAAO,CAAC,CAAE,CACrB,CACF,ELpQA,GAAI,GAqBJ,YACE,EACe,gCACf,GAAI,GAAO,UAGX,GAAI,MAAO,SAAW,aAAe,gBAAkB,QAAQ,CAC7D,GAAM,GAAS,SAAS,cAAiC,aAAa,EAChE,CAAC,GAAQ,EAAO,IAAI,MAAM,SAAS,EAGzC,EAAO,EAAK,QAAQ,KAAM,CAAI,CAChC,CAGA,GAAM,GAAU,CAAC,EACjB,OAAW,KAAQ,GAAO,KAAM,CAC9B,OAAQ,OAGD,KACH,EAAQ,KAAK,GAAG,cAAiB,EACjC,UAGG,SACA,KACH,EAAQ,KAAK,GAAG,cAAiB,EACjC,MAIJ,AAAI,IAAS,MACX,EAAQ,KAAK,GAAG,cAAiB,UAAa,CAClD,CAGA,AAAI,EAAO,KAAK,OAAS,GACvB,EAAQ,KAAK,GAAG,yBAA4B,EAG1C,EAAQ,QACV,MAAM,eACJ,GAAG,oCACH,GAAG,CACL,EACJ,GAaA,YACE,EACwB,gCACxB,OAAQ,EAAQ,UAGT,GACH,YAAM,IAAqB,EAAQ,KAAK,MAAM,EAC9C,EAAQ,GAAI,GAAO,EAAQ,IAAI,EACxB,CACL,KAAM,CACR,MAGG,GACH,MAAO,CACL,KAAM,EACN,KAAM,EAAQ,EAAM,OAAO,EAAQ,IAAI,EAAI,CAAE,MAAO,CAAC,CAAE,CACzD,UAIA,KAAM,IAAI,WAAU,sBAAsB,EAEhD,GAOA,KAAK,KA
AO,WAGZ,iBAAiB,UAAW,AAAM,GAAM,0BACtC,YAAY,KAAM,IAAQ,EAAG,IAAI,CAAC,CACpC,EAAC", + "names": [] +} diff --git a/assets/osg-school-2024-attendees.jpeg b/assets/osg-school-2024-attendees.jpeg new file mode 100644 index 00000000..88b3dff1 Binary files /dev/null and b/assets/osg-school-2024-attendees.jpeg differ diff --git a/assets/osg-user-school-2023-group.png b/assets/osg-user-school-2023-group.png new file mode 100644 index 00000000..8d1ca9c7 Binary files /dev/null and b/assets/osg-user-school-2023-group.png differ diff --git a/assets/overview_htcondor_job_submission.png b/assets/overview_htcondor_job_submission.png new file mode 100644 index 00000000..88137cb5 Binary files /dev/null and b/assets/overview_htcondor_job_submission.png differ diff --git a/assets/simple-DAG_fig.png b/assets/simple-DAG_fig.png new file mode 100644 index 00000000..4788a3a5 Binary files /dev/null and b/assets/simple-DAG_fig.png differ diff --git a/assets/stylesheets/main.644de097.min.css b/assets/stylesheets/main.644de097.min.css new file mode 100644 index 00000000..c7462620 --- /dev/null +++ b/assets/stylesheets/main.644de097.min.css @@ -0,0 +1 @@ +@charset "UTF-8";html{-webkit-text-size-adjust:none;-moz-text-size-adjust:none;-ms-text-size-adjust:none;text-size-adjust:none;box-sizing:border-box}*,:after,:before{box-sizing:inherit}@media (prefers-reduced-motion){*,:after,:before{transition:none!important}}body{margin:0}a,button,input,label{-webkit-tap-highlight-color:transparent}a{color:inherit;text-decoration:none}hr{border:0;box-sizing:initial;display:block;height:.05rem;overflow:visible;padding:0}small{font-size:80%}sub,sup{line-height:1em}img{border-style:none}table{border-collapse:initial;border-spacing:0}td,th{font-weight:400;vertical-align:top}button{background:transparent;border:0;font-family:inherit;font-size:inherit;margin:0;padding:0}input{border:0;outline:none}:root{--md-default-fg-color:rgba(0,0,0,.87);--md-default-fg-color--light:rgba(0,0,0,.54);--md-default-fg-color--lighter:rgba(0,0,0,.32);--md-default-fg-color--lightest:rgba(0,0,0,.07);--md-default-bg-color:#fff;--md-default-bg-color--light:hsla(0,0%,100%,.7);--md-default-bg-color--lighter:hsla(0,0%,100%,.3);--md-default-bg-color--lightest:hsla(0,0%,100%,.12);--md-primary-fg-color:#4051b5;--md-primary-fg-color--light:#5d6cc0;--md-primary-fg-color--dark:#303fa1;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7);--md-accent-fg-color:#526cfe;--md-accent-fg-color--transparent:rgba(82,108,254,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7);--md-shadow-z1:0 0.2rem 0.5rem rgba(0,0,0,.05),0 0 0.05rem rgba(0,0,0,.1);--md-shadow-z2:0 0.2rem 0.5rem rgba(0,0,0,.1),0 0 0.05rem rgba(0,0,0,.25);--md-shadow-z3:0 0.2rem 0.5rem rgba(0,0,0,.2),0 0 0.05rem 
rgba(0,0,0,.35)}:root>*{--md-code-fg-color:#36464e;--md-code-bg-color:#f5f5f5;--md-code-hl-color:rgba(255,255,0,.5);--md-code-hl-number-color:#d52a2a;--md-code-hl-special-color:#db1457;--md-code-hl-function-color:#a846b9;--md-code-hl-constant-color:#6e59d9;--md-code-hl-keyword-color:#3f6ec6;--md-code-hl-string-color:#1c7d4d;--md-code-hl-name-color:var(--md-code-fg-color);--md-code-hl-operator-color:var(--md-default-fg-color--light);--md-code-hl-punctuation-color:var(--md-default-fg-color--light);--md-code-hl-comment-color:var(--md-default-fg-color--light);--md-code-hl-generic-color:var(--md-default-fg-color--light);--md-code-hl-variable-color:var(--md-default-fg-color--light);--md-typeset-color:var(--md-default-fg-color);--md-typeset-a-color:var(--md-primary-fg-color);--md-typeset-mark-color:rgba(255,255,0,.5);--md-typeset-del-color:rgba(245,80,61,.15);--md-typeset-ins-color:rgba(11,213,112,.15);--md-typeset-kbd-color:#fafafa;--md-typeset-kbd-accent-color:#fff;--md-typeset-kbd-border-color:#b8b8b8;--md-typeset-table-color:rgba(0,0,0,.12);--md-admonition-fg-color:var(--md-default-fg-color);--md-admonition-bg-color:var(--md-default-bg-color);--md-footer-fg-color:#fff;--md-footer-fg-color--light:hsla(0,0%,100%,.7);--md-footer-fg-color--lighter:hsla(0,0%,100%,.3);--md-footer-bg-color:rgba(0,0,0,.87);--md-footer-bg-color--dark:rgba(0,0,0,.32)}.md-icon svg{fill:currentcolor;display:block;height:1.2rem;width:1.2rem}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;--md-text-font-family:var(--md-text-font,_),-apple-system,BlinkMacSystemFont,Helvetica,Arial,sans-serif;--md-code-font-family:var(--md-code-font,_),SFMono-Regular,Consolas,Menlo,monospace}body,input{font-feature-settings:"kern","liga";font-family:var(--md-text-font-family)}body,code,input,kbd,pre{color:var(--md-typeset-color)}code,kbd,pre{font-feature-settings:"kern";font-family:var(--md-code-font-family)}:root{--md-typeset-table-sort-icon:url('data:image/svg+xml;charset=utf-8,');--md-typeset-table-sort-icon--asc:url('data:image/svg+xml;charset=utf-8,');--md-typeset-table-sort-icon--desc:url('data:image/svg+xml;charset=utf-8,')}.md-typeset{-webkit-print-color-adjust:exact;color-adjust:exact;font-size:.8rem;line-height:1.6}@media print{.md-typeset{font-size:.68rem}}.md-typeset blockquote,.md-typeset dl,.md-typeset figure,.md-typeset ol,.md-typeset pre,.md-typeset ul{margin-bottom:1em;margin-top:1em}.md-typeset h1{color:var(--md-default-fg-color--light);font-size:2em;line-height:1.3;margin:0 0 1.25em}.md-typeset h1,.md-typeset h2{font-weight:300;letter-spacing:-.01em}.md-typeset h2{font-size:1.5625em;line-height:1.4;margin:1.6em 0 .64em}.md-typeset h3{font-size:1.25em;font-weight:400;letter-spacing:-.01em;line-height:1.5;margin:1.6em 0 .8em}.md-typeset h2+h3{margin-top:.8em}.md-typeset h4{font-weight:700;letter-spacing:-.01em;margin:1em 0}.md-typeset h5,.md-typeset h6{color:var(--md-default-fg-color--light);font-size:.8em;font-weight:700;letter-spacing:-.01em;margin:1.25em 0}.md-typeset h5{text-transform:uppercase}.md-typeset hr{border-bottom:.05rem solid var(--md-default-fg-color--lightest);display:flow-root;margin:1.5em 0}.md-typeset a{color:var(--md-typeset-a-color);word-break:break-word}.md-typeset a,.md-typeset a:before{transition:color 125ms}.md-typeset a:focus,.md-typeset a:hover{color:var(--md-accent-fg-color)}.md-typeset a:focus code,.md-typeset a:hover code{background-color:var(--md-accent-fg-color--transparent)}.md-typeset a code{color:currentcolor}.md-typeset 
a.focus-visible{outline-color:var(--md-accent-fg-color);outline-offset:.2rem}.md-typeset code,.md-typeset kbd,.md-typeset pre{color:var(--md-code-fg-color);direction:ltr}@media print{.md-typeset code,.md-typeset kbd,.md-typeset pre{white-space:pre-wrap}}.md-typeset code{background-color:var(--md-code-bg-color);border-radius:.1rem;-webkit-box-decoration-break:clone;box-decoration-break:clone;font-size:.85em;padding:0 .2941176471em;transition:background-color 125ms;word-break:break-word}.md-typeset code:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}.md-typeset pre{display:flow-root;line-height:1.4;position:relative}.md-typeset pre>code{-webkit-box-decoration-break:slice;box-decoration-break:slice;box-shadow:none;display:block;margin:0;outline-color:var(--md-accent-fg-color);overflow:auto;padding:.7720588235em 1.1764705882em;scrollbar-color:var(--md-default-fg-color--lighter) transparent;scrollbar-width:thin;touch-action:auto;word-break:normal}.md-typeset pre>code:hover{scrollbar-color:var(--md-accent-fg-color) transparent}.md-typeset pre>code::-webkit-scrollbar{height:.2rem;width:.2rem}.md-typeset pre>code::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-typeset pre>code::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}.md-typeset kbd{background-color:var(--md-typeset-kbd-color);border-radius:.1rem;box-shadow:0 .1rem 0 .05rem var(--md-typeset-kbd-border-color),0 .1rem 0 var(--md-typeset-kbd-border-color),0 -.1rem .2rem var(--md-typeset-kbd-accent-color) inset;color:var(--md-default-fg-color);display:inline-block;font-size:.75em;padding:0 .6666666667em;vertical-align:text-top;word-break:break-word}.md-typeset mark{background-color:var(--md-typeset-mark-color);-webkit-box-decoration-break:clone;box-decoration-break:clone;color:inherit;word-break:break-word}.md-typeset abbr{border-bottom:.05rem dotted var(--md-default-fg-color--light);cursor:help;text-decoration:none}@media (hover:none){.md-typeset abbr{position:relative}.md-typeset abbr[title]:-webkit-any(:focus,:hover):after{background-color:var(--md-default-fg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z3);color:var(--md-default-bg-color);content:attr(title);display:inline-block;font-size:.7rem;margin-top:2em;max-width:80%;min-width:-webkit-max-content;min-width:max-content;padding:.2rem .3rem;position:absolute;width:auto}.md-typeset abbr[title]:-moz-any(:focus,:hover):after{background-color:var(--md-default-fg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z3);color:var(--md-default-bg-color);content:attr(title);display:inline-block;font-size:.7rem;margin-top:2em;max-width:80%;min-width:-moz-max-content;min-width:max-content;padding:.2rem .3rem;position:absolute;width:auto}[dir=ltr] .md-typeset abbr[title]:-webkit-any(:focus,:hover):after{left:0}[dir=ltr] .md-typeset abbr[title]:-moz-any(:focus,:hover):after{left:0}[dir=ltr] .md-typeset abbr[title]:is(:focus,:hover):after{left:0}[dir=rtl] .md-typeset abbr[title]:-webkit-any(:focus,:hover):after{right:0}[dir=rtl] .md-typeset abbr[title]:-moz-any(:focus,:hover):after{right:0}[dir=rtl] .md-typeset abbr[title]:is(:focus,:hover):after{right:0}.md-typeset abbr[title]:is(:focus,:hover):after{background-color:var(--md-default-fg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z3);color:var(--md-default-bg-color);content:attr(title);display:inline-block;font-size:.7rem;margin-top:2em;max-width:80%;min-width:-webkit-max-content;min-width:-moz-max-content;min-width:max-content;padding:.2rem 
.3rem;position:absolute;width:auto}}.md-typeset small{opacity:.75}[dir=ltr] .md-typeset sub,[dir=ltr] .md-typeset sup{margin-left:.078125em}[dir=rtl] .md-typeset sub,[dir=rtl] .md-typeset sup{margin-right:.078125em}[dir=ltr] .md-typeset blockquote{padding-left:.6rem}[dir=rtl] .md-typeset blockquote{padding-right:.6rem}[dir=ltr] .md-typeset blockquote{border-left:.2rem solid var(--md-default-fg-color--lighter)}[dir=rtl] .md-typeset blockquote{border-right:.2rem solid var(--md-default-fg-color--lighter)}.md-typeset blockquote{color:var(--md-default-fg-color--light);margin-left:0;margin-right:0}.md-typeset ul{list-style-type:disc}[dir=ltr] .md-typeset ol,[dir=ltr] .md-typeset ul{margin-left:.625em}[dir=rtl] .md-typeset ol,[dir=rtl] .md-typeset ul{margin-right:.625em}.md-typeset ol,.md-typeset ul{padding:0}.md-typeset ol:not([hidden]),.md-typeset ul:not([hidden]){display:flow-root}.md-typeset ol ol,.md-typeset ul ol{list-style-type:lower-alpha}.md-typeset ol ol ol,.md-typeset ul ol ol{list-style-type:lower-roman}[dir=ltr] .md-typeset ol li,[dir=ltr] .md-typeset ul li{margin-left:1.25em}[dir=rtl] .md-typeset ol li,[dir=rtl] .md-typeset ul li{margin-right:1.25em}.md-typeset ol li,.md-typeset ul li{margin-bottom:.5em}.md-typeset ol li blockquote,.md-typeset ol li p,.md-typeset ul li blockquote,.md-typeset ul li p{margin:.5em 0}.md-typeset ol li:last-child,.md-typeset ul li:last-child{margin-bottom:0}.md-typeset ol li :-webkit-any(ul,ol),.md-typeset ul li :-webkit-any(ul,ol){margin-bottom:.5em;margin-top:.5em}.md-typeset ol li :-moz-any(ul,ol),.md-typeset ul li :-moz-any(ul,ol){margin-bottom:.5em;margin-top:.5em}[dir=ltr] .md-typeset ol li :-webkit-any(ul,ol),[dir=ltr] .md-typeset ul li :-webkit-any(ul,ol){margin-left:.625em}[dir=ltr] .md-typeset ol li :-moz-any(ul,ol),[dir=ltr] .md-typeset ul li :-moz-any(ul,ol){margin-left:.625em}[dir=ltr] .md-typeset ol li :is(ul,ol),[dir=ltr] .md-typeset ul li :is(ul,ol){margin-left:.625em}[dir=rtl] .md-typeset ol li :-webkit-any(ul,ol),[dir=rtl] .md-typeset ul li :-webkit-any(ul,ol){margin-right:.625em}[dir=rtl] .md-typeset ol li :-moz-any(ul,ol),[dir=rtl] .md-typeset ul li :-moz-any(ul,ol){margin-right:.625em}[dir=rtl] .md-typeset ol li :is(ul,ol),[dir=rtl] .md-typeset ul li :is(ul,ol){margin-right:.625em}.md-typeset ol li :is(ul,ol),.md-typeset ul li :is(ul,ol){margin-bottom:.5em;margin-top:.5em}[dir=ltr] .md-typeset dd{margin-left:1.875em}[dir=rtl] .md-typeset dd{margin-right:1.875em}.md-typeset dd{margin-bottom:1.5em;margin-top:1em}.md-typeset img,.md-typeset svg{height:auto;max-width:100%}.md-typeset img[align=left],.md-typeset svg[align=left]{margin:1em 1em 1em 0}.md-typeset img[align=right],.md-typeset svg[align=right]{margin:1em 0 1em 1em}.md-typeset img[align]:only-child,.md-typeset svg[align]:only-child{margin-top:0}.md-typeset img[src$="#only-dark"]{display:none}.md-typeset figure{display:flow-root;margin:1em auto;max-width:100%;text-align:center;width:-webkit-fit-content;width:-moz-fit-content;width:fit-content}.md-typeset figure img{display:block}.md-typeset figcaption{font-style:italic;margin:1em auto;max-width:24rem}.md-typeset iframe{max-width:100%}.md-typeset table:not([class]){background-color:var(--md-default-bg-color);border:.05rem solid var(--md-typeset-table-color);border-radius:.1rem;display:inline-block;font-size:.64rem;max-width:100%;overflow:auto;touch-action:auto}@media print{.md-typeset table:not([class]){display:table}}.md-typeset table:not([class])+*{margin-top:1.5em}.md-typeset table:not([class]) 
:-webkit-any(th,td)>:first-child{margin-top:0}.md-typeset table:not([class]) :-moz-any(th,td)>:first-child{margin-top:0}.md-typeset table:not([class]) :is(th,td)>:first-child{margin-top:0}.md-typeset table:not([class]) :-webkit-any(th,td)>:last-child{margin-bottom:0}.md-typeset table:not([class]) :-moz-any(th,td)>:last-child{margin-bottom:0}.md-typeset table:not([class]) :is(th,td)>:last-child{margin-bottom:0}.md-typeset table:not([class]) :-webkit-any(th,td):not([align]){text-align:left}.md-typeset table:not([class]) :-moz-any(th,td):not([align]){text-align:left}.md-typeset table:not([class]) :is(th,td):not([align]){text-align:left}[dir=rtl] .md-typeset table:not([class]) :-webkit-any(th,td):not([align]){text-align:right}[dir=rtl] .md-typeset table:not([class]) :-moz-any(th,td):not([align]){text-align:right}[dir=rtl] .md-typeset table:not([class]) :is(th,td):not([align]){text-align:right}.md-typeset table:not([class]) th{font-weight:700;min-width:5rem;padding:.9375em 1.25em;vertical-align:top}.md-typeset table:not([class]) th a{color:inherit}.md-typeset table:not([class]) td{border-top:.05rem solid var(--md-typeset-table-color);padding:.9375em 1.25em;vertical-align:top}.md-typeset table:not([class]) tbody tr{transition:background-color 125ms}.md-typeset table:not([class]) tbody tr:hover{background-color:rgba(0,0,0,.035);box-shadow:0 .05rem 0 var(--md-default-bg-color) inset}.md-typeset table:not([class]) a{word-break:normal}.md-typeset table th[role=columnheader]{cursor:pointer}[dir=ltr] .md-typeset table th[role=columnheader]:after{margin-left:.5em}[dir=rtl] .md-typeset table th[role=columnheader]:after{margin-right:.5em}.md-typeset table th[role=columnheader]:after{content:"";display:inline-block;height:1.2em;-webkit-mask-image:var(--md-typeset-table-sort-icon);mask-image:var(--md-typeset-table-sort-icon);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;transition:background-color 125ms;vertical-align:text-bottom;width:1.2em}.md-typeset table th[role=columnheader]:hover:after{background-color:var(--md-default-fg-color--lighter)}.md-typeset table th[role=columnheader][aria-sort=ascending]:after{background-color:var(--md-default-fg-color--light);-webkit-mask-image:var(--md-typeset-table-sort-icon--asc);mask-image:var(--md-typeset-table-sort-icon--asc)}.md-typeset table th[role=columnheader][aria-sort=descending]:after{background-color:var(--md-default-fg-color--light);-webkit-mask-image:var(--md-typeset-table-sort-icon--desc);mask-image:var(--md-typeset-table-sort-icon--desc)}.md-typeset__scrollwrap{margin:1em -.8rem;overflow-x:auto;touch-action:auto}.md-typeset__table{display:inline-block;margin-bottom:.5em;padding:0 .8rem}@media print{.md-typeset__table{display:block}}html .md-typeset__table table{display:table;margin:0;overflow:hidden;width:100%}@media screen and (max-width:44.9375em){.md-content__inner>pre{margin:1em -.8rem}.md-content__inner>pre code{border-radius:0}}.md-banner{background-color:var(--md-footer-bg-color);color:var(--md-footer-fg-color);overflow:auto}@media print{.md-banner{display:none}}.md-banner--warning{background:var(--md-typeset-mark-color);color:var(--md-default-fg-color)}.md-banner__inner{font-size:.7rem;margin:.6rem auto;padding:0 .8rem}html{font-size:125%;height:100%;overflow-x:hidden}@media screen and (min-width:100em){html{font-size:137.5%}}@media screen and 
(min-width:125em){html{font-size:150%}}body{background-color:var(--md-default-bg-color);display:flex;flex-direction:column;font-size:.5rem;min-height:100%;position:relative;width:100%}@media print{body{display:block}}@media screen and (max-width:59.9375em){body[data-md-state=lock]{position:fixed}}.md-grid{margin-left:auto;margin-right:auto;max-width:61rem}.md-container{display:flex;flex-direction:column;flex-grow:1}@media print{.md-container{display:block}}.md-main{flex-grow:1}.md-main__inner{display:flex;height:100%;margin-top:1.5rem}.md-ellipsis{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.md-toggle{display:none}.md-option{height:0;opacity:0;position:absolute;width:0}.md-option:checked+label:not([hidden]){display:block}.md-option.focus-visible+label{outline-color:var(--md-accent-fg-color);outline-style:auto}.md-skip{background-color:var(--md-default-fg-color);border-radius:.1rem;color:var(--md-default-bg-color);font-size:.64rem;margin:.5rem;opacity:0;outline-color:var(--md-accent-fg-color);padding:.3rem .5rem;position:fixed;transform:translateY(.4rem);z-index:-1}.md-skip:focus{opacity:1;transform:translateY(0);transition:transform .25s cubic-bezier(.4,0,.2,1),opacity 175ms 75ms;z-index:10}@page{margin:25mm}:root{--md-clipboard-icon:url('data:image/svg+xml;charset=utf-8,')}.md-clipboard{border-radius:.1rem;color:var(--md-default-fg-color--lightest);cursor:pointer;height:1.5em;outline-color:var(--md-accent-fg-color);outline-offset:.1rem;position:absolute;right:.5em;top:.5em;transition:color .25s;width:1.5em;z-index:1}@media print{.md-clipboard{display:none}}.md-clipboard:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}:hover>.md-clipboard{color:var(--md-default-fg-color--light)}.md-clipboard:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-clipboard:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-clipboard:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-clipboard:after{background-color:currentcolor;content:"";display:block;height:1.125em;margin:0 auto;-webkit-mask-image:var(--md-clipboard-icon);mask-image:var(--md-clipboard-icon);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:1.125em}.md-clipboard--inline{cursor:pointer}.md-clipboard--inline code{transition:color .25s,background-color .25s}.md-clipboard--inline:-webkit-any(:focus,:hover) code{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-clipboard--inline:-moz-any(:focus,:hover) code{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-clipboard--inline:is(:focus,:hover) code{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-content{flex-grow:1;min-width:0}.md-content__inner{margin:0 .8rem 1.2rem;padding-top:.6rem}@media screen and (min-width:76.25em){[dir=ltr] .md-sidebar--primary:not([hidden])~.md-content>.md-content__inner{margin-left:1.2rem}[dir=ltr] .md-sidebar--secondary:not([hidden])~.md-content>.md-content__inner,[dir=rtl] .md-sidebar--primary:not([hidden])~.md-content>.md-content__inner{margin-right:1.2rem}[dir=rtl] .md-sidebar--secondary:not([hidden])~.md-content>.md-content__inner{margin-left:1.2rem}}.md-content__inner:before{content:"";display:block;height:.4rem}.md-content__inner>:last-child{margin-bottom:0}[dir=ltr] .md-content__button{margin-left:.4rem}[dir=rtl] .md-content__button{margin-right:.4rem}.md-content__button{float:right;margin:.4rem 0;padding:0}@media 
print{.md-content__button{display:none}}[dir=rtl] .md-content__button{float:left}.md-typeset .md-content__button{color:var(--md-default-fg-color--lighter)}.md-content__button svg{display:inline;vertical-align:top}[dir=rtl] .md-content__button svg{transform:scaleX(-1)}[dir=ltr] .md-dialog{right:.8rem}[dir=rtl] .md-dialog{left:.8rem}.md-dialog{background-color:var(--md-default-fg-color);border-radius:.1rem;bottom:.8rem;box-shadow:var(--md-shadow-z3);min-width:11.1rem;opacity:0;padding:.4rem .6rem;pointer-events:none;position:fixed;transform:translateY(100%);transition:transform 0ms .4s,opacity .4s;z-index:4}@media print{.md-dialog{display:none}}.md-dialog[data-md-state=open]{opacity:1;pointer-events:auto;transform:translateY(0);transition:transform .4s cubic-bezier(.075,.85,.175,1),opacity .4s}.md-dialog__inner{color:var(--md-default-bg-color);font-size:.7rem}.md-footer{background-color:var(--md-footer-bg-color);color:var(--md-footer-fg-color)}@media print{.md-footer{display:none}}.md-footer__inner{display:flex;justify-content:space-between;overflow:auto;padding:.2rem}.md-footer__link{display:flex;flex-grow:0.01;outline-color:var(--md-accent-fg-color);overflow:hidden;padding-bottom:.4rem;padding-top:1.4rem;transition:opacity .25s}.md-footer__link:-webkit-any(:focus,:hover){opacity:.7}.md-footer__link:-moz-any(:focus,:hover){opacity:.7}.md-footer__link:is(:focus,:hover){opacity:.7}[dir=rtl] .md-footer__link svg{transform:scaleX(-1)}@media screen and (max-width:44.9375em){.md-footer__link--prev .md-footer__title{display:none}}[dir=ltr] .md-footer__link--next{margin-left:auto}[dir=rtl] .md-footer__link--next{margin-right:auto}.md-footer__link--next{text-align:right}[dir=rtl] .md-footer__link--next{text-align:left}.md-footer__title{flex-grow:1;font-size:.9rem;line-height:2.4rem;max-width:calc(100% - 2.4rem);padding:0 1rem;position:relative}.md-footer__button{margin:.2rem;padding:.4rem}.md-footer__direction{font-size:.64rem;left:0;margin-top:-1rem;opacity:.7;padding:0 1rem;position:absolute;right:0}.md-footer-meta{background-color:var(--md-footer-bg-color--dark)}.md-footer-meta__inner{display:flex;flex-wrap:wrap;justify-content:space-between;padding:.2rem}html .md-footer-meta.md-typeset a{color:var(--md-footer-fg-color--light)}html .md-footer-meta.md-typeset a:-webkit-any(:focus,:hover){color:var(--md-footer-fg-color)}html .md-footer-meta.md-typeset a:-moz-any(:focus,:hover){color:var(--md-footer-fg-color)}html .md-footer-meta.md-typeset a:is(:focus,:hover){color:var(--md-footer-fg-color)}.md-copyright{color:var(--md-footer-fg-color--lighter);font-size:.64rem;margin:auto .6rem;padding:.4rem 0;width:100%}@media screen and (min-width:45em){.md-copyright{width:auto}}.md-copyright__highlight{color:var(--md-footer-fg-color--light)}.md-social{margin:0 .4rem;padding:.2rem 0 .6rem}@media screen and (min-width:45em){.md-social{padding:.6rem 0}}.md-social__link{display:inline-block;height:1.6rem;text-align:center;width:1.6rem}.md-social__link:before{line-height:1.9}.md-social__link svg{fill:currentcolor;max-height:.8rem;vertical-align:-25%}.md-typeset .md-button{border:.1rem solid;border-radius:.1rem;color:var(--md-primary-fg-color);cursor:pointer;display:inline-block;font-weight:700;padding:.625em 2em;transition:color 125ms,background-color 125ms,border-color 125ms}.md-typeset .md-button--primary{background-color:var(--md-primary-fg-color);border-color:var(--md-primary-fg-color);color:var(--md-primary-bg-color)}.md-typeset 
.md-button:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-typeset .md-button:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-typeset .md-button:is(:focus,:hover){background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}[dir=ltr] .md-typeset .md-input{border-top-left-radius:.1rem}[dir=ltr] .md-typeset .md-input,[dir=rtl] .md-typeset .md-input{border-top-right-radius:.1rem}[dir=rtl] .md-typeset .md-input{border-top-left-radius:.1rem}.md-typeset .md-input{border-bottom:.1rem solid var(--md-default-fg-color--lighter);box-shadow:var(--md-shadow-z1);font-size:.8rem;height:1.8rem;padding:0 .6rem;transition:border .25s,box-shadow .25s}.md-typeset .md-input:-webkit-any(:focus,:hover){border-bottom-color:var(--md-accent-fg-color);box-shadow:var(--md-shadow-z2)}.md-typeset .md-input:-moz-any(:focus,:hover){border-bottom-color:var(--md-accent-fg-color);box-shadow:var(--md-shadow-z2)}.md-typeset .md-input:is(:focus,:hover){border-bottom-color:var(--md-accent-fg-color);box-shadow:var(--md-shadow-z2)}.md-typeset .md-input--stretch{width:100%}.md-header{background-color:var(--md-primary-fg-color);box-shadow:0 0 .2rem transparent,0 .2rem .4rem transparent;color:var(--md-primary-bg-color);left:0;position:-webkit-sticky;position:sticky;right:0;top:0;z-index:4}@media print{.md-header{display:none}}.md-header[data-md-state=shadow]{box-shadow:0 0 .2rem rgba(0,0,0,.1),0 .2rem .4rem rgba(0,0,0,.2);transition:transform .25s cubic-bezier(.1,.7,.1,1),box-shadow .25s}.md-header[data-md-state=hidden]{transform:translateY(-100%);transition:transform .25s cubic-bezier(.8,0,.6,1),box-shadow .25s}.md-header__inner{align-items:center;display:flex;padding:0 .2rem}.md-header__button{color:currentcolor;cursor:pointer;margin:.2rem;outline-color:var(--md-accent-fg-color);padding:.4rem;position:relative;transition:opacity .25s;vertical-align:middle;z-index:1}.md-header__button:hover{opacity:.7}.md-header__button:not([hidden]){display:inline-block}.md-header__button:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}.md-header__button.md-logo{margin:.2rem;padding:.4rem}@media screen and (max-width:76.1875em){.md-header__button.md-logo{display:none}}.md-header__button.md-logo :-webkit-any(img,svg){fill:currentcolor;display:block;height:1.2rem;width:auto}.md-header__button.md-logo :-moz-any(img,svg){fill:currentcolor;display:block;height:1.2rem;width:auto}.md-header__button.md-logo :is(img,svg){fill:currentcolor;display:block;height:1.2rem;width:auto}@media screen and (min-width:60em){.md-header__button[for=__search]{display:none}}.no-js .md-header__button[for=__search]{display:none}[dir=rtl] .md-header__button[for=__search] svg{transform:scaleX(-1)}@media screen and (min-width:76.25em){.md-header__button[for=__drawer]{display:none}}.md-header__topic{display:flex;max-width:100%;position:absolute;transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .15s}.md-header__topic+.md-header__topic{opacity:0;pointer-events:none;transform:translateX(1.25rem);transition:transform .4s cubic-bezier(1,.7,.1,.1),opacity .15s;z-index:-1}[dir=rtl] .md-header__topic+.md-header__topic{transform:translateX(-1.25rem)}.md-header__topic:first-child{font-weight:700}[dir=ltr] .md-header__title{margin-right:.4rem}[dir=rtl] .md-header__title{margin-left:.4rem}[dir=ltr] 
.md-header__title{margin-left:1rem}[dir=rtl] .md-header__title{margin-right:1rem}.md-header__title{flex-grow:1;font-size:.9rem;height:2.4rem;line-height:2.4rem}.md-header__title[data-md-state=active] .md-header__topic{opacity:0;pointer-events:none;transform:translateX(-1.25rem);transition:transform .4s cubic-bezier(1,.7,.1,.1),opacity .15s;z-index:-1}[dir=rtl] .md-header__title[data-md-state=active] .md-header__topic{transform:translateX(1.25rem)}.md-header__title[data-md-state=active] .md-header__topic+.md-header__topic{opacity:1;pointer-events:auto;transform:translateX(0);transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .15s;z-index:0}.md-header__title>.md-header__ellipsis{height:100%;position:relative;width:100%}.md-header__option{display:flex;flex-shrink:0;max-width:100%;transition:max-width 0ms .25s,opacity .25s .25s;white-space:nowrap}[data-md-toggle=search]:checked~.md-header .md-header__option{max-width:0;opacity:0;transition:max-width 0ms,opacity 0ms}.md-header__source{display:none}@media screen and (min-width:60em){[dir=ltr] .md-header__source{margin-left:1rem}[dir=rtl] .md-header__source{margin-right:1rem}.md-header__source{display:block;max-width:11.7rem;width:11.7rem}}@media screen and (min-width:76.25em){[dir=ltr] .md-header__source{margin-left:1.4rem}[dir=rtl] .md-header__source{margin-right:1.4rem}}:root{--md-nav-icon--prev:url('data:image/svg+xml;charset=utf-8,');--md-nav-icon--next:url('data:image/svg+xml;charset=utf-8,');--md-toc-icon:url('data:image/svg+xml;charset=utf-8,')}.md-nav{font-size:.7rem;line-height:1.3}.md-nav__title{display:block;font-weight:700;overflow:hidden;padding:0 .6rem;text-overflow:ellipsis}.md-nav__title .md-nav__button{display:none}.md-nav__title .md-nav__button img{height:100%;width:auto}.md-nav__title .md-nav__button.md-logo :-webkit-any(img,svg){fill:currentcolor;display:block;height:2.4rem;max-width:100%;object-fit:contain;width:auto}.md-nav__title .md-nav__button.md-logo :-moz-any(img,svg){fill:currentcolor;display:block;height:2.4rem;max-width:100%;object-fit:contain;width:auto}.md-nav__title .md-nav__button.md-logo :is(img,svg){fill:currentcolor;display:block;height:2.4rem;max-width:100%;object-fit:contain;width:auto}.md-nav__list{list-style:none;margin:0;padding:0}.md-nav__item{padding:0 .6rem}[dir=ltr] .md-nav__item .md-nav__item{padding-right:0}[dir=rtl] .md-nav__item .md-nav__item{padding-left:0}.md-nav__link{align-items:center;cursor:pointer;display:flex;justify-content:space-between;margin-top:.625em;overflow:hidden;scroll-snap-align:start;text-overflow:ellipsis;transition:color 125ms}.md-nav__link[data-md-state=blur]{color:var(--md-default-fg-color--light)}.md-nav__item .md-nav__link--active{color:var(--md-typeset-a-color)}.md-nav__item .md-nav__link--index [href]{width:100%}.md-nav__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav__link.focus-visible{outline-color:var(--md-accent-fg-color);outline-offset:.2rem}.md-nav--primary .md-nav__link[for=__toc]{display:none}.md-nav--primary .md-nav__link[for=__toc] .md-icon:after{background-color:currentcolor;display:block;height:100%;-webkit-mask-image:var(--md-toc-icon);mask-image:var(--md-toc-icon);width:100%}.md-nav--primary .md-nav__link[for=__toc]~.md-nav{display:none}.md-nav__link>*{cursor:pointer;display:flex}.md-nav__icon{flex-shrink:0}.md-nav__source{display:none}@media screen and 
(max-width:76.1875em){.md-nav--primary,.md-nav--primary .md-nav{background-color:var(--md-default-bg-color);display:flex;flex-direction:column;height:100%;left:0;position:absolute;right:0;top:0;z-index:1}.md-nav--primary :-webkit-any(.md-nav__title,.md-nav__item){font-size:.8rem;line-height:1.5}.md-nav--primary :-moz-any(.md-nav__title,.md-nav__item){font-size:.8rem;line-height:1.5}.md-nav--primary :is(.md-nav__title,.md-nav__item){font-size:.8rem;line-height:1.5}.md-nav--primary .md-nav__title{background-color:var(--md-default-fg-color--lightest);color:var(--md-default-fg-color--light);cursor:pointer;font-weight:400;height:5.6rem;line-height:2.4rem;padding:3rem .8rem .2rem;position:relative;white-space:nowrap}[dir=ltr] .md-nav--primary .md-nav__title .md-nav__icon{left:.4rem}[dir=rtl] .md-nav--primary .md-nav__title .md-nav__icon{right:.4rem}.md-nav--primary .md-nav__title .md-nav__icon{display:block;height:1.2rem;margin:.2rem;position:absolute;top:.4rem;width:1.2rem}.md-nav--primary .md-nav__title .md-nav__icon:after{background-color:currentcolor;content:"";display:block;height:100%;-webkit-mask-image:var(--md-nav-icon--prev);mask-image:var(--md-nav-icon--prev);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:100%}.md-nav--primary .md-nav__title~.md-nav__list{background-color:var(--md-default-bg-color);box-shadow:0 .05rem 0 var(--md-default-fg-color--lightest) inset;overflow-y:auto;-ms-scroll-snap-type:y mandatory;scroll-snap-type:y mandatory;touch-action:pan-y}.md-nav--primary .md-nav__title~.md-nav__list>:first-child{border-top:0}.md-nav--primary .md-nav__title[for=__drawer]{background-color:var(--md-primary-fg-color);color:var(--md-primary-bg-color);font-weight:700}.md-nav--primary .md-nav__title .md-logo{display:block;left:.2rem;margin:.2rem;padding:.4rem;position:absolute;right:.2rem;top:.2rem}.md-nav--primary .md-nav__list{flex:1}.md-nav--primary .md-nav__item{border-top:.05rem solid var(--md-default-fg-color--lightest);padding:0}.md-nav--primary .md-nav__item--active>.md-nav__link{color:var(--md-typeset-a-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__link{margin-top:0;padding:.6rem .8rem}[dir=ltr] .md-nav--primary .md-nav__link .md-nav__icon{margin-right:-.2rem}[dir=rtl] .md-nav--primary .md-nav__link .md-nav__icon{margin-left:-.2rem}.md-nav--primary .md-nav__link .md-nav__icon{font-size:1.2rem;height:1.2rem;width:1.2rem}.md-nav--primary .md-nav__link .md-nav__icon:after{background-color:currentcolor;content:"";display:block;height:100%;-webkit-mask-image:var(--md-nav-icon--next);mask-image:var(--md-nav-icon--next);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:100%}[dir=rtl] .md-nav--primary .md-nav__icon:after{transform:scale(-1)}.md-nav--primary .md-nav--secondary .md-nav{background-color:initial;position:static}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-left:1.4rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-right:1.4rem}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav__link{padding-left:2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav 
.md-nav__link{padding-right:2rem}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-left:2.6rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-right:2.6rem}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-left:3.2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-right:3.2rem}.md-nav--secondary{background-color:initial}.md-nav__toggle~.md-nav{display:flex;opacity:0;transform:translateX(100%);transition:transform .25s cubic-bezier(.8,0,.6,1),opacity 125ms 50ms}[dir=rtl] .md-nav__toggle~.md-nav{transform:translateX(-100%)}.md-nav__toggle:checked~.md-nav{opacity:1;transform:translateX(0);transition:transform .25s cubic-bezier(.4,0,.2,1),opacity 125ms 125ms}.md-nav__toggle:checked~.md-nav>.md-nav__list{-webkit-backface-visibility:hidden;backface-visibility:hidden}}@media screen and (max-width:59.9375em){.md-nav--primary .md-nav__link[for=__toc]{display:flex}.md-nav--primary .md-nav__link[for=__toc] .md-icon:after{content:""}.md-nav--primary .md-nav__link[for=__toc]+.md-nav__link{display:none}.md-nav--primary .md-nav__link[for=__toc]~.md-nav{display:flex}.md-nav__source{background-color:var(--md-primary-fg-color--dark);color:var(--md-primary-bg-color);display:block;padding:0 .2rem}}@media screen and (min-width:60em) and (max-width:76.1875em){.md-nav--integrated .md-nav__link[for=__toc]{display:flex}.md-nav--integrated .md-nav__link[for=__toc] .md-icon:after{content:""}.md-nav--integrated .md-nav__link[for=__toc]+.md-nav__link{display:none}.md-nav--integrated .md-nav__link[for=__toc]~.md-nav{display:flex}}@media screen and (min-width:60em){.md-nav--secondary .md-nav__title[for=__toc]{scroll-snap-align:start}.md-nav--secondary .md-nav__title .md-nav__icon{display:none}}@media screen and (min-width:76.25em){.md-nav{transition:max-height .25s cubic-bezier(.86,0,.07,1)}.md-nav--primary .md-nav__title[for=__drawer]{scroll-snap-align:start}.md-nav--primary .md-nav__title .md-nav__icon,.md-nav__toggle~.md-nav{display:none}.md-nav__toggle:-webkit-any(:checked,:indeterminate)~.md-nav{display:block}.md-nav__toggle:-moz-any(:checked,:indeterminate)~.md-nav{display:block}.md-nav__toggle:is(:checked,:indeterminate)~.md-nav{display:block}.md-nav__item--nested>.md-nav>.md-nav__title{display:none}.md-nav__item--section{display:block;margin:1.25em 0}.md-nav__item--section:last-child{margin-bottom:0}.md-nav__item--section>.md-nav__link{font-weight:700;pointer-events:none}.md-nav__item--section>.md-nav__link--index [href]{pointer-events:auto}.md-nav__item--section>.md-nav__link .md-nav__icon{display:none}.md-nav__item--section>.md-nav{display:block}.md-nav__item--section>.md-nav>.md-nav__list>.md-nav__item{padding:0}.md-nav__icon{border-radius:100%;float:right;height:.9rem;transition:background-color .25s,transform .25s;width:.9rem}[dir=rtl] .md-nav__icon{float:left;transform:rotate(180deg)}.md-nav__icon:hover{background-color:var(--md-accent-fg-color--transparent)}.md-nav__icon:after{background-color:currentcolor;content:"";display:inline-block;height:100%;-webkit-mask-image:var(--md-nav-icon--next);mask-image:var(--md-nav-icon--next);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;vertical-align:-.1rem;width:100%}.md-nav__item--nested .md-nav__toggle:checked~.md-nav__link .md-nav__icon,.md-nav__item--nested .md-nav__toggle:indeterminate~.md-nav__link 
.md-nav__icon{transform:rotate(90deg)}.md-nav--lifted>.md-nav__list>.md-nav__item,.md-nav--lifted>.md-nav__list>.md-nav__item--nested,.md-nav--lifted>.md-nav__title{display:none}.md-nav--lifted>.md-nav__list>.md-nav__item--active{display:block;padding:0}.md-nav--lifted>.md-nav__list>.md-nav__item--active>.md-nav__link{font-weight:700;margin-top:0;padding:0 .6rem;pointer-events:none}.md-nav--lifted>.md-nav__list>.md-nav__item--active>.md-nav__link--index [href]{pointer-events:auto}.md-nav--lifted>.md-nav__list>.md-nav__item--active>.md-nav__link .md-nav__icon{display:none}.md-nav--lifted .md-nav[data-md-level="1"]{display:block}[dir=ltr] .md-nav--lifted .md-nav[data-md-level="1"]>.md-nav__list>.md-nav__item{padding-right:.6rem}[dir=rtl] .md-nav--lifted .md-nav[data-md-level="1"]>.md-nav__list>.md-nav__item{padding-left:.6rem}.md-nav--integrated>.md-nav__list>.md-nav__item--active:not(.md-nav__item--nested){padding:0 .6rem}.md-nav--integrated>.md-nav__list>.md-nav__item--active:not(.md-nav__item--nested)>.md-nav__link{padding:0}[dir=ltr] .md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary{border-left:.05rem solid var(--md-primary-fg-color)}[dir=rtl] .md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary{border-right:.05rem solid var(--md-primary-fg-color)}.md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary{display:block;margin-bottom:1.25em}.md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary>.md-nav__title{display:none}}:root{--md-search-result-icon:url('data:image/svg+xml;charset=utf-8,')}.md-search{position:relative}@media screen and (min-width:60em){.md-search{padding:.2rem 0}}.no-js .md-search{display:none}.md-search__overlay{opacity:0;z-index:1}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__overlay{left:-2.2rem}[dir=rtl] .md-search__overlay{right:-2.2rem}.md-search__overlay{background-color:var(--md-default-bg-color);border-radius:1rem;height:2rem;overflow:hidden;pointer-events:none;position:absolute;top:-1rem;transform-origin:center;transition:transform .3s .1s,opacity .2s .2s;width:2rem}[data-md-toggle=search]:checked~.md-header .md-search__overlay{opacity:1;transition:transform .4s,opacity .1s}}@media screen and (min-width:60em){[dir=ltr] .md-search__overlay{left:0}[dir=rtl] .md-search__overlay{right:0}.md-search__overlay{background-color:rgba(0,0,0,.54);cursor:pointer;height:0;position:fixed;top:0;transition:width 0ms .25s,height 0ms .25s,opacity .25s;width:0}[data-md-toggle=search]:checked~.md-header .md-search__overlay{height:200vh;opacity:1;transition:width 0ms,height 0ms,opacity .25s;width:100%}}@media screen and (max-width:29.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(45)}}@media screen and (min-width:30em) and (max-width:44.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(60)}}@media screen and (min-width:45em) and (max-width:59.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(75)}}.md-search__inner{-webkit-backface-visibility:hidden;backface-visibility:hidden}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__inner{left:0}[dir=rtl] .md-search__inner{right:0}.md-search__inner{height:0;opacity:0;overflow:hidden;position:fixed;top:0;transform:translateX(5%);transition:width 0ms .3s,height 0ms .3s,transform .15s cubic-bezier(.4,0,.2,1) .15s,opacity .15s .15s;width:0;z-index:2}[dir=rtl] 
.md-search__inner{transform:translateX(-5%)}[data-md-toggle=search]:checked~.md-header .md-search__inner{height:100%;opacity:1;transform:translateX(0);transition:width 0ms 0ms,height 0ms 0ms,transform .15s cubic-bezier(.1,.7,.1,1) .15s,opacity .15s .15s;width:100%}}@media screen and (min-width:60em){.md-search__inner{float:right;padding:.1rem 0;position:relative;transition:width .25s cubic-bezier(.1,.7,.1,1);width:11.7rem}[dir=rtl] .md-search__inner{float:left}}@media screen and (min-width:60em) and (max-width:76.1875em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:23.4rem}}@media screen and (min-width:76.25em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:34.4rem}}.md-search__form{background-color:var(--md-default-bg-color);box-shadow:0 0 .6rem transparent;height:2.4rem;position:relative;transition:color .25s,background-color .25s;z-index:2}@media screen and (min-width:60em){.md-search__form{background-color:rgba(0,0,0,.26);border-radius:.1rem;height:1.8rem}.md-search__form:hover{background-color:hsla(0,0%,100%,.12)}}[data-md-toggle=search]:checked~.md-header .md-search__form{background-color:var(--md-default-bg-color);border-radius:.1rem .1rem 0 0;box-shadow:0 0 .6rem rgba(0,0,0,.07);color:var(--md-default-fg-color)}[dir=ltr] .md-search__input{padding-left:3.6rem;padding-right:2.2rem}[dir=rtl] .md-search__input{padding-left:2.2rem;padding-right:3.6rem}.md-search__input{background:transparent;font-size:.9rem;height:100%;position:relative;text-overflow:ellipsis;width:100%;z-index:2}.md-search__input::-moz-placeholder{-moz-transition:color .25s;transition:color .25s}.md-search__input::-ms-input-placeholder{-ms-transition:color .25s;transition:color .25s}.md-search__input::placeholder{transition:color .25s}.md-search__input::-moz-placeholder{color:var(--md-default-fg-color--light)}.md-search__input::-ms-input-placeholder{color:var(--md-default-fg-color--light)}.md-search__input::placeholder,.md-search__input~.md-search__icon{color:var(--md-default-fg-color--light)}.md-search__input::-ms-clear{display:none}@media screen and (max-width:59.9375em){.md-search__input{font-size:.9rem;height:2.4rem;width:100%}}@media screen and (min-width:60em){[dir=ltr] .md-search__input{padding-left:2.2rem}[dir=rtl] .md-search__input{padding-right:2.2rem}.md-search__input{color:inherit;font-size:.8rem}.md-search__input::-moz-placeholder{color:var(--md-primary-bg-color--light)}.md-search__input::-ms-input-placeholder{color:var(--md-primary-bg-color--light)}.md-search__input::placeholder{color:var(--md-primary-bg-color--light)}.md-search__input+.md-search__icon{color:var(--md-primary-bg-color)}[data-md-toggle=search]:checked~.md-header .md-search__input{text-overflow:clip}[data-md-toggle=search]:checked~.md-header .md-search__input::-moz-placeholder{color:var(--md-default-fg-color--light)}[data-md-toggle=search]:checked~.md-header .md-search__input::-ms-input-placeholder{color:var(--md-default-fg-color--light)}[data-md-toggle=search]:checked~.md-header .md-search__input+.md-search__icon,[data-md-toggle=search]:checked~.md-header .md-search__input::placeholder{color:var(--md-default-fg-color--light)}}.md-search__icon{cursor:pointer;display:inline-block;height:1.2rem;transition:color .25s,opacity .25s;width:1.2rem}.md-search__icon:hover{opacity:.7}[dir=ltr] .md-search__icon[for=__search]{left:.5rem}[dir=rtl] .md-search__icon[for=__search]{right:.5rem}.md-search__icon[for=__search]{position:absolute;top:.3rem;z-index:2}[dir=rtl] .md-search__icon[for=__search] 
svg{transform:scaleX(-1)}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__icon[for=__search]{left:.8rem}[dir=rtl] .md-search__icon[for=__search]{right:.8rem}.md-search__icon[for=__search]{top:.6rem}.md-search__icon[for=__search] svg:first-child{display:none}}@media screen and (min-width:60em){.md-search__icon[for=__search]{pointer-events:none}.md-search__icon[for=__search] svg:last-child{display:none}}[dir=ltr] .md-search__options{right:.5rem}[dir=rtl] .md-search__options{left:.5rem}.md-search__options{pointer-events:none;position:absolute;top:.3rem;z-index:2}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__options{right:.8rem}[dir=rtl] .md-search__options{left:.8rem}.md-search__options{top:.6rem}}[dir=ltr] .md-search__options>*{margin-left:.2rem}[dir=rtl] .md-search__options>*{margin-right:.2rem}.md-search__options>*{color:var(--md-default-fg-color--light);opacity:0;transform:scale(.75);transition:transform .15s cubic-bezier(.1,.7,.1,1),opacity .15s}.md-search__options>:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}[data-md-toggle=search]:checked~.md-header .md-search__input:valid~.md-search__options>*{opacity:1;pointer-events:auto;transform:scale(1)}[data-md-toggle=search]:checked~.md-header .md-search__input:valid~.md-search__options>:hover{opacity:.7}[dir=ltr] .md-search__suggest{padding-left:3.6rem;padding-right:2.2rem}[dir=rtl] .md-search__suggest{padding-left:2.2rem;padding-right:3.6rem}.md-search__suggest{align-items:center;color:var(--md-default-fg-color--lighter);display:flex;font-size:.9rem;height:100%;opacity:0;position:absolute;top:0;transition:opacity 50ms;white-space:nowrap;width:100%}@media screen and (min-width:60em){[dir=ltr] .md-search__suggest{padding-left:2.2rem}[dir=rtl] .md-search__suggest{padding-right:2.2rem}.md-search__suggest{font-size:.8rem}}[data-md-toggle=search]:checked~.md-header .md-search__suggest{opacity:1;transition:opacity .3s .1s}[dir=ltr] .md-search__output{border-bottom-left-radius:.1rem}[dir=ltr] .md-search__output,[dir=rtl] .md-search__output{border-bottom-right-radius:.1rem}[dir=rtl] .md-search__output{border-bottom-left-radius:.1rem}.md-search__output{overflow:hidden;position:absolute;width:100%;z-index:1}@media screen and (max-width:59.9375em){.md-search__output{bottom:0;top:2.4rem}}@media screen and (min-width:60em){.md-search__output{opacity:0;top:1.9rem;transition:opacity .4s}[data-md-toggle=search]:checked~.md-header .md-search__output{box-shadow:var(--md-shadow-z3);opacity:1}}.md-search__scrollwrap{-webkit-backface-visibility:hidden;backface-visibility:hidden;background-color:var(--md-default-bg-color);height:100%;overflow-y:auto;touch-action:pan-y}@media (-webkit-max-device-pixel-ratio:1),(max-resolution:1dppx){.md-search__scrollwrap{transform:translateZ(0)}}@media screen and (min-width:60em) and (max-width:76.1875em){.md-search__scrollwrap{width:23.4rem}}@media screen and (min-width:76.25em){.md-search__scrollwrap{width:34.4rem}}@media screen and (min-width:60em){.md-search__scrollwrap{max-height:0;scrollbar-color:var(--md-default-fg-color--lighter) transparent;scrollbar-width:thin}[data-md-toggle=search]:checked~.md-header .md-search__scrollwrap{max-height:75vh}.md-search__scrollwrap:hover{scrollbar-color:var(--md-accent-fg-color) 
transparent}.md-search__scrollwrap::-webkit-scrollbar{height:.2rem;width:.2rem}.md-search__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-search__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}}.md-search-result{color:var(--md-default-fg-color);word-break:break-word}.md-search-result__meta{background-color:var(--md-default-fg-color--lightest);color:var(--md-default-fg-color--light);font-size:.64rem;line-height:1.8rem;padding:0 .8rem;scroll-snap-align:start}@media screen and (min-width:60em){[dir=ltr] .md-search-result__meta{padding-left:2.2rem}[dir=rtl] .md-search-result__meta{padding-right:2.2rem}}.md-search-result__list{list-style:none;margin:0;padding:0}.md-search-result__item{box-shadow:0 -.05rem var(--md-default-fg-color--lightest)}.md-search-result__item:first-child{box-shadow:none}.md-search-result__link{display:block;outline:none;scroll-snap-align:start;transition:background-color .25s}.md-search-result__link:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:is(:focus,:hover){background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:last-child p:last-child{margin-bottom:.6rem}.md-search-result__more summary{color:var(--md-typeset-a-color);cursor:pointer;display:block;font-size:.64rem;outline:none;padding:.75em .8rem;scroll-snap-align:start;transition:color .25s,background-color .25s}@media screen and (min-width:60em){[dir=ltr] .md-search-result__more summary{padding-left:2.2rem}[dir=rtl] .md-search-result__more summary{padding-right:2.2rem}}.md-search-result__more summary:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-search-result__more summary:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-search-result__more summary:is(:focus,:hover){background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-search-result__more summary::marker{display:none}.md-search-result__more summary::-webkit-details-marker{display:none}.md-search-result__more summary~*>*{opacity:.65}.md-search-result__article{overflow:hidden;padding:0 .8rem;position:relative}@media screen and (min-width:60em){[dir=ltr] .md-search-result__article{padding-left:2.2rem}[dir=rtl] .md-search-result__article{padding-right:2.2rem}}.md-search-result__article--document .md-search-result__title{font-size:.8rem;font-weight:400;line-height:1.4;margin:.55rem 0}[dir=ltr] .md-search-result__icon{left:0}[dir=rtl] .md-search-result__icon{right:0}.md-search-result__icon{color:var(--md-default-fg-color--light);height:1.2rem;margin:.5rem;position:absolute;width:1.2rem}@media screen and (max-width:59.9375em){.md-search-result__icon{display:none}}.md-search-result__icon:after{background-color:currentcolor;content:"";display:inline-block;height:100%;-webkit-mask-image:var(--md-search-result-icon);mask-image:var(--md-search-result-icon);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:100%}[dir=rtl] .md-search-result__icon:after{transform:scaleX(-1)}.md-search-result__title{font-size:.64rem;font-weight:700;line-height:1.6;margin:.5em 
0}.md-search-result__teaser{-webkit-box-orient:vertical;-webkit-line-clamp:2;color:var(--md-default-fg-color--light);display:-webkit-box;font-size:.64rem;line-height:1.6;margin:.5em 0;max-height:2rem;overflow:hidden;text-overflow:ellipsis}@media screen and (max-width:44.9375em){.md-search-result__teaser{-webkit-line-clamp:3;max-height:3rem}}@media screen and (min-width:60em) and (max-width:76.1875em){.md-search-result__teaser{-webkit-line-clamp:3;max-height:3rem}}.md-search-result__teaser mark{background-color:initial;text-decoration:underline}.md-search-result__terms{font-size:.64rem;font-style:italic;margin:.5em 0}.md-search-result mark{background-color:initial;color:var(--md-accent-fg-color)}.md-select{position:relative;z-index:1}.md-select__inner{background-color:var(--md-default-bg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color);left:50%;margin-top:.2rem;max-height:0;opacity:0;position:absolute;top:calc(100% - .2rem);transform:translate3d(-50%,.3rem,0);transition:transform .25s 375ms,opacity .25s .25s,max-height 0ms .5s}.md-select:-webkit-any(:focus-within,:hover) .md-select__inner{max-height:10rem;opacity:1;transform:translate3d(-50%,0,0);-webkit-transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms;transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms}.md-select:-moz-any(:focus-within,:hover) .md-select__inner{max-height:10rem;opacity:1;transform:translate3d(-50%,0,0);-moz-transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms;transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms}.md-select:is(:focus-within,:hover) .md-select__inner{max-height:10rem;opacity:1;transform:translate3d(-50%,0,0);transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms}.md-select__inner:after{border-bottom:.2rem solid transparent;border-bottom-color:var(--md-default-bg-color);border-left:.2rem solid transparent;border-right:.2rem solid transparent;border-top:0;content:"";height:0;left:50%;margin-left:-.2rem;margin-top:-.2rem;position:absolute;top:0;width:0}.md-select__list{border-radius:.1rem;font-size:.8rem;list-style-type:none;margin:0;max-height:inherit;overflow:auto;padding:0}.md-select__item{line-height:1.8rem}[dir=ltr] .md-select__link{padding-left:.6rem;padding-right:1.2rem}[dir=rtl] .md-select__link{padding-left:1.2rem;padding-right:.6rem}.md-select__link{cursor:pointer;display:block;outline:none;scroll-snap-align:start;transition:background-color .25s,color .25s;width:100%}.md-select__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-select__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-select__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-select__link:focus{background-color:var(--md-default-fg-color--lightest)}.md-sidebar{align-self:flex-start;flex-shrink:0;padding:1.2rem 0;position:-webkit-sticky;position:sticky;top:2.4rem;width:12.1rem}@media print{.md-sidebar{display:none}}@media screen and (max-width:76.1875em){[dir=ltr] .md-sidebar--primary{left:-12.1rem}[dir=rtl] .md-sidebar--primary{right:-12.1rem}.md-sidebar--primary{background-color:var(--md-default-bg-color);display:block;height:100%;position:fixed;top:0;transform:translateX(0);transition:transform .25s cubic-bezier(.4,0,.2,1),box-shadow .25s;width:12.1rem;z-index:5}[data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{box-shadow:var(--md-shadow-z3);transform:translateX(12.1rem)}[dir=rtl] 
[data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{transform:translateX(-12.1rem)}.md-sidebar--primary .md-sidebar__scrollwrap{bottom:0;left:0;margin:0;overflow:hidden;position:absolute;right:0;-ms-scroll-snap-type:none;scroll-snap-type:none;top:0}}@media screen and (min-width:76.25em){.md-sidebar{height:0}.no-js .md-sidebar{height:auto}}.md-sidebar--secondary{display:none;order:2}@media screen and (min-width:60em){.md-sidebar--secondary{height:0}.no-js .md-sidebar--secondary{height:auto}.md-sidebar--secondary:not([hidden]){display:block}.md-sidebar--secondary .md-sidebar__scrollwrap{touch-action:pan-y}}.md-sidebar__scrollwrap{-webkit-backface-visibility:hidden;backface-visibility:hidden;margin:0 .2rem;overflow-y:auto;scrollbar-color:var(--md-default-fg-color--lighter) transparent;scrollbar-width:thin}.md-sidebar__scrollwrap:hover{scrollbar-color:var(--md-accent-fg-color) transparent}.md-sidebar__scrollwrap::-webkit-scrollbar{height:.2rem;width:.2rem}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}@media screen and (max-width:76.1875em){.md-overlay{background-color:rgba(0,0,0,.54);height:0;opacity:0;position:fixed;top:0;transition:width 0ms .25s,height 0ms .25s,opacity .25s;width:0;z-index:5}[data-md-toggle=drawer]:checked~.md-overlay{height:100%;opacity:1;transition:width 0ms,height 0ms,opacity .25s;width:100%}}@-webkit-keyframes facts{0%{height:0}to{height:.65rem}}@keyframes facts{0%{height:0}to{height:.65rem}}@-webkit-keyframes fact{0%{opacity:0;transform:translateY(100%)}50%{opacity:0}to{opacity:1;transform:translateY(0)}}@keyframes fact{0%{opacity:0;transform:translateY(100%)}50%{opacity:0}to{opacity:1;transform:translateY(0)}}:root{--md-source-forks-icon:url('data:image/svg+xml;charset=utf-8,');--md-source-repositories-icon:url('data:image/svg+xml;charset=utf-8,');--md-source-stars-icon:url('data:image/svg+xml;charset=utf-8,');--md-source-version-icon:url('data:image/svg+xml;charset=utf-8,')}.md-source{-webkit-backface-visibility:hidden;backface-visibility:hidden;display:block;font-size:.65rem;line-height:1.2;outline-color:var(--md-accent-fg-color);transition:opacity .25s;white-space:nowrap}.md-source:hover{opacity:.7}.md-source__icon{display:inline-block;height:2.4rem;vertical-align:middle;width:2rem}[dir=ltr] .md-source__icon svg{margin-left:.6rem}[dir=rtl] .md-source__icon svg{margin-right:.6rem}.md-source__icon svg{margin-top:.6rem}[dir=ltr] .md-source__icon+.md-source__repository{margin-left:-2rem}[dir=rtl] .md-source__icon+.md-source__repository{margin-right:-2rem}[dir=ltr] .md-source__icon+.md-source__repository{padding-left:2rem}[dir=rtl] .md-source__icon+.md-source__repository{padding-right:2rem}[dir=ltr] .md-source__repository{margin-left:.6rem}[dir=rtl] .md-source__repository{margin-right:.6rem}.md-source__repository{display:inline-block;max-width:calc(100% - 1.2rem);overflow:hidden;text-overflow:ellipsis;vertical-align:middle}.md-source__facts{font-size:.55rem;list-style-type:none;margin:.1rem 0 0;opacity:.75;overflow:hidden;padding:0}[data-md-state=done] .md-source__facts{-webkit-animation:facts .25s ease-in;animation:facts .25s ease-in}.md-source__fact{display:inline-block}[data-md-state=done] .md-source__fact{-webkit-animation:fact .4s ease-out;animation:fact .4s ease-out}[dir=ltr] .md-source__fact:before{margin-right:.1rem}[dir=rtl] 
.md-source__fact:before{margin-left:.1rem}.md-source__fact:before{background-color:currentcolor;content:"";display:inline-block;height:.6rem;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;vertical-align:text-top;width:.6rem}[dir=ltr] .md-source__fact:nth-child(1n+2):before{margin-left:.4rem}[dir=rtl] .md-source__fact:nth-child(1n+2):before{margin-right:.4rem}.md-source__fact--version:before{-webkit-mask-image:var(--md-source-version-icon);mask-image:var(--md-source-version-icon)}.md-source__fact--stars:before{-webkit-mask-image:var(--md-source-stars-icon);mask-image:var(--md-source-stars-icon)}.md-source__fact--forks:before{-webkit-mask-image:var(--md-source-forks-icon);mask-image:var(--md-source-forks-icon)}.md-source__fact--repositories:before{-webkit-mask-image:var(--md-source-repositories-icon);mask-image:var(--md-source-repositories-icon)}.md-tabs{background-color:var(--md-primary-fg-color);color:var(--md-primary-bg-color);overflow:auto;width:100%}@media print{.md-tabs{display:none}}@media screen and (max-width:76.1875em){.md-tabs{display:none}}.md-tabs[data-md-state=hidden]{pointer-events:none}[dir=ltr] .md-tabs__list{margin-left:.2rem}[dir=rtl] .md-tabs__list{margin-right:.2rem}.md-tabs__list{contain:content;list-style:none;margin:0;padding:0;white-space:nowrap}.md-tabs__item{display:inline-block;height:2.4rem;padding-left:.6rem;padding-right:.6rem}.md-tabs__link{-webkit-backface-visibility:hidden;backface-visibility:hidden;display:block;font-size:.7rem;margin-top:.8rem;opacity:.7;outline-color:var(--md-accent-fg-color);outline-offset:.2rem;transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .25s}.md-tabs__link--active,.md-tabs__link:-webkit-any(:focus,:hover){color:inherit;opacity:1}.md-tabs__link--active,.md-tabs__link:-moz-any(:focus,:hover){color:inherit;opacity:1}.md-tabs__link--active,.md-tabs__link:is(:focus,:hover){color:inherit;opacity:1}.md-tabs__item:nth-child(2) .md-tabs__link{transition-delay:20ms}.md-tabs__item:nth-child(3) .md-tabs__link{transition-delay:40ms}.md-tabs__item:nth-child(4) .md-tabs__link{transition-delay:60ms}.md-tabs__item:nth-child(5) .md-tabs__link{transition-delay:80ms}.md-tabs__item:nth-child(6) .md-tabs__link{transition-delay:.1s}.md-tabs__item:nth-child(7) .md-tabs__link{transition-delay:.12s}.md-tabs__item:nth-child(8) .md-tabs__link{transition-delay:.14s}.md-tabs__item:nth-child(9) .md-tabs__link{transition-delay:.16s}.md-tabs__item:nth-child(10) .md-tabs__link{transition-delay:.18s}.md-tabs__item:nth-child(11) .md-tabs__link{transition-delay:.2s}.md-tabs__item:nth-child(12) .md-tabs__link{transition-delay:.22s}.md-tabs__item:nth-child(13) .md-tabs__link{transition-delay:.24s}.md-tabs__item:nth-child(14) .md-tabs__link{transition-delay:.26s}.md-tabs__item:nth-child(15) .md-tabs__link{transition-delay:.28s}.md-tabs__item:nth-child(16) .md-tabs__link{transition-delay:.3s}.md-tabs[data-md-state=hidden] .md-tabs__link{opacity:0;transform:translateY(50%);transition:transform 0ms .1s,opacity .1s}.md-tags{margin-bottom:.75em}[dir=ltr] .md-tag{margin-right:.5em}[dir=rtl] .md-tag{margin-left:.5em}.md-tag{background:var(--md-default-fg-color--lightest);border-radius:.4rem;display:inline-block;font-size:.64rem;font-weight:700;line-height:1.6;margin-bottom:.5em;padding:.3125em .9375em}.md-tag[href]{-webkit-tap-highlight-color:transparent;color:inherit;outline:none;transition:color 125ms,background-color 
125ms}.md-tag[href]:focus,.md-tag[href]:hover{background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}[id]>.md-tag{vertical-align:text-top}@-webkit-keyframes pulse{0%{box-shadow:0 0 0 0 var(--md-default-fg-color--lightest)}75%{box-shadow:0 0 0 .625em transparent}to{box-shadow:0 0 0 0 transparent}}@keyframes pulse{0%{box-shadow:0 0 0 0 var(--md-default-fg-color--lightest)}75%{box-shadow:0 0 0 .625em transparent}to{box-shadow:0 0 0 0 transparent}}:root{--md-tooltip-width:20rem}.md-tooltip{-webkit-backface-visibility:hidden;backface-visibility:hidden;background-color:var(--md-default-bg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color);left:clamp(var(--md-tooltip-0,0rem) + .8rem,var(--md-tooltip-x),(100vw + var(--md-tooltip-0,0rem) + .8rem - var(--md-tooltip-width) - 2 * .8rem));max-height:0;max-width:calc(100vw - 1.6rem);opacity:0;position:absolute;top:var(--md-tooltip-y);transform:translateY(-.4rem);transition:transform 0ms .25s,opacity .25s,max-height 0ms .25s,z-index .25s;width:var(--md-tooltip-width);z-index:0}:focus-within>.md-tooltip{max-height:1000%;opacity:1;transform:translateY(0);transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height .25s,z-index 0ms}.focus-visible>.md-tooltip{outline:var(--md-accent-fg-color) auto}.md-tooltip__inner{font-size:.64rem;padding:.8rem}.md-tooltip__inner.md-typeset>:first-child{margin-top:0}.md-tooltip__inner.md-typeset>:last-child{margin-bottom:0}.md-annotation{outline:none;white-space:normal}[dir=rtl] .md-annotation{direction:rtl}.md-annotation:not([hidden]){display:inline-block;line-height:1.325}.md-annotation:focus-within>*{z-index:2}.md-annotation__inner{font-family:var(--md-text-font-family);top:calc(var(--md-tooltip-y) + 1.2ch)}:not(:focus-within)>.md-annotation__inner{pointer-events:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.md-annotation__index{color:#fff;cursor:pointer;margin:0 1ch;position:relative;transition:z-index .25s;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;z-index:0}.md-annotation__index:after{-webkit-animation:pulse 2s infinite;animation:pulse 2s infinite;background-color:var(--md-default-fg-color--lighter);border-radius:2ch;content:"";height:2.2ch;left:-.126em;margin:0 -.4ch;padding:0 .4ch;position:absolute;transition:color .25s,background-color .25s;width:calc(100% + 1.2ch);width:max(2.2ch,100% + 1.2ch);z-index:-1}@media (prefers-reduced-motion){.md-annotation__index:after{-webkit-animation:none;animation:none}}:-webkit-any(:focus-within,:hover)>.md-annotation__index:after{background-color:var(--md-accent-fg-color)}:-moz-any(:focus-within,:hover)>.md-annotation__index:after{background-color:var(--md-accent-fg-color)}:is(:focus-within,:hover)>.md-annotation__index:after{background-color:var(--md-accent-fg-color)}:focus-within>.md-annotation__index:after{-webkit-animation:none;animation:none;transition:color .25s,background-color .25s}.md-annotation__index [data-md-annotation-id]{display:inline-block;line-height:90%}.md-annotation__index [data-md-annotation-id]:before{content:attr(data-md-annotation-id);display:inline-block;padding-bottom:.1em;transition:transform .4s cubic-bezier(.1,.7,.1,1);vertical-align:.0625em}@media not print{.md-annotation__index [data-md-annotation-id]:before{content:"+"}:focus-within>.md-annotation__index 
[data-md-annotation-id]:before{transform:rotate(45deg)}}:-webkit-any(:focus-within,:hover)>.md-annotation__index{color:var(--md-accent-bg-color)}:-moz-any(:focus-within,:hover)>.md-annotation__index{color:var(--md-accent-bg-color)}:is(:focus-within,:hover)>.md-annotation__index{color:var(--md-accent-bg-color)}:focus-within>.md-annotation__index{-webkit-animation:none;animation:none;transition:none}[dir=ltr] .md-top{margin-left:50%}[dir=rtl] .md-top{margin-right:50%}.md-top{background-color:var(--md-default-bg-color);border-radius:1.6rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color--light);font-size:.7rem;outline:none;padding:.4rem .8rem;position:fixed;top:3.2rem;transform:translate(-50%);transition:color 125ms,background-color 125ms,transform 125ms cubic-bezier(.4,0,.2,1),opacity 125ms;z-index:2}@media print{.md-top{display:none}}[dir=rtl] .md-top{transform:translate(50%)}.md-top[data-md-state=hidden]{opacity:0;pointer-events:none;transform:translate(-50%,.2rem);transition-duration:0ms}[dir=rtl] .md-top[data-md-state=hidden]{transform:translate(50%,.2rem)}.md-top:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-top:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-top:is(:focus,:hover){background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-top svg{display:inline-block;vertical-align:-.5em}@-webkit-keyframes hoverfix{0%{pointer-events:none}}@keyframes hoverfix{0%{pointer-events:none}}:root{--md-version-icon:url('data:image/svg+xml;charset=utf-8,')}.md-version{flex-shrink:0;font-size:.8rem;height:2.4rem}[dir=ltr] .md-version__current{margin-left:1.4rem;margin-right:.4rem}[dir=rtl] .md-version__current{margin-left:.4rem;margin-right:1.4rem}.md-version__current{color:inherit;cursor:pointer;outline:none;position:relative;top:.05rem}[dir=ltr] .md-version__current:after{margin-left:.4rem}[dir=rtl] .md-version__current:after{margin-right:.4rem}.md-version__current:after{background-color:currentcolor;content:"";display:inline-block;height:.6rem;-webkit-mask-image:var(--md-version-icon);mask-image:var(--md-version-icon);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;width:.4rem}.md-version__list{background-color:var(--md-default-bg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color);list-style-type:none;margin:.2rem .8rem;max-height:0;opacity:0;overflow:auto;padding:0;position:absolute;-ms-scroll-snap-type:y mandatory;scroll-snap-type:y mandatory;top:.15rem;transition:max-height 0ms .5s,opacity .25s .25s;z-index:3}.md-version:-webkit-any(:focus-within,:hover) .md-version__list{max-height:10rem;opacity:1;-webkit-transition:max-height 0ms,opacity .25s;transition:max-height 0ms,opacity .25s}.md-version:-moz-any(:focus-within,:hover) .md-version__list{max-height:10rem;opacity:1;-moz-transition:max-height 0ms,opacity .25s;transition:max-height 0ms,opacity .25s}.md-version:is(:focus-within,:hover) .md-version__list{max-height:10rem;opacity:1;transition:max-height 0ms,opacity .25s}@media (pointer:coarse){.md-version:hover .md-version__list{-webkit-animation:hoverfix .25s forwards;animation:hoverfix .25s forwards}.md-version:focus-within .md-version__list{-webkit-animation:none;animation:none}}.md-version__item{line-height:1.8rem}[dir=ltr] .md-version__link{padding-left:.6rem;padding-right:1.2rem}[dir=rtl] 
.md-version__link{padding-left:1.2rem;padding-right:.6rem}.md-version__link{cursor:pointer;display:block;outline:none;scroll-snap-align:start;transition:color .25s,background-color .25s;white-space:nowrap;width:100%}.md-version__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-version__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-version__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-version__link:focus{background-color:var(--md-default-fg-color--lightest)}:root{--md-admonition-icon--note:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--abstract:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--info:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--tip:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--success:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--question:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--warning:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--failure:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--danger:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--bug:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--example:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--quote:url('data:image/svg+xml;charset=utf-8,')}.md-typeset :-webkit-any(.admonition,details){background-color:var(--md-admonition-bg-color);border:0 solid #448aff;border-radius:.1rem;box-shadow:var(--md-shadow-z1);color:var(--md-admonition-fg-color);display:flow-root;font-size:.64rem;margin:1.5625em 0;padding:0 .6rem;page-break-inside:avoid}.md-typeset :-moz-any(.admonition,details){background-color:var(--md-admonition-bg-color);border:0 solid #448aff;border-radius:.1rem;box-shadow:var(--md-shadow-z1);color:var(--md-admonition-fg-color);display:flow-root;font-size:.64rem;margin:1.5625em 0;padding:0 .6rem;page-break-inside:avoid}[dir=ltr] .md-typeset :-webkit-any(.admonition,details){border-left-width:.2rem}[dir=ltr] .md-typeset :-moz-any(.admonition,details){border-left-width:.2rem}[dir=ltr] .md-typeset :is(.admonition,details){border-left-width:.2rem}[dir=rtl] .md-typeset :-webkit-any(.admonition,details){border-right-width:.2rem}[dir=rtl] .md-typeset :-moz-any(.admonition,details){border-right-width:.2rem}[dir=rtl] .md-typeset :is(.admonition,details){border-right-width:.2rem}.md-typeset :is(.admonition,details){background-color:var(--md-admonition-bg-color);border:0 solid #448aff;border-radius:.1rem;box-shadow:var(--md-shadow-z1);color:var(--md-admonition-fg-color);display:flow-root;font-size:.64rem;margin:1.5625em 0;padding:0 .6rem;page-break-inside:avoid}@media print{.md-typeset :-webkit-any(.admonition,details){box-shadow:none}.md-typeset :-moz-any(.admonition,details){box-shadow:none}.md-typeset :is(.admonition,details){box-shadow:none}}.md-typeset :-webkit-any(.admonition,details)>*{box-sizing:border-box}.md-typeset :-moz-any(.admonition,details)>*{box-sizing:border-box}.md-typeset :is(.admonition,details)>*{box-sizing:border-box}.md-typeset :-webkit-any(.admonition,details) :-webkit-any(.admonition,details){margin-bottom:1em;margin-top:1em}.md-typeset :-moz-any(.admonition,details) :-moz-any(.admonition,details){margin-bottom:1em;margin-top:1em}.md-typeset :is(.admonition,details) :is(.admonition,details){margin-bottom:1em;margin-top:1em}.md-typeset :-webkit-any(.admonition,details) .md-typeset__scrollwrap{margin:1em -.6rem}.md-typeset :-moz-any(.admonition,details) 
.md-typeset__scrollwrap{margin:1em -.6rem}.md-typeset :is(.admonition,details) .md-typeset__scrollwrap{margin:1em -.6rem}.md-typeset :-webkit-any(.admonition,details) .md-typeset__table{padding:0 .6rem}.md-typeset :-moz-any(.admonition,details) .md-typeset__table{padding:0 .6rem}.md-typeset :is(.admonition,details) .md-typeset__table{padding:0 .6rem}.md-typeset :-webkit-any(.admonition,details)>.tabbed-set:only-child{margin-top:0}.md-typeset :-moz-any(.admonition,details)>.tabbed-set:only-child{margin-top:0}.md-typeset :is(.admonition,details)>.tabbed-set:only-child{margin-top:0}html .md-typeset :-webkit-any(.admonition,details)>:last-child{margin-bottom:.6rem}html .md-typeset :-moz-any(.admonition,details)>:last-child{margin-bottom:.6rem}html .md-typeset :is(.admonition,details)>:last-child{margin-bottom:.6rem}.md-typeset :-webkit-any(.admonition-title,summary){background-color:rgba(68,138,255,.1);border:0 solid #448aff;font-weight:700;margin-bottom:0;margin-top:0;padding-bottom:.4rem;padding-top:.4rem;position:relative}.md-typeset :-moz-any(.admonition-title,summary){background-color:rgba(68,138,255,.1);border:0 solid #448aff;font-weight:700;margin-bottom:0;margin-top:0;padding-bottom:.4rem;padding-top:.4rem;position:relative}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary){margin-left:-.8rem;margin-right:-.6rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary){margin-left:-.8rem;margin-right:-.6rem}[dir=ltr] .md-typeset :is(.admonition-title,summary){margin-left:-.8rem;margin-right:-.6rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary){margin-left:-.6rem;margin-right:-.8rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary){margin-left:-.6rem;margin-right:-.8rem}[dir=rtl] .md-typeset :is(.admonition-title,summary){margin-left:-.6rem;margin-right:-.8rem}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary){padding-left:2rem;padding-right:.6rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary){padding-left:2rem;padding-right:.6rem}[dir=ltr] .md-typeset :is(.admonition-title,summary){padding-left:2rem;padding-right:.6rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary){padding-left:.6rem;padding-right:2rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary){padding-left:.6rem;padding-right:2rem}[dir=rtl] .md-typeset :is(.admonition-title,summary){padding-left:.6rem;padding-right:2rem}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary){border-left-width:.2rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary){border-left-width:.2rem}[dir=ltr] .md-typeset :is(.admonition-title,summary){border-left-width:.2rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary){border-right-width:.2rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary){border-right-width:.2rem}[dir=rtl] .md-typeset :is(.admonition-title,summary){border-right-width:.2rem}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary){border-top-left-radius:.1rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary){border-top-left-radius:.1rem}[dir=ltr] .md-typeset :is(.admonition-title,summary){border-top-left-radius:.1rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary){border-top-right-radius:.1rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary){border-top-right-radius:.1rem}[dir=rtl] .md-typeset :is(.admonition-title,summary){border-top-right-radius:.1rem}.md-typeset :is(.admonition-title,summary){background-color:rgba(68,138,255,.1);border:0 solid 
#448aff;font-weight:700;margin-bottom:0;margin-top:0;padding-bottom:.4rem;padding-top:.4rem;position:relative}html .md-typeset :-webkit-any(.admonition-title,summary):last-child{margin-bottom:0}html .md-typeset :-moz-any(.admonition-title,summary):last-child{margin-bottom:0}html .md-typeset :is(.admonition-title,summary):last-child{margin-bottom:0}.md-typeset :-webkit-any(.admonition-title,summary):before{background-color:#448aff;content:"";height:1rem;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.625em;width:1rem}.md-typeset :-moz-any(.admonition-title,summary):before{background-color:#448aff;content:"";height:1rem;mask-image:var(--md-admonition-icon--note);mask-repeat:no-repeat;mask-size:contain;position:absolute;top:.625em;width:1rem}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary):before{left:.6rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary):before{left:.6rem}[dir=ltr] .md-typeset :is(.admonition-title,summary):before{left:.6rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary):before{right:.6rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary):before{right:.6rem}[dir=rtl] .md-typeset :is(.admonition-title,summary):before{right:.6rem}.md-typeset :is(.admonition-title,summary):before{background-color:#448aff;content:"";height:1rem;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.625em;width:1rem}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.note){border-color:#448aff}.md-typeset :-moz-any(.admonition,details):-moz-any(.note){border-color:#448aff}.md-typeset :is(.admonition,details):is(.note){border-color:#448aff}.md-typeset :-webkit-any(.note)>:-webkit-any(.admonition-title,summary){background-color:rgba(68,138,255,.1);border-color:#448aff}.md-typeset :-moz-any(.note)>:-moz-any(.admonition-title,summary){background-color:rgba(68,138,255,.1);border-color:#448aff}.md-typeset :is(.note)>:is(.admonition-title,summary){background-color:rgba(68,138,255,.1);border-color:#448aff}.md-typeset :-webkit-any(.note)>:-webkit-any(.admonition-title,summary):before{background-color:#448aff;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.note)>:-moz-any(.admonition-title,summary):before{background-color:#448aff;mask-image:var(--md-admonition-icon--note);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.note)>:is(.admonition-title,summary):before{background-color:#448aff;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.abstract,.summary,.tldr){border-color:#00b0ff}.md-typeset :-moz-any(.admonition,details):-moz-any(.abstract,.summary,.tldr){border-color:#00b0ff}.md-typeset :is(.admonition,details):is(.abstract,.summary,.tldr){border-color:#00b0ff}.md-typeset :-webkit-any(.abstract,.summary,.tldr)>:-webkit-any(.admonition-title,summary){background-color:rgba(0,176,255,.1);border-color:#00b0ff}.md-typeset 
:-moz-any(.abstract,.summary,.tldr)>:-moz-any(.admonition-title,summary){background-color:rgba(0,176,255,.1);border-color:#00b0ff}.md-typeset :is(.abstract,.summary,.tldr)>:is(.admonition-title,summary){background-color:rgba(0,176,255,.1);border-color:#00b0ff}.md-typeset :-webkit-any(.abstract,.summary,.tldr)>:-webkit-any(.admonition-title,summary):before{background-color:#00b0ff;-webkit-mask-image:var(--md-admonition-icon--abstract);mask-image:var(--md-admonition-icon--abstract);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.abstract,.summary,.tldr)>:-moz-any(.admonition-title,summary):before{background-color:#00b0ff;mask-image:var(--md-admonition-icon--abstract);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.abstract,.summary,.tldr)>:is(.admonition-title,summary):before{background-color:#00b0ff;-webkit-mask-image:var(--md-admonition-icon--abstract);mask-image:var(--md-admonition-icon--abstract);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.info,.todo){border-color:#00b8d4}.md-typeset :-moz-any(.admonition,details):-moz-any(.info,.todo){border-color:#00b8d4}.md-typeset :is(.admonition,details):is(.info,.todo){border-color:#00b8d4}.md-typeset :-webkit-any(.info,.todo)>:-webkit-any(.admonition-title,summary){background-color:rgba(0,184,212,.1);border-color:#00b8d4}.md-typeset :-moz-any(.info,.todo)>:-moz-any(.admonition-title,summary){background-color:rgba(0,184,212,.1);border-color:#00b8d4}.md-typeset :is(.info,.todo)>:is(.admonition-title,summary){background-color:rgba(0,184,212,.1);border-color:#00b8d4}.md-typeset :-webkit-any(.info,.todo)>:-webkit-any(.admonition-title,summary):before{background-color:#00b8d4;-webkit-mask-image:var(--md-admonition-icon--info);mask-image:var(--md-admonition-icon--info);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.info,.todo)>:-moz-any(.admonition-title,summary):before{background-color:#00b8d4;mask-image:var(--md-admonition-icon--info);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.info,.todo)>:is(.admonition-title,summary):before{background-color:#00b8d4;-webkit-mask-image:var(--md-admonition-icon--info);mask-image:var(--md-admonition-icon--info);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.tip,.hint,.important){border-color:#00bfa5}.md-typeset :-moz-any(.admonition,details):-moz-any(.tip,.hint,.important){border-color:#00bfa5}.md-typeset :is(.admonition,details):is(.tip,.hint,.important){border-color:#00bfa5}.md-typeset :-webkit-any(.tip,.hint,.important)>:-webkit-any(.admonition-title,summary){background-color:rgba(0,191,165,.1);border-color:#00bfa5}.md-typeset :-moz-any(.tip,.hint,.important)>:-moz-any(.admonition-title,summary){background-color:rgba(0,191,165,.1);border-color:#00bfa5}.md-typeset :is(.tip,.hint,.important)>:is(.admonition-title,summary){background-color:rgba(0,191,165,.1);border-color:#00bfa5}.md-typeset :-webkit-any(.tip,.hint,.important)>:-webkit-any(.admonition-title,summary):before{background-color:#00bfa5;-webkit-mask-image:var(--md-admonition-icon--tip);mask-image:var(--md-admonition-icon--tip);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset 
:-moz-any(.tip,.hint,.important)>:-moz-any(.admonition-title,summary):before{background-color:#00bfa5;mask-image:var(--md-admonition-icon--tip);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.tip,.hint,.important)>:is(.admonition-title,summary):before{background-color:#00bfa5;-webkit-mask-image:var(--md-admonition-icon--tip);mask-image:var(--md-admonition-icon--tip);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.success,.check,.done){border-color:#00c853}.md-typeset :-moz-any(.admonition,details):-moz-any(.success,.check,.done){border-color:#00c853}.md-typeset :is(.admonition,details):is(.success,.check,.done){border-color:#00c853}.md-typeset :-webkit-any(.success,.check,.done)>:-webkit-any(.admonition-title,summary){background-color:rgba(0,200,83,.1);border-color:#00c853}.md-typeset :-moz-any(.success,.check,.done)>:-moz-any(.admonition-title,summary){background-color:rgba(0,200,83,.1);border-color:#00c853}.md-typeset :is(.success,.check,.done)>:is(.admonition-title,summary){background-color:rgba(0,200,83,.1);border-color:#00c853}.md-typeset :-webkit-any(.success,.check,.done)>:-webkit-any(.admonition-title,summary):before{background-color:#00c853;-webkit-mask-image:var(--md-admonition-icon--success);mask-image:var(--md-admonition-icon--success);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.success,.check,.done)>:-moz-any(.admonition-title,summary):before{background-color:#00c853;mask-image:var(--md-admonition-icon--success);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.success,.check,.done)>:is(.admonition-title,summary):before{background-color:#00c853;-webkit-mask-image:var(--md-admonition-icon--success);mask-image:var(--md-admonition-icon--success);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.question,.help,.faq){border-color:#64dd17}.md-typeset :-moz-any(.admonition,details):-moz-any(.question,.help,.faq){border-color:#64dd17}.md-typeset :is(.admonition,details):is(.question,.help,.faq){border-color:#64dd17}.md-typeset :-webkit-any(.question,.help,.faq)>:-webkit-any(.admonition-title,summary){background-color:rgba(100,221,23,.1);border-color:#64dd17}.md-typeset :-moz-any(.question,.help,.faq)>:-moz-any(.admonition-title,summary){background-color:rgba(100,221,23,.1);border-color:#64dd17}.md-typeset :is(.question,.help,.faq)>:is(.admonition-title,summary){background-color:rgba(100,221,23,.1);border-color:#64dd17}.md-typeset :-webkit-any(.question,.help,.faq)>:-webkit-any(.admonition-title,summary):before{background-color:#64dd17;-webkit-mask-image:var(--md-admonition-icon--question);mask-image:var(--md-admonition-icon--question);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.question,.help,.faq)>:-moz-any(.admonition-title,summary):before{background-color:#64dd17;mask-image:var(--md-admonition-icon--question);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.question,.help,.faq)>:is(.admonition-title,summary):before{background-color:#64dd17;-webkit-mask-image:var(--md-admonition-icon--question);mask-image:var(--md-admonition-icon--question);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset 
:-webkit-any(.admonition,details):-webkit-any(.warning,.caution,.attention){border-color:#ff9100}.md-typeset :-moz-any(.admonition,details):-moz-any(.warning,.caution,.attention){border-color:#ff9100}.md-typeset :is(.admonition,details):is(.warning,.caution,.attention){border-color:#ff9100}.md-typeset :-webkit-any(.warning,.caution,.attention)>:-webkit-any(.admonition-title,summary){background-color:rgba(255,145,0,.1);border-color:#ff9100}.md-typeset :-moz-any(.warning,.caution,.attention)>:-moz-any(.admonition-title,summary){background-color:rgba(255,145,0,.1);border-color:#ff9100}.md-typeset :is(.warning,.caution,.attention)>:is(.admonition-title,summary){background-color:rgba(255,145,0,.1);border-color:#ff9100}.md-typeset :-webkit-any(.warning,.caution,.attention)>:-webkit-any(.admonition-title,summary):before{background-color:#ff9100;-webkit-mask-image:var(--md-admonition-icon--warning);mask-image:var(--md-admonition-icon--warning);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.warning,.caution,.attention)>:-moz-any(.admonition-title,summary):before{background-color:#ff9100;mask-image:var(--md-admonition-icon--warning);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.warning,.caution,.attention)>:is(.admonition-title,summary):before{background-color:#ff9100;-webkit-mask-image:var(--md-admonition-icon--warning);mask-image:var(--md-admonition-icon--warning);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.failure,.fail,.missing){border-color:#ff5252}.md-typeset :-moz-any(.admonition,details):-moz-any(.failure,.fail,.missing){border-color:#ff5252}.md-typeset :is(.admonition,details):is(.failure,.fail,.missing){border-color:#ff5252}.md-typeset :-webkit-any(.failure,.fail,.missing)>:-webkit-any(.admonition-title,summary){background-color:rgba(255,82,82,.1);border-color:#ff5252}.md-typeset :-moz-any(.failure,.fail,.missing)>:-moz-any(.admonition-title,summary){background-color:rgba(255,82,82,.1);border-color:#ff5252}.md-typeset :is(.failure,.fail,.missing)>:is(.admonition-title,summary){background-color:rgba(255,82,82,.1);border-color:#ff5252}.md-typeset :-webkit-any(.failure,.fail,.missing)>:-webkit-any(.admonition-title,summary):before{background-color:#ff5252;-webkit-mask-image:var(--md-admonition-icon--failure);mask-image:var(--md-admonition-icon--failure);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.failure,.fail,.missing)>:-moz-any(.admonition-title,summary):before{background-color:#ff5252;mask-image:var(--md-admonition-icon--failure);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.failure,.fail,.missing)>:is(.admonition-title,summary):before{background-color:#ff5252;-webkit-mask-image:var(--md-admonition-icon--failure);mask-image:var(--md-admonition-icon--failure);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.danger,.error){border-color:#ff1744}.md-typeset :-moz-any(.admonition,details):-moz-any(.danger,.error){border-color:#ff1744}.md-typeset :is(.admonition,details):is(.danger,.error){border-color:#ff1744}.md-typeset :-webkit-any(.danger,.error)>:-webkit-any(.admonition-title,summary){background-color:rgba(255,23,68,.1);border-color:#ff1744}.md-typeset 
:-moz-any(.danger,.error)>:-moz-any(.admonition-title,summary){background-color:rgba(255,23,68,.1);border-color:#ff1744}.md-typeset :is(.danger,.error)>:is(.admonition-title,summary){background-color:rgba(255,23,68,.1);border-color:#ff1744}.md-typeset :-webkit-any(.danger,.error)>:-webkit-any(.admonition-title,summary):before{background-color:#ff1744;-webkit-mask-image:var(--md-admonition-icon--danger);mask-image:var(--md-admonition-icon--danger);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.danger,.error)>:-moz-any(.admonition-title,summary):before{background-color:#ff1744;mask-image:var(--md-admonition-icon--danger);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.danger,.error)>:is(.admonition-title,summary):before{background-color:#ff1744;-webkit-mask-image:var(--md-admonition-icon--danger);mask-image:var(--md-admonition-icon--danger);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.bug){border-color:#f50057}.md-typeset :-moz-any(.admonition,details):-moz-any(.bug){border-color:#f50057}.md-typeset :is(.admonition,details):is(.bug){border-color:#f50057}.md-typeset :-webkit-any(.bug)>:-webkit-any(.admonition-title,summary){background-color:rgba(245,0,87,.1);border-color:#f50057}.md-typeset :-moz-any(.bug)>:-moz-any(.admonition-title,summary){background-color:rgba(245,0,87,.1);border-color:#f50057}.md-typeset :is(.bug)>:is(.admonition-title,summary){background-color:rgba(245,0,87,.1);border-color:#f50057}.md-typeset :-webkit-any(.bug)>:-webkit-any(.admonition-title,summary):before{background-color:#f50057;-webkit-mask-image:var(--md-admonition-icon--bug);mask-image:var(--md-admonition-icon--bug);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.bug)>:-moz-any(.admonition-title,summary):before{background-color:#f50057;mask-image:var(--md-admonition-icon--bug);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.bug)>:is(.admonition-title,summary):before{background-color:#f50057;-webkit-mask-image:var(--md-admonition-icon--bug);mask-image:var(--md-admonition-icon--bug);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.example){border-color:#7c4dff}.md-typeset :-moz-any(.admonition,details):-moz-any(.example){border-color:#7c4dff}.md-typeset :is(.admonition,details):is(.example){border-color:#7c4dff}.md-typeset :-webkit-any(.example)>:-webkit-any(.admonition-title,summary){background-color:rgba(124,77,255,.1);border-color:#7c4dff}.md-typeset :-moz-any(.example)>:-moz-any(.admonition-title,summary){background-color:rgba(124,77,255,.1);border-color:#7c4dff}.md-typeset :is(.example)>:is(.admonition-title,summary){background-color:rgba(124,77,255,.1);border-color:#7c4dff}.md-typeset :-webkit-any(.example)>:-webkit-any(.admonition-title,summary):before{background-color:#7c4dff;-webkit-mask-image:var(--md-admonition-icon--example);mask-image:var(--md-admonition-icon--example);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.example)>:-moz-any(.admonition-title,summary):before{background-color:#7c4dff;mask-image:var(--md-admonition-icon--example);mask-repeat:no-repeat;mask-size:contain}.md-typeset 
:is(.example)>:is(.admonition-title,summary):before{background-color:#7c4dff;-webkit-mask-image:var(--md-admonition-icon--example);mask-image:var(--md-admonition-icon--example);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.quote,.cite){border-color:#9e9e9e}.md-typeset :-moz-any(.admonition,details):-moz-any(.quote,.cite){border-color:#9e9e9e}.md-typeset :is(.admonition,details):is(.quote,.cite){border-color:#9e9e9e}.md-typeset :-webkit-any(.quote,.cite)>:-webkit-any(.admonition-title,summary){background-color:hsla(0,0%,62%,.1);border-color:#9e9e9e}.md-typeset :-moz-any(.quote,.cite)>:-moz-any(.admonition-title,summary){background-color:hsla(0,0%,62%,.1);border-color:#9e9e9e}.md-typeset :is(.quote,.cite)>:is(.admonition-title,summary){background-color:hsla(0,0%,62%,.1);border-color:#9e9e9e}.md-typeset :-webkit-any(.quote,.cite)>:-webkit-any(.admonition-title,summary):before{background-color:#9e9e9e;-webkit-mask-image:var(--md-admonition-icon--quote);mask-image:var(--md-admonition-icon--quote);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}.md-typeset :-moz-any(.quote,.cite)>:-moz-any(.admonition-title,summary):before{background-color:#9e9e9e;mask-image:var(--md-admonition-icon--quote);mask-repeat:no-repeat;mask-size:contain}.md-typeset :is(.quote,.cite)>:is(.admonition-title,summary):before{background-color:#9e9e9e;-webkit-mask-image:var(--md-admonition-icon--quote);mask-image:var(--md-admonition-icon--quote);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain}:root{--md-footnotes-icon:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .footnote{color:var(--md-default-fg-color--light);font-size:.64rem}[dir=ltr] .md-typeset .footnote>ol{margin-left:0}[dir=rtl] .md-typeset .footnote>ol{margin-right:0}.md-typeset .footnote>ol>li{transition:color 125ms}.md-typeset .footnote>ol>li:target{color:var(--md-default-fg-color)}.md-typeset .footnote>ol>li:focus-within .footnote-backref{opacity:1;transform:translateX(0);transition:none}.md-typeset .footnote>ol>li:-webkit-any(:hover,:target) .footnote-backref{opacity:1;transform:translateX(0)}.md-typeset .footnote>ol>li:-moz-any(:hover,:target) .footnote-backref{opacity:1;transform:translateX(0)}.md-typeset .footnote>ol>li:is(:hover,:target) .footnote-backref{opacity:1;transform:translateX(0)}.md-typeset .footnote>ol>li>:first-child{margin-top:0}.md-typeset .footnote-ref{font-size:.75em;font-weight:700}html .md-typeset .footnote-ref{outline-offset:.1rem}.md-typeset [id^="fnref:"]:target>.footnote-ref{outline:auto}.md-typeset .footnote-backref{color:var(--md-typeset-a-color);display:inline-block;font-size:0;opacity:0;transform:translateX(.25rem);transition:color .25s,transform .25s .25s,opacity 125ms .25s;vertical-align:text-bottom}@media print{.md-typeset .footnote-backref{color:var(--md-typeset-a-color);opacity:1;transform:translateX(0)}}[dir=rtl] .md-typeset .footnote-backref{transform:translateX(-.25rem)}.md-typeset .footnote-backref:hover{color:var(--md-accent-fg-color)}.md-typeset .footnote-backref:before{background-color:currentcolor;content:"";display:inline-block;height:.8rem;-webkit-mask-image:var(--md-footnotes-icon);mask-image:var(--md-footnotes-icon);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:.8rem}[dir=rtl] .md-typeset .footnote-backref:before svg{transform:scaleX(-1)}[dir=ltr] 
.md-typeset .headerlink{margin-left:.5rem}[dir=rtl] .md-typeset .headerlink{margin-right:.5rem}.md-typeset .headerlink{color:var(--md-default-fg-color--lighter);display:inline-block;opacity:0;transition:color .25s,opacity 125ms}@media print{.md-typeset .headerlink{display:none}}.md-typeset .headerlink:focus,.md-typeset :-webkit-any(:hover,:target)>.headerlink{opacity:1;-webkit-transition:color .25s,opacity 125ms;transition:color .25s,opacity 125ms}.md-typeset .headerlink:focus,.md-typeset :-moz-any(:hover,:target)>.headerlink{opacity:1;-moz-transition:color .25s,opacity 125ms;transition:color .25s,opacity 125ms}.md-typeset .headerlink:focus,.md-typeset :is(:hover,:target)>.headerlink{opacity:1;transition:color .25s,opacity 125ms}.md-typeset .headerlink:-webkit-any(:focus,:hover),.md-typeset :target>.headerlink{color:var(--md-accent-fg-color)}.md-typeset .headerlink:-moz-any(:focus,:hover),.md-typeset :target>.headerlink{color:var(--md-accent-fg-color)}.md-typeset .headerlink:is(:focus,:hover),.md-typeset :target>.headerlink{color:var(--md-accent-fg-color)}.md-typeset :target{--md-scroll-margin:3.6rem;--md-scroll-offset:0rem;scroll-margin-top:calc(var(--md-scroll-margin) - var(--md-scroll-offset))}@media screen and (min-width:76.25em){.md-header--lifted~.md-container .md-typeset :target{--md-scroll-margin:6rem}}.md-typeset :-webkit-any(h1,h2,h3):target{--md-scroll-offset:0.2rem}.md-typeset :-moz-any(h1,h2,h3):target{--md-scroll-offset:0.2rem}.md-typeset :is(h1,h2,h3):target{--md-scroll-offset:0.2rem}.md-typeset h4:target{--md-scroll-offset:0.15rem}.md-typeset div.arithmatex{overflow:auto}@media screen and (max-width:44.9375em){.md-typeset div.arithmatex{margin:0 -.8rem}}.md-typeset div.arithmatex>*{margin-left:auto!important;margin-right:auto!important;padding:0 .8rem;touch-action:auto;width:-webkit-min-content;width:-moz-min-content;width:min-content}.md-typeset div.arithmatex>* mjx-container{margin:0!important}.md-typeset :-webkit-any(del,ins,.comment).critic{-webkit-box-decoration-break:clone;box-decoration-break:clone}.md-typeset :-moz-any(del,ins,.comment).critic{box-decoration-break:clone}.md-typeset :is(del,ins,.comment).critic{-webkit-box-decoration-break:clone;box-decoration-break:clone}.md-typeset del.critic{background-color:var(--md-typeset-del-color)}.md-typeset ins.critic{background-color:var(--md-typeset-ins-color)}.md-typeset .critic.comment{color:var(--md-code-hl-comment-color)}.md-typeset .critic.comment:before{content:"/* "}.md-typeset .critic.comment:after{content:" */"}.md-typeset .critic.block{box-shadow:none;display:block;margin:1em 0;overflow:auto;padding-left:.8rem;padding-right:.8rem}.md-typeset .critic.block>:first-child{margin-top:.5em}.md-typeset .critic.block>:last-child{margin-bottom:.5em}:root{--md-details-icon:url('data:image/svg+xml;charset=utf-8,')}.md-typeset details{display:flow-root;overflow:visible;padding-top:0}.md-typeset details[open]>summary:after{transform:rotate(90deg)}.md-typeset details:not([open]){box-shadow:none;padding-bottom:0}.md-typeset details:not([open])>summary{border-radius:.1rem}[dir=ltr] .md-typeset summary{padding-right:1.8rem}[dir=rtl] .md-typeset summary{padding-left:1.8rem}[dir=ltr] .md-typeset summary{border-top-left-radius:.1rem}[dir=ltr] .md-typeset summary,[dir=rtl] .md-typeset summary{border-top-right-radius:.1rem}[dir=rtl] .md-typeset summary{border-top-left-radius:.1rem}.md-typeset summary{cursor:pointer;display:block;min-height:1rem}.md-typeset 
summary.focus-visible{outline-color:var(--md-accent-fg-color);outline-offset:.2rem}.md-typeset summary:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}[dir=ltr] .md-typeset summary:after{right:.4rem}[dir=rtl] .md-typeset summary:after{left:.4rem}.md-typeset summary:after{background-color:currentcolor;content:"";height:1rem;-webkit-mask-image:var(--md-details-icon);mask-image:var(--md-details-icon);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.625em;transform:rotate(0deg);transition:transform .25s;width:1rem}[dir=rtl] .md-typeset summary:after{transform:rotate(180deg)}.md-typeset summary::marker{display:none}.md-typeset summary::-webkit-details-marker{display:none}.md-typeset :-webkit-any(.emojione,.twemoji,.gemoji){display:inline-flex;height:1.125em;vertical-align:text-top}.md-typeset :-moz-any(.emojione,.twemoji,.gemoji){display:inline-flex;height:1.125em;vertical-align:text-top}.md-typeset :is(.emojione,.twemoji,.gemoji){display:inline-flex;height:1.125em;vertical-align:text-top}.md-typeset :-webkit-any(.emojione,.twemoji,.gemoji) svg{fill:currentcolor;max-height:100%;width:1.125em}.md-typeset :-moz-any(.emojione,.twemoji,.gemoji) svg{fill:currentcolor;max-height:100%;width:1.125em}.md-typeset :is(.emojione,.twemoji,.gemoji) svg{fill:currentcolor;max-height:100%;width:1.125em}.highlight :-webkit-any(.o,.ow){color:var(--md-code-hl-operator-color)}.highlight :-moz-any(.o,.ow){color:var(--md-code-hl-operator-color)}.highlight :is(.o,.ow){color:var(--md-code-hl-operator-color)}.highlight .p{color:var(--md-code-hl-punctuation-color)}.highlight :-webkit-any(.cpf,.l,.s,.sb,.sc,.s2,.si,.s1,.ss){color:var(--md-code-hl-string-color)}.highlight :-moz-any(.cpf,.l,.s,.sb,.sc,.s2,.si,.s1,.ss){color:var(--md-code-hl-string-color)}.highlight :is(.cpf,.l,.s,.sb,.sc,.s2,.si,.s1,.ss){color:var(--md-code-hl-string-color)}.highlight :-webkit-any(.cp,.se,.sh,.sr,.sx){color:var(--md-code-hl-special-color)}.highlight :-moz-any(.cp,.se,.sh,.sr,.sx){color:var(--md-code-hl-special-color)}.highlight :is(.cp,.se,.sh,.sr,.sx){color:var(--md-code-hl-special-color)}.highlight :-webkit-any(.m,.mb,.mf,.mh,.mi,.il,.mo){color:var(--md-code-hl-number-color)}.highlight :-moz-any(.m,.mb,.mf,.mh,.mi,.il,.mo){color:var(--md-code-hl-number-color)}.highlight :is(.m,.mb,.mf,.mh,.mi,.il,.mo){color:var(--md-code-hl-number-color)}.highlight :-webkit-any(.k,.kd,.kn,.kp,.kr,.kt){color:var(--md-code-hl-keyword-color)}.highlight :-moz-any(.k,.kd,.kn,.kp,.kr,.kt){color:var(--md-code-hl-keyword-color)}.highlight :is(.k,.kd,.kn,.kp,.kr,.kt){color:var(--md-code-hl-keyword-color)}.highlight :-webkit-any(.kc,.n){color:var(--md-code-hl-name-color)}.highlight :-moz-any(.kc,.n){color:var(--md-code-hl-name-color)}.highlight :is(.kc,.n){color:var(--md-code-hl-name-color)}.highlight :-webkit-any(.no,.nb,.bp){color:var(--md-code-hl-constant-color)}.highlight :-moz-any(.no,.nb,.bp){color:var(--md-code-hl-constant-color)}.highlight :is(.no,.nb,.bp){color:var(--md-code-hl-constant-color)}.highlight :-webkit-any(.nc,.ne,.nf,.nn){color:var(--md-code-hl-function-color)}.highlight :-moz-any(.nc,.ne,.nf,.nn){color:var(--md-code-hl-function-color)}.highlight :is(.nc,.ne,.nf,.nn){color:var(--md-code-hl-function-color)}.highlight :-webkit-any(.nd,.ni,.nl,.nt){color:var(--md-code-hl-keyword-color)}.highlight :-moz-any(.nd,.ni,.nl,.nt){color:var(--md-code-hl-keyword-color)}.highlight :is(.nd,.ni,.nl,.nt){color:var(--md-code-hl-keyword-color)}.highlight 
:-webkit-any(.c,.cm,.c1,.ch,.cs,.sd){color:var(--md-code-hl-comment-color)}.highlight :-moz-any(.c,.cm,.c1,.ch,.cs,.sd){color:var(--md-code-hl-comment-color)}.highlight :is(.c,.cm,.c1,.ch,.cs,.sd){color:var(--md-code-hl-comment-color)}.highlight :-webkit-any(.na,.nv,.vc,.vg,.vi){color:var(--md-code-hl-variable-color)}.highlight :-moz-any(.na,.nv,.vc,.vg,.vi){color:var(--md-code-hl-variable-color)}.highlight :is(.na,.nv,.vc,.vg,.vi){color:var(--md-code-hl-variable-color)}.highlight :-webkit-any(.ge,.gr,.gh,.go,.gp,.gs,.gu,.gt){color:var(--md-code-hl-generic-color)}.highlight :-moz-any(.ge,.gr,.gh,.go,.gp,.gs,.gu,.gt){color:var(--md-code-hl-generic-color)}.highlight :is(.ge,.gr,.gh,.go,.gp,.gs,.gu,.gt){color:var(--md-code-hl-generic-color)}.highlight :-webkit-any(.gd,.gi){border-radius:.1rem;margin:0 -.125em;padding:0 .125em}.highlight :-moz-any(.gd,.gi){border-radius:.1rem;margin:0 -.125em;padding:0 .125em}.highlight :is(.gd,.gi){border-radius:.1rem;margin:0 -.125em;padding:0 .125em}.highlight .gd{background-color:var(--md-typeset-del-color)}.highlight .gi{background-color:var(--md-typeset-ins-color)}.highlight .hll{background-color:var(--md-code-hl-color);display:block;margin:0 -1.1764705882em;padding:0 1.1764705882em}.highlight span.filename{background-color:var(--md-code-bg-color);border-bottom:.05rem solid var(--md-default-fg-color--lightest);border-top-left-radius:.1rem;border-top-right-radius:.1rem;display:block;font-size:.85em;font-weight:700;margin-top:1em;padding:.6617647059em 1.1764705882em;position:relative}.highlight span.filename+pre{margin-top:0}.highlight span.filename+pre>code{border-top-left-radius:0;border-top-right-radius:0}.highlight [data-linenos]:before{background-color:var(--md-code-bg-color);box-shadow:-.05rem 0 var(--md-default-fg-color--lightest) inset;color:var(--md-default-fg-color--light);content:attr(data-linenos);float:left;left:-1.1764705882em;margin-left:-1.1764705882em;margin-right:1.1764705882em;padding-left:1.1764705882em;position:-webkit-sticky;position:sticky;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;z-index:3}.highlight code a[id]{position:absolute;visibility:hidden}.highlight code[data-md-copying] .hll{display:contents}.highlight code[data-md-copying] .md-annotation{display:none}.highlighttable{display:flow-root}.highlighttable :-webkit-any(tbody,td){display:block;padding:0}.highlighttable :-moz-any(tbody,td){display:block;padding:0}.highlighttable :is(tbody,td){display:block;padding:0}.highlighttable tr{display:flex}.highlighttable pre{margin:0}.highlighttable th.filename{flex-grow:1;padding:0;text-align:left}.highlighttable .linenos{background-color:var(--md-code-bg-color);border-bottom-left-radius:.1rem;border-top-left-radius:.1rem;font-size:.85em;padding:.7720588235em 0 .7720588235em 1.1764705882em;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.highlighttable .linenodiv{box-shadow:-.05rem 0 var(--md-default-fg-color--lightest) inset;padding-right:.5882352941em}.highlighttable .linenodiv pre{color:var(--md-default-fg-color--light);text-align:right}.highlighttable .code{flex:1;min-width:0}.linenodiv a{color:inherit}.md-typeset .highlighttable{direction:ltr;margin:1em 0}.md-typeset .highlighttable code{border-bottom-left-radius:0;border-top-left-radius:0}.md-typeset :-webkit-any(.highlight,.highlighttable)+.result{border:.05rem solid 
var(--md-code-bg-color);border-bottom-left-radius:.1rem;border-bottom-right-radius:.1rem;border-top-width:.1rem;margin-top:-1.125em;overflow:visible;padding:0 1em}.md-typeset :-moz-any(.highlight,.highlighttable)+.result{border:.05rem solid var(--md-code-bg-color);border-bottom-left-radius:.1rem;border-bottom-right-radius:.1rem;border-top-width:.1rem;margin-top:-1.125em;overflow:visible;padding:0 1em}.md-typeset :is(.highlight,.highlighttable)+.result{border:.05rem solid var(--md-code-bg-color);border-bottom-left-radius:.1rem;border-bottom-right-radius:.1rem;border-top-width:.1rem;margin-top:-1.125em;overflow:visible;padding:0 1em}.md-typeset :-webkit-any(.highlight,.highlighttable)+.result:after{clear:both;content:"";display:block}.md-typeset :-moz-any(.highlight,.highlighttable)+.result:after{clear:both;content:"";display:block}.md-typeset :is(.highlight,.highlighttable)+.result:after{clear:both;content:"";display:block}@media screen and (max-width:44.9375em){.md-content__inner>.highlight{margin:1em -.8rem}.md-content__inner>.highlight .hll{margin:0 -.8rem;padding:0 .8rem}.md-content__inner>.highlight code{border-radius:0}.md-content__inner>.highlight+.result{border-left-width:0;border-radius:0;border-right-width:0;margin-left:-.8rem;margin-right:-.8rem}.md-content__inner>.highlighttable{border-radius:0;margin:1em -.8rem}.md-content__inner>.highlighttable .hll{margin:0 -.8rem;padding:0 .8rem}}.md-typeset .keys kbd:-webkit-any(:before,:after){-moz-osx-font-smoothing:initial;-webkit-font-smoothing:initial;color:inherit;margin:0;position:relative}.md-typeset .keys kbd:-moz-any(:before,:after){-moz-osx-font-smoothing:initial;-webkit-font-smoothing:initial;color:inherit;margin:0;position:relative}.md-typeset .keys kbd:is(:before,:after){-moz-osx-font-smoothing:initial;-webkit-font-smoothing:initial;color:inherit;margin:0;position:relative}.md-typeset .keys span{color:var(--md-default-fg-color--light);padding:0 .2em}.md-typeset .keys .key-alt:before,.md-typeset .keys .key-left-alt:before,.md-typeset .keys .key-right-alt:before{content:"⎇";padding-right:.4em}.md-typeset .keys .key-command:before,.md-typeset .keys .key-left-command:before,.md-typeset .keys .key-right-command:before{content:"⌘";padding-right:.4em}.md-typeset .keys .key-control:before,.md-typeset .keys .key-left-control:before,.md-typeset .keys .key-right-control:before{content:"⌃";padding-right:.4em}.md-typeset .keys .key-left-meta:before,.md-typeset .keys .key-meta:before,.md-typeset .keys .key-right-meta:before{content:"◆";padding-right:.4em}.md-typeset .keys .key-left-option:before,.md-typeset .keys .key-option:before,.md-typeset .keys .key-right-option:before{content:"⌥";padding-right:.4em}.md-typeset .keys .key-left-shift:before,.md-typeset .keys .key-right-shift:before,.md-typeset .keys .key-shift:before{content:"⇧";padding-right:.4em}.md-typeset .keys .key-left-super:before,.md-typeset .keys .key-right-super:before,.md-typeset .keys .key-super:before{content:"❖";padding-right:.4em}.md-typeset .keys .key-left-windows:before,.md-typeset .keys .key-right-windows:before,.md-typeset .keys .key-windows:before{content:"⊞";padding-right:.4em}.md-typeset .keys .key-arrow-down:before{content:"↓";padding-right:.4em}.md-typeset .keys .key-arrow-left:before{content:"←";padding-right:.4em}.md-typeset .keys .key-arrow-right:before{content:"→";padding-right:.4em}.md-typeset .keys .key-arrow-up:before{content:"↑";padding-right:.4em}.md-typeset .keys .key-backspace:before{content:"⌫";padding-right:.4em}.md-typeset .keys 
.key-backtab:before{content:"⇤";padding-right:.4em}.md-typeset .keys .key-caps-lock:before{content:"⇪";padding-right:.4em}.md-typeset .keys .key-clear:before{content:"⌧";padding-right:.4em}.md-typeset .keys .key-context-menu:before{content:"☰";padding-right:.4em}.md-typeset .keys .key-delete:before{content:"⌦";padding-right:.4em}.md-typeset .keys .key-eject:before{content:"⏏";padding-right:.4em}.md-typeset .keys .key-end:before{content:"⤓";padding-right:.4em}.md-typeset .keys .key-escape:before{content:"⎋";padding-right:.4em}.md-typeset .keys .key-home:before{content:"⤒";padding-right:.4em}.md-typeset .keys .key-insert:before{content:"⎀";padding-right:.4em}.md-typeset .keys .key-page-down:before{content:"⇟";padding-right:.4em}.md-typeset .keys .key-page-up:before{content:"⇞";padding-right:.4em}.md-typeset .keys .key-print-screen:before{content:"⎙";padding-right:.4em}.md-typeset .keys .key-tab:after{content:"⇥";padding-left:.4em}.md-typeset .keys .key-num-enter:after{content:"⌤";padding-left:.4em}.md-typeset .keys .key-enter:after{content:"⏎";padding-left:.4em}.md-typeset .tabbed-set{border-radius:.1rem;display:flex;flex-flow:column wrap;margin:1em 0;position:relative}.md-typeset .tabbed-set>input{height:0;opacity:0;position:absolute;width:0}.md-typeset .tabbed-set>input:target{--md-scroll-offset:0.625em}.md-typeset .tabbed-labels{-ms-overflow-style:none;box-shadow:0 -.05rem var(--md-default-fg-color--lightest) inset;display:flex;max-width:100%;overflow:auto;-ms-scroll-snap-type:x proximity;scroll-snap-type:x proximity;scrollbar-width:none}@media print{.md-typeset .tabbed-labels{display:contents}}@media screen{.js .md-typeset .tabbed-labels{position:relative}.js .md-typeset .tabbed-labels:before{background:var(--md-accent-fg-color);bottom:0;content:"";display:block;height:2px;left:0;position:absolute;transform:translateX(var(--md-indicator-x));transition:width 225ms,transform .25s;transition-timing-function:cubic-bezier(.4,0,.2,1);width:var(--md-indicator-width)}}.md-typeset .tabbed-labels::-webkit-scrollbar{display:none}.md-typeset .tabbed-labels>label{border-bottom:.1rem solid transparent;border-radius:.1rem .1rem 0 0;color:var(--md-default-fg-color--light);cursor:pointer;flex-shrink:0;font-size:.64rem;font-weight:700;padding:.78125em 1.25em .625em;scroll-snap-align:start;transition:background-color .25s,color .25s;white-space:nowrap;width:auto}@media print{.md-typeset .tabbed-labels>label:first-child{order:1}.md-typeset .tabbed-labels>label:nth-child(2){order:2}.md-typeset .tabbed-labels>label:nth-child(3){order:3}.md-typeset .tabbed-labels>label:nth-child(4){order:4}.md-typeset .tabbed-labels>label:nth-child(5){order:5}.md-typeset .tabbed-labels>label:nth-child(6){order:6}.md-typeset .tabbed-labels>label:nth-child(7){order:7}.md-typeset .tabbed-labels>label:nth-child(8){order:8}.md-typeset .tabbed-labels>label:nth-child(9){order:9}.md-typeset .tabbed-labels>label:nth-child(10){order:10}.md-typeset .tabbed-labels>label:nth-child(11){order:11}.md-typeset .tabbed-labels>label:nth-child(12){order:12}.md-typeset .tabbed-labels>label:nth-child(13){order:13}.md-typeset .tabbed-labels>label:nth-child(14){order:14}.md-typeset .tabbed-labels>label:nth-child(15){order:15}.md-typeset .tabbed-labels>label:nth-child(16){order:16}.md-typeset .tabbed-labels>label:nth-child(17){order:17}.md-typeset .tabbed-labels>label:nth-child(18){order:18}.md-typeset .tabbed-labels>label:nth-child(19){order:19}.md-typeset .tabbed-labels>label:nth-child(20){order:20}}.md-typeset 
.tabbed-labels>label:hover{color:var(--md-accent-fg-color)}.md-typeset .tabbed-content{width:100%}@media print{.md-typeset .tabbed-content{display:contents}}.md-typeset .tabbed-block{display:none}@media print{.md-typeset .tabbed-block{display:block}.md-typeset .tabbed-block:first-child{order:1}.md-typeset .tabbed-block:nth-child(2){order:2}.md-typeset .tabbed-block:nth-child(3){order:3}.md-typeset .tabbed-block:nth-child(4){order:4}.md-typeset .tabbed-block:nth-child(5){order:5}.md-typeset .tabbed-block:nth-child(6){order:6}.md-typeset .tabbed-block:nth-child(7){order:7}.md-typeset .tabbed-block:nth-child(8){order:8}.md-typeset .tabbed-block:nth-child(9){order:9}.md-typeset .tabbed-block:nth-child(10){order:10}.md-typeset .tabbed-block:nth-child(11){order:11}.md-typeset .tabbed-block:nth-child(12){order:12}.md-typeset .tabbed-block:nth-child(13){order:13}.md-typeset .tabbed-block:nth-child(14){order:14}.md-typeset .tabbed-block:nth-child(15){order:15}.md-typeset .tabbed-block:nth-child(16){order:16}.md-typeset .tabbed-block:nth-child(17){order:17}.md-typeset .tabbed-block:nth-child(18){order:18}.md-typeset .tabbed-block:nth-child(19){order:19}.md-typeset .tabbed-block:nth-child(20){order:20}}.md-typeset .tabbed-block>.highlight:first-child>pre:first-child,.md-typeset .tabbed-block>.highlighttable:first-child,.md-typeset .tabbed-block>pre:first-child{margin:0}[dir=ltr] .md-typeset .tabbed-block>.highlight:first-child>pre:first-child>code,[dir=ltr] .md-typeset .tabbed-block>.highlighttable:first-child>code,[dir=ltr] .md-typeset .tabbed-block>pre:first-child>code{border-top-left-radius:0}[dir=ltr] .md-typeset .tabbed-block>.highlight:first-child>pre:first-child>code,[dir=ltr] .md-typeset .tabbed-block>.highlighttable:first-child>code,[dir=ltr] .md-typeset .tabbed-block>pre:first-child>code,[dir=rtl] .md-typeset .tabbed-block>.highlight:first-child>pre:first-child>code,[dir=rtl] .md-typeset .tabbed-block>.highlighttable:first-child>code,[dir=rtl] .md-typeset .tabbed-block>pre:first-child>code{border-top-right-radius:0}[dir=ltr] .md-typeset .tabbed-block>.highlighttable:first-child .linenos,[dir=rtl] .md-typeset .tabbed-block>.highlight:first-child>pre:first-child>code,[dir=rtl] .md-typeset .tabbed-block>.highlighttable:first-child>code,[dir=rtl] .md-typeset .tabbed-block>pre:first-child>code{border-top-left-radius:0}[dir=ltr] .md-typeset .tabbed-block>.highlighttable:first-child .linenos,[dir=rtl] .md-typeset .tabbed-block>.highlighttable:first-child .linenos{border-top-right-radius:0}[dir=rtl] .md-typeset .tabbed-block>.highlighttable:first-child .linenos{border-top-left-radius:0}.md-typeset .tabbed-block>.tabbed-set{margin:0}@media screen and (max-width:44.9375em){[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels{padding-left:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels{padding-right:.8rem}.md-content__inner>.tabbed-set .tabbed-labels{margin:0 -.8rem;max-width:100vw;scroll-padding-inline-start:.8rem}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels:after{padding-right:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels:after{padding-left:.8rem}.md-content__inner>.tabbed-set .tabbed-labels:after{content:""}}@media screen{.md-typeset .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child,.md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10),.md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11),.md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12),.md-typeset 
.tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13),.md-typeset .tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14),.md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15),.md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16),.md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17),.md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18),.md-typeset .tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19),.md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2),.md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20),.md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3),.md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4),.md-typeset .tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5),.md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6),.md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7),.md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8),.md-typeset .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9){color:var(--md-accent-fg-color)}.md-typeset .no-js .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child,.md-typeset .no-js .tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10),.md-typeset .no-js .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11),.md-typeset .no-js .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12),.md-typeset .no-js .tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13),.md-typeset .no-js .tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14),.md-typeset .no-js .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15),.md-typeset .no-js .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16),.md-typeset .no-js .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17),.md-typeset .no-js .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18),.md-typeset .no-js .tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19),.md-typeset .no-js .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2),.md-typeset .no-js .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20),.md-typeset .no-js .tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3),.md-typeset .no-js .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4),.md-typeset .no-js .tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5),.md-typeset .no-js .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6),.md-typeset .no-js .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7),.md-typeset .no-js .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8),.md-typeset .no-js .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9),.no-js .md-typeset .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child,.no-js .md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10),.no-js .md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11),.no-js .md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12),.no-js .md-typeset .tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13),.no-js .md-typeset 
.tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14),.no-js .md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15),.no-js .md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16),.no-js .md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17),.no-js .md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18),.no-js .md-typeset .tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19),.no-js .md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2),.no-js .md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20),.no-js .md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3),.no-js .md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4),.no-js .md-typeset .tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5),.no-js .md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6),.no-js .md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7),.no-js .md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8),.no-js .md-typeset .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9){border-color:var(--md-accent-fg-color)}}.md-typeset .tabbed-set>input:first-child.focus-visible~.tabbed-labels>:first-child,.md-typeset .tabbed-set>input:nth-child(10).focus-visible~.tabbed-labels>:nth-child(10),.md-typeset .tabbed-set>input:nth-child(11).focus-visible~.tabbed-labels>:nth-child(11),.md-typeset .tabbed-set>input:nth-child(12).focus-visible~.tabbed-labels>:nth-child(12),.md-typeset .tabbed-set>input:nth-child(13).focus-visible~.tabbed-labels>:nth-child(13),.md-typeset .tabbed-set>input:nth-child(14).focus-visible~.tabbed-labels>:nth-child(14),.md-typeset .tabbed-set>input:nth-child(15).focus-visible~.tabbed-labels>:nth-child(15),.md-typeset .tabbed-set>input:nth-child(16).focus-visible~.tabbed-labels>:nth-child(16),.md-typeset .tabbed-set>input:nth-child(17).focus-visible~.tabbed-labels>:nth-child(17),.md-typeset .tabbed-set>input:nth-child(18).focus-visible~.tabbed-labels>:nth-child(18),.md-typeset .tabbed-set>input:nth-child(19).focus-visible~.tabbed-labels>:nth-child(19),.md-typeset .tabbed-set>input:nth-child(2).focus-visible~.tabbed-labels>:nth-child(2),.md-typeset .tabbed-set>input:nth-child(20).focus-visible~.tabbed-labels>:nth-child(20),.md-typeset .tabbed-set>input:nth-child(3).focus-visible~.tabbed-labels>:nth-child(3),.md-typeset .tabbed-set>input:nth-child(4).focus-visible~.tabbed-labels>:nth-child(4),.md-typeset .tabbed-set>input:nth-child(5).focus-visible~.tabbed-labels>:nth-child(5),.md-typeset .tabbed-set>input:nth-child(6).focus-visible~.tabbed-labels>:nth-child(6),.md-typeset .tabbed-set>input:nth-child(7).focus-visible~.tabbed-labels>:nth-child(7),.md-typeset .tabbed-set>input:nth-child(8).focus-visible~.tabbed-labels>:nth-child(8),.md-typeset .tabbed-set>input:nth-child(9).focus-visible~.tabbed-labels>:nth-child(9){background-color:var(--md-accent-fg-color--transparent)}.md-typeset .tabbed-set>input:first-child:checked~.tabbed-content>:first-child,.md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-content>:nth-child(10),.md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-content>:nth-child(11),.md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-content>:nth-child(12),.md-typeset 
.tabbed-set>input:nth-child(13):checked~.tabbed-content>:nth-child(13),.md-typeset .tabbed-set>input:nth-child(14):checked~.tabbed-content>:nth-child(14),.md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-content>:nth-child(15),.md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-content>:nth-child(16),.md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-content>:nth-child(17),.md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-content>:nth-child(18),.md-typeset .tabbed-set>input:nth-child(19):checked~.tabbed-content>:nth-child(19),.md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-content>:nth-child(2),.md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-content>:nth-child(20),.md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-content>:nth-child(3),.md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-content>:nth-child(4),.md-typeset .tabbed-set>input:nth-child(5):checked~.tabbed-content>:nth-child(5),.md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-content>:nth-child(6),.md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-content>:nth-child(7),.md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-content>:nth-child(8),.md-typeset .tabbed-set>input:nth-child(9):checked~.tabbed-content>:nth-child(9){display:block}:root{--md-tasklist-icon:url('data:image/svg+xml;charset=utf-8,');--md-tasklist-icon--checked:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .task-list-item{list-style-type:none;position:relative}[dir=ltr] .md-typeset .task-list-item [type=checkbox]{left:-2em}[dir=rtl] .md-typeset .task-list-item [type=checkbox]{right:-2em}.md-typeset .task-list-item [type=checkbox]{position:absolute;top:.45em}.md-typeset .task-list-control [type=checkbox]{opacity:0;z-index:-1}[dir=ltr] .md-typeset .task-list-indicator:before{left:-1.5em}[dir=rtl] .md-typeset .task-list-indicator:before{right:-1.5em}.md-typeset .task-list-indicator:before{background-color:var(--md-default-fg-color--lightest);content:"";height:1.25em;-webkit-mask-image:var(--md-tasklist-icon);mask-image:var(--md-tasklist-icon);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.15em;width:1.25em}.md-typeset [type=checkbox]:checked+.task-list-indicator:before{background-color:#00e676;-webkit-mask-image:var(--md-tasklist-icon--checked);mask-image:var(--md-tasklist-icon--checked)}:root>*{--md-mermaid-font-family:var(--md-text-font-family),sans-serif;--md-mermaid-edge-color:var(--md-code-fg-color);--md-mermaid-node-bg-color:var(--md-accent-fg-color--transparent);--md-mermaid-node-fg-color:var(--md-accent-fg-color);--md-mermaid-label-bg-color:var(--md-default-bg-color);--md-mermaid-label-fg-color:var(--md-code-fg-color)}.mermaid{line-height:normal;margin:1em 0}@media screen and (min-width:45em){[dir=ltr] .md-typeset .inline{margin-right:.8rem}[dir=rtl] .md-typeset .inline{margin-left:.8rem}.md-typeset .inline{float:left;margin-bottom:.8rem;margin-top:0;width:11.7rem}[dir=rtl] .md-typeset .inline{float:right}[dir=ltr] .md-typeset .inline.end{margin-left:.8rem;margin-right:0}[dir=rtl] .md-typeset .inline.end{margin-left:0;margin-right:.8rem}.md-typeset .inline.end{float:right}[dir=rtl] .md-typeset .inline.end{float:left}} \ No newline at end of file diff --git a/assets/stylesheets/main.644de097.min.css.map b/assets/stylesheets/main.644de097.min.css.map new file mode 100644 index 00000000..1a08f82d --- /dev/null +++ b/assets/stylesheets/main.644de097.min.css.map @@ -0,0 +1 @@ 
+{"version":3,"sources":["src/assets/stylesheets/main/extensions/pymdownx/_keys.scss","../../../src/assets/stylesheets/main.scss","src/assets/stylesheets/main/_resets.scss","src/assets/stylesheets/main/_colors.scss","src/assets/stylesheets/main/_icons.scss","src/assets/stylesheets/main/_typeset.scss","src/assets/stylesheets/utilities/_break.scss","src/assets/stylesheets/main/layout/_banner.scss","src/assets/stylesheets/main/layout/_base.scss","src/assets/stylesheets/main/layout/_clipboard.scss","src/assets/stylesheets/main/layout/_content.scss","src/assets/stylesheets/main/layout/_dialog.scss","src/assets/stylesheets/main/layout/_footer.scss","src/assets/stylesheets/main/layout/_form.scss","src/assets/stylesheets/main/layout/_header.scss","src/assets/stylesheets/main/layout/_nav.scss","src/assets/stylesheets/main/layout/_search.scss","src/assets/stylesheets/main/layout/_select.scss","src/assets/stylesheets/main/layout/_sidebar.scss","src/assets/stylesheets/main/layout/_source.scss","src/assets/stylesheets/main/layout/_tabs.scss","src/assets/stylesheets/main/layout/_tag.scss","src/assets/stylesheets/main/layout/_tooltip.scss","src/assets/stylesheets/main/layout/_top.scss","src/assets/stylesheets/main/layout/_version.scss","src/assets/stylesheets/main/extensions/markdown/_admonition.scss","node_modules/material-design-color/material-color.scss","src/assets/stylesheets/main/extensions/markdown/_footnotes.scss","src/assets/stylesheets/main/extensions/markdown/_toc.scss","src/assets/stylesheets/main/extensions/pymdownx/_arithmatex.scss","src/assets/stylesheets/main/extensions/pymdownx/_critic.scss","src/assets/stylesheets/main/extensions/pymdownx/_details.scss","src/assets/stylesheets/main/extensions/pymdownx/_emoji.scss","src/assets/stylesheets/main/extensions/pymdownx/_highlight.scss","src/assets/stylesheets/main/extensions/pymdownx/_tabbed.scss","src/assets/stylesheets/main/extensions/pymdownx/_tasklist.scss","src/assets/stylesheets/main/integrations/_mermaid.scss","src/assets/stylesheets/main/_modifiers.scss"],"names":[],"mappings":"AAgGM,gBCwwGN,CC50GA,KAEE,6BAAA,CAAA,0BAAA,CAAA,yBAAA,CAAA,qBAAA,CADA,qBDzBF,CC8BA,iBAGE,kBD3BF,CC8BE,gCANF,iBAOI,yBDzBF,CACF,CC6BA,KACE,QD1BF,CC8BA,qBAIE,uCD3BF,CC+BA,EACE,aAAA,CACA,oBD5BF,CCgCA,GAME,QAAA,CAJA,kBAAA,CADA,aAAA,CAEA,aAAA,CAEA,gBAAA,CADA,SD3BF,CCiCA,MACE,aD9BF,CCkCA,QAEE,eD/BF,CCmCA,IACE,iBDhCF,CCoCA,MACE,uBAAA,CACA,gBDjCF,CCqCA,MAEE,eAAA,CACA,kBDlCF,CCsCA,OAKE,sBAAA,CACA,QAAA,CAFA,mBAAA,CADA,iBAAA,CAFA,QAAA,CACA,SD/BF,CCuCA,MACE,QAAA,CACA,YDpCF,CErDA,MAGE,qCAAA,CACA,4CAAA,CACA,8CAAA,CACA,+CAAA,CACA,0BAAA,CACA,+CAAA,CACA,iDAAA,CACA,mDAAA,CAGA,6BAAA,CACA,oCAAA,CACA,mCAAA,CACA,0BAAA,CACA,+CAAA,CAGA,4BAAA,CACA,qDAAA,CACA,yBAAA,CACA,8CAAA,CA0DA,yEAAA,CAKA,yEAAA,CAKA,yEFTF,CExDE,QAGE,0BAAA,CACA,0BAAA,CAGA,qCAAA,CACA,iCAAA,CACA,kCAAA,CACA,mCAAA,CACA,mCAAA,CACA,kCAAA,CACA,iCAAA,CACA,+CAAA,CACA,6DAAA,CACA,gEAAA,CACA,4DAAA,CACA,4DAAA,CACA,6DAAA,CAGA,6CAAA,CAGA,+CAAA,CAGA,0CAAA,CAGA,0CAAA,CACA,2CAAA,CAGA,8BAAA,CACA,kCAAA,CACA,qCAAA,CAGA,wCAAA,CAGA,mDAAA,CACA,mDAAA,CAGA,yBAAA,CACA,8CAAA,CACA,gDAAA,CACA,oCAAA,CACA,0CFsCJ,CGhHE,aAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,YHqHJ,CI1HA,KACE,kCAAA,CACA,iCAAA,CAGA,uGAAA,CAKA,mFJ2HF,CIrHA,WAGE,mCAAA,CACA,sCJwHF,CIpHA,wBANE,6BJkIF,CI5HA,aAIE,4BAAA,CACA,sCJuHF,CI/GA,MACE,0NAAA,CACA,mNAAA,CACA,oNJkHF,CI3GA,YAGE,gCAAA,CAAA,kBAAA,CAFA,eAAA,CACA,eJ+GF,CI1GE,aAPF,YAQI,gBJ6GF,CACF,CI1GE,uGAME,iBAAA,CAAA,cJ4GJ,CIxGE,eAEE,uCAAA,CAEA,aAAA,CACA,eAAA,CAJA,iBJ+GJ,CItGE,8BAPE,eAAA,CAGA,qBJiHJ,CI7GE,eAGE,kBAAA,CACA,eAAA,CAHA,oBJ4GJ,CIpGE,eA
GE,gBAAA,CADA,eAAA,CAGA,qBAAA,CADA,eAAA,CAHA,mBJ0GJ,CIlGE,kBACE,eJoGJ,CIhGE,eAEE,eAAA,CACA,qBAAA,CAFA,YJoGJ,CI9FE,8BAGE,uCAAA,CAEA,cAAA,CADA,eAAA,CAEA,qBAAA,CAJA,eJoGJ,CI5FE,eACE,wBJ8FJ,CI1FE,eAGE,+DAAA,CAFA,iBAAA,CACA,cJ6FJ,CIxFE,cACE,+BAAA,CACA,qBJ0FJ,CIvFI,mCAEE,sBJwFN,CIpFI,wCAEE,+BJqFN,CIlFM,kDACE,uDJoFR,CI/EI,mBACE,kBJiFN,CI7EI,4BACE,uCAAA,CACA,oBJ+EN,CI1EE,iDAGE,6BAAA,CACA,aJ4EJ,CIzEI,aAPF,iDAQI,oBJ8EJ,CACF,CI1EE,iBAIE,wCAAA,CACA,mBAAA,CACA,kCAAA,CAAA,0BAAA,CAJA,eAAA,CADA,uBAAA,CAMA,iCAAA,CAJA,qBJgFJ,CIzEI,qCAEE,uCAAA,CADA,YJ4EN,CItEE,gBAEE,iBAAA,CACA,eAAA,CAFA,iBJ0EJ,CIrEI,qBAQE,kCAAA,CAAA,0BAAA,CADA,eAAA,CANA,aAAA,CACA,QAAA,CAIA,uCAAA,CAFA,aAAA,CADA,oCAAA,CAQA,+DAAA,CADA,oBAAA,CADA,iBAAA,CAJA,iBJ6EN,CIpEM,2BACE,qDJsER,CIlEM,wCAEE,YAAA,CADA,WJqER,CIhEM,8CACE,oDJkER,CI/DQ,oDACE,0CJiEV,CI1DE,gBAOE,4CAAA,CACA,mBAAA,CACA,mKACE,CAPF,gCAAA,CAFA,oBAAA,CAGA,eAAA,CAFA,uBAAA,CAGA,uBAAA,CACA,qBJ+DJ,CIrDE,iBAGE,6CAAA,CACA,kCAAA,CAAA,0BAAA,CAHA,aAAA,CACA,qBJyDJ,CInDE,iBAEE,6DAAA,CACA,WAAA,CAFA,oBJuDJ,CIlDI,oBANF,iBAOI,iBJqDJ,CIlDI,yDAWE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CAKA,mBAAA,CAXA,oBAAA,CAOA,eAAA,CAHA,cAAA,CADA,aAAA,CADA,6BAAA,CAAA,qBAAA,CAGA,mBAAA,CAPA,iBAAA,CAGA,UJ8DN,CIlEI,sDAWE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CAKA,mBAAA,CAXA,oBAAA,CAOA,eAAA,CAHA,cAAA,CADA,aAAA,CADA,0BAAA,CAAA,qBAAA,CAGA,mBAAA,CAPA,iBAAA,CAGA,UJ8DN,CIlEI,mEAEE,MJgEN,CIlEI,gEAEE,MJgEN,CIlEI,0DAEE,MJgEN,CIlEI,mEAEE,OJgEN,CIlEI,gEAEE,OJgEN,CIlEI,0DAEE,OJgEN,CIlEI,gDAWE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CAKA,mBAAA,CAXA,oBAAA,CAOA,eAAA,CAHA,cAAA,CADA,aAAA,CADA,6BAAA,CAAA,0BAAA,CAAA,qBAAA,CAGA,mBAAA,CAPA,iBAAA,CAGA,UJ8DN,CACF,CI/CE,kBACE,WJiDJ,CI7CE,oDAEE,qBJ+CJ,CIjDE,oDAEE,sBJ+CJ,CI3CE,iCACE,kBJgDJ,CIjDE,iCACE,mBJgDJ,CIjDE,iCAIE,2DJ6CJ,CIjDE,iCAIE,4DJ6CJ,CIjDE,uBAGE,uCAAA,CADA,aAAA,CAAA,cJ+CJ,CIzCE,eACE,oBJ2CJ,CIvCE,kDAEE,kBJ0CJ,CI5CE,kDAEE,mBJ0CJ,CI5CE,8BAGE,SJyCJ,CItCI,0DACE,iBJyCN,CIrCI,oCACE,2BJwCN,CIrCM,0CACE,2BJwCR,CInCI,wDAEE,kBJsCN,CIxCI,wDAEE,mBJsCN,CIxCI,oCACE,kBJuCN,CInCM,kGAEE,aJuCR,CInCM,0DACE,eJsCR,CIlCM,4EACE,kBAAA,CAAA,eJsCR,CIvCM,sEACE,kBAAA,CAAA,eJsCR,CIvCM,gGAEE,kBJqCR,CIvCM,0FAEE,kBJqCR,CIvCM,8EAEE,kBJqCR,CIvCM,gGAEE,mBJqCR,CIvCM,0FAEE,mBJqCR,CIvCM,8EAEE,mBJqCR,CIvCM,0DACE,kBAAA,CAAA,eJsCR,CI/BE,yBAEE,mBJiCJ,CInCE,yBAEE,oBJiCJ,CInCE,eACE,mBAAA,CAAA,cJkCJ,CI7BE,gCAGE,WAAA,CADA,cJgCJ,CI5BI,wDAEE,oBJ+BN,CI3BI,0DAEE,oBJ8BN,CI1BI,oEACE,YJ6BN,CIxBE,mCACE,YJ0BJ,CItBE,mBACE,iBAAA,CAGA,eAAA,CADA,cAAA,CAEA,iBAAA,CAHA,yBAAA,CAAA,sBAAA,CAAA,iBJ2BJ,CIrBI,uBACE,aJuBN,CIlBE,uBAGE,iBAAA,CADA,eAAA,CADA,eJsBJ,CIhBE,mBACE,cJkBJ,CIdE,+BAKE,2CAAA,CACA,iDAAA,CACA,mBAAA,CANA,oBAAA,CAGA,gBAAA,CAFA,cAAA,CACA,aAAA,CAKA,iBJgBJ,CIbI,aAXF,+BAYI,aJgBJ,CACF,CIXI,iCACE,gBJaN,CINM,gEACE,YJQR,CITM,6DACE,YJQR,CITM,uDACE,YJQR,CIJM,+DACE,eJMR,CIPM,4DACE,eJMR,CIPM,sDACE,eJMR,CIDI,gEACE,eJGN,CIJI,6DACE,eJGN,CIJI,uDACE,eJGN,CIAM,0EACE,gBJER,CIHM,uEACE,gBJER,CIHM,iEACE,gBJER,CIGI,kCAGE,eAAA,CAFA,cAAA,CACA,sBAAA,CAEA,kBJDN,CIIM,oCACE,aJFR,CIOI,kCAGE,qDAAA,CAFA,sBAAA,CACA,kBJJN,CISI,wCACE,iCJPN,CIUM,8CACE,iCAAA,CACA,sDJRR,CIaI,iCACE,iBJXN,CIgBE,wCACE,cJdJ,CIiBI,wDAIE,gBJTN,CIKI,wDAIE,iBJTN,CIKI,8CAUE,UAAA,CATA,oBAAA,CAEA,YAAA,CAGA,oDAAA,CAAA,4CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CACA,iCAAA,CAJA,0BAAA,CAHA,WJPN,CImBI,oDACE,oDJjBN,CIqBI,mEACE,kDAAA,CACA,yDAAA,CAAA,iDJnBN,CIuBI,oEACE,kDAAA,CACA,0DAAA,CAAA,kDJrBN,CI0BE,wBACE,iBAAA,CACA,eAAA,CACA,iBJxBJ,CI4BE,mBACE,oBAAA,CACA,kBAAA,CACA,eJ1BJ,CI6BI,aANF,mBAOI,aJ1BJ,CACF,CI6BI,8BACE,aAAA,CAEA,QAAA,CACA,eAAA,CAFA,UJzBN,CKhWI,wCDwYF,uBACE,iBJpCF,CIuCE,4BACE,eJrCJ,CACF,CMliBA,WA
GE,0CAAA,CADA,+BAAA,CADA,aNsiBF,CMjiBE,aANF,WAOI,YNoiBF,CACF,CMjiBE,oBAEE,uCAAA,CADA,gCNoiBJ,CM/hBE,kBAGE,eAAA,CAFA,iBAAA,CACA,eNkiBJ,COrjBA,KASE,cAAA,CARA,WAAA,CACA,iBPyjBF,CKrZI,oCEtKJ,KAaI,gBPkjBF,CACF,CK1ZI,oCEtKJ,KAkBI,cPkjBF,CACF,CO7iBA,KASE,2CAAA,CAPA,YAAA,CACA,qBAAA,CAKA,eAAA,CAHA,eAAA,CAJA,iBAAA,CAGA,UPmjBF,CO3iBE,aAZF,KAaI,aP8iBF,CACF,CK3ZI,wCEhJF,yBAII,cP2iBJ,CACF,COliBA,SAEE,gBAAA,CAAA,iBAAA,CADA,ePsiBF,COjiBA,cACE,YAAA,CACA,qBAAA,CACA,WPoiBF,COjiBE,aANF,cAOI,aPoiBF,CACF,COhiBA,SACE,WPmiBF,COhiBE,gBACE,YAAA,CACA,WAAA,CACA,iBPkiBJ,CO7hBA,aACE,eAAA,CAEA,sBAAA,CADA,kBPiiBF,COvhBA,WACE,YP0hBF,COrhBA,WAGE,QAAA,CACA,SAAA,CAHA,iBAAA,CACA,OP0hBF,COrhBE,uCACE,aPuhBJ,COnhBE,+BAEE,uCAAA,CADA,kBPshBJ,COhhBA,SASE,2CAAA,CACA,mBAAA,CAHA,gCAAA,CACA,gBAAA,CAHA,YAAA,CAQA,SAAA,CAFA,uCAAA,CALA,mBAAA,CALA,cAAA,CAWA,2BAAA,CARA,UP0hBF,CO9gBE,eAGE,SAAA,CADA,uBAAA,CAEA,oEACE,CAJF,UPmhBJ,COrgBA,MACE,WPwgBF,CQlqBA,MACE,+PRoqBF,CQ9pBA,cAQE,mBAAA,CADA,0CAAA,CAIA,cAAA,CALA,YAAA,CAGA,uCAAA,CACA,oBAAA,CATA,iBAAA,CAEA,UAAA,CADA,QAAA,CAUA,qBAAA,CAPA,WAAA,CADA,SRyqBF,CQ9pBE,aAfF,cAgBI,YRiqBF,CACF,CQ9pBE,kCAEE,uCAAA,CADA,YRiqBJ,CQ5pBE,qBACE,uCR8pBJ,CQ1pBE,yCACE,+BR4pBJ,CQ7pBE,sCACE,+BR4pBJ,CQ7pBE,gCACE,+BR4pBJ,CQvpBE,oBAKE,6BAAA,CAIA,UAAA,CARA,aAAA,CAEA,cAAA,CACA,aAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,aRgqBJ,CQrpBE,sBACE,cRupBJ,CQppBI,2BACE,2CRspBN,CQhpBI,sDAEE,uDAAA,CADA,+BRmpBN,CQppBI,mDAEE,uDAAA,CADA,+BRmpBN,CQppBI,6CAEE,uDAAA,CADA,+BRmpBN,CSxtBA,YACE,WAAA,CAIA,WTwtBF,CSrtBE,mBACE,qBAAA,CACA,iBTutBJ,CK3jBI,sCItJE,4EACE,kBTotBN,CShtBI,0JACE,mBTktBN,CSntBI,8EACE,kBTktBN,CACF,CS7sBI,0BAGE,UAAA,CAFA,aAAA,CACA,YTgtBN,CS3sBI,+BACE,eT6sBN,CSvsBE,8BAGE,iBT0sBJ,CS7sBE,8BAGE,kBT0sBJ,CS7sBE,oBACE,WAAA,CACA,cAAA,CAEA,STysBJ,CStsBI,aAPF,oBAQI,YTysBJ,CACF,CStsBI,8BACE,UTwsBN,CSpsBI,gCACE,yCTssBN,CSlsBI,wBACE,cAAA,CACA,kBTosBN,CSjsBM,kCACE,oBTmsBR,CUzwBA,qBAEE,WVuxBF,CUzxBA,qBAEE,UVuxBF,CUzxBA,WAOE,2CAAA,CACA,mBAAA,CALA,YAAA,CAMA,8BAAA,CAJA,iBAAA,CAMA,SAAA,CALA,mBAAA,CASA,mBAAA,CAdA,cAAA,CASA,0BAAA,CAEA,wCACE,CATF,SVqxBF,CUvwBE,aAlBF,WAmBI,YV0wBF,CACF,CUvwBE,+BAEE,SAAA,CAIA,mBAAA,CALA,uBAAA,CAEA,kEV0wBJ,CUnwBE,kBACE,gCAAA,CACA,eVqwBJ,CWxyBA,WAEE,0CAAA,CADA,+BX4yBF,CWxyBE,aALF,WAMI,YX2yBF,CACF,CWxyBE,kBACE,YAAA,CACA,6BAAA,CAEA,aAAA,CADA,aX2yBJ,CWtyBE,iBACE,YAAA,CAKA,cAAA,CAIA,uCAAA,CADA,eAAA,CADA,oBAAA,CADA,kBAAA,CAIA,uBXoyBJ,CWjyBI,4CACE,UXmyBN,CWpyBI,yCACE,UXmyBN,CWpyBI,mCACE,UXmyBN,CW/xBI,+BACE,oBXiyBN,CK9oBI,wCMzII,yCACE,YX0xBR,CACF,CWrxBI,iCACE,gBXwxBN,CWzxBI,iCACE,iBXwxBN,CWzxBI,uBAEE,gBXuxBN,CWpxBM,iCACE,eXsxBR,CWhxBE,kBAEE,WAAA,CAGA,eAAA,CACA,kBAAA,CAHA,6BAAA,CACA,cAAA,CAHA,iBXuxBJ,CW9wBE,mBACE,YAAA,CACA,aXgxBJ,CW5wBE,sBAKE,gBAAA,CAHA,MAAA,CACA,gBAAA,CAGA,UAAA,CAFA,cAAA,CAHA,iBAAA,CACA,OXkxBJ,CWzwBA,gBACE,gDX4wBF,CWzwBE,uBACE,YAAA,CACA,cAAA,CACA,6BAAA,CACA,aX2wBJ,CWvwBE,kCACE,sCXywBJ,CWtwBI,6DACE,+BXwwBN,CWzwBI,0DACE,+BXwwBN,CWzwBI,oDACE,+BXwwBN,CWhwBA,cAIE,wCAAA,CACA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAFA,UXuwBF,CKztBI,mCM/CJ,cASI,UXmwBF,CACF,CW/vBE,yBACE,sCXiwBJ,CW1vBA,WACE,cAAA,CACA,qBX6vBF,CKtuBI,mCMzBJ,WAMI,eX6vBF,CACF,CW1vBE,iBACE,oBAAA,CAEA,aAAA,CACA,iBAAA,CAFA,YX8vBJ,CWzvBI,wBACE,eX2vBN,CWvvBI,qBAGE,iBAAA,CAFA,gBAAA,CACA,mBX0vBN,CY55BE,uBAKE,kBAAA,CACA,mBAAA,CAHA,gCAAA,CAIA,cAAA,CANA,oBAAA,CAGA,eAAA,CAFA,kBAAA,CAMA,gEZ+5BJ,CYz5BI,gCAEE,2CAAA,CACA,uCAAA,CAFA,gCZ65BN,CYv5BI,kDAEE,0CAAA,CACA,sCAAA,CAFA,+BZ25BN,CY55BI,+CAEE,0CAAA,CACA,sCAAA,CAFA,+BZ25BN,CY55BI,yCAEE,0CAAA,CACA,sCAAA,CAFA,+BZ25BN,CYp5BE,gCAKE,4BZy5BJ,CY95BE,gEAME,6BZw5BJ,CY95BE,gCAME,4BZw5BJ,CY95BE,sBAIE,6DAAA,CAGA,8BAA
A,CAJA,eAAA,CAFA,aAAA,CACA,eAAA,CAMA,sCZs5BJ,CYj5BI,iDACE,6CAAA,CACA,8BZm5BN,CYr5BI,8CACE,6CAAA,CACA,8BZm5BN,CYr5BI,wCACE,6CAAA,CACA,8BZm5BN,CY/4BI,+BACE,UZi5BN,Cap8BA,WAME,2CAAA,CAGA,0DACE,CALF,gCAAA,CAFA,MAAA,CAFA,uBAAA,CAAA,eAAA,CAEA,OAAA,CADA,KAAA,CAEA,Sb08BF,Cah8BE,aAdF,WAeI,Ybm8BF,CACF,Cah8BE,iCACE,gEACE,CAEF,kEbg8BJ,Ca17BE,iCACE,2BAAA,CACA,iEb47BJ,Cat7BE,kBAEE,kBAAA,CADA,YAAA,CAEA,ebw7BJ,Cap7BE,mBAKE,kBAAA,CAGA,cAAA,CALA,YAAA,CAIA,uCAAA,CAHA,aAAA,CAHA,iBAAA,CAQA,uBAAA,CAHA,qBAAA,CAJA,Sb67BJ,Can7BI,yBACE,Ubq7BN,Caj7BI,iCACE,oBbm7BN,Ca/6BI,uCAEE,uCAAA,CADA,Ybk7BN,Ca76BI,2BACE,YAAA,CACA,ab+6BN,CKj0BI,wCQhHA,2BAMI,Yb+6BN,CACF,Ca56BM,iDAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,Ubg7BR,Cal7BM,8CAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,Ubg7BR,Cal7BM,wCAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,Ubg7BR,CK/1BI,mCQ1EA,iCAII,Yby6BN,CACF,Cat6BM,wCACE,Ybw6BR,Cap6BM,+CACE,oBbs6BR,CK12BI,sCQvDA,iCAII,Ybi6BN,CACF,Ca55BE,kBAEE,YAAA,CACA,cAAA,CAFA,iBAAA,CAGA,8Db85BJ,Caz5BI,oCAGE,SAAA,CAIA,mBAAA,CALA,6BAAA,CAEA,8DACE,CAJF,Ub+5BN,Cat5BM,8CACE,8Bbw5BR,Can5BI,8BACE,ebq5BN,Cah5BE,4BAGE,kBbq5BJ,Cax5BE,4BAGE,iBbq5BJ,Cax5BE,4BAIE,gBbo5BJ,Cax5BE,4BAIE,iBbo5BJ,Cax5BE,kBACE,WAAA,CAIA,eAAA,CAHA,aAAA,CAIA,kBbk5BJ,Ca/4BI,0DAGE,SAAA,CAIA,mBAAA,CALA,8BAAA,CAEA,8DACE,CAJF,Ubq5BN,Ca54BM,oEACE,6Bb84BR,Ca14BM,4EAGE,SAAA,CAIA,mBAAA,CALA,uBAAA,CAEA,8DACE,CAJF,Sbg5BR,Car4BI,uCAGE,WAAA,CAFA,iBAAA,CACA,Ubw4BN,Cal4BE,mBACE,YAAA,CACA,aAAA,CACA,cAAA,CAEA,+CACE,CAFF,kBbq4BJ,Ca/3BI,8DACE,WAAA,CACA,SAAA,CACA,oCbi4BN,Ca13BE,mBACE,Yb43BJ,CK/6BI,mCQkDF,6BAQI,gBb43BJ,Cap4BA,6BAQI,iBb43BJ,Cap4BA,mBAKI,aAAA,CAEA,iBAAA,CADA,ab83BJ,CACF,CKv7BI,sCQkDF,6BAaI,kBb43BJ,Caz4BA,6BAaI,mBb43BJ,CACF,CclmCA,MACE,0MAAA,CACA,gMAAA,CACA,yNdqmCF,Cc/lCA,QACE,eAAA,CACA,edkmCF,Cc/lCE,eACE,aAAA,CAGA,eAAA,CADA,eAAA,CADA,eAAA,CAGA,sBdimCJ,Cc9lCI,+BACE,YdgmCN,Cc7lCM,mCAEE,WAAA,CADA,UdgmCR,CcxlCQ,6DAME,iBAAA,CALA,aAAA,CAGA,aAAA,CADA,cAAA,CAEA,kBAAA,CAHA,Ud8lCV,CchmCQ,0DAME,iBAAA,CALA,aAAA,CAGA,aAAA,CADA,cAAA,CAEA,kBAAA,CAHA,Ud8lCV,CchmCQ,oDAME,iBAAA,CALA,aAAA,CAGA,aAAA,CADA,cAAA,CAEA,kBAAA,CAHA,Ud8lCV,CcnlCE,cAGE,eAAA,CAFA,QAAA,CACA,SdslCJ,CcjlCE,cACE,edmlCJ,CchlCI,sCACE,edklCN,CcnlCI,sCACE,cdklCN,Cc7kCE,cAEE,kBAAA,CAKA,cAAA,CANA,YAAA,CAEA,6BAAA,CACA,iBAAA,CACA,eAAA,CAIA,uBAAA,CAHA,sBAAA,CAEA,sBdglCJ,Cc5kCI,kCACE,uCd8kCN,Cc1kCI,oCACE,+Bd4kCN,CcxkCI,0CACE,Ud0kCN,CctkCI,yCACE,+BdwkCN,CczkCI,sCACE,+BdwkCN,CczkCI,gCACE,+BdwkCN,CcpkCI,4BACE,uCAAA,CACA,oBdskCN,CclkCI,0CACE,YdokCN,CcjkCM,yDAKE,6BAAA,CAJA,aAAA,CAEA,WAAA,CACA,qCAAA,CAAA,6BAAA,CAFA,UdskCR,Cc/jCM,kDACE,YdikCR,Cc5jCI,gBAEE,cAAA,CADA,Yd+jCN,CczjCE,cACE,ad2jCJ,CcvjCE,gBACE,YdyjCJ,CKvgCI,wCS3CA,0CASE,2CAAA,CAHA,YAAA,CACA,qBAAA,CACA,WAAA,CAJA,MAAA,CAFA,iBAAA,CAEA,OAAA,CADA,KAAA,CAEA,SdwjCJ,Cc7iCI,4DACE,eAAA,CACA,ed+iCN,CcjjCI,yDACE,eAAA,CACA,ed+iCN,CcjjCI,mDACE,eAAA,CACA,ed+iCN,Cc3iCI,gCAQE,qDAAA,CAJA,uCAAA,CAKA,cAAA,CAJA,eAAA,CAHA,aAAA,CAIA,kBAAA,CAHA,wBAAA,CAFA,iBAAA,CAMA,kBd+iCN,Cc1iCM,wDAGE,UdgjCR,CcnjCM,wDAGE,WdgjCR,CcnjCM,8CAIE,aAAA,CAEA,aAAA,CACA,YAAA,CANA,iBAAA,CACA,SAAA,CAGA,Yd8iCR,CcziCQ,oDAIE,6BAAA,CAIA,UAAA,CAPA,aAAA,CAEA,WAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CALA,UdijCV,CctiCM,8CAEE,2CAAA,CACA,gEACE,CAHF,eAAA,CAIA,gCAAA,CAAA,4BAAA,CACA,kBduiCR,CcpiCQ,2DACE,YdsiCV,CcjiCM,8CAGE,2CAAA,CAFA,gCAAA,CACA,edoiCR,Cc/hCM,yCAIE,aAAA,CADA,UAAA,CAEA,YAAA,CACA,aAAA,CALA,iBAAA,CAEA,WAAA,CADA,SdqiCR,Cc5hCI,+BACE,Md8hCN,Cc1hCI,+BAEE,4DAAA,CADA,Sd6hCN,CczhCM,qDACE,+Bd2hCR,CcxhCQ,gFACE,+Bd0hCV,Cc3hCQ,6EACE,+Bd0hCV,Cc3hCQ,uEACE,+Bd0hCV,CcphCI,+BACE,YAAA,CACA,mBdshCN,CcnhCM,uDAGE,mBdshCR,CczhCM,uDAGE,kBdshCR,
CczhCM,6CAIE,gBAAA,CAFA,aAAA,CADA,YdwhCR,CclhCQ,mDAIE,6BAAA,CAIA,UAAA,CAPA,aAAA,CAEA,WAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CALA,Ud0hCV,Cc3gCM,+CACE,mBd6gCR,CcrgCM,4CAEE,wBAAA,CADA,edwgCR,CcpgCQ,oEACE,mBdsgCV,CcvgCQ,oEACE,oBdsgCV,CclgCQ,4EACE,iBdogCV,CcrgCQ,4EACE,kBdogCV,CchgCQ,oFACE,mBdkgCV,CcngCQ,oFACE,oBdkgCV,Cc9/BQ,4FACE,mBdggCV,CcjgCQ,4FACE,oBdggCV,Ccz/BE,mBACE,wBd2/BJ,Ccv/BE,wBACE,YAAA,CAEA,SAAA,CADA,0BAAA,CAEA,oEdy/BJ,Ccp/BI,kCACE,2Bds/BN,Ccj/BE,gCAEE,SAAA,CADA,uBAAA,CAEA,qEdm/BJ,Cc9+BI,8CAEE,kCAAA,CAAA,0Bd++BN,CACF,CKppCI,wCS6KA,0CACE,Yd0+BJ,Ccv+BI,yDACE,Udy+BN,Ccr+BI,wDACE,Ydu+BN,Ccn+BI,kDACE,Ydq+BN,Cch+BE,gBAIE,iDAAA,CADA,gCAAA,CAFA,aAAA,CACA,edo+BJ,CACF,CKjtCM,6DSsPF,6CACE,Yd89BJ,Cc39BI,4DACE,Ud69BN,Ccz9BI,2DACE,Yd29BN,Ccv9BI,qDACE,Ydy9BN,CACF,CKzsCI,mCS2PE,6CACE,uBdi9BN,Cc78BI,gDACE,Yd+8BN,CACF,CKjtCI,sCS7JJ,QAqaI,oDd68BF,Ccv8BI,8CACE,uBdy8BN,Cc/7BE,sEACE,Ydo8BJ,Cch8BE,6DACE,adk8BJ,Ccn8BE,0DACE,adk8BJ,Ccn8BE,oDACE,adk8BJ,Cc97BE,6CACE,Ydg8BJ,Cc57BE,uBACE,aAAA,CACA,ed87BJ,Cc37BI,kCACE,ed67BN,Ccz7BI,qCACE,eAAA,CACA,mBd27BN,Ccx7BM,mDACE,mBd07BR,Cct7BM,mDACE,Ydw7BR,Ccn7BI,+BACE,adq7BN,Ccl7BM,2DACE,Sdo7BR,Cc96BE,cAIE,kBAAA,CAHA,WAAA,CAEA,YAAA,CAEA,+CACE,CAJF,Wdm7BJ,Cc36BI,wBACE,UAAA,CACA,wBd66BN,Ccz6BI,oBACE,uDd26BN,Ccv6BI,oBAKE,6BAAA,CAIA,UAAA,CARA,oBAAA,CAEA,WAAA,CAGA,2CAAA,CAAA,mCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CAJA,qBAAA,CAFA,Udg7BN,Ccr6BI,0JAEE,uBds6BN,Ccx5BI,+HACE,Yd85BN,Cc35BM,oDACE,aAAA,CACA,Sd65BR,Cc15BQ,kEAGE,eAAA,CAFA,YAAA,CACA,eAAA,CAEA,mBd45BV,Ccz5BU,gFACE,mBd25BZ,Ccv5BU,gFACE,Ydy5BZ,Ccj5BI,2CACE,adm5BN,Cch5BM,iFACE,mBdk5BR,Ccn5BM,iFACE,kBdk5BR,Ccz4BI,mFACE,ed24BN,Ccx4BM,iGACE,Sd04BR,Ccr4BI,qFAGE,mDdu4BN,Cc14BI,qFAGE,oDdu4BN,Cc14BI,2EACE,aAAA,CACA,oBdw4BN,Ccp4BM,0FACE,Yds4BR,CACF,Cez+CA,MACE,igBf4+CF,Cet+CA,WACE,iBfy+CF,CK30CI,mCU/JJ,WAKI,efy+CF,CACF,Cet+CE,kBACE,Yfw+CJ,Cep+CE,oBAEE,SAAA,CADA,Sfu+CJ,CKp0CI,wCUpKF,8BAQI,Yf8+CJ,Cet/CA,8BAQI,af8+CJ,Cet/CA,oBAYI,2CAAA,CACA,kBAAA,CAHA,WAAA,CACA,eAAA,CAOA,mBAAA,CAZA,iBAAA,CACA,SAAA,CAOA,uBAAA,CACA,4CACE,CAPF,Uf6+CJ,Cej+CI,+DACE,SAAA,CACA,oCfm+CN,CACF,CK12CI,mCUjJF,8BAiCI,Mfq+CJ,CetgDA,8BAiCI,Ofq+CJ,CetgDA,oBAoCI,gCAAA,CACA,cAAA,CAFA,QAAA,CAJA,cAAA,CACA,KAAA,CAMA,sDACE,CALF,Ofo+CJ,Ce19CI,+DAME,YAAA,CACA,SAAA,CACA,4CACE,CARF,Uf+9CN,CACF,CKz2CI,wCUxGA,+DAII,mBfi9CN,CACF,CKv5CM,6DU/DF,+DASI,mBfi9CN,CACF,CK55CM,6DU/DF,+DAcI,mBfi9CN,CACF,Ce58CE,kBAEE,kCAAA,CAAA,0Bf68CJ,CK33CI,wCUpFF,4BAQI,Mfo9CJ,Ce59CA,4BAQI,Ofo9CJ,Ce59CA,kBAWI,QAAA,CAGA,SAAA,CAFA,eAAA,CANA,cAAA,CACA,KAAA,CAMA,wBAAA,CAEA,qGACE,CANF,OAAA,CADA,Sfm9CJ,Cet8CI,4BACE,yBfw8CN,Cep8CI,6DAEE,WAAA,CAEA,SAAA,CADA,uBAAA,CAEA,sGACE,CALF,Uf08CN,CACF,CKt6CI,mCUjEF,kBA2CI,WAAA,CAEA,eAAA,CAHA,iBAAA,CAIA,8CAAA,CAFA,afm8CJ,Ce97CI,4BACE,Ufg8CN,CACF,CKx8CM,6DUYF,6DAII,af47CN,CACF,CKv7CI,sCUVA,6DASI,af47CN,CACF,Cev7CE,iBAIE,2CAAA,CACA,gCAAA,CAFA,aAAA,CAFA,iBAAA,CAKA,2CACE,CALF,Sf67CJ,CKp8CI,mCUKF,iBAaI,gCAAA,CACA,mBAAA,CAFA,afy7CJ,Cep7CI,uBACE,oCfs7CN,CACF,Cel7CI,4DAEE,2CAAA,CACA,6BAAA,CACA,oCAAA,CAHA,gCfu7CN,Ce/6CE,4BAKE,mBAAA,CAAA,oBfo7CJ,Cez7CE,4BAKE,mBAAA,CAAA,oBfo7CJ,Cez7CE,kBAQE,sBAAA,CAFA,eAAA,CAFA,WAAA,CAHA,iBAAA,CAMA,sBAAA,CAJA,UAAA,CADA,Sfu7CJ,Ce96CI,oCACE,0BAAA,CAAA,qBfg7CN,Cej7CI,yCACE,yBAAA,CAAA,qBfg7CN,Cej7CI,+BACE,qBfg7CN,Ce56CI,oCAEE,uCf66CN,Ce/6CI,yCAEE,uCf66CN,Ce/6CI,kEAEE,uCf66CN,Cez6CI,6BACE,Yf26CN,CKp9CI,wCUkBF,kBA8BI,eAAA,CADA,aAAA,CADA,Uf46CJ,CACF,CK9+CI,mCUqCF,4BAmCI,mBf46CJ,Ce/8CA,4BAmCI,oBf46CJ,Ce/8CA,kBAoCI,aAAA,CACA,ef06CJ,Cev6CI,oCACE,uCfy6CN,Ce16CI,yCACE,uCfy6CN,Ce16CI,+BACE,uCfy6CN,Cer6CI,mCACE,gCfu6CN,Cen6CI,6D
ACE,kBfq6CN,Cel6CM,+EAEE,uCfm6CR,Cer6CM,oFAEE,uCfm6CR,Cer6CM,wJAEE,uCfm6CR,CACF,Ce75CE,iBAIE,cAAA,CAHA,oBAAA,CAEA,aAAA,CAEA,kCACE,CAJF,Yfk6CJ,Ce15CI,uBACE,Uf45CN,Cex5CI,yCAGE,Uf25CN,Ce95CI,yCAGE,Wf25CN,Ce95CI,+BACE,iBAAA,CACA,SAAA,CAEA,Sf05CN,Cev5CM,6CACE,oBfy5CR,CKjgDI,wCUgGA,yCAcI,Ufw5CN,Cet6CE,yCAcI,Wfw5CN,Cet6CE,+BAaI,Sfy5CN,Cer5CM,+CACE,Yfu5CR,CACF,CK7hDI,mCUmHA,+BAwBI,mBfs5CN,Cen5CM,8CACE,Yfq5CR,CACF,Ce/4CE,8BAGE,Wfm5CJ,Cet5CE,8BAGE,Ufm5CJ,Cet5CE,oBAKE,mBAAA,CAJA,iBAAA,CACA,SAAA,CAEA,Sfk5CJ,CKzhDI,wCUmIF,8BAUI,Wfi5CJ,Ce35CA,8BAUI,Ufi5CJ,Ce35CA,oBASI,Sfk5CJ,CACF,Ce94CI,gCACE,iBfo5CN,Cer5CI,gCACE,kBfo5CN,Cer5CI,sBAEE,uCAAA,CAEA,SAAA,CADA,oBAAA,CAEA,+Dfg5CN,Ce34CM,yCAEE,uCAAA,CADA,Yf84CR,Cez4CM,yFAGE,SAAA,CACA,mBAAA,CAFA,kBf44CR,Cev4CQ,8FACE,Ufy4CV,Cel4CE,8BAOE,mBAAA,CAAA,oBfy4CJ,Ceh5CE,8BAOE,mBAAA,CAAA,oBfy4CJ,Ceh5CE,oBAIE,kBAAA,CAIA,yCAAA,CALA,YAAA,CAMA,eAAA,CAHA,WAAA,CAKA,SAAA,CAVA,iBAAA,CACA,KAAA,CAUA,uBAAA,CAFA,kBAAA,CALA,Uf24CJ,CKnlDI,mCUmMF,8BAgBI,mBfq4CJ,Cer5CA,8BAgBI,oBfq4CJ,Cer5CA,oBAiBI,efo4CJ,CACF,Cej4CI,+DACE,SAAA,CACA,0Bfm4CN,Ce93CE,6BAKE,+Bfi4CJ,Cet4CE,0DAME,gCfg4CJ,Cet4CE,6BAME,+Bfg4CJ,Cet4CE,mBAIE,eAAA,CAHA,iBAAA,CAEA,UAAA,CADA,Sfo4CJ,CKllDI,wCU4MF,mBAWI,QAAA,CADA,Ufi4CJ,CACF,CK3mDI,mCU+NF,mBAiBI,SAAA,CADA,UAAA,CAEA,sBfg4CJ,Ce73CI,8DACE,8BAAA,CACA,Sf+3CN,CACF,Ce13CE,uBAKE,kCAAA,CAAA,0BAAA,CAFA,2CAAA,CAFA,WAAA,CACA,eAAA,CAOA,kBfw3CJ,Cer3CI,iEAZF,uBAaI,uBfw3CJ,CACF,CKxpDM,6DUkRJ,uBAkBI,afw3CJ,CACF,CKvoDI,sCU4PF,uBAuBI,afw3CJ,CACF,CK5oDI,mCU4PF,uBA4BI,YAAA,CAEA,+DAAA,CADA,oBfy3CJ,Cer3CI,kEACE,efu3CN,Cen3CI,6BACE,qDfq3CN,Cej3CI,0CAEE,YAAA,CADA,Wfo3CN,Ce/2CI,gDACE,oDfi3CN,Ce92CM,sDACE,0Cfg3CR,CACF,Cez2CA,kBACE,gCAAA,CACA,qBf42CF,Cez2CE,wBAKE,qDAAA,CAHA,uCAAA,CACA,gBAAA,CACA,kBAAA,CAHA,eAAA,CAKA,uBf22CJ,CKhrDI,mCU+TF,kCAUI,mBf22CJ,Cer3CA,kCAUI,oBf22CJ,CACF,Cev2CE,wBAGE,eAAA,CAFA,QAAA,CACA,Sf02CJ,Cer2CE,wBACE,yDfu2CJ,Cep2CI,oCACE,efs2CN,Cej2CE,wBACE,aAAA,CACA,YAAA,CAEA,uBAAA,CADA,gCfo2CJ,Ceh2CI,mDACE,uDfk2CN,Cen2CI,gDACE,uDfk2CN,Cen2CI,0CACE,uDfk2CN,Ce91CI,gDACE,mBfg2CN,Ce31CE,gCAGE,+BAAA,CAGA,cAAA,CALA,aAAA,CAGA,gBAAA,CACA,YAAA,CAHA,mBAAA,CAQA,uBAAA,CAHA,2Cf81CJ,CKttDI,mCUiXF,0CAcI,mBf21CJ,Cez2CA,0CAcI,oBf21CJ,CACF,Cex1CI,2DAEE,uDAAA,CADA,+Bf21CN,Ce51CI,wDAEE,uDAAA,CADA,+Bf21CN,Ce51CI,kDAEE,uDAAA,CADA,+Bf21CN,Cet1CI,wCACE,Yfw1CN,Cen1CI,wDACE,Yfq1CN,Cej1CI,oCACE,Wfm1CN,Ce90CE,2BAGE,eAAA,CADA,eAAA,CADA,iBfk1CJ,CK7uDI,mCU0ZF,qCAOI,mBfg1CJ,Cev1CA,qCAOI,oBfg1CJ,CACF,Ce10CM,8DAGE,eAAA,CADA,eAAA,CAEA,eAAA,CAHA,ef+0CR,Cet0CE,kCAEE,Mf40CJ,Ce90CE,kCAEE,Of40CJ,Ce90CE,wBAME,uCAAA,CAFA,aAAA,CACA,YAAA,CAJA,iBAAA,CAEA,Yf20CJ,CK7uDI,wCU+ZF,wBAUI,Yfw0CJ,CACF,Cer0CI,8BAIE,6BAAA,CAIA,UAAA,CAPA,oBAAA,CAEA,WAAA,CAEA,+CAAA,CAAA,uCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CALA,Uf60CN,Cep0CM,wCACE,oBfs0CR,Ceh0CE,yBAGE,gBAAA,CADA,eAAA,CAEA,eAAA,CAHA,afq0CJ,Ce9zCE,0BASE,2BAAA,CACA,oBAAA,CALA,uCAAA,CAJA,mBAAA,CAKA,gBAAA,CACA,eAAA,CAJA,aAAA,CADA,eAAA,CAEA,eAAA,CAIA,sBfk0CJ,CKjxDI,wCUucF,0BAeI,oBAAA,CADA,efi0CJ,CACF,CKh0DM,6DUgfJ,0BAqBI,oBAAA,CADA,efi0CJ,CACF,Ce7zCI,+BAEE,wBAAA,CADA,yBfg0CN,Ce1zCE,yBAEE,gBAAA,CACA,iBAAA,CAFA,af8zCJ,CexzCE,uBAEE,wBAAA,CADA,+Bf2zCJ,CgBn+DA,WACE,iBAAA,CACA,ShBs+DF,CgBn+DE,kBAOE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAHA,gCAAA,CAHA,QAAA,CAEA,gBAAA,CADA,YAAA,CAOA,SAAA,CAVA,iBAAA,CACA,sBAAA,CAQA,mCAAA,CAEA,oEhBq+DJ,CgB/9DI,+DACE,gBAAA,CAEA,SAAA,CADA,+BAAA,CAEA,sFACE,CADF,8EhBi+DN,CgBr+DI,4DACE,gBAAA,CAEA,SAAA,CADA,+BAAA,CAEA,mFACE,CADF,8EhBi+DN,CgBr+DI,sDACE,gBAAA,CAEA,SAAA,CADA,+BAAA,CAEA,8EhBi+DN,CgB19DI,wBAUE,qCAAA,CAAA,8CAAA,CAFA,mCAAA,CAAA,oCAAA,CACA,YAAA,CAEA,UAAA,CANA,QAAA,C
AFA,QAAA,CAIA,kBAAA,CADA,iBAAA,CALA,iBAAA,CACA,KAAA,CAEA,OhBm+DN,CgBv9DE,iBAOE,mBAAA,CAFA,eAAA,CACA,oBAAA,CAJA,QAAA,CADA,kBAAA,CAGA,aAAA,CADA,ShB69DJ,CgBr9DE,iBACE,kBhBu9DJ,CgBn9DE,2BAGE,kBAAA,CAAA,oBhBy9DJ,CgB59DE,2BAGE,mBAAA,CAAA,mBhBy9DJ,CgB59DE,iBAKE,cAAA,CAJA,aAAA,CAGA,YAAA,CAKA,uBAAA,CAHA,2CACE,CALF,UhB09DJ,CgBh9DI,4CACE,+BhBk9DN,CgBn9DI,yCACE,+BhBk9DN,CgBn9DI,mCACE,+BhBk9DN,CgB98DI,uBACE,qDhBg9DN,CiBpiEA,YAIE,qBAAA,CADA,aAAA,CAGA,gBAAA,CALA,uBAAA,CAAA,eAAA,CACA,UAAA,CAGA,ajBwiEF,CiBpiEE,aATF,YAUI,YjBuiEF,CACF,CKz3DI,wCYxKA,+BAGE,ajB2iEJ,CiB9iEE,+BAGE,cjB2iEJ,CiB9iEE,qBAQE,2CAAA,CAHA,aAAA,CAEA,WAAA,CANA,cAAA,CACA,KAAA,CAOA,uBAAA,CACA,iEACE,CALF,aAAA,CAFA,SjB0iEJ,CiB/hEI,mEACE,8BAAA,CACA,6BjBiiEN,CiB9hEM,6EACE,8BjBgiER,CiB3hEI,6CAEE,QAAA,CAAA,MAAA,CACA,QAAA,CAEA,eAAA,CAJA,iBAAA,CACA,OAAA,CAEA,yBAAA,CAAA,qBAAA,CAFA,KjBgiEN,CACF,CKx6DI,sCYtKJ,YAuDI,QjB2hEF,CiBxhEE,mBACE,WjB0hEJ,CACF,CiBthEE,uBACE,YAAA,CACA,OjBwhEJ,CKp7DI,mCYtGF,uBAMI,QjBwhEJ,CiBrhEI,8BACE,WjBuhEN,CiBnhEI,qCACE,ajBqhEN,CiBjhEI,+CACE,kBjBmhEN,CACF,CiB9gEE,wBAIE,kCAAA,CAAA,0BAAA,CAHA,cAAA,CACA,eAAA,CAQA,+DAAA,CADA,oBjB4gEJ,CiBxgEI,8BACE,qDjB0gEN,CiBtgEI,2CAEE,YAAA,CADA,WjBygEN,CiBpgEI,iDACE,oDjBsgEN,CiBngEM,uDACE,0CjBqgER,CKn8DI,wCYxDF,YAME,gCAAA,CADA,QAAA,CAEA,SAAA,CANA,cAAA,CACA,KAAA,CAMA,sDACE,CALF,OAAA,CADA,SjBogEF,CiBz/DE,4CAEE,WAAA,CACA,SAAA,CACA,4CACE,CAJF,UjB8/DJ,CACF,CkB/oEA,yBACE,GACE,QlBipEF,CkB9oEA,GACE,alBgpEF,CACF,CkBvpEA,iBACE,GACE,QlBipEF,CkB9oEA,GACE,alBgpEF,CACF,CkB5oEA,wBACE,GAEE,SAAA,CADA,0BlB+oEF,CkB3oEA,IACE,SlB6oEF,CkB1oEA,GAEE,SAAA,CADA,uBlB6oEF,CACF,CkBzpEA,gBACE,GAEE,SAAA,CADA,0BlB+oEF,CkB3oEA,IACE,SlB6oEF,CkB1oEA,GAEE,SAAA,CADA,uBlB6oEF,CACF,CkBpoEA,MACE,mgBAAA,CACA,oiBAAA,CACA,0nBAAA,CACA,mhBlBsoEF,CkBhoEA,WAOE,kCAAA,CAAA,0BAAA,CANA,aAAA,CACA,gBAAA,CACA,eAAA,CAEA,uCAAA,CAGA,uBAAA,CAJA,kBlBsoEF,CkB/nEE,iBACE,UlBioEJ,CkB7nEE,iBACE,oBAAA,CAEA,aAAA,CACA,qBAAA,CAFA,UlBioEJ,CkB5nEI,+BAEE,iBlB8nEN,CkBhoEI,+BAEE,kBlB8nEN,CkBhoEI,qBACE,gBlB+nEN,CkB1nEI,kDACE,iBlB6nEN,CkB9nEI,kDACE,kBlB6nEN,CkB9nEI,kDAEE,iBlB4nEN,CkB9nEI,kDAEE,kBlB4nEN,CkBvnEE,iCAGE,iBlB4nEJ,CkB/nEE,iCAGE,kBlB4nEJ,CkB/nEE,uBACE,oBAAA,CACA,6BAAA,CAEA,eAAA,CACA,sBAAA,CACA,qBlBynEJ,CkBrnEE,kBAIE,gBAAA,CACA,oBAAA,CAJA,gBAAA,CAKA,WAAA,CAHA,eAAA,CADA,SlB2nEJ,CkBpnEI,uCACE,oCAAA,CAAA,4BlBsnEN,CkBjnEE,iBACE,oBlBmnEJ,CkBhnEI,sCACE,mCAAA,CAAA,2BlBknEN,CkB9mEI,kCAIE,kBlBqnEN,CkBznEI,kCAIE,iBlBqnEN,CkBznEI,wBAME,6BAAA,CAGA,UAAA,CARA,oBAAA,CAEA,YAAA,CAIA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CAHA,uBAAA,CAHA,WlBunEN,CkB5mEI,kDACE,iBlB8mEN,CkB/mEI,kDACE,kBlB8mEN,CkB1mEI,iCACE,gDAAA,CAAA,wClB4mEN,CkBxmEI,+BACE,8CAAA,CAAA,sClB0mEN,CkBtmEI,+BACE,8CAAA,CAAA,sClBwmEN,CkBpmEI,sCACE,qDAAA,CAAA,6ClBsmEN,CmBxvEA,SAIE,2CAAA,CADA,gCAAA,CADA,aAAA,CADA,UnB8vEF,CmBxvEE,aAPF,SAQI,YnB2vEF,CACF,CK3kEI,wCczLJ,SAaI,YnB2vEF,CACF,CmBxvEE,+BACE,mBnB0vEJ,CmBtvEE,yBAEE,iBnB4vEJ,CmB9vEE,yBAEE,kBnB4vEJ,CmB9vEE,eAME,eAAA,CADA,eAAA,CAJA,QAAA,CAEA,SAAA,CACA,kBnB0vEJ,CmBpvEE,eACE,oBAAA,CACA,aAAA,CACA,kBAAA,CAAA,mBnBsvEJ,CmBjvEE,eAOE,kCAAA,CAAA,0BAAA,CANA,aAAA,CAEA,eAAA,CADA,gBAAA,CAMA,UAAA,CAJA,uCAAA,CACA,oBAAA,CAIA,8DnBkvEJ,CmB7uEI,iEAEE,aAAA,CACA,SnB8uEN,CmBjvEI,8DAEE,aAAA,CACA,SnB8uEN,CmBjvEI,wDAEE,aAAA,CACA,SnB8uEN,CmBzuEM,2CACE,qBnB2uER,CmB5uEM,2CACE,qBnB8uER,CmB/uEM,2CACE,qBnBivER,CmBlvEM,2CACE,qBnBovER,CmBrvEM,2CACE,oBnBuvER,CmBxvEM,2CACE,qBnB0vER,CmB3vEM,2CACE,qBnB6vER,CmB9vEM,2CACE,qBnBgwER,CmBjwEM,4CACE,qBnBmwER,CmBpwEM,4CACE,oBnBswER,CmBvwEM,4CACE,qBnBywER,CmB1wEM,4CACE,qBnB4wER,CmB7wEM,4CACE,qBnB+wER,CmBhxEM,4CACE,qBnBkxER,CmBnxEM,4CACE,oBnBqxER,CmB/wEI,8CAEE,SAAA,CADA,y
BAAA,CAEA,wCnBixEN,CoBz1EA,SACE,mBpB41EF,CoBx1EA,kBAEE,iBpBk2EF,CoBp2EA,kBAEE,gBpBk2EF,CoBp2EA,QAQE,+CAAA,CACA,mBAAA,CARA,oBAAA,CAKA,gBAAA,CADA,eAAA,CAEA,eAAA,CAJA,kBAAA,CACA,uBpBg2EF,CoBx1EE,cAGE,uCAAA,CAFA,aAAA,CACA,YAAA,CAEA,6CpB01EJ,CoBr1EI,wCAGE,0CAAA,CADA,+BpBu1EN,CoBj1EE,aACE,uBpBm1EJ,CqBt3EA,yBACE,GACE,uDrBy3EF,CqBt3EA,IACE,mCrBw3EF,CqBr3EA,GACE,8BrBu3EF,CACF,CqBl4EA,iBACE,GACE,uDrBy3EF,CqBt3EA,IACE,mCrBw3EF,CqBr3EA,GACE,8BrBu3EF,CACF,CqB/2EA,MACE,wBrBi3EF,CqB32EA,YA0BE,kCAAA,CAAA,0BAAA,CALA,2CAAA,CACA,mBAAA,CACA,8BAAA,CAHA,gCAAA,CAjBA,iJACE,CAeF,YAAA,CADA,8BAAA,CASA,SAAA,CA1BA,iBAAA,CACA,uBAAA,CAsBA,4BAAA,CAIA,2EACE,CAZF,6BAAA,CADA,SrBs3EF,CqBn2EE,0BACE,gBAAA,CAEA,SAAA,CADA,uBAAA,CAEA,2FrBq2EJ,CqB71EE,2BACE,sCrB+1EJ,CqB31EE,mBAEE,gBAAA,CADA,arB81EJ,CqB11EI,2CACE,YrB41EN,CqBx1EI,0CACE,erB01EN,CqBl1EA,eAEE,YAAA,CADA,kBrBs1EF,CqBl1EE,yBACE,arBo1EJ,CqBh1EE,6BACE,oBAAA,CAGA,iBrBg1EJ,CqB50EE,8BACE,SrB80EJ,CqB10EE,sBAEE,sCAAA,CADA,qCrB60EJ,CqBz0EI,0CAEE,mBAAA,CADA,wBAAA,CAAA,qBAAA,CAAA,oBAAA,CAAA,gBrB40EN,CqBt0EE,sBAIE,UAAA,CACA,cAAA,CAFA,YAAA,CAFA,iBAAA,CAKA,uBAAA,CACA,wBAAA,CAAA,qBAAA,CAAA,oBAAA,CAAA,gBAAA,CALA,SrB60EJ,CqBl0EI,4BAgBE,mCAAA,CAAA,2BAAA,CALA,oDAAA,CACA,iBAAA,CAKA,UAAA,CATA,YAAA,CANA,YAAA,CAOA,cAAA,CACA,cAAA,CATA,iBAAA,CAYA,2CACE,CARF,wBAAA,CACA,6BAAA,CAJA,UrB80EN,CqB7zEM,gCApBF,4BAqBI,sBAAA,CAAA,crBg0EN,CACF,CqB7zEM,+DACE,0CrB+zER,CqBh0EM,4DACE,0CrB+zER,CqBh0EM,sDACE,0CrB+zER,CqB3zEM,0CAIE,sBAAA,CAAA,cAAA,CAHA,2CrB8zER,CqBtzEI,8CACE,oBAAA,CACA,erBwzEN,CqBrzEM,qDAKE,mCAAA,CAJA,oBAAA,CACA,mBAAA,CAEA,iDAAA,CADA,sBrByzER,CqBnzEQ,iBATF,qDAUI,WrBszER,CqBnzEQ,mEACE,uBrBqzEV,CACF,CqB/yEI,yDACE,+BrBizEN,CqBlzEI,sDACE,+BrBizEN,CqBlzEI,gDACE,+BrBizEN,CqB7yEI,oCAEE,sBAAA,CAAA,cAAA,CADA,erBgzEN,CsBxgFA,kBAIE,etBohFF,CsBxhFA,kBAIE,gBtBohFF,CsBxhFA,QAQE,2CAAA,CACA,oBAAA,CAEA,8BAAA,CALA,uCAAA,CACA,eAAA,CAGA,YAAA,CALA,mBAAA,CAJA,cAAA,CACA,UAAA,CAUA,yBAAA,CACA,mGACE,CAXF,StBqhFF,CsBpgFE,aApBF,QAqBI,YtBugFF,CACF,CsBpgFE,kBACE,wBtBsgFJ,CsBlgFE,8BAEE,SAAA,CAEA,mBAAA,CAHA,+BAAA,CAEA,uBtBqgFJ,CsBjgFI,wCACE,8BtBmgFN,CsB9/EE,mCAEE,0CAAA,CADA,+BtBigFJ,CsBlgFE,gCAEE,0CAAA,CADA,+BtBigFJ,CsBlgFE,0BAEE,0CAAA,CADA,+BtBigFJ,CsB5/EE,YACE,oBAAA,CACA,oBtB8/EJ,CuBjjFA,4BACE,GACE,mBvBojFF,CACF,CuBvjFA,oBACE,GACE,mBvBojFF,CACF,CuB5iFA,MACE,kiBvB8iFF,CuBxiFA,YACE,aAAA,CAEA,eAAA,CADA,avB4iFF,CuBxiFE,+BAOE,kBAAA,CAAA,kBvByiFJ,CuBhjFE,+BAOE,iBAAA,CAAA,mBvByiFJ,CuBhjFE,qBAQE,aAAA,CAEA,cAAA,CADA,YAAA,CARA,iBAAA,CAKA,UvB0iFJ,CuBniFI,qCAIE,iBvByiFN,CuB7iFI,qCAIE,kBvByiFN,CuB7iFI,2BAKE,6BAAA,CAGA,UAAA,CAPA,oBAAA,CAEA,YAAA,CAGA,yCAAA,CAAA,iCAAA,CACA,6BAAA,CAAA,qBAAA,CALA,WvB2iFN,CuBhiFE,kBAUE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CACA,oBAAA,CAJA,kBAAA,CADA,YAAA,CASA,SAAA,CANA,aAAA,CADA,SAAA,CALA,iBAAA,CAgBA,gCAAA,CAAA,4BAAA,CAfA,UAAA,CAYA,+CACE,CAZF,SvB8iFJ,CuB7hFI,gEACE,gBAAA,CACA,SAAA,CACA,8CACE,CADF,sCvB+hFN,CuBliFI,6DACE,gBAAA,CACA,SAAA,CACA,2CACE,CADF,sCvB+hFN,CuBliFI,uDACE,gBAAA,CACA,SAAA,CACA,sCvB+hFN,CuBzhFI,wBAGE,oCACE,wCAAA,CAAA,gCvByhFN,CuBrhFI,2CACE,sBAAA,CAAA,cvBuhFN,CACF,CuBlhFE,kBACE,kBvBohFJ,CuBhhFE,4BAGE,kBAAA,CAAA,oBvBuhFJ,CuB1hFE,4BAGE,mBAAA,CAAA,mBvBuhFJ,CuB1hFE,kBAME,cAAA,CALA,aAAA,CAIA,YAAA,CAKA,uBAAA,CAHA,2CACE,CAJF,kBAAA,CAFA,UvBwhFJ,CuB7gFI,6CACE,+BvB+gFN,CuBhhFI,0CACE,+BvB+gFN,CuBhhFI,oCACE,+BvB+gFN,CuB3gFI,wBACE,qDvB6gFN,CwB5mFA,MAEI,2RAAA,CAAA,8WAAA,CAAA,sPAAA,CAAA,8xBAAA,CAAA,qNAAA,CAAA,gbAAA,CAAA,gMAAA,CAAA,+PAAA,CAAA,8KAAA,CAAA,0eAAA,CAAA,kUAAA,CAAA,gMxBqoFJ,CwBznFE,8CAOE,8CAAA,CACA,sBAAA,CAEA,mBAAA,CACA,8BAAA,CAPA,mCAAA,CAHA,iBAAA,CAIA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAGA,uBxBioFJ,CwBvoFE,2CA
OE,8CAAA,CACA,sBAAA,CAEA,mBAAA,CACA,8BAAA,CAPA,mCAAA,CAHA,iBAAA,CAIA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAGA,uBxBioFJ,CwBvoFE,wDASE,uBxB8nFJ,CwBvoFE,qDASE,uBxB8nFJ,CwBvoFE,+CASE,uBxB8nFJ,CwBvoFE,wDASE,wBxB8nFJ,CwBvoFE,qDASE,wBxB8nFJ,CwBvoFE,+CASE,wBxB8nFJ,CwBvoFE,qCAOE,8CAAA,CACA,sBAAA,CAEA,mBAAA,CACA,8BAAA,CAPA,mCAAA,CAHA,iBAAA,CAIA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAGA,uBxBioFJ,CwBznFI,aAdF,8CAeI,exB4nFJ,CwB3oFA,2CAeI,exB4nFJ,CwB3oFA,qCAeI,exB4nFJ,CACF,CwBxnFI,gDACE,qBxB0nFN,CwB3nFI,6CACE,qBxB0nFN,CwB3nFI,uCACE,qBxB0nFN,CwBtnFI,gFAEE,iBAAA,CADA,cxBynFN,CwB1nFI,0EAEE,iBAAA,CADA,cxBynFN,CwB1nFI,8DAEE,iBAAA,CADA,cxBynFN,CwBpnFI,sEACE,iBxBsnFN,CwBvnFI,mEACE,iBxBsnFN,CwBvnFI,6DACE,iBxBsnFN,CwBlnFI,iEACE,exBonFN,CwBrnFI,8DACE,exBonFN,CwBrnFI,wDACE,exBonFN,CwBhnFI,qEACE,YxBknFN,CwBnnFI,kEACE,YxBknFN,CwBnnFI,4DACE,YxBknFN,CwB9mFI,+DACE,mBxBgnFN,CwBjnFI,4DACE,mBxBgnFN,CwBjnFI,sDACE,mBxBgnFN,CwB3mFE,oDAOE,oCAAA,CACA,sBAAA,CAFA,eAAA,CAJA,eAAA,CAAA,YAAA,CAEA,oBAAA,CAAA,iBAAA,CAHA,iBxBsnFJ,CwBvnFE,iDAOE,oCAAA,CACA,sBAAA,CAFA,eAAA,CAJA,eAAA,CAAA,YAAA,CAEA,oBAAA,CAAA,iBAAA,CAHA,iBxBsnFJ,CwBvnFE,8DAGE,kBAAA,CAAA,mBxBonFJ,CwBvnFE,2DAGE,kBAAA,CAAA,mBxBonFJ,CwBvnFE,qDAGE,kBAAA,CAAA,mBxBonFJ,CwBvnFE,8DAGE,kBAAA,CAAA,mBxBonFJ,CwBvnFE,2DAGE,kBAAA,CAAA,mBxBonFJ,CwBvnFE,qDAGE,kBAAA,CAAA,mBxBonFJ,CwBvnFE,8DAKE,iBAAA,CAAA,mBxBknFJ,CwBvnFE,2DAKE,iBAAA,CAAA,mBxBknFJ,CwBvnFE,qDAKE,iBAAA,CAAA,mBxBknFJ,CwBvnFE,8DAKE,kBAAA,CAAA,kBxBknFJ,CwBvnFE,2DAKE,kBAAA,CAAA,kBxBknFJ,CwBvnFE,qDAKE,kBAAA,CAAA,kBxBknFJ,CwBvnFE,8DASE,uBxB8mFJ,CwBvnFE,2DASE,uBxB8mFJ,CwBvnFE,qDASE,uBxB8mFJ,CwBvnFE,8DASE,wBxB8mFJ,CwBvnFE,2DASE,wBxB8mFJ,CwBvnFE,qDASE,wBxB8mFJ,CwBvnFE,8DAUE,4BxB6mFJ,CwBvnFE,2DAUE,4BxB6mFJ,CwBvnFE,qDAUE,4BxB6mFJ,CwBvnFE,8DAUE,6BxB6mFJ,CwBvnFE,2DAUE,6BxB6mFJ,CwBvnFE,qDAUE,6BxB6mFJ,CwBvnFE,2CAOE,oCAAA,CACA,sBAAA,CAFA,eAAA,CAJA,eAAA,CAAA,YAAA,CAEA,oBAAA,CAAA,iBAAA,CAHA,iBxBsnFJ,CwB1mFI,oEACE,exB4mFN,CwB7mFI,iEACE,exB4mFN,CwB7mFI,2DACE,exB4mFN,CwBxmFI,2DAME,wBCwIU,CDpIV,UAAA,CALA,WAAA,CAEA,kDAAA,CAAA,0CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CARA,iBAAA,CACA,UAAA,CAEA,UxBgnFN,CwBpnFI,wDAME,wBCwIU,CDpIV,UAAA,CALA,WAAA,CAEA,0CAAA,CACA,qBAAA,CACA,iBAAA,CARA,iBAAA,CACA,UAAA,CAEA,UxBgnFN,CwBpnFI,qEAGE,UxBinFN,CwBpnFI,kEAGE,UxBinFN,CwBpnFI,4DAGE,UxBinFN,CwBpnFI,qEAGE,WxBinFN,CwBpnFI,kEAGE,WxBinFN,CwBpnFI,4DAGE,WxBinFN,CwBpnFI,kDAME,wBCwIU,CDpIV,UAAA,CALA,WAAA,CAEA,kDAAA,CAAA,0CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CARA,iBAAA,CACA,UAAA,CAEA,UxBgnFN,CwBtlFE,iEACE,oBxBylFJ,CwB1lFE,2DACE,oBxBylFJ,CwB1lFE,+CACE,oBxBylFJ,CwBrlFE,wEACE,oCAAA,CACA,oBxBwlFJ,CwB1lFE,kEACE,oCAAA,CACA,oBxBwlFJ,CwB1lFE,sDACE,oCAAA,CACA,oBxBwlFJ,CwBrlFI,+EACE,wBApBG,CAqBH,kDAAA,CAAA,0CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBulFN,CwB3lFI,yEACE,wBApBG,CAqBH,0CAAA,CACA,qBAAA,CACA,iBxBulFN,CwB3lFI,6DACE,wBApBG,CAqBH,kDAAA,CAAA,0CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBulFN,CwBrmFE,oFACE,oBxBwmFJ,CwBzmFE,8EACE,oBxBwmFJ,CwBzmFE,kEACE,oBxBwmFJ,CwBpmFE,2FACE,mCAAA,CACA,oBxBumFJ,CwBzmFE,qFACE,mCAAA,CACA,oBxBumFJ,CwBzmFE,yEACE,mCAAA,CACA,oBxBumFJ,CwBpmFI,kGACE,wBApBG,CAqBH,sDAAA,CAAA,8CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBsmFN,CwB1mFI,4FACE,wBApBG,CAqBH,8CAAA,CACA,qBAAA,CACA,iBxBsmFN,CwB1mFI,gFACE,wBApBG,CAqBH,sDAAA,CAAA,8CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBsmFN,CwBpnFE,uEACE,oBxBunFJ,CwBxnFE,iEACE,oBxBunFJ,CwBxnFE,qDACE,oBxBunFJ,CwBnnFE,8EACE,mCAAA,CACA,oBxBsnFJ,CwBxnFE,wEACE,mCAAA,CACA,oBxBsnFJ,CwBxnFE,4DACE,mCAAA,CACA,oBxBsnFJ,CwBnnFI,qFACE,wBApBG,CAqBH,kDAAA,CAAA,0CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBqnFN,CwBznFI,+EACE,wBApBG,C
AqBH,0CAAA,CACA,qBAAA,CACA,iBxBqnFN,CwBznFI,mEACE,wBApBG,CAqBH,kDAAA,CAAA,0CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBqnFN,CwBnoFE,iFACE,oBxBsoFJ,CwBvoFE,2EACE,oBxBsoFJ,CwBvoFE,+DACE,oBxBsoFJ,CwBloFE,wFACE,mCAAA,CACA,oBxBqoFJ,CwBvoFE,kFACE,mCAAA,CACA,oBxBqoFJ,CwBvoFE,sEACE,mCAAA,CACA,oBxBqoFJ,CwBloFI,+FACE,wBApBG,CAqBH,iDAAA,CAAA,yCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBooFN,CwBxoFI,yFACE,wBApBG,CAqBH,yCAAA,CACA,qBAAA,CACA,iBxBooFN,CwBxoFI,6EACE,wBApBG,CAqBH,iDAAA,CAAA,yCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBooFN,CwBlpFE,iFACE,oBxBqpFJ,CwBtpFE,2EACE,oBxBqpFJ,CwBtpFE,+DACE,oBxBqpFJ,CwBjpFE,wFACE,kCAAA,CACA,oBxBopFJ,CwBtpFE,kFACE,kCAAA,CACA,oBxBopFJ,CwBtpFE,sEACE,kCAAA,CACA,oBxBopFJ,CwBjpFI,+FACE,wBApBG,CAqBH,qDAAA,CAAA,6CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBmpFN,CwBvpFI,yFACE,wBApBG,CAqBH,6CAAA,CACA,qBAAA,CACA,iBxBmpFN,CwBvpFI,6EACE,wBApBG,CAqBH,qDAAA,CAAA,6CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBmpFN,CwBjqFE,gFACE,oBxBoqFJ,CwBrqFE,0EACE,oBxBoqFJ,CwBrqFE,8DACE,oBxBoqFJ,CwBhqFE,uFACE,oCAAA,CACA,oBxBmqFJ,CwBrqFE,iFACE,oCAAA,CACA,oBxBmqFJ,CwBrqFE,qEACE,oCAAA,CACA,oBxBmqFJ,CwBhqFI,8FACE,wBApBG,CAqBH,sDAAA,CAAA,8CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBkqFN,CwBtqFI,wFACE,wBApBG,CAqBH,8CAAA,CACA,qBAAA,CACA,iBxBkqFN,CwBtqFI,4EACE,wBApBG,CAqBH,sDAAA,CAAA,8CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBkqFN,CwBhrFE,wFACE,oBxBmrFJ,CwBprFE,kFACE,oBxBmrFJ,CwBprFE,sEACE,oBxBmrFJ,CwB/qFE,+FACE,mCAAA,CACA,oBxBkrFJ,CwBprFE,yFACE,mCAAA,CACA,oBxBkrFJ,CwBprFE,6EACE,mCAAA,CACA,oBxBkrFJ,CwB/qFI,sGACE,wBApBG,CAqBH,qDAAA,CAAA,6CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBirFN,CwBrrFI,gGACE,wBApBG,CAqBH,6CAAA,CACA,qBAAA,CACA,iBxBirFN,CwBrrFI,oFACE,wBApBG,CAqBH,qDAAA,CAAA,6CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBirFN,CwB/rFE,mFACE,oBxBksFJ,CwBnsFE,6EACE,oBxBksFJ,CwBnsFE,iEACE,oBxBksFJ,CwB9rFE,0FACE,mCAAA,CACA,oBxBisFJ,CwBnsFE,oFACE,mCAAA,CACA,oBxBisFJ,CwBnsFE,wEACE,mCAAA,CACA,oBxBisFJ,CwB9rFI,iGACE,wBApBG,CAqBH,qDAAA,CAAA,6CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBgsFN,CwBpsFI,2FACE,wBApBG,CAqBH,6CAAA,CACA,qBAAA,CACA,iBxBgsFN,CwBpsFI,+EACE,wBApBG,CAqBH,qDAAA,CAAA,6CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxBgsFN,CwB9sFE,0EACE,oBxBitFJ,CwBltFE,oEACE,oBxBitFJ,CwBltFE,wDACE,oBxBitFJ,CwB7sFE,iFACE,mCAAA,CACA,oBxBgtFJ,CwBltFE,2EACE,mCAAA,CACA,oBxBgtFJ,CwBltFE,+DACE,mCAAA,CACA,oBxBgtFJ,CwB7sFI,wFACE,wBApBG,CAqBH,oDAAA,CAAA,4CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxB+sFN,CwBntFI,kFACE,wBApBG,CAqBH,4CAAA,CACA,qBAAA,CACA,iBxB+sFN,CwBntFI,sEACE,wBApBG,CAqBH,oDAAA,CAAA,4CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxB+sFN,CwB7tFE,gEACE,oBxBguFJ,CwBjuFE,0DACE,oBxBguFJ,CwBjuFE,8CACE,oBxBguFJ,CwB5tFE,uEACE,kCAAA,CACA,oBxB+tFJ,CwBjuFE,iEACE,kCAAA,CACA,oBxB+tFJ,CwBjuFE,qDACE,kCAAA,CACA,oBxB+tFJ,CwB5tFI,8EACE,wBApBG,CAqBH,iDAAA,CAAA,yCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxB8tFN,CwBluFI,wEACE,wBApBG,CAqBH,yCAAA,CACA,qBAAA,CACA,iBxB8tFN,CwBluFI,4DACE,wBApBG,CAqBH,iDAAA,CAAA,yCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxB8tFN,CwB5uFE,oEACE,oBxB+uFJ,CwBhvFE,8DACE,oBxB+uFJ,CwBhvFE,kDACE,oBxB+uFJ,CwB3uFE,2EACE,oCAAA,CACA,oBxB8uFJ,CwBhvFE,qEACE,oCAAA,CACA,oBxB8uFJ,CwBhvFE,yDACE,oCAAA,CACA,oBxB8uFJ,CwB3uFI,kFACE,wBApBG,CAqBH,qDAAA,CAAA,6CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxB6uFN,CwBjvFI,4EACE,wBApBG,CAqBH,6CAAA,CACA,qBAAA,CACA,iBxB6uFN,CwBjvFI,gEACE,wBApBG,CAqBH,qDAAA,CAAA,6CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxB6uFN,CwB3vFE,wEACE,oBxB8vFJ,CwB/vFE,kEACE,oBxB8vFJ,CwB/vFE,sDACE,oBxB8vFJ,CwB1vFE,+EACE,kCAAA,CACA,oBxB6vFJ,CwB/vFE,yEACE,kCAAA,CACA,oBxB6vFJ,CwB
/vFE,6DACE,kCAAA,CACA,oBxB6vFJ,CwB1vFI,sFACE,wBApBG,CAqBH,mDAAA,CAAA,2CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxB4vFN,CwBhwFI,gFACE,wBApBG,CAqBH,2CAAA,CACA,qBAAA,CACA,iBxB4vFN,CwBhwFI,oEACE,wBApBG,CAqBH,mDAAA,CAAA,2CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBxB4vFN,C0Bn5FA,MACE,wM1Bs5FF,C0B74FE,sBACE,uCAAA,CACA,gB1Bg5FJ,C0B74FI,mCACE,a1B+4FN,C0Bh5FI,mCACE,c1B+4FN,C0B34FM,4BACE,sB1B64FR,C0B14FQ,mCACE,gC1B44FV,C0Bx4FQ,2DAEE,SAAA,CADA,uBAAA,CAEA,e1B04FV,C0Bt4FQ,0EAEE,SAAA,CADA,uB1By4FV,C0B14FQ,uEAEE,SAAA,CADA,uB1By4FV,C0B14FQ,iEAEE,SAAA,CADA,uB1By4FV,C0Bp4FQ,yCACE,Y1Bs4FV,C0B/3FE,0BAEE,eAAA,CADA,e1Bk4FJ,C0B93FI,+BACE,oB1Bg4FN,C0B33FE,gDACE,Y1B63FJ,C0Bz3FE,8BAEE,+BAAA,CADA,oBAAA,CAGA,WAAA,CAGA,SAAA,CADA,4BAAA,CAEA,4DACE,CAJF,0B1B63FJ,C0Bp3FI,aAdF,8BAeI,+BAAA,CAEA,SAAA,CADA,uB1Bw3FJ,CACF,C0Bp3FI,wCACE,6B1Bs3FN,C0Bl3FI,oCACE,+B1Bo3FN,C0Bh3FI,qCAIE,6BAAA,CAIA,UAAA,CAPA,oBAAA,CAEA,YAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CALA,W1Bw3FN,C0B52FQ,mDACE,oB1B82FV,C2B39FE,kCAEE,iB3Bi+FJ,C2Bn+FE,kCAEE,kB3Bi+FJ,C2Bn+FE,wBAGE,yCAAA,CAFA,oBAAA,CAGA,SAAA,CACA,mC3B89FJ,C2Bz9FI,aAVF,wBAWI,Y3B49FJ,CACF,C2Bx9FE,mFAEE,SAAA,CACA,2CACE,CADF,mC3B09FJ,C2B79FE,gFAEE,SAAA,CACA,wCACE,CADF,mC3B09FJ,C2B79FE,0EAEE,SAAA,CACA,mC3B09FJ,C2Bp9FE,mFAEE,+B3Bs9FJ,C2Bx9FE,gFAEE,+B3Bs9FJ,C2Bx9FE,0EAEE,+B3Bs9FJ,C2Bl9FE,oBACE,yBAAA,CACA,uBAAA,CAGA,yE3Bk9FJ,CKn1FI,sCsBrHE,qDACE,uB3B28FN,CACF,C2Bt8FE,0CACE,yB3Bw8FJ,C2Bz8FE,uCACE,yB3Bw8FJ,C2Bz8FE,iCACE,yB3Bw8FJ,C2Bp8FE,sBACE,0B3Bs8FJ,C4BjgGE,2BACE,a5BogGJ,CK/0FI,wCuBtLF,2BAKI,e5BogGJ,CACF,C4BjgGI,6BAEE,0BAAA,CAAA,2BAAA,CACA,eAAA,CACA,iBAAA,CAHA,yBAAA,CAAA,sBAAA,CAAA,iB5BsgGN,C4BhgGM,2CACE,kB5BkgGR,C6BnhGE,kDACE,kCAAA,CAAA,0B7BshGJ,C6BvhGE,+CACE,0B7BshGJ,C6BvhGE,yCACE,kCAAA,CAAA,0B7BshGJ,C6BlhGE,uBACE,4C7BohGJ,C6BhhGE,uBACE,4C7BkhGJ,C6B9gGE,4BACE,qC7BghGJ,C6B7gGI,mCACE,a7B+gGN,C6B3gGI,kCACE,a7B6gGN,C6BxgGE,0BAKE,eAAA,CAJA,aAAA,CACA,YAAA,CAEA,aAAA,CADA,kBAAA,CAAA,mB7B4gGJ,C6BvgGI,uCACE,e7BygGN,C6BrgGI,sCACE,kB7BugGN,C8BtjGA,MACE,8L9ByjGF,C8BhjGE,oBACE,iBAAA,CAEA,gBAAA,CADA,a9BojGJ,C8BhjGI,wCACE,uB9BkjGN,C8B9iGI,gCAEE,eAAA,CADA,gB9BijGN,C8B1iGM,wCACE,mB9B4iGR,C8BtiGE,8BAGE,oB9B2iGJ,C8B9iGE,8BAGE,mB9B2iGJ,C8B9iGE,8BAIE,4B9B0iGJ,C8B9iGE,4DAKE,6B9ByiGJ,C8B9iGE,8BAKE,4B9ByiGJ,C8B9iGE,oBAME,cAAA,CALA,aAAA,CACA,e9B4iGJ,C8BriGI,kCACE,uCAAA,CACA,oB9BuiGN,C8BniGI,wCAEE,uCAAA,CADA,Y9BsiGN,C8BjiGI,oCAGE,W9B4iGN,C8B/iGI,oCAGE,U9B4iGN,C8B/iGI,0BAME,6BAAA,CAMA,UAAA,CAPA,WAAA,CAEA,yCAAA,CAAA,iCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CARA,iBAAA,CACA,UAAA,CAQA,sBAAA,CACA,yBAAA,CAPA,U9B2iGN,C8BhiGM,oCACE,wB9BkiGR,C8B7hGI,4BACE,Y9B+hGN,C8B1hGI,4CACE,Y9B4hGN,C+B9mGE,qDACE,mBAAA,CACA,cAAA,CACA,uB/BinGJ,C+BpnGE,kDACE,mBAAA,CACA,cAAA,CACA,uB/BinGJ,C+BpnGE,4CACE,mBAAA,CACA,cAAA,CACA,uB/BinGJ,C+B9mGI,yDAGE,iBAAA,CADA,eAAA,CADA,a/BknGN,C+BnnGI,sDAGE,iBAAA,CADA,eAAA,CADA,a/BknGN,C+BnnGI,gDAGE,iBAAA,CADA,eAAA,CADA,a/BknGN,CgCxnGE,gCACE,sChC2nGJ,CgC5nGE,6BACE,sChC2nGJ,CgC5nGE,uBACE,sChC2nGJ,CgCxnGE,cACE,yChC0nGJ,CgC9mGE,4DACE,oChCgnGJ,CgCjnGE,yDACE,oChCgnGJ,CgCjnGE,mDACE,oChCgnGJ,CgCxmGE,6CACE,qChC0mGJ,CgC3mGE,0CACE,qChC0mGJ,CgC3mGE,oCACE,qChC0mGJ,CgChmGE,oDACE,oChCkmGJ,CgCnmGE,iDACE,oChCkmGJ,CgCnmGE,2CACE,oChCkmGJ,CgCzlGE,gDACE,qChC2lGJ,CgC5lGE,6CACE,qChC2lGJ,CgC5lGE,uCACE,qChC2lGJ,CgCtlGE,gCACE,kChCwlGJ,CgCzlGE,6BACE,kChCwlGJ,CgCzlGE,uBACE,kChCwlGJ,CgCllGE,qCACE,sChColGJ,CgCrlGE,kCACE,sChColGJ,CgCrlGE,4BACE,sChColGJ,CgC7kGE,yCACE,sChC+kGJ,CgChlGE,sCACE,sChC+kGJ,CgChlGE,gCACE,sChC+kGJ,CgCxkGE,yCACE,qChC0kGJ,CgC3kGE,sCACE,qChC0kGJ,CgC3kGE,gCACE,qChC0kGJ,CgCjkGE,gDACE,qChCmkGJ,CgC
pkGE,6CACE,qChCmkGJ,CgCpkGE,uCACE,qChCmkGJ,CgC3jGE,6CACE,sChC6jGJ,CgC9jGE,0CACE,sChC6jGJ,CgC9jGE,oCACE,sChC6jGJ,CgCljGE,yDACE,qChCojGJ,CgCrjGE,sDACE,qChCojGJ,CgCrjGE,gDACE,qChCojGJ,CgC/iGE,iCAGE,mBAAA,CAFA,gBAAA,CACA,gBhCkjGJ,CgCpjGE,8BAGE,mBAAA,CAFA,gBAAA,CACA,gBhCkjGJ,CgCpjGE,wBAGE,mBAAA,CAFA,gBAAA,CACA,gBhCkjGJ,CgC9iGE,eACE,4ChCgjGJ,CgC7iGE,eACE,4ChC+iGJ,CgC3iGE,gBAIE,wCAAA,CAHA,aAAA,CACA,wBAAA,CACA,wBhC8iGJ,CgCziGE,yBAOE,wCAAA,CACA,+DAAA,CACA,4BAAA,CACA,6BAAA,CARA,aAAA,CAIA,eAAA,CADA,eAAA,CAFA,cAAA,CACA,oCAAA,CAHA,iBhCojGJ,CgCxiGI,6BACE,YhC0iGN,CgCviGM,kCACE,wBAAA,CACA,yBhCyiGR,CgCniGE,iCAWE,wCAAA,CACA,+DAAA,CAFA,uCAAA,CAGA,0BAAA,CAPA,UAAA,CAJA,oBAAA,CAMA,2BAAA,CADA,2BAAA,CAEA,2BAAA,CARA,uBAAA,CAAA,eAAA,CAaA,wBAAA,CAAA,qBAAA,CAAA,oBAAA,CAAA,gBAAA,CATA,ShC4iGJ,CgC1hGE,sBACE,iBAAA,CACA,iBhC4hGJ,CgCphGI,sCACE,gBhCshGN,CgClhGI,gDACE,YhCohGN,CgC1gGA,gBACE,iBhC6gGF,CgCzgGE,uCACE,aAAA,CACA,ShC2gGJ,CgC7gGE,oCACE,aAAA,CACA,ShC2gGJ,CgC7gGE,8BACE,aAAA,CACA,ShC2gGJ,CgCtgGE,mBACE,YhCwgGJ,CgCngGE,oBACE,QhCqgGJ,CgCjgGE,4BACE,WAAA,CACA,SAAA,CACA,ehCmgGJ,CgC9/FE,yBAIE,wCAAA,CAEA,+BAAA,CADA,4BAAA,CAFA,eAAA,CADA,oDAAA,CAKA,wBAAA,CAAA,qBAAA,CAAA,oBAAA,CAAA,gBhCggGJ,CgC5/FE,2BAEE,+DAAA,CADA,2BhC+/FJ,CgC3/FI,+BACE,uCAAA,CACA,gBhC6/FN,CgCx/FE,sBACE,MAAA,CACA,WhC0/FJ,CgCr/FA,aACE,ahCw/FF,CgC/+FE,4BAEE,aAAA,CADA,YhCm/FJ,CgC/+FI,iCAEE,2BAAA,CADA,wBhCk/FN,CgC5+FE,6DAKE,2CAAA,CAEA,+BAAA,CADA,gCAAA,CADA,sBAAA,CAJA,mBAAA,CAEA,gBAAA,CADA,ahCm/FJ,CgCr/FE,0DAKE,2CAAA,CAEA,+BAAA,CADA,gCAAA,CADA,sBAAA,CAJA,mBAAA,CAEA,gBAAA,CADA,ahCm/FJ,CgCr/FE,oDAKE,2CAAA,CAEA,+BAAA,CADA,gCAAA,CADA,sBAAA,CAJA,mBAAA,CAEA,gBAAA,CADA,ahCm/FJ,CgC3+FI,mEAEE,UAAA,CACA,UAAA,CAFA,ahC++FN,CgCh/FI,gEAEE,UAAA,CACA,UAAA,CAFA,ahC++FN,CgCh/FI,0DAEE,UAAA,CACA,UAAA,CAFA,ahC++FN,CK1mGI,wC2B0IF,8BACE,iBhCo+FF,CgCj+FE,mCACE,eAAA,CACA,ehCm+FJ,CgC/9FE,mCACE,ehCi+FJ,CgC79FE,sCAEE,mBAAA,CACA,eAAA,CADA,oBAAA,CADA,kBAAA,CAAA,mBhCi+FJ,CgC19FA,mCAEE,eAAA,CADA,iBhC89FF,CgC19FE,wCACE,eAAA,CACA,ehC49FJ,CACF,CDxzGI,kDAIE,+BAAA,CACA,8BAAA,CAFA,aAAA,CADA,QAAA,CADA,iBC8zGN,CD/zGI,+CAIE,+BAAA,CACA,8BAAA,CAFA,aAAA,CADA,QAAA,CADA,iBC8zGN,CD/zGI,yCAIE,+BAAA,CACA,8BAAA,CAFA,aAAA,CADA,QAAA,CADA,iBC8zGN,CDtzGI,uBAEE,uCAAA,CADA,cCyzGN,CDpwGM,iHAEE,WAlDkB,CAiDlB,kBC+wGR,CDhxGM,6HAEE,WAlDkB,CAiDlB,kBC2xGR,CD5xGM,6HAEE,WAlDkB,CAiDlB,kBCuyGR,CDxyGM,oHAEE,WAlDkB,CAiDlB,kBCmzGR,CDpzGM,0HAEE,WAlDkB,CAiDlB,kBC+zGR,CDh0GM,uHAEE,WAlDkB,CAiDlB,kBC20GR,CD50GM,uHAEE,WAlDkB,CAiDlB,kBCu1GR,CDx1GM,6HAEE,WAlDkB,CAiDlB,kBCm2GR,CDp2GM,yCAEE,WAlDkB,CAiDlB,kBCu2GR,CDx2GM,yCAEE,WAlDkB,CAiDlB,kBC22GR,CD52GM,0CAEE,WAlDkB,CAiDlB,kBC+2GR,CDh3GM,uCAEE,WAlDkB,CAiDlB,kBCm3GR,CDp3GM,wCAEE,WAlDkB,CAiDlB,kBCu3GR,CDx3GM,sCAEE,WAlDkB,CAiDlB,kBC23GR,CD53GM,wCAEE,WAlDkB,CAiDlB,kBC+3GR,CDh4GM,oCAEE,WAlDkB,CAiDlB,kBCm4GR,CDp4GM,2CAEE,WAlDkB,CAiDlB,kBCu4GR,CDx4GM,qCAEE,WAlDkB,CAiDlB,kBC24GR,CD54GM,oCAEE,WAlDkB,CAiDlB,kBC+4GR,CDh5GM,kCAEE,WAlDkB,CAiDlB,kBCm5GR,CDp5GM,qCAEE,WAlDkB,CAiDlB,kBCu5GR,CDx5GM,mCAEE,WAlDkB,CAiDlB,kBC25GR,CD55GM,qCAEE,WAlDkB,CAiDlB,kBC+5GR,CDh6GM,wCAEE,WAlDkB,CAiDlB,kBCm6GR,CDp6GM,sCAEE,WAlDkB,CAiDlB,kBCu6GR,CDx6GM,2CAEE,WAlDkB,CAiDlB,kBC26GR,CDh6GM,iCAEE,WAPkB,CAMlB,iBCm6GR,CDp6GM,uCAEE,WAPkB,CAMlB,iBCu6GR,CDx6GM,mCAEE,WAPkB,CAMlB,iBC26GR,CiC1/GE,wBAKE,mBAAA,CAHA,YAAA,CACA,qBAAA,CACA,YAAA,CAHA,iBjCigHJ,CiCv/GI,8BAGE,QAAA,CACA,SAAA,CAHA,iBAAA,CACA,OjC2/GN,CiCt/GM,qCACE,0BjCw/GR,CiCz9GE,2BAME,uBAAA,CAFA,+DAAA,CAHA,YAAA,CACA,cAAA,CACA,aAAA,CAEA,gCAAA,CAAA,4BAAA,CAEA,oBjC29GJ,CiCx9GI,aAVF,2BAWI,gBjC29GJ,CACF,CiCx9GI,cAGE,+BACE,iBjCw9GN,CiCr9GM,sCAOE,oCAAA,CALA,QAAA,CAWA,UAAA,CATA,aAAA,CAEA,UAAA,CA
HA,MAAA,CAFA,iBAAA,CAOA,2CAAA,CACA,qCACE,CAEF,kDAAA,CAPA,+BjC69GR,CACF,CiCh9GI,8CACE,YjCk9GN,CiC98GI,iCAQE,qCAAA,CAEA,6BAAA,CANA,uCAAA,CAOA,cAAA,CAVA,aAAA,CAKA,gBAAA,CADA,eAAA,CAFA,8BAAA,CAMA,uBAAA,CAGA,2CACE,CANF,kBAAA,CALA,UjC09GN,CiC38GM,aAII,6CACE,OjC08GV,CiC38GQ,8CACE,OjC68GV,CiC98GQ,8CACE,OjCg9GV,CiCj9GQ,8CACE,OjCm9GV,CiCp9GQ,8CACE,OjCs9GV,CiCv9GQ,8CACE,OjCy9GV,CiC19GQ,8CACE,OjC49GV,CiC79GQ,8CACE,OjC+9GV,CiCh+GQ,8CACE,OjCk+GV,CiCn+GQ,+CACE,QjCq+GV,CiCt+GQ,+CACE,QjCw+GV,CiCz+GQ,+CACE,QjC2+GV,CiC5+GQ,+CACE,QjC8+GV,CiC/+GQ,+CACE,QjCi/GV,CiCl/GQ,+CACE,QjCo/GV,CiCr/GQ,+CACE,QjCu/GV,CiCx/GQ,+CACE,QjC0/GV,CiC3/GQ,+CACE,QjC6/GV,CiC9/GQ,+CACE,QjCggHV,CiCjgHQ,+CACE,QjCmgHV,CACF,CiC9/GM,uCACE,+BjCggHR,CiC1/GE,4BACE,UjC4/GJ,CiCz/GI,aAJF,4BAKI,gBjC4/GJ,CACF,CiCx/GE,0BACE,YjC0/GJ,CiCv/GI,aAJF,0BAKI,ajC0/GJ,CiCt/GM,sCACE,OjCw/GR,CiCz/GM,uCACE,OjC2/GR,CiC5/GM,uCACE,OjC8/GR,CiC//GM,uCACE,OjCigHR,CiClgHM,uCACE,OjCogHR,CiCrgHM,uCACE,OjCugHR,CiCxgHM,uCACE,OjC0gHR,CiC3gHM,uCACE,OjC6gHR,CiC9gHM,uCACE,OjCghHR,CiCjhHM,wCACE,QjCmhHR,CiCphHM,wCACE,QjCshHR,CiCvhHM,wCACE,QjCyhHR,CiC1hHM,wCACE,QjC4hHR,CiC7hHM,wCACE,QjC+hHR,CiChiHM,wCACE,QjCkiHR,CiCniHM,wCACE,QjCqiHR,CiCtiHM,wCACE,QjCwiHR,CiCziHM,wCACE,QjC2iHR,CiC5iHM,wCACE,QjC8iHR,CiC/iHM,wCACE,QjCijHR,CACF,CiC3iHI,iKAGE,QjC6iHN,CiC1iHM,8MACE,wBjC+iHR,CiChjHM,4ZAEE,yBjC8iHR,CiCziHI,uRACE,wBjC4iHN,CiC7iHI,kJAEE,yBjC2iHN,CiC7iHI,yEAEE,wBjC2iHN,CiCviHI,sCACE,QjCyiHN,CKriHI,wC4BSF,wDAGE,kBjCiiHF,CiCpiHA,wDAGE,mBjCiiHF,CiCpiHA,8CAEE,eAAA,CADA,eAAA,CAGA,iCjCgiHF,CiC5hHE,8DACE,mBjC+hHJ,CiChiHE,8DACE,kBjC+hHJ,CiChiHE,oDAEE,UjC8hHJ,CACF,CiClhHE,cAHF,olDAII,+BjCqhHF,CiClhHE,g8GACE,sCjCohHJ,CACF,CiC/gHA,4sDACE,uDjCkhHF,CiC9gHA,wmDACE,ajCihHF,CkC3vHA,MACE,mVAAA,CAEA,4VlC+vHF,CkCrvHE,4BAEE,oBAAA,CADA,iBlCyvHJ,CkCpvHI,sDAGE,SlCsvHN,CkCzvHI,sDAGE,UlCsvHN,CkCzvHI,4CACE,iBAAA,CACA,SlCuvHN,CkCjvHE,+CAEE,SAAA,CADA,UlCovHJ,CkC/uHE,kDAGE,WlCwvHJ,CkC3vHE,kDAGE,YlCwvHJ,CkC3vHE,wCAME,qDAAA,CAIA,UAAA,CALA,aAAA,CAEA,0CAAA,CAAA,kCAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CARA,iBAAA,CACA,SAAA,CAEA,YlCuvHJ,CkC7uHE,gEACE,wBT0Wa,CSzWb,mDAAA,CAAA,2ClC+uHJ,CmChyHA,QACE,8DAAA,CAGA,+CAAA,CACA,iEAAA,CACA,oDAAA,CACA,sDAAA,CACA,mDnCiyHF,CmC7xHA,SAEE,kBAAA,CADA,YnCiyHF,CKxoHI,mC+BhKA,8BAIE,kBpC6yHJ,CoCjzHE,8BAIE,iBpC6yHJ,CoCjzHE,oBACE,UAAA,CAIA,mBAAA,CAFA,YAAA,CADA,apC+yHJ,CoCzyHI,8BACE,WpC2yHN,CoCvyHI,kCAEE,iBAAA,CAAA,cpCyyHN,CoC3yHI,kCAEE,aAAA,CAAA,kBpCyyHN,CoC3yHI,wBACE,WpC0yHN,CoCtyHM,kCACE,UpCwyHR,CACF","file":"main.css"} \ No newline at end of file diff --git a/assets/stylesheets/palette.e6a45f82.min.css b/assets/stylesheets/palette.e6a45f82.min.css new file mode 100644 index 00000000..9d16769c --- /dev/null +++ b/assets/stylesheets/palette.e6a45f82.min.css @@ -0,0 +1 @@ 
+[data-md-color-accent=red]{--md-accent-fg-color:#ff1947;--md-accent-fg-color--transparent:rgba(255,25,71,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=pink]{--md-accent-fg-color:#f50056;--md-accent-fg-color--transparent:rgba(245,0,86,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=purple]{--md-accent-fg-color:#df41fb;--md-accent-fg-color--transparent:rgba(223,65,251,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=deep-purple]{--md-accent-fg-color:#7c4dff;--md-accent-fg-color--transparent:rgba(124,77,255,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=indigo]{--md-accent-fg-color:#526cfe;--md-accent-fg-color--transparent:rgba(82,108,254,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=blue]{--md-accent-fg-color:#4287ff;--md-accent-fg-color--transparent:rgba(66,135,255,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=light-blue]{--md-accent-fg-color:#0091eb;--md-accent-fg-color--transparent:rgba(0,145,235,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=cyan]{--md-accent-fg-color:#00bad6;--md-accent-fg-color--transparent:rgba(0,186,214,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=teal]{--md-accent-fg-color:#00bda4;--md-accent-fg-color--transparent:rgba(0,189,164,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=green]{--md-accent-fg-color:#00c753;--md-accent-fg-color--transparent:rgba(0,199,83,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=light-green]{--md-accent-fg-color:#63de17;--md-accent-fg-color--transparent:rgba(99,222,23,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=lime]{--md-accent-fg-color:#b0eb00;--md-accent-fg-color--transparent:rgba(176,235,0,.1);--md-accent-bg-color:rgba(0,0,0,.87);--md-accent-bg-color--light:rgba(0,0,0,.54)}[data-md-color-accent=yellow]{--md-accent-fg-color:#ffd500;--md-accent-fg-color--transparent:rgba(255,213,0,.1);--md-accent-bg-color:rgba(0,0,0,.87);--md-accent-bg-color--light:rgba(0,0,0,.54)}[data-md-color-accent=amber]{--md-accent-fg-color:#fa0;--md-accent-fg-color--transparent:rgba(255,170,0,.1);--md-accent-bg-color:rgba(0,0,0,.87);--md-accent-bg-color--light:rgba(0,0,0,.54)}[data-md-color-accent=orange]{--md-accent-fg-color:#ff9100;--md-accent-fg-color--transparent:rgba(255,145,0,.1);--md-accent-bg-color:rgba(0,0,0,.87);--md-accent-bg-color--light:rgba(0,0,0,.54)}[data-md-color-accent=deep-orange]{--md-accent-fg-color:#ff6e42;--md-accent-fg-color--transparent:rgba(255,110,66,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=red]{--md-primary-fg-color:#ef5552;--md-primary-fg-color--light:#e57171;--md-primary-fg-color--dark:#e53734;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=pink]{--md-primary-fg-color:#e92063;--md-primary-fg-color--light:#ec417a;--md-primary-fg-color--dark:#c3185d;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=purple]{--md-primary-fg-color:#ab47bd;--md-primary-fg-color--light:#bb69c9;--md-primary-fg-color--dark:#8c24a8;--md-primar
y-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=deep-purple]{--md-primary-fg-color:#7e56c2;--md-primary-fg-color--light:#9574cd;--md-primary-fg-color--dark:#673ab6;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=indigo]{--md-primary-fg-color:#4051b5;--md-primary-fg-color--light:#5d6cc0;--md-primary-fg-color--dark:#303fa1;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=blue]{--md-primary-fg-color:#2094f3;--md-primary-fg-color--light:#42a5f5;--md-primary-fg-color--dark:#1975d2;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=light-blue]{--md-primary-fg-color:#02a6f2;--md-primary-fg-color--light:#28b5f6;--md-primary-fg-color--dark:#0287cf;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=cyan]{--md-primary-fg-color:#00bdd6;--md-primary-fg-color--light:#25c5da;--md-primary-fg-color--dark:#0097a8;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=teal]{--md-primary-fg-color:#009485;--md-primary-fg-color--light:#26a699;--md-primary-fg-color--dark:#007a6c;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=green]{--md-primary-fg-color:#4cae4f;--md-primary-fg-color--light:#68bb6c;--md-primary-fg-color--dark:#398e3d;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=light-green]{--md-primary-fg-color:#8bc34b;--md-primary-fg-color--light:#9ccc66;--md-primary-fg-color--dark:#689f38;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=lime]{--md-primary-fg-color:#cbdc38;--md-primary-fg-color--light:#d3e156;--md-primary-fg-color--dark:#b0b52c;--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54)}[data-md-color-primary=yellow]{--md-primary-fg-color:#ffec3d;--md-primary-fg-color--light:#ffee57;--md-primary-fg-color--dark:#fbc02d;--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54)}[data-md-color-primary=amber]{--md-primary-fg-color:#ffc105;--md-primary-fg-color--light:#ffc929;--md-primary-fg-color--dark:#ffa200;--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54)}[data-md-color-primary=orange]{--md-primary-fg-color:#ffa724;--md-primary-fg-color--light:#ffa724;--md-primary-fg-color--dark:#fa8900;--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54)}[data-md-color-primary=deep-orange]{--md-primary-fg-color:#ff6e42;--md-primary-fg-color--light:#ff8a66;--md-primary-fg-color--dark:#f4511f;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=brown]{--md-primary-fg-color:#795649;--md-primary-fg-color--light:#8d6e62;--md-primary-fg-color--dark:#5d4037;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=grey]{--md-primary-fg-color:#757575;--md-primary-fg-color--light:#9e9e9e;--md-primary-fg-color--dark:#616161;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=blue-grey]{--md-primary-fg-color:#546d78;--md-primary-fg-color--light:#607c8a;--md-primary-fg-color--dark:#455a63;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=white]{--md-primary-fg-color:#fff;--md-primary-fg-color--light:hsla(0,0%,100%,.7);-
-md-primary-fg-color--dark:rgba(0,0,0,.07);--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54);--md-typeset-a-color:#4051b5}@media screen and (min-width:60em){[data-md-color-primary=white] .md-search__form{background-color:rgba(0,0,0,.07)}[data-md-color-primary=white] .md-search__form:hover{background-color:rgba(0,0,0,.32)}[data-md-color-primary=white] .md-search__input+.md-search__icon{color:rgba(0,0,0,.87)}}@media screen and (min-width:76.25em){[data-md-color-primary=white] .md-tabs{border-bottom:.05rem solid rgba(0,0,0,.07)}}[data-md-color-primary=black]{--md-primary-fg-color:#000;--md-primary-fg-color--light:rgba(0,0,0,.54);--md-primary-fg-color--dark:#000;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7);--md-typeset-a-color:#4051b5}[data-md-color-primary=black] .md-header{background-color:#000}@media screen and (max-width:59.9375em){[data-md-color-primary=black] .md-nav__source{background-color:rgba(0,0,0,.87)}}@media screen and (min-width:60em){[data-md-color-primary=black] .md-search__form{background-color:hsla(0,0%,100%,.12)}[data-md-color-primary=black] .md-search__form:hover{background-color:hsla(0,0%,100%,.3)}}@media screen and (max-width:76.1875em){html [data-md-color-primary=black] .md-nav--primary .md-nav__title[for=__drawer]{background-color:#000}}@media screen and (min-width:76.25em){[data-md-color-primary=black] .md-tabs{background-color:#000}}@media screen{[data-md-color-scheme=slate]{--md-hue:232;--md-default-fg-color:hsla(var(--md-hue),75%,95%,1);--md-default-fg-color--light:hsla(var(--md-hue),75%,90%,0.62);--md-default-fg-color--lighter:hsla(var(--md-hue),75%,90%,0.32);--md-default-fg-color--lightest:hsla(var(--md-hue),75%,90%,0.12);--md-default-bg-color:hsla(var(--md-hue),15%,21%,1);--md-default-bg-color--light:hsla(var(--md-hue),15%,21%,0.54);--md-default-bg-color--lighter:hsla(var(--md-hue),15%,21%,0.26);--md-default-bg-color--lightest:hsla(var(--md-hue),15%,21%,0.07);--md-code-fg-color:hsla(var(--md-hue),18%,86%,1);--md-code-bg-color:hsla(var(--md-hue),15%,15%,1);--md-code-hl-color:rgba(66,135,255,.15);--md-code-hl-number-color:#e6695b;--md-code-hl-special-color:#f06090;--md-code-hl-function-color:#c973d9;--md-code-hl-constant-color:#9383e2;--md-code-hl-keyword-color:#6791e0;--md-code-hl-string-color:#2fb170;--md-code-hl-name-color:var(--md-code-fg-color);--md-code-hl-operator-color:var(--md-default-fg-color--light);--md-code-hl-punctuation-color:var(--md-default-fg-color--light);--md-code-hl-comment-color:var(--md-default-fg-color--light);--md-code-hl-generic-color:var(--md-default-fg-color--light);--md-code-hl-variable-color:var(--md-default-fg-color--light);--md-typeset-color:var(--md-default-fg-color);--md-typeset-a-color:var(--md-primary-fg-color);--md-typeset-mark-color:rgba(66,135,255,.3);--md-typeset-kbd-color:hsla(var(--md-hue),15%,94%,0.12);--md-typeset-kbd-accent-color:hsla(var(--md-hue),15%,94%,0.2);--md-typeset-kbd-border-color:hsla(var(--md-hue),15%,14%,1);--md-typeset-table-color:hsla(var(--md-hue),75%,95%,0.12);--md-admonition-bg-color:hsla(var(--md-hue),0%,100%,0.025);--md-footer-bg-color:hsla(var(--md-hue),15%,12%,0.87);--md-footer-bg-color--dark:hsla(var(--md-hue),15%,10%,1)}[data-md-color-scheme=slate][data-md-color-primary=black],[data-md-color-scheme=slate][data-md-color-primary=white]{--md-typeset-a-color:#5d6cc0}[data-md-color-scheme=slate] img[src$="#only-light"]{display:none}[data-md-color-scheme=slate] img[src$="#only-dark"]{display:initial}} \ No newline at end of file diff --git 
a/assets/stylesheets/palette.e6a45f82.min.css.map b/assets/stylesheets/palette.e6a45f82.min.css.map new file mode 100644 index 00000000..b33c518d --- /dev/null +++ b/assets/stylesheets/palette.e6a45f82.min.css.map @@ -0,0 +1 @@ +{"version":3,"sources":["src/assets/stylesheets/palette/_accent.scss","../../../src/assets/stylesheets/palette.scss","src/assets/stylesheets/palette/_primary.scss","src/assets/stylesheets/utilities/_break.scss","src/assets/stylesheets/palette/_scheme.scss"],"names":[],"mappings":"AA8CE,2BACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CCnDN,CDyCE,4BACE,4BAAA,CACA,mDAAA,CAOE,yBAAA,CACA,8CC5CN,CDkCE,8BACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CCrCN,CD2BE,mCACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CC9BN,CDoBE,8BACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CCvBN,CDaE,4BACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CChBN,CDME,kCACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CCTN,CDDE,4BACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CCFN,CDRE,4BACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CCKN,CDfE,6BACE,4BAAA,CACA,mDAAA,CAOE,yBAAA,CACA,8CCYN,CDtBE,mCACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CCmBN,CD7BE,4BACE,4BAAA,CACA,oDAAA,CAIE,oCAAA,CACA,2CC6BN,CDpCE,8BACE,4BAAA,CACA,oDAAA,CAIE,oCAAA,CACA,2CCoCN,CD3CE,6BACE,yBAAA,CACA,oDAAA,CAIE,oCAAA,CACA,2CC2CN,CDlDE,8BACE,4BAAA,CACA,oDAAA,CAIE,oCAAA,CACA,2CCkDN,CDzDE,mCACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CCsDN,CC3DE,4BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDwDN,CCnEE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDgEN,CC3EE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDwEN,CCnFE,oCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDgFN,CC3FE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDwFN,CCnGE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDgGN,CC3GE,mCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDwGN,CCnHE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDgHN,CC3HE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDwHN,CCnIE,8BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDgIN,CC3IE,oCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDwIN,CCnJE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,qCAAA,CACA,4CDmJN,CC3JE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,qCAAA,CACA,4CD2JN,CCnKE,8BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,qCAAA,CACA,4CDmKN,CC3KE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,qCAAA,CACA,4CD2KN,CCnLE,oCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDgLN,CC3LE,8BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDwLN,CCnME,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDgMN,CC3ME,kCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CDwMN,CC9LA,8BACE,0BAAA,CACA,+CAAA,CACA,2CAAA,CACA,qCAAA,CACA,4CAAA,CAGA,4BD+LF,CE9EI,mCD3GA,+CACE,gCD4LJ,CCzLI,qDACE,gCD2LN,CCtLE,iEACE,qBDwLJ,CACF,CEzFI,sCDxFA,uCACE,0CDoLJ,CACF,CC3KA,8BACE,0BAAA,CACA,4CAAA,CACA,gCAAA,CACA,0BAAA,CACA,+CAAA,CAGA,4BD4KF,CCzKE,yCACE,qBD2KJ,CEvFI,wCD7EA,8CACE,gCDuKJ,CACF,CE/GI,mCDjDA,+CACE,oCDmKJ,CChKI,qDACE,mCDkKN,CACF,CEpGI,wCDtDA,iFACE,qBD6JJ,CACF,CE5HI,sCD1BA,uCACE,qBDyJJ,CACF,CGvSA,cAGE,6BAKE,YAAA,CAGA,mDAAA,CACA,6DAAA,CACA,+DAAA,CACA,gEAAA,CACA,mDAAA,CACA,6DAAA,CACA,+DAAA,CACA,gEAAA,CAGA,gDAAA,CACA,gDAAA,CAGA,uCAAA,CACA,iCAAA,CACA,kCAAA,CACA,mCAAA,CACA,mCAAA,CACA,kCAAA,CACA,iCAAA,CACA,+CAAA,CACA,6DAAA,CACA,gEAAA,CACA,4DAAA,CACA,4DAAA,CACA,6DAAA,CAGA,6CAAA,CAGA,+CAAA,CAGA,2CAAA,CAGA,uDAAA,CACA,6DAAA,CACA,2DAAA,CAGA,yDAAA,CAGA,0DAAA,CAGA,qDAAA,CACA,wDHgRF,CG7QE,oHAIE,4BH4QJ,CGxQE,qDACE,YH0QJ,CGtQE,oDACE,eHwQJ,CACF","file":"palette.css"} \ No newline at end of file diff --git a/hpc_administration/administrators/osg-flock/index.html 
b/hpc_administration/administrators/osg-flock/index.html new file mode 100644 index 00000000..2e1deac8 --- /dev/null +++ b/hpc_administration/administrators/osg-flock/index.html @@ -0,0 +1,2319 @@ + + + + + + + + + + + + + + + + + + Osg flock - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    + +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/hpc_administration/test-document/index.html b/hpc_administration/test-document/index.html new file mode 100644 index 00000000..28f4090b --- /dev/null +++ b/hpc_administration/test-document/index.html @@ -0,0 +1,2390 @@ + + + + + + + + + + + + + + + + + + Header 1 - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Header 1

    +

    Header 2

    +

    Header 3

    +

    Header 4

    +
    Header 5
    +
    Header 6
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/automated_workflows/dagman-simple-example/index.html b/htc_workloads/automated_workflows/dagman-simple-example/index.html new file mode 100644 index 00000000..65b35b3c --- /dev/null +++ b/htc_workloads/automated_workflows/dagman-simple-example/index.html @@ -0,0 +1,2754 @@ + + + + + + + + + + + + + + + + + + Simple Example of a DAGMan Workflow - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    + +
    + + +
    +
    + + + + +

    Simple Example of a DAGMan Workflow

    +

    This guide walks you step-by-step through the construction and submission of a +simple DAGMan workflow. +We recommend this guide if you are interested in automating your job submissions.

    +

    Overview

    +

    In this guide:

    +
      +
    1. Introduction
    2. +
    3. Structure of the DAG
    4. +
    5. The Minimal DAG Input File
    6. +
    7. The Submit Files
    8. +
    9. Running the Simple DAG
    10. +
    11. Monitoring the Simple DAG
    12. +
    13. Wrapping Up
    14. +
    +

    For the full details on various DAGMan features, see the HTCondor manual pages:

    + +

    1. Introduction

    +

    Consider the case of two HTCondor jobs that use the submit files A.sub and B.sub. +Let's say that A.sub generates an output file (output.txt) that B.sub will analyze. +To run this workflow manually, we would

    +
      +
    1. Submit the first HTCondor job with condor_submit A.sub.
    2. +
    3. Wait for the first HTCondor job to complete successfully.
    4. +
    5. Submit the second HTCondor job with condor_submit B.sub.
    6. +
    +

If the first HTCondor job using A.sub is fairly short, then manually running this workflow is not a big deal. +But if the first HTCondor job takes a long time to complete (perhaps it runs for several hours, or has to wait for special resources), +this can be very inconvenient. +Instead, we can use DAGMan to automatically submit B.sub once the first HTCondor job using A.sub has completed successfully. +This guide walks through the process of creating such a DAGMan workflow.
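
For comparison, here is a rough sketch of that manual process at the command line, using the same file names as this guide (you could also watch the queue with condor_watch_q instead of repeated condor_q calls); DAGMan simply automates the "wait, then submit" step for you:

condor_submit A.sub    # submit the first job
+condor_q               # check periodically until the first job has left the queue
+condor_submit B.sub    # once A has finished successfully, submit the second job
+
+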

    +

    2. Structure of the DAG

    +

In this scenario, our workflow could be described as a DAG consisting of two nodes (A.sub and B.sub) connected by a single edge (output.txt). +To represent this relationship, we will define nodes A and B - corresponding to A.sub and B.sub, respectively - and connect them with a line pointing from A to B, like in this figure:

    +

    Node A with arrow pointing to Node B

    +

    In order to use DAGMan to run this workflow, we need to communicate this structure to DAGMan via the .dag input file.

    +

    3. The Minimal DAG Input File

    +

    Let's call the input file simple.dag. +At minimum, the contents of the simple.dag input file are

    +
    # simple.dag
    +
    +# Define the DAG jobs
    +JOB A A.sub
    +JOB B B.sub
    +
    +# Define the connections
    +PARENT A CHILD B
    +
    +

    In a DAGMan input file, a node is defined using the JOB keyword, followed by the name of the node and the name of the corresponding submit file. +In this case, we have created a node named A and instructed DAGMan to use the submit file A.sub for executing that node. +We have similarly created node B and instructed DAGMan to use the submit file B.sub. +(While there is no requirement that the name of the node match the name of the corresponding submit file, it is convenient to use a consistent naming scheme.)

    +

    To connect the nodes, we use the PARENT .. CHILD .. syntax. +Since node B requires that node A has completed successfully, we say that node A is the PARENT while node B is the CHILD. +Note that we do not need to define why node B is dependent on node A, only that it is.
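
As a quick illustration of how this syntax scales, suppose (hypothetically) we later added a third submit file, C.sub, that needs the results of node B; the input file would simply gain one JOB line and one PARENT/CHILD line:

# simple.dag, extended with a hypothetical third node
+JOB A A.sub
+JOB B B.sub
+JOB C C.sub
+
+PARENT A CHILD B
+PARENT B CHILD C
+
+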

    +

    4. The Submit Files

    +

    Now let's define simple examples of the submit files A.sub and B.sub.

    +

    Node A

    +

First, the submit file A.sub uses the executable A.sh, which will generate the file called output.txt. +We have explicitly told HTCondor to transfer this file back by using the transfer_output_files command.

    +
    # A.sub
    +
    +executable = A.sh
    +
    +log = A.log
    +output = A.out
    +error = A.err
    +
    +transfer_output_files = output.txt
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus = 1
    +request_memory = 1GB
    +request_disk = 1GB
    +
    +queue
    +
    +

    The executable file simply saves the hostname of the machine running the script:

    +
    #!/bin/bash
    +
    +# A.sh
    +hostname > output.txt
    +
    +sleep 1m  # so we can see the job in "running" status
    +
    +

    Node B

    +

    Second, the submit file B.sub uses the executable B.sh to print a message using the contents of the output.txt file generated by A.sh. +We have explicitly told HTCondor to transfer output.txt as an input file for this job, using the transfer_input_files command. +Thus we have finally defined the "edge" that connects nodes A and B: the use of output.txt.

    +
    # B.sub
    +
    +executable = B.sh
    +
    +log = B.log
    +output = B.out
    +error = B.err
    +
    +transfer_input_files = output.txt
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus = 1
    +request_memory = 1GB
    +request_disk = 1GB
    +
    +queue
    +
    +

    The executable file contains the command for printing the desired message, which will be printed to B.out.

    +
    #!/bin/bash
    +
    +# B.sh
    +echo "The previous job was executed on the following machine:"
    +cat output.txt
    +
    +sleep 1m  # so we can see the job in "running" status
    +
    +

    The directory structure

    +

Based on the contents of simple.dag, DAGMan is expecting that the submit files A.sub and B.sub are in the same directory as simple.dag. +The submit files in turn expect A.sh and B.sh to be in the same directory as A.sub and B.sub. +Thus, we have the following directory structure:

    +
    DAG_simple/
    +|-- A.sh
    +|-- A.sub
    +|-- B.sh
    +|-- B.sub
    +|-- simple.dag
    +
    +

    It is possible to organize each job into its own directory, but for now we will use this simple, flat organization.
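
For instance, one alternative layout (a sketch only, not used in the rest of this guide) keeps each node's files in its own subdirectory and uses the optional DIR keyword of the JOB command so that DAGMan works from that directory when handling the node; check the DAGMan documentation for the exact path-resolution rules before adopting this pattern:

# simple.dag, if A.sub/A.sh lived in nodeA/ and B.sub/B.sh lived in nodeB/ (hypothetical layout)
+JOB A A.sub DIR nodeA
+JOB B B.sub DIR nodeB
+
+PARENT A CHILD B
+
+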

    +

    5. Running the Simple DAG

    +

    To run the DAG workflow described by simple.dag, we use the HTCondor command condor_submit_dag:

    +
    condor_submit_dag simple.dag
    +
    +

    The DAGMan utility will then parse the input file and generate an assortment of related files that it will use for monitoring and managing your workflow. +Here is the output of running the above command:

    +
    [user@ap40 DAG_simple]$ condor_submit_dag simple.dag
    +
    +Loading classad userMap 'checkpoint_destination_map' ts=1699037029 from /etc/condor/checkpoint-destination-mapfile
    +-----------------------------------------------------------------------
    +File for submitting this DAG to HTCondor           : simple.dag.condor.sub
    +Log of DAGMan debugging messages                 : simple.dag.dagman.out
    +Log of HTCondor library output                     : simple.dag.lib.out
    +Log of HTCondor library error messages             : simple.dag.lib.err
    +Log of the life of condor_dagman itself          : simple.dag.dagman.log
    +
    +Submitting job(s).
    +1 job(s) submitted to cluster 562265.
    +-----------------------------------------------------------------------
    +
    +

    The output shows the list of standard files that are created with every DAG submission along with brief descriptions. +A couple of additional files, some of them temporary, will be created during the lifetime of the DAG.

    +

    6. Monitoring the Simple DAG

    +

    You can see the status of the DAG in your queue just like with any other HTCondor job submission.

    +
    [user@ap40 DAG_simple]$ condor_q
    +
    +-- Schedd: ap40.uw.osg-htc.org : <128.105.68.92:9618?... @ 12/14/23 11:26:51
    +OWNER       BATCH_NAME           SUBMITTED   DONE   RUN    IDLE  TOTAL JOB_IDS
    +user        simple.dag+562265  12/14 11:26      _      _      1      2 562279.0
    +
    +

    There are a couple of things to note about the condor_q output:

    +
      +
    • The BATCH_NAME for the DAGMan job is the name of the input DAG file, simple.dag, plus the Job ID of the DAGMan scheduler job (562265 in this case): simple.dag+562265.
    • +
    • The total number of jobs for simple.dag+562265 corresponds to the total number of nodes in the DAG (2).
    • +
    • Only 1 node is listed as "Idle", meaning that DAGMan has only submitted 1 job so far. This is consistent with the fact that node A has to complete before DAGMan can submit the job for node B.
    • +
    +
    +

    Note that if you are very quick to run your condor_q command after running your condor_submit_dag command, then you may see only the DAGMan scheduler job. It may take a few seconds for DAGMan to start up and submit the HTCondor job associated with the first node.

    +
    +

To see more detailed information about the DAG workflow, use condor_q -dag -nobatch. +For example,

    +
[user@ap40 DAG_simple]$ condor_q -dag -nobatch
    +
    +-- Schedd: ap40.uw.osg-htc.org : <128.105.68.92:9618?... @ 12/14/23 11:27:03
    + ID        OWNER/NODENAME      SUBMITTED     RUN_TIME ST PRI SIZE CMD
    +562265.0   user                12/14 11:26   0+00:00:37 R  0    0.5 condor_dagman -p 0 -f -l . -Loc
    +562279.0    |-A                12/14 11:26   0+00:00:00 I  0    0.0 A.sh
    +
    +

    In this case, the first entry is the DAGMan scheduler job that you created when you first submitted the DAG. +The following entries correspond to the nodes whose jobs are currently in the queue. +Nodes that have not yet been submitted by DAGMan or that have completed and thus left the queue will not show up in your condor_q output.
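
If you want to confirm that a node which has already left the queue (such as node A here, once it finishes) completed successfully, one simple approach is to search the DAGMan debug log for messages about that node, for example:

grep "Node A" simple.dag.dagman.out
+
+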

    +

    7. Wrapping Up

    +

After waiting enough time, this simple DAG workflow should complete without any issues. +But of course, that will not be the case for every DAG, especially as you start to create your own. +DAGMan has many more features for managing and submitting DAG workflows, including handling errors, combining DAG workflows, and restarting failed DAG workflows.

    +

    For now, we recommend that you continue exploring DAGMan by going through our Intermediate DAGMan Tutorial. There is also our guide Overview: Submit Workflows with HTCondor's DAGMan, which contains links to more resources in the More Resources section.

    +

    Finally, the definitive guide to DAGMan and DAG workflows is HTCondor's DAGMan Documentation.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/automated_workflows/dagman-workflows/index.html b/htc_workloads/automated_workflows/dagman-workflows/index.html new file mode 100644 index 00000000..226985b3 --- /dev/null +++ b/htc_workloads/automated_workflows/dagman-workflows/index.html @@ -0,0 +1,2914 @@ + + + + + + + + + + + + + + + + + + Overview: Submit Workflows with HTCondor's DAGMan - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Overview: Submit Workflows with HTCondor's DAGMan

    +

    If you want to automate job submission, keep reading to learn about HTCondor's DAGMan utility.

    +

    Overview

    +

    In this guide:

    + +

    Introduction

    +

If your work requires jobs that run in a particular sequence, you may benefit +from a workflow tool that submits and monitors jobs for you in the correct +order. HTCondor has a built-in utility called "DAGMan" that automates the +job submission of such a workflow.

    +

    This talk (originally presented at HTCondor Week 2020) gives a good introduction +to DAGMan and its most useful features:

    +

    +DAGMan Talk +

    +

    DAGMan can be a powerful tool for creating large and complex HTCondor workflows.

    +

    What is DAGMan?

    +

DAGMan is short for "DAG Manager", and is a utility built into HTCondor for automatically running a workflow (DAG) of jobs, +where the results of an earlier job are required for running a later job. +This workflow is similar to a flowchart with a definite beginning and ending. +More specifically, "DAG" is an acronym for Directed Acyclic Graph, a concept from the mathematical field of graph theory:

    +
      +
    1. Graph: a collection of points ("nodes" or "vertices") connected to each other by lines ("edges").
    2. +
    3. Directed: the edges between nodes have direction, that is, each edge begins on one node and ends on a different node.
    4. +
    5. Acyclic: the graph does not have a cycle - or loop - where the graph returns to a previous node.
    6. +
    +

    By using a directed acyclic graph, we can guarantee that the workflow has a defined 'start' and 'end'. +In DAGMan, each node in the workflow corresponds to a job submission (i.e., condor_submit). +Each edge in the workflow corresponds to a set of files that are the output of one job submission and +the input of another job submission. +For convenience, we refer to such a workflow and the files necessary to execute it as "the DAG".
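
For instance, a small "diamond" shaped workflow is a DAG: every edge points away from a single starting node A and toward a single final node D, and no path ever loops back to a node it has already visited. Sketched with arrows standing in for the directed edges:

A --> B --> D
+A --> C --> D
+
+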

    +

    The Basics of the DAG Input File

    +

The purpose of the DAG input file (typically .dag) is to instruct DAGMan on the structure of the workflow you want to run. +Additional instructions can be included in the DAG input file for managing the job submissions, rerunning jobs (nodes), +or running pre- or post-processing scripts.

    +

    In general, the structure of the .dag input file consists of one instruction per line, with each line starting with a keyword defining the type of instruction.

    +

    1. Defining the DAG jobs

    +

    To define a DAG job, we begin a new line with JOB then provide the name, the submit file, and any additional options. The syntax is

    +
    JOB JobName JobSubmitFile [additional options]
    +
    +

    where you need to replace JobName with the name you would like the DAG job to have, and JobSubmitFile with the name or path of the corresponding submit file. Both JobName and JobSubmitFile need to be specified.

    +

    Every node in your workflow must have a JOB entry in the .dag input file. While there are other instructions that can reference a particular node, they will only work if the node in question has a corresponding JOB entry.

    +

    2. Defining the connections

    +

    To define the relationship between DAG jobs in a workflow, we begin a new line with PARENT then the name of the first DAG job, followed by CHILD and the name of the second DAG job. That is, the PARENT DAG job must complete successfully before DAGMan will submit the CHILD DAG job. In fact, you can define such relationship for many DAG jobs (nodes) at the same time. Thus, the syntax is

    +
    PARENT p1 [p2 ...] CHILD c1 [c2 ...]
    +
    +

    where you replace p# with the JobName for each parent DAG job, and c# with the JobName for each child DAG job. The child DAG jobs will only be submitted if all of the parent DAG jobs are completed successfully. Each JobName you provide must have a corresponding JOB entry elsewhere in the .dag input file.
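
For example, the diamond-shaped workflow sketched earlier, where DAG jobs B and C both depend on A and both feed into D, could be declared with two PARENT/CHILD lines (the submit file names here are placeholders):

JOB A A.sub
+JOB B B.sub
+JOB C C.sub
+JOB D D.sub
+
+PARENT A CHILD B C
+PARENT B C CHILD D
+
+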

    +
    +

    Technically, DAGMan does not require that each DAG job in a workflow is connected to another DAG job. +This allows you to submit many unrelated DAG jobs at one time using DAGMan.

    +
    +

    Note that in defining the PARENT-CHILD relationship, there is no definition of how they are related. +Effectively, DAGMan does not need to know the reason why the PARENT DAG jobs must complete successfully in order to submit the CHILD DAG jobs. +There can be many reasons why you might want to execute the DAG jobs in this order, although the most common reason +is that the PARENT DAG jobs create files that are required by the CHILD DAG jobs. +In that case, it is up to you to organize the submit files of those DAG jobs in such a way that the output of the PARENT DAG jobs +can be used as the input of the CHILD DAG jobs. +In the DAGMan Features section, we will discuss tools that can assist you with this endeavor.
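
As a concrete sketch of that organization (the executable and file names here are hypothetical), the PARENT's submit file can declare the shared file as an output and the CHILD's submit file can declare the same file as an input, mirroring the pattern used in the Simple Example guide:

# parent.sub (excerpt) - produces results.csv
+executable            = make_data.sh
+transfer_output_files = results.csv
+
+# child.sub (excerpt) - consumes results.csv
+executable           = analyze.sh
+transfer_input_files = results.csv
+
+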

    +

    Running a DAG Workflow

    +

    1. Submitting the DAG

    +

    Because the DAG workflow represents a special type of job, a special command is used to submit it. To submit the DAG workflow, use

    +
    condor_submit_dag example.dag
    +
    +

    where example.dag is the name of your DAG input file containing the JOB and PARENT-CHILD definitions for your workflow. +This will create and submit a "DAGMan job" that will in turn be responsible for submitting and monitoring the job nodes described in your DAG input file.

    +

    A set of files is created for every DAG submission, and the output of the condor_submit_dag lists the files with a brief description. +For the above submit command, the output will look like:

    +
    ------------------------------------------------------------------------
    +File for submitting this DAG to HTCondor           : example.dag.condor.sub
    +Log of DAGMan debugging messages                 : example.dag.dagman.out
    +Log of HTCondor library output                     : example.dag.lib.out
    +Log of HTCondor library error messages             : example.dag.lib.err
    +Log of the life of condor_dagman itself          : example.dag.dagman.log
    +
    +Submitting job(s).
    +1 job(s) submitted to cluster ######.
    +------------------------------------------------------------------------
    +
    +

    2. Monitoring the DAG

    +

    The DAGMan job is actually a "scheduler" job (described by example.dag.condor.sub), and the status and progress of the DAGMan job are saved to example.dag.dagman.out. +When using condor_q or condor_watch_q, the DAGMan job will appear under the name example.dag+######, where ###### is the Cluster ID of the DAGMan scheduler job. +Each job submitted by DAGMan, however, will be assigned a separate Cluster ID.

    +

    For a more detailed status display, you can use

    +
    condor_q -dag -nobatch
    +
    +

    If you want to see the status of just the DAGMan job proper, use

    +
    condor_q -dag -nobatch -constr 'JobUniverse == 7'
    +
    +

    (Technically, this shows all "scheduler" type HTCondor jobs, but for most users this will only include DAGMan jobs.)

    +

    For even more details about the execution of the DAG workflow, you can examine the contents of the example.dag.dagman.out file. +The file contains timestamped log information of the execution and status of nodes in the DAG, along with statistics. +As the DAG progresses, it will also create the files example.dag.metrics and example.dag.nodes.log, where the metrics file contains the current statistics of the DAG and the log file is an aggregate of the individual nodes' user log files.

    +

    If you want to see the status of a specific node, use

    +
    condor_q -dag -nobatch -constr 'DAGNodeName == "YourNodeName"'
    +
    +

    where YourNodeName should be replaced with the name of the node you want to know the status of. +Note that this works only for jobs that are currently in the queue; if the node has not yet been submitted, or if it has completed and thus exited the queue, then you will not see the node using this command. +To see if the node has completed, you should examine the contents of the .dagman.out file. +A simple way to see the relevant log messages is to use a command like

    +
    grep "Node YourNodeName" example.dag.dagman.out
    +
    +

    If you'd like to monitor the status of the individual nodes in your DAG workflow using condor_watch_q, then wait long enough for the .nodes.log file to be generated. +Then run

    +
    condor_watch_q -file example.dag.nodes.log
    +
    +

    Now condor_watch_q will update when DAGMan submits another job.

    +

    3. Removing the DAG

    +

    To remove the DAG, you need to condor_rm the Cluster ID corresponding to the DAGMan scheduler job. +This will also remove the jobs that the DAGMan scheduler job submitted as part of executing the DAG workflow. +A removed DAG is almost always marked as a failed DAG, and as such will generate a rescue DAG (see below).
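
    +

    For example, if condor_q shows your DAGMan job as example.dag+######, you would remove the entire workflow with:

    +
    # Remove the whole DAG by removing the DAGMan scheduler job,
    +# where ###### is its Cluster ID
    +condor_rm ######
    +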

    +

    DAGMan Features

    +

    1. Pre- and post-processing for DAG jobs

    +

    You can tell DAGMan to execute a script before or after it submits the HTCondor job for a particular node. +Such a script will be executed on the submit server itself and can be used to set up the files needed for the HTCondor job, or to clean up or validate the files after a successful HTCondor job.

    +

    The instructions for executing these scripts are placed in the input .dag file. +You must specify the name of the node the script is attached to and whether the script is to be executed before (PRE) or after (POST) the HTCondor job. +Here is a simple example:

    +
    # Define the node (required) (example node named "my_node")
    +JOB my_node run.sub
    +
    +# Define the script for executing before submitting run.sub (optional)
    +SCRIPT PRE my_node setup.sh
    +
    +# Define a script for executing after run.sub has completed (optional)
    +SCRIPT POST my_node cleanup.sh
    +
    +

    In this example, when it is time for DAGMan to execute the node my_node, it will take the following steps:

    +
      +
    1. Execute setup.sh (the PRE script)
    2. +
    3. Submit the HTCondor job run.sub (the node's JOB)
    4. +
    5. Wait for the HTCondor job to complete
    6. +
    7. Execute cleanup.sh (the POST script)
    8. +
    +

    All of these steps count as part of DAGMan's attempt to execute the node my_node and may affect whether DAGMan considers the node to have succeeded or failed. For more information on PRE and POST scripts as well as other scripts that DAGMan can use, see the HTCondor documentation.
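
    +

    As an illustration only (these scripts are not part of any tutorial files), a POST script such as cleanup.sh could validate the job's output and report failure to DAGMan through its exit code; the output file name run.out below is hypothetical:

    +
    #!/bin/bash
    +# Hypothetical POST script (cleanup.sh): check that the job produced a
    +# non-empty output file; a non-zero exit code marks the node as failed.
    +if [ ! -s run.out ]; then
    +    echo "run.out is missing or empty" >&2
    +    exit 1
    +fi
    +exit 0
    +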

    +

    2. Retrying failed nodes

    +

    You can tell DAGMan to automatically retry a node if it fails. +This way you don't have to manually restart the DAG if the job failed due to a transient issue.

    +

    The instructions for how many times to retry a node go in the input .dag file. +You must specify the node and the maximum number of times that DAGMan should attempt to retry that node. +Here is a simple example:

    +
    # Define the node (required) (example node named "my_node")
    +JOB my_node run.sub
    +
    +# Define the number of times to retry "my_node"
    +RETRY my_node 2
    +
    +

    In this example, if the job associated with node my_node fails for some reason, then DAGMan will resubmit run.sub up to 2 more times.

    +

    You can also apply the retry for statement to all nodes in the DAG by specifying ALL_NODES instead of a specific node name. +For example,

    +
    RETRY ALL_NODES 2
    +
    +

    As a general rule, you should not set the number of retry attempts to more than 1 or 2. +If a job is failing repeatedly, it is better to troubleshoot the cause of that failure. +This is especially true when applying the RETRY statement to all of the nodes in your DAG.

    +

    DAGMan considers the exit code of the last executed step when determining the success or failure of the node overall. +There are various possible combinations that can determine the success or failure of the node itself, as discussed in the HTCondor documentation here. +DAGMan only considers the success/failure of the node as a whole when deciding if it needs to attempt a retry. +Importantly, if the .sub file for a node submits multiple HTCondor jobs and any one of those jobs fails, DAGMan considers all of the node's jobs to have failed and will remove them from the queue.

    +

    Finally, note that DAGMan does not consider an HTCondor job with a "hold" status as being completed. +In that case, you can include a command in the submit file to automatically remove a held job from the queue. +When a job is removed from the queue, DAGMan considers that job to be failed (though as noted above, failure of the HTCondor job does not necessarily mean the node has failed).
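
    +

    One common way to do this (shown here only as a sketch; adjust the expression to your needs) is to add a periodic_remove expression to the node's submit file:

    +
    # Sketch: automatically remove a job that has been held for more than 10 minutes
    +# (JobStatus == 5 means "Held")
    +periodic_remove = (JobStatus == 5) && ((time() - EnteredCurrentStatus) > 600)
    +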

    +

    For more information on the RETRY statement, see the HTCondor documentation.

    +

    3. Restarting a failed DAG

    +

    Generally, a DAG is considered failed if any one of its component nodes has failed. +That does not mean, however, that DAGMan immediately stops the DAG. +Instead, when DAGMan encounters a failed node, it will attempt to complete as much of the DAG as possible that does not require that node. +Only then will DAGMan stop running the workflow.

    +

    When the DAGMan job exits from a failed DAG, it generates a report of the status of the nodes in a file called a "Rescue DAG" with the extension .rescue###, +starting from .rescue001 and counting up each time a Rescue DAG is generated. +The Rescue DAG can then be used by DAGMan to restart the DAG, skipping over nodes that are marked as completed successfully and jumping directly to the failed nodes that need to be resubmitted. +The power of this feature is that DAGMan will not duplicate the work of already completed nodes, which is especially useful when there is an issue at the end of a large DAG.
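
    +

    For example, after one failed run of example.dag you would typically find a file named example.dag.rescue001 alongside the input file; you can list any Rescue DAGs with:

    +
    ls example.dag.rescue*
    +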

    +

    DAGMan will automatically use a Rescue DAG if it exists when you use condor_submit_dag to submit the original .dag input file. +If more than one Rescue DAG exists for a given .dag input file, then DAGMan will use the most recent Rescue DAG +(the one with the highest number at the end of .rescue###).

    +
    # Automatically use the Rescue DAG if it exists
    +condor_submit_dag example.dag
    +
    +
    +

    If you do NOT want DAGMan to use an existing Rescue DAG, then you can use the `-force` option to start the DAG completely from scratch:

    +
    # Do NOT use the Rescue DAG if it exists
    +condor_submit_dag -force example.dag
    +
    + + +
    + +

    For more information on Rescue DAGs and how to explicitly control them, see the HTCondor documentation.

    +
    +

    If the DAGMan scheduler job itself crashes (or is placed on hold) and is unable to write a Rescue DAG, then when the DAGMan job is resubmitted (or released), DAGMan will go into "recovery mode". +Essentially this involves DAGMan reconstructing the Rescue DAG that should have been written, but wasn't due to the job interruption. +DAGMan will then resume the DAG based on its analysis of the files that do exist.

    +
    +

    More Resources

    +

    Tutorials

    +

    If you are interested in using DAGMan to automatically run a workflow, we highly recommend that you first go through our tutorial Simple Example of a DAG Workflow. +This tutorial takes you step by step through the mechanics of creating and submitting a DAG.

    +

    Once you've understood the basics from the simple tutorial, you are ready to explore more examples and scenarios in our Intermediate DAGMan Tutorial.

    +

    Trainings & Videos

    +

    A recent live training covering the materials in the Intermediate DAGMan Tutorial was held by the current lead developer for HTCondor's DAGMan utility: DAGMan: HTCondor's Workflow Manager.

    +

    An introductory tutorial to DAGMan previously presented at HTCondor Week was recorded and is available on YouTube: HTCondor DAGMan Workflows tutorial.

    +

    More recently, the current lead developer of HTCondor's DAGMan utility gave an intermediate tutorial: HTC23 DAGMan intermediate.

    +

    Documentation

    +

    HTCondor's DAGMan Documentation

    +

    The HTCondor documentation is the definitive guide to DAGMan and contains a wealth of information about DAGMan, its features, and its behaviors.

    + + +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/automated_workflows/tutorial-dagman-intermediate/index.html b/htc_workloads/automated_workflows/tutorial-dagman-intermediate/index.html new file mode 100644 index 00000000..ac518733 --- /dev/null +++ b/htc_workloads/automated_workflows/tutorial-dagman-intermediate/index.html @@ -0,0 +1,2636 @@ + + + + + + + + + + + + + + + + + + Intermediate DAGMan: Uses and Features - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Intermediate DAGMan: Uses and Features

    +

    This tutorial helps you explore HTCondor's DAGMan and its many features. +You can download the tutorial materials with the following command:

    +
    $ git clone https://github.com/OSGConnect/tutorial-dagman-intermediate
    + +

    Now move into the new directory to see the contents of the tutorial:

    +
    $ cd tutorial-dagman-intermediate
    + +

    At the top level is a worked example of a "Diamond DAG" that summarizes the basic components of creating, submitting, and managing DAGMan workflows. +In the lower level additional_examples directory are more worked examples with their own READMEs highlighting specific features that can be used with DAGMan. +Brief descriptions of these examples are provided in the Additional Examples section at the end of this tutorial.

    +

    Before working on this tutorial, we recommend that you read through our other DAGMan guides:

    + +

    The definitive guide to DAGMan is HTCondor's DAGMan Documentation.

    +

    Types of DAGs

    +

    While any workflow that satisfies the definition of a "Directed Acyclic Graph" (DAG) can be executed using DAGMan, there are certain types that are the most commonly used:

    +
      +
    • Sequential DAG: all the nodes are connected in a sequence of one after the other, with no branching or splitting. This is good for conducting increasingly refined analyses of a dataset or initial result, or chaining together a long-running calculation. The simplest example of this type is used in the guide Simple Example of a DAGMan Workflow.
    • +
    • Split and recombine DAG: the first node is connected to many nodes of the same layer (split) which then all connect back to the final node (recombine). Here, you can set up the shared environment in the first node and use it to parallelize the work into many individual jobs, then finally combine/analyze the results in the final node. The simplest example of this type is the "Diamond DAG" - the subject of this tutorial.
    • +
    • Collection DAG: no node is connected to any other node. This is good for the situation where you need to run a bunch of otherwise unrelated jobs, perhaps ones that are competing for a limited resource. The simplest example of this type is a DAG consisting of a single node.
    • +
    +

    These types are by no means "official", nor are they the only types of structure that a DAG can take. Rather, they serve as starting points from which you can build your own DAG workflow, which will likely consist of some combination of the above elements.
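
    +

    For instance, a "collection DAG" input file might contain only JOB statements and no PARENT/CHILD lines at all (the node names and submit files below are illustrative):

    +
    # collection.dag -- illustrative: unrelated jobs managed together by DAGMan
    +JOB task1 task1.sub
    +JOB task2 task2.sub
    +JOB task3 task3.sub
    +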

    +

    The Diamond DAG

    +

    As mentioned above, the "Diamond DAG" is the simplest example of a "split and recombine" DAG. +In this case, the first node TOP is connected to two nodes LEFT and RIGHT (the "split"), which are then connected to the final node BOTTOM (the "recombine").

    +

    Diamond DAG figure

    +

    To describe the flow of the DAG and the parts needed to execute it, DAGMan uses a custom description language in an input file, typically named <DAG Name>.dag. +The two most important commands in the DAG description language are:

    +
      +
    1. JOB <NodeName> <NodeSubmitFile> - Describes a node and the submit file it will use to run the node.
    2. +
    3. PARENT <NodeName1> CHILD <NodeName2> - Describes the edge starting from <NodeName1> and pointing to <NodeName2>.
    4. +
    +

    These commands have been used to construct the Diamond DAG and are saved in the file diamond.dag. +To view the contents of diamond.dag, run

    +
    $ cat diamond.dag
    + +

    Before you continue, we recommend that you closely examine the contents of diamond.dag and identify its components. +Furthermore, try to identify the submit file for each node, and use that submit file to determine the nature of the HTCondor job that will be submitted for each node.
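
    +

    If you would like to check your reading, the file likely resembles the following sketch, based on the figure and the JOB/PARENT syntax above (the actual contents of diamond.dag in the tutorial repository may differ slightly):

    +
    # Sketch of a Diamond DAG input file
    +JOB TOP ./SleepJob/sleep.sub
    +JOB LEFT ./SleepJob/sleep.sub
    +JOB RIGHT ./SleepJob/sleep.sub
    +JOB BOTTOM ./SleepJob/sleep.sub
    +
    +PARENT TOP CHILD LEFT RIGHT
    +PARENT LEFT RIGHT CHILD BOTTOM
    +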

    +

    Submitting a DAG

    +

    To submit a DAGMan workflow to HTCondor, you can use one of the following commands:

    +
    $ condor_submit_dag diamond.dag
    +  or
    +$ htcondor dag submit diamond.dag
    + +

    What Happens?

    +

    When a DAG is submitted to HTCondor, a special job is created to run DAGMan +on your behalf. This job runs the HTCSS DAGMan executable +and appears in the AP job queue. This is an actual job that can be queried and acted upon.

    +

    You may also notice that lots of files are created. These files are all part +of DAGMan and have various purposes. In general, the files that should +always exist are as follows:

    +
      +
    • DAGMan job proper files
    • +
    • <DAG Name>.condor.sub - Submit file for the DAGMan job proper
    • +
    • <DAG Name>.dagman.log - Job event log file for the DAGMan job proper
    • +
    • <DAG Name>.lib.err - Standard error stream file for the DAGMan job proper
    • +
    • <DAG Name>.lib.out - Standard output stream file for the DAGMan job proper
    • +
    • Informational DAGMan files
    • +
    • <DAG Name>.dagman.out - General DAGMan process logging file
    • +
    • <DAG Name>.nodes.log - Collective job event log file for all managed jobs (Heart of DAGMan)
    • +
    • <DAG Name>.metrics - JSON formatted information about the DAG
    • +
    +

    Of these files, the two most important are the <DAG Name>.dagman.out and <DAG Name>.nodes.log. +The .dagman.out file contains the entire history and status of DAGMan's execution of your workflow. +The .nodes.log file, on the other hand, contains the accumulated log entries for every HTCondor job that DAGMan submitted; +DAGMan monitors the contents of this file to generate the contents of the .dagman.out file.
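
    +

    One convenient way to follow DAGMan's progress in real time is to tail the .dagman.out file while the workflow runs:

    +
    $ tail -f diamond.dag.dagman.out
    +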

    +
    +

    Note: these are not all the files that DAGMan can produce. +Depending on the options and features you employ in your DAG input file, more files with different purposes can be created.

    +
    +

    Monitoring DAGMan

    +

    The DAGMan job and the jobs in the DAG workflow can be found in the AP job queue +and so the normal methods of job monitoring work. +That also means that you can interact with these jobs, though in a more limited fashion than a regular job (see Running and Managing DAGMan for more details).

    +

    A plain condor_q command will show a condensed batch view of the jobs submitted, running, and managed by the DAGMan job proper. +For more information about jobs running under DAGMan, use the -nobatch and -dag flags:

    +
    # Basic job query (Batched/Condensed)
    +$ condor_q
    +
    +# Non-Batched query
    +$ condor_q -nobatch
    +
    +# Increased information
    +$ condor_q -nobatch -dag
    + +

    You can also watch the progress of the DAG and the jobs running under it +by running:

    +
    $ condor_watch_q
    + +
    +

    Note that condor_watch_q works by monitoring the log files of the jobs that are in the queue at the time it is started. +Additional jobs submitted by DAGMan while condor_watch_q is running will not appear in its output. +To see additional jobs as they are submitted, wait for DAGMan to create the .nodes.log file, then run

    +

    $ condor_watch_q -files *.log
    +

    +
    +

    For more detail about the status and progress of your DAG workflow, you can use the noun-verb command:

    +
    $ htcondor dag status DAGManJobID
    + +

    where DAGManJobID is the ID for the DAGMan job proper. +Note that the information in the output of this command does not update frequently, and so it is not suited for short-lived DAG workflows such as the current example.

    +

    When your DAG workflow has completed, the DAGMan job proper will disappear from the queue. +If the DAG workflow completed successfully, then the .dag.dagman.out file should have the message All jobs Completed!, though it may be difficult to find manually (try using grep "All jobs Completed!" *.dag.dagman.out instead). +If the DAG workflow was aborted due to an error, then the .dag.dagman.out file should have the message Aborting DAG.... +Assuming that the DAGMan job proper did not crash, then, regardless of the outcome, the final line of the .dag.dagman.out file should contain (condor_DAGMAN) pid ####### EXITING WITH STATUS #, where the number after STATUS is the exit code (0 for success, non-zero for failure).

    +

    How DAGMan Handles Relative Paths

    +

    By default, the directory that DAGMan submits all jobs from is the same directory you are in when you run condor_submit_dag. +This directory (let's call it the submit directory) is the starting directory for any relative path in the .dag input file or in the node .sub files that DAGMan submits.

    +

    This can be observed by inspecting the sleep.sub submit file in the SleepJob sub-directory and by inspecting the diamond.dag input file. +In the diamond.dag file, the jobs are declared using a relative path. +For example:

    +
    JOB TOP ./SleepJob/sleep.sub
    +
    +

    This tells DAGMan that the submit file for the JOB TOP is sleep.sub, located in the SleepJob sub-directory of the submit directory (.). +Similarly, the submit file sleep.sub uses paths relative to the submit directory for defining the save locations for the .log, .out, and .err files, i.e.,

    +
    log        = ./SleepJob/$(JOB).log
    +
    +

    This behavior is consistent with submission of regular (non-DAGMan) jobs, e.g. condor_submit SleepJob/sleep.sub.

    +
    +

    Contrary to the above behavior, the .dag.* log/output files generated by the DAGMan job proper will always be in the same directory as the .dag input file.

    +
    +

    This is just the default behavior, and there are ways to make the location of job submission/management more obvious. +See the HTCondor documentation for more details: File Paths in DAGs.

    +

    Additional Examples

    +

    Additional examples that cover various topics related to DAGMan are provided in the folder additional_examples with corresponding READMEs. +The following order of the examples is recommended:

    +
      +
    1. RescueDag - Example for DAGs that don't exit successfully
    2. +
    3. PreScript- Example using a pre-script for a node
    4. +
    5. PostScript - Example using a post-script for a node
    6. +
    7. Retry - Example for retrying a failed node
    8. +
    9. VARS - Example of reusing a single submit file for multiple nodes with differing variables
    10. +
    11. SubDAG (advanced) - Example using a subDAG
    12. +
    13. Splice (advanced) - Example of using DAG splices
    14. +
    + + +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/automated_workflows/tutorial-pegasus/index.html b/htc_workloads/automated_workflows/tutorial-pegasus/index.html new file mode 100644 index 00000000..706d9728 --- /dev/null +++ b/htc_workloads/automated_workflows/tutorial-pegasus/index.html @@ -0,0 +1,2791 @@ + + + + + + + + + + + + + + + + + + Use Pegasus to Manage Workflows on OSPool Access Points - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Pegasus Workflows

    +

    Introduction

    +

    The Pegasus project encompasses a set of technologies that help workflow-based applications execute in a number of different environments including desktops, campus clusters, grids, and clouds. Pegasus bridges the scientific domain and the execution environment by automatically mapping high-level workflow descriptions onto distributed resources. It automatically locates the necessary input data and computational resources necessary for workflow execution. Pegasus enables scientists to construct workflows in abstract terms without worrying about the details of the underlying execution environment or the particulars of the low-level specifications required by the middleware. Some of the advantages of using Pegasus include:

    +
      +
    • +

      Portability / Reuse - User created workflows can easily be run in different environments without alteration. Pegasus currently runs workflows on compute systems scheduled via HTCondor, including the OSPool, as well as on other systems or via other schedulers (e.g. XSEDE resources, Amazon EC2, Google Cloud, and many campus clusters). The same workflow can run on a single system or across a heterogeneous set of resources.

      +
    • +
    • +

      Performance - The Pegasus mapper can reorder, group, and prioritize tasks in order to increase the overall workflow performance.

      +
    • +
    • +

      Scalability - Pegasus can easily scale both the size of the workflow, and the resources that the workflow is distributed over. Pegasus runs workflows ranging from just a few computational tasks up to 1 million tasks. The number of resources involved in executing a workflow can scale as needed without any impediments to performance.

      +
    • +
    • +

      Provenance - By default, all jobs in Pegasus are launched via the kickstart process that captures runtime provenance of the job and helps in debugging. The provenance data is collected in a database, and the data can be summarized with tools such as pegasus-statistics or directly with SQL queries.

      +
    • +
    • +

      Data Management - Pegasus handles replica selection, data transfers and output registrations in data catalogs. These tasks are added to a workflow as auxiliary jobs by the Pegasus planner.

      +
    • +
    • +

      Reliability - Jobs and data transfers are automatically retried in case of failures. Debugging tools such as pegasus-analyzer help the user to debug the workflow in case of non-recoverable failures.

      +
    • +
    • +

      Error Recovery - When errors occur, Pegasus tries to recover when possible by retrying tasks, retrying the entire workflow, providing workflow-level checkpointing, re-mapping portions of the workflow, trying alternative data sources for staging data, and, when all else fails, providing a rescue workflow containing a description of only the work that remains to be done. Pegasus keeps track of what has been done (provenance) including the locations of data used and produced, and which software was used with which parameters.

      +
    • +
    +

    As mentioned earlier in this book, OSG has no read/write enabled shared file system across the resources. Jobs are required to either bring inputs along with the job, or stage the inputs from a remote location as part of the job. The following examples highlight how Pegasus can be used to manage workloads in such an environment by providing an abstraction layer around things like data movements and job retries, enabling users to run larger workloads while spending less time developing job management tools and babysitting their computations.

    +

    Pegasus workflows have 4 components:

    +
      +
    1. Site Catalog - Describes the execution environment in which the workflow + will be executed.
    2. +
    3. Transformation Catalog - Specifies locations of the executables used by + the workflow.
    4. +
    5. Replica Catalog - Specifies locations of the input data to the workflow.
    6. +
    7. Workflow Description - An abstract workflow description containing compute + steps and dependencies between the steps. We refer to this workflow as abstract + because it does not contain data locations and available software.
    8. +
    +

    When developing a Pegasus Workflow using the +Python API, +all four components may be defined in the same file.

    +

    For details, please refer to the Pegasus documentation.

    +

    Wordfreq Workflow

    +

    fig 1

    +

    wordfreq is an example application and workflow that can be used to introduce +Pegasus tools and concepts.

    +

    The application is available on the OSG Access Points.

    +

    This example uses a custom container to run jobs. The container +capability is provided by OSG (Containers - Apptainer/Singularity) +and is used by setting HTCondor properties when defining your workflow.

    +

    Exercise 1: create a copy of the Pegasus tutorial and change the working +directory to the wordfreq workflow by running the following commands:

    +
    $ git clone https://github.com/OSGConnect/tutorial-pegasus
    +$ cd tutorial-pegasus/wordfreq
    +
    +

    In the wordfreq directory, you will find:

    +
    wordfreq/
    +├── bin
    +|   ├── summarize
    +|   └── wordfreq
    +├── inputs
    +|   ├── Alices_Adventures_in_Wonderland_by_Lewis_Carroll.txt
    +|   ├── Dracula_by_Bram_Stoker.txt
    +|   ├── Pride_and_Prejudice_by_Jane_Austen.txt
    +|   ├── The_Adventures_of_Huckleberry_Finn_by_Mark_Twain.txt
    +|   ├── Ulysses_by_James_Joyce.txt
    +|   └── Visual_Signaling_By_Signal_Corps_United_States_Army.txt
    +├── many-more-inputs
    +|   └── ...
    +└── workflow.py
    +
    +

    The inputs/ directory contains 6 public domain ebooks. The wordfreq workflow uses the +two executables in the bin/ directory. bin/wordfreq takes a text file as input +and produces a summary output file containing the counts and names of the top five +most frequently used words from the input file. A wordfreq job is created for +each file in inputs/. bin/summarize concatenates the +output of each wordfreq job into a single output file called summary.txt.
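
    +

    Conceptually, and only as an illustrative sketch (this is not the actual bin/wordfreq code from the tutorial), the wordfreq step does something like the following:

    +
    #!/usr/bin/env python3
    +# Illustrative sketch -- not the actual bin/wordfreq from the tutorial.
    +# Count word frequencies in a text file and write the five most common
    +# words (with their counts) to an output file.
    +import sys
    +from collections import Counter
    +
    +with open(sys.argv[1]) as f:
    +    words = f.read().lower().split()
    +
    +with open(sys.argv[2], "w") as out:
    +    for word, count in Counter(words).most_common(5):
    +        out.write(f"{count}\t{word}\n")
    +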

    +

    This workflow structure, which is a set of independent tasks joining into a single summary +or analysis type of task, is a common use case on OSG and therefore this workflow +can be thought of as a template for such problems. For example, instead of using +wordfreq on ebooks, the application could be protein folding on a set of input +structures.

    +

    When invoked, the workflow script (workflow.py) does the following major steps:

    +
      +
    1. +

      Generates a site catalog, which describes the execution environment in + which the workflow will be run.

      +
       def generate_site_catalog(self):
      +
      +     username = getpass.getuser()
      +
      +     local = (
      +         Site("local")
      +         .add_directories(
      +             Directory(
      +                 Directory.SHARED_STORAGE, self.output_dir
      +             ).add_file_servers(
      +                 FileServer(f"file://{self.output_dir}", Operation.ALL)
      +             )
      +         )
      +         .add_directories(
      +             Directory(
      +                 Directory.SHARED_SCRATCH, self.scratch_dir
      +             ).add_file_servers(
      +                 FileServer(f"file://{self.scratch_dir}", Operation.ALL)
      +             )
      +         )
      +     )
      +
      +     condorpool = (
      +         Site("condorpool")
      +         .add_pegasus_profile(style="condor")
      +         .add_condor_profile(
      +             universe="vanilla",
      +             requirements="HAS_SINGULARITY == True",
      +             request_cpus=1,
      +             request_memory="1 GB",
      +             request_disk="1 GB",
      +          )
      +         .add_profiles(
      +             Namespace.CONDOR,
      +             key="+SingularityImage",
      +             value='"/cvmfs/singularity.opensciencegrid.org/htc/rocky:9"'
      +         )
      +     )
      +
      +     self.sc.add_sites(local, condorpool)
      +
      +

      In order for the workflow to use the container capability provided by OSG + (Containers - Apptainer/Singularity), + the following HTCondor profiles must be + added to the condorpool execution site: + +SingularityImage='"/cvmfs/singularity.opensciencegrid.org/htc/rocky:9"'.

      +
    2. +
    3. +

      Generates the transformation catalog, which specifies the executables used + in the workflow and contains the locations where they are physically located. + In this example, we have two entries: wordfreq and summarize.

      +
       def generate_transformation_catalog(self):
      +
      +     wordfreq = Transformation(
      +                 name="wordfreq",
      +                 site="local",
      +                 pfn=self.TOP_DIR / "bin/wordfreq",
      +                 is_stageable=True
      +             ).add_pegasus_profile(clusters_size=1)
      +
      +     summarize = Transformation(
      +                     name="summarize",
      +                     site="local",
      +                     pfn=self.TOP_DIR / "bin/summarize",
      +                     is_stageable=True
      +                 )
      +
      +     self.tc.add_transformations(wordfreq, summarize)
      +
      +
    4. +
    5. +

      Generates the replica catalog, which specifies the physical locations of + any input files used by the workflow. In this example, there is an entry for + each file in the inputs/ directory.

      +
       def generate_replica_catalog(self):
      +
      +     input_files = [File(f.name) for f in (self.TOP_DIR / "inputs").iterdir() if f.name.endswith(".txt")]
      +
      +     for f in input_files:
      +         self.rc.add_replica(site="local", lfn=f, pfn=self.TOP_DIR / "inputs" / f.lfn)
      +
      +
    6. +
    7. +

      Builds the wordfreq workflow. Note that + in this step there is no mention of data movement and job details as these are + added by Pegasus when the workflow is planned into an executable workflow. As + part of the planning process, additional jobs which handle scratch directory + creation, data staging, and data cleanup are added to the workflow.

      +
       def generate_workflow(self):
      +
      +     # last job, child of all others
      +     summarize_job = (
      +         Job("summarize")
      +         .add_outputs(File("summary.txt"))
      +     )
      +     self.wf.add_jobs(summarize_job)
      +
      +     input_files = [File(f.name) for f in (self.TOP_DIR / "inputs").iterdir() if f.name.endswith(".txt")]
      +
      +     for f in input_files:
      +         out_file = File(f.lfn + ".out")
      +         wordfreq_job = (
      +             Job("wordfreq")
      +             .add_args(f, out_file)
      +             .add_inputs(f)
      +             .add_outputs(out_file)
      +         )
      +
      +         self.wf.add_jobs(wordfreq_job)
      +
      +         # establish the relationship between the jobs
      +         summarize_job.add_inputs(out_file)
      +
      +
    8. +
    +

    Exercise 2: Submit the workflow by executing workflow.py.

    +
    $ ./workflow.py
    +
    +

    Note that when Pegasus plans/submits a workflow, a workflow directory is created +and presented in the output. In the following example output, the workflow directory +is /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014.

    +
    2020.12.18 14:33:07.059 CST:   -----------------------------------------------------------------------
    +2020.12.18 14:33:07.064 CST:   File for submitting this DAG to HTCondor           : wordfreq-workflow-0.dag.condor.sub
    +2020.12.18 14:33:07.070 CST:   Log of DAGMan debugging messages                 : wordfreq-workflow-0.dag.dagman.out
    +2020.12.18 14:33:07.075 CST:   Log of HTCondor library output                     : wordfreq-workflow-0.dag.lib.out
    +2020.12.18 14:33:07.080 CST:   Log of HTCondor library error messages             : wordfreq-workflow-0.dag.lib.err
    +2020.12.18 14:33:07.086 CST:   Log of the life of condor_dagman itself          : wordfreq-workflow-0.dag.dagman.log
    +2020.12.18 14:33:07.091 CST:
    +2020.12.18 14:33:07.096 CST:   -no_submit given, not submitting DAG to HTCondor.  You can do this with:
    +2020.12.18 14:33:07.107 CST:   -----------------------------------------------------------------------
    +2020.12.18 14:33:10.381 CST:   Your database is compatible with Pegasus version: 5.1.0dev
    +2020.12.18 14:33:11.347 CST:   Created Pegasus database in: sqlite:////home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014/wordfreq-workflow-0.replicas.db
    +2020.12.18 14:33:11.352 CST:   Your database is compatible with Pegasus version: 5.1.0dev
    +2020.12.18 14:33:11.404 CST:   Output replica catalog set to jdbc:sqlite:/home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014/wordfreq-workflow-0.replicas.db
    +[WARNING]  Submitting to condor wordfreq-workflow-0.dag.condor.sub
    +2020.12.18 14:33:12.060 CST:   Time taken to execute is 5.818 seconds
    +
    +Your workflow has been started and is running in the base directory:
    +
    +/home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014
    +
    +*** To monitor the workflow you can run ***
    +
    +pegasus-status -l /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014
    +
    +
    +*** To remove your workflow run ***
    +
    +pegasus-remove /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014
    +
    +

    This directory is the handle to the workflow instance +and is used by Pegasus command line tools. Some useful tools to know about:

    +
      +
    • pegasus-status -v [wfdir] + Provides status on a currently running workflow. (more)
    • +
    • pegasus-analyzer [wfdir] + Provides debugging clues why a workflow failed. Run this after a workflow has failed. (more)
    • +
    • pegasus-statistics [wfdir] + Provides statistics, such as walltimes, on a workflow after it has completed. (more)
    • +
    • pegasus-remove [wfdir] + Removes a workflow from the system. (more)
    • +
    +

    Exercise 3: Check the status of the workflow:

    +
    $ pegasus-status [wfdir]
    +
    +

    You can keep checking the status periodically to see that the workflow is making progress.

    +

    Exercise 4: Examine a submit file and the *.dag.dagman.out files. Do these +look familiar to you from previous modules in the book? Pegasus is based on +HTCondor and DAGMan.

    +
    $ cd [wfdir]
    +$ cat 00/00/summarize_ID0000001.sub
    +...
    +$ cat *.dag.dagman.out
    +...
    +
    +

    Exercise 5: Keep checking progress with pegasus-status. Once the workflow +is done, display statistics with pegasus-statistics:

    +
    $ pegasus-status [wfdir]
    +$ pegasus-statistics [wfdir]
    +...
    +
    +

    Exercise 6: cd to the output directory and look at the outputs. Which is +the most common word used in the 6 books? Hint:

    +
    $ cd $HOME/workflows/outputs
    +$ head -n 5 *.out
    +
    +

    Exercise 7: Want to try something larger? Copy the additional 994 ebooks from +the many-more-inputs/ directory to the inputs/ directory:

    +
    $ cp many-more-inputs/* inputs/
    +
    +

    As these tasks are really short, let's tell Pegasus to cluster multiple tasks +together into jobs. If you do not do this step, the jobs will still run, but not +very efficiently. This is because every job has a small scheduling overhead. For +short jobs, this overhead is significant relative to the runtime. If we make the jobs longer, the scheduling +overhead becomes negligible. To enable the clustering feature, edit the +workflow.py script. Find the section under Transformations:

    +
        wordfreq = Transformation(
    +                name="wordfreq",
    +                site="local",
    +                pfn=self.TOP_DIR / "bin/wordfreq",
    +                is_stageable=True
    +            ).add_pegasus_profile(clusters_size=1)
    +
    +

    Change clusters_size=1 to clusters_size=50.
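
    +

    After the edit, the Transformation definition would read:

    +
        wordfreq = Transformation(
    +                name="wordfreq",
    +                site="local",
    +                pfn=self.TOP_DIR / "bin/wordfreq",
    +                is_stageable=True
    +            ).add_pegasus_profile(clusters_size=50)
    +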

    +

    This informs Pegasus that it is ok to cluster up to 50 of the jobs which use the +wordfreq executable. Save the file and re-run the script:

    +
    $ ./workflow.py
    +
    +

    Use pegasus-status and pegasus-statistics to monitor your workflow. Using +pegasus-statistics, determine how many jobs ended up in your workflow and see +how this compares with our initial workflow run.

    +

    Variant Calling Workflow

    +

    This workflow is based on the Data Carpentry lesson +Data Wrangling and Processing for Genomics.

    +

    This workflow downloads and aligns SRA data to the E. coli REL606 reference genome, and checks what differences exist in our reads versus the genome. The workflow also performs variant calling to see how the population changed over time.

    +

    The inputs are controlled by the recipe.json file. With 3 SRA inputs, the structure of the +workflow becomes:

    +

    fig 2

    +

    Rendering the workflow with data:

    +

    fig 3

    +

    Compared to the wordfreq example, a difference is the use of the +OSDF (https://osg-htc.org/services/osdf.html) +for intermediate data transfers/storage. Note the extra site in the site catalog:

    +
        osdf = (
    +        Site("osdf")
    +         .add_directories(
    +            Directory(
    +                Directory.SHARED_SCRATCH, f"{osdf_local_base}/staging"
    +            ).add_file_servers(
    +                FileServer(f"osdf://{osdf_local_base}/staging", Operation.ALL)
    +            )
    +        )
    +    )
    +
    +

    Which is then referenced when planning the workflow:

    +
            self.wf.plan(
    +            dir=str(self.runs_dir),
    +            output_dir=str(self.output_dir),
    +            sites=["condorpool"],
    +            staging_sites={"condorpool": "osdf"},
    +
    +

    OSDF is recommended for data sizes over 1 GB.

    +

    To plan the workflow:

    +
    $ ./workflow.py --recipe recipe.json
    +
    +

    Getting Help

    +

    For assistance or questions, please email the OSG User Support team at +support@osg-htc.org or visit the user documentation.

    + + +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/managing_data/file-transfer-via-htcondor/index.html b/htc_workloads/managing_data/file-transfer-via-htcondor/index.html new file mode 100644 index 00000000..6e43e9cb --- /dev/null +++ b/htc_workloads/managing_data/file-transfer-via-htcondor/index.html @@ -0,0 +1,2641 @@ + + + + + + + + + + + + + + + + + + Transfer Smaller Job Files to and from /home - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Transfer Smaller Job Files To and From /home

    +

    As described in the Overview: Data Staging and Transfer to Jobs +any data, files, or even software that is <1GB should be staged in +your /home directory on your Access Point. Files in your +/home directory can be transferred to jobs via your HTCondor submit file.

    +

    Transfer Files From /home Using HTCondor

    +

    Transfer Input Files from /home

    +

    To transfer input files from /home, list the files by name in the +transfer_input_files submit file option. You can use either absolute +or relative paths to your input files. Multiple files can be specified +using a comma-separated list.

    +

    To transfer files from your /home directory use the transfer_input_files +statement in your HTCondor submit file. For example:

    +
    # submit file example
    +
    +# transfer small file from /home 
    +transfer_input_files = my_data.csv
    +
    +

    Multiple files can be specified using a comma-separated list, for example:

    +
    # transfer multiple files from /home
    +transfer_input_files = my_data.csv, my_software.tar.gz, my_script.py
    +
    +

    When using transfer_input_files to transfer files located in /home, +keep in mind that the path to the file is relative to the location of +the submit file. If you have files located in a different /home subdirectory, +we recommend specifying the full path to those files, which is also a matter +of good practice, for example:

    +
    transfer_input_files = /home/username/path/to/my_software.tar.gz
    +
    +

    Note that the path is not replicated on the remote side. The job will only +see my_software.tar.gz in the top level job directory.

    +

    Above, username refers to your access point username.
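
    +

    For example, regardless of the path used in transfer_input_files, the job's executable references the transferred file by its basename only (a minimal sketch, assuming the tarball from the example above):

    +
    #!/bin/bash
    +
    +# my_software.tar.gz was placed in the job's top-level (scratch) directory
    +# by HTCondor, so it is referenced by name only, not by its /home path
    +tar -xzf my_software.tar.gz
    +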

    +

    Use HTCondor To Transfer Outputs

    +

    By default, HTCondor will transfer any new or modified files in the
    +job's top-level directory back to the location in your /home directory from
    +which the condor_submit command was performed. This behavior only
    +applies to files in the top-level directory where your job executes, +meaning HTCondor will ignore any files created in subdirectories of the
    +job's top-level directory. Several options exist for modifying this
    +default output file transfer behavior, including those described in
    +this guide.

    +

    What is the top-level directory of a job?

    +

    Before executing a job, HTCondor will create a new directory on the execute +node just for your job - this is the top-level directory of the job and the +path is stored in the environment variable _CONDOR_SCRATCH_DIR. All of the +input files transferred via transfer_input_files will first be written to +this directory and it is from this path that a job starts to execute. After +a job has completed, the top-level directory and all of its contents are +deleted.

    +

    Select Specific Output Files To Transfer to /home Using HTCondor

    +

    As described above, HTCondor will, by default, transfer any files +that are generated during the execution of your job(s) back to your +/home directory. If your job(s) will produce multiple output files but +you only need to retain a subset of these output files, you can use a submit +file option to only transfer back this file:

    +
    transfer_output_files = output.svg
    +
    +

    Alternatively, you can delete the unneeded output files or move them to a subdirectory as +a step in the bash executable script of your job - only the output files +that remain in the top-level directory will be transferred back to your +/home directory.

    +

    Organize Output Files in /home

    +

    By default, output files will be copied back to the directory in /home +where you ran the condor_submit command. To modify this behavior, +you can use the transfer_output_remaps option in the HTCondor submit file. +The syntax for transfer_output_remaps is:

    +
    transfer_output_remaps = "Output1.txt = path/to/save/file/under/output.txt; Output2.txt = path/to/save/file/under/RenamedOutput.txt"
    +
    +
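
    For example, to place one output file in a subdirectory of the submit directory and rename another (the file and directory names here are illustrative, and the subdirectories should already exist in /home):

    +
    transfer_output_remaps = "results.csv = outputs/results_run1.csv; Output2.txt = outputs/RenamedOutput.txt"
    +
    +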

    What if my output file(s) are not written to the top-level directory?

    +

    If your output files are written to a subdirectory, use the steps described +below to convert the output +directory to a "tarball" that is written to the top-level directory.

    +

    Alternatively, you can include steps in the executable bash script of +your job to move (i.e. mv) output files from a subdirectory to +the top-level directory. For example, suppose an output file named job_output.txt, +written to the subdirectory job_output/, needs to be transferred back to the login node:

    +
    #! /bin/bash
    +
    +# various commands needed to run your job
    +
    +# move the output file to the top-level (scratch) directory
    +mv job_output/job_output.txt $_CONDOR_SCRATCH_DIR
    +
    +

    Group Multiple Output Files For Convenience

    +

    If your jobs will generate multiple output files, we recommend combining +all output into a compressed tar archive for convenience, particularly +when transferring your results to your local computer from your login +node. To create a compressed tar archive, include commands in your +bash executable script to create a new subdirectory, move all of the +output to this new subdirectory, and create a tar archive. For example:

    +
    #! /bin/bash
    +
    +# various commands needed to run your job
    +
    +# create output tar archive
    +mkdir my_output
    +mv my_job_output.csv my_job_output.svg my_output/
    +tar -czf my_job.output.tar.gz my_output/
    +
    +

    The example above will create a file called my_job.output.tar.gz that +contains all the output that was moved to my_output. Be sure to create +my_job.output.tar.gz in the top-level directory of where your job +executes and HTCondor will automatically transfer this tar archive back +to your /home directory.

    + + +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/managing_data/file-transfer-via-http/index.html b/htc_workloads/managing_data/file-transfer-via-http/index.html new file mode 100644 index 00000000..f1eea9cf --- /dev/null +++ b/htc_workloads/managing_data/file-transfer-via-http/index.html @@ -0,0 +1,2475 @@ + + + + + + + + + + + + + + + + + + Transfer HTTP-available Files up to 1GB In Size - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Transfer HTTP-available Files up to 1GB In Size

    +

    Overview

    +

    If some of the data or software your jobs depend on is available via the web, +you can have such files transferred by HTCondor using the appropriate HTTP address!

    +

    Important Considerations

    +

    While our Overview of Data Management on the OSPool +describes how you can stage data, files, or even software on OSG data locations, +any web-accessible file can be transferred directly to your jobs IF:

    +
      +
    • the file is accessible via an HTTP address
    • +
    • the file is less than 1GB in size (if larger, you'll need to pre-stage it for transfer via the OSDF)
    • +
    • the server or website they're on can handle large numbers of your jobs accessing them simultaneously
    • +
    +

    Importantly, you'll also want to make sure your job executable knows how to handle the file +(un-tar, etc.) from within the working directory of the job, just like it would for any other input file.

    +

    Transfer Files via HTTP

    +

    To download a file available by HTTP into a job, use an HTTP URL in +combination with the transfer_input_files statement in your HTCondor submit file.

    +

    For example:

    +
    # submit file example
    +
    +# transfer software tarball from public via http
    +transfer_input_files = http://www.website.com/path/file.tar.gz
    +
    +...other submit file details...
    +
    +

    Multiple URLs can +be specified using a comma-separated list, and a combination of URLs and +files from /home directory can be provided in a comma separated list. For example,

    +
    # transfer software tarball from public via http
    +# transfer additional data from AP /home via htcondor file transfer
    +transfer_input_files = http://www.website.com/path/file1.tar.gz, http://www.website.com/path/file2.tar.gz, my_data.csv
    +
    + + +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/managing_data/osdf/index.html b/htc_workloads/managing_data/osdf/index.html new file mode 100644 index 00000000..57c14bc6 --- /dev/null +++ b/htc_workloads/managing_data/osdf/index.html @@ -0,0 +1,2669 @@ + + + + + + + + + + + + + + + + + + Transfer Larger Job Files and Containers Using OSDF - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Transfer Larger Job Files and Containers Using OSDF

    +

    For input files >1GB and output files >1GB in size, the default HTCondor +file transfer mechanisms run the risk of over-taxing the Access Points and +their network capacity. This is exactly why the OSDF +(Open Science Data Federation) +exists for researchers with larger per-job data! The OSDF is a network of +data origins and caches for data distribution.

    +

    If you have an account on an OSG Access Point, you have access to an OSDF data +origin, specifically a directory that can be used to stage input and output data for +jobs, accessible via the OSDF. This guide describes general tips for using the OSDF, +where to stage your files, and how to access files from jobs.

    +

    Important Considerations and Best Practices

    +
      +
    1. +

      Use OSDF locations for larger files and containers: We recommend using + the OSDF for files larger than 1GB (input or output) and all container files.

      +
    2. +
    3. +

      OSDF files are cached across the Open Science Pool, so + any changes or modifications that you make might not be propagated. + This means that if you add a new version of a file to the OSDF + directory, it must first be given a unique name (or directory path) to + distinguish it from previous versions of that file. Adding a date or + version number to directories or file names is strongly encouraged to + keep your file names unique (see the example after this list). This is especially important when using the + OSDF for software and containers.

      +
    4. +
    5. +

      Never submit jobs from the OSDF locations; always submit jobs from + within the /home directory. All log, error, and output + files, and any other files smaller than the 1GB guidance above, should ONLY ever + exist within the user's /home directory.

      +
    6. +
    7. +

      Files placed within a public OSDF directory are publicly accessible, + discoverable and readable by anyone, via the web. At the moment, most default + OSDF locations are not public.

      +
    8. +
    +
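
    As an example of the file-versioning recommendation in item 2 above, give each release of a staged file a unique, versioned name (the path and file name below are illustrative; substitute your own access point and username):

    +
    # Each version gets a unique name so stale cached copies are never reused
    +transfer_input_files = osdf:///ospool/apXX/data/<username>/my_software-v2.1.tar.gz
    +
    +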

    Where to Put Your Files

    +

    Data origins and local mount points vary between the different +access points. See the list below for the "Local Path" to use, based on your access point.

    + + + + + + + + + + + + + + +
    Access PointOSDF Origin
    ap40.uw.osg-htc.orgAccessible to user only: +
      +
    • Local Path: /ospool/ap40/data/[USERNAME]
    • +
    • Base OSDF URL: osdf:///ospool/ap40/data/[USERNAME]
    • +
    +
    +
    ap20.uc.osg-htc.orgAccessible to user only: +
      +
    • Local Path: /ospool/ap20/data/[USERNAME]
    • +
    • Base OSDF URL: osdf:///ospool/ap20/data/[USERNAME]
    • +
    + Accessible to project group only: +
      +
    • Local Path: /ospool/uc-shared/projects/[PROJECT]
    • +
    • Base OSDF URL: osdf:///ospool/uc-shared/projects/[PROJECT]
    • +
    + Public space for projects: +
      +
    • Local Path: /ospool/uc-shared/public/[PROJECT]
    • +
    • Base OSDF URL: osdf:///ospool/uc-shared/public/[PROJECT]
    • +
    +
    +
    ap21.uc.osg-htc.orgAccessible to user only: +
      +
    • Local Path: /ospool/ap21/data/[USERNAME]
    • +
    • Base OSDF URL: osdf:///ospool/ap21/data/[USERNAME]
    • +
    + Accessible to project group only: +
      +
    • Local Path: /ospool/uc-shared/project/[PROJECT]
    • +
    • Base OSDF URL: osdf:///ospool/uc-shared/project/[PROJECT]
    • +
    + Public space for projects: +
      +
    • Local Path: /ospool/uc-shared/public/[PROJECT]
    • +
    • Base OSDF URL: osdf:///ospool/uc-shared/public/[PROJECT]
    • +
    +
    +
    + +

    Transfer Files To/From Jobs Using the OSDF

    +

    Use an 'osdf://' URL to Transfer Large Input Files and Containers

    +

    Jobs will transfer data from the OSDF directory when files are indicated +with an appropriate osdf:// URL (or the older stash://) in the +transfer_input_files line of the submit file. Make sure to customize the +base URL based on your Access Point, as described in the table above.

    +

    Some examples:

    +
      +
    • +

      Transferring one file from /ospool/apXX/data/

      +
      transfer_input_files = osdf:///ospool/apXX/data/<username>/InFile.txt
      +
      +
    • +
    • +

      When using multiple files from /ospool/apXX/data/, it can be useful to use + HTCondor submit file variables to make your list of files more readable:

      +
      # Define a variable (example: OSDF_LOCATION) equal to the 
      +# path you would like files transferred to, and call this 
      +# variable using $(variable)
      +OSDF_LOCATION = osdf:///ospool/apXX/data/<username>
      +transfer_input_files = $(OSDF_LOCATION)/InputFile.txt, $(OSDF_LOCATION)/database.sql
      +
      +
    • +
    • +

      Transferring a folder from /ospool/apXX/data/

      +
      transfer_input_files = osdf:///ospool/apXX/data/<username>/<folder>?recursive
      +
      +
    • +
    +

    Please note that when transferring a folder using the OSDF, ?recursive needs to be added after the folder name.

    +

    Use transfer_output_remaps and 'osdf://' URL for Large Output Files

    +

    To move output files into an OSDF directory, users should +use the transfer_output_remaps option +within their job's submit file, which will transfer the user's +specified file to the specific location in the data origin.

    +

    By using transfer_output_remaps, it is possible to specify what path +to save a file to and what name to save it under. Using this approach, +it is possible to save files back to specific locations in your OSDF +directory (as well as your /home directory, if desired).

    +

    The general syntax for transfer_output_remaps is:

    +
    transfer_output_remaps = "Output1.txt = path/to/save/file/under/output.txt; Output2.txt = path/to/save/file/under/RenamedOutput.txt"
    +
    +

    When saving large output files back to /ospool/apXX/data/, the path provided will look like:

    +
    transfer_output_remaps = "Output.txt = osdf:///ospool/apXX/data/<username>/Output.txt"
    +
    +

    Some examples:

    +
      +
    • +

      Transferring one output file (OutFile.txt) back to /ospool/apXX/data/:

      +
      transfer_output_remaps = "OutFile.txt=osdf:///ospool/apXX/data/<username>/OutFile.txt"
      +
      +
    • +
    • +

      When using multiple files from /ospool/apXX/data/, it can be useful to use + HTCondor submit file variables to make your list of files more readable. Also note + the semi-colon separator in the list of output files.

      +
      # Define a variable (example: OSDF_LOCATION) equal to the 
      +# path you would like files transferred to, and call this 
      +# variable using $(variable)
      +OSDF_LOCATION = osdf:///ospool/apXX/data/<username>
      +transfer_output_remaps = "file1.txt = $(OSDF_LOCATION)/file1.txt; file2.txt = $(OSDF_LOCATION)/file2.txt; file3.txt = $(OSDF_LOCATION)/file3.txt"
      +
      +
    • +
    +

    Phase out of stash:/// and stashcp command

    +

    Historically, output files could be transferred from a job to an OSDF location using the stashcp command within the job's executable. However, this mechanism is no longer encouraged for OSPool users. Instead, jobs should use transfer_output_remaps (an HTCondor feature) to transfer output files to your assigned OSDF origin. By using transfer_output_remaps, HTCondor will manage the output data transfer for your jobs. Data transferred via HTCondor is more likely to be transferred successfully, and transfer errors are more likely to be reported to the user.

    +

    osdf:// is the new format for these kinds of transfers and is equivalent to the old stash:// format (which will continue to be supported for the short term).
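    For illustration, the two lines below are equivalent (reusing the input file example from earlier in this guide):

    # Older stash:// form (supported for the short term):
    transfer_input_files = stash:///ospool/apXX/data/<username>/InFile.txt

    # Preferred osdf:// form:
    transfer_input_files = osdf:///ospool/apXX/data/<username>/InFile.txt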

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/managing_data/overview/index.html b/htc_workloads/managing_data/overview/index.html new file mode 100644 index 00000000..505f8dc5 --- /dev/null +++ b/htc_workloads/managing_data/overview/index.html @@ -0,0 +1,2584 @@ + + + + + + + + + + + + + + + + + + Overview: Data Staging and Transfer to Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Overview: Data Staging and Transfer to Jobs

    +

    Overview

    +

    As a distributed system, jobs on the OSPool will run in different +physical locations, where the computers that are executing jobs don't +have direct access to the files placed on the Access Point (e.g. in a +/home directory). In order to run on this +kind of distributed system, jobs need to "bring along" the data, code, +packages, and other files from the access point (where the job is +submitted) to the execute points (where the job will run). +HTCondor's file transfer tools and plugins make this possible; input and +output files are specified as part of the job submission and then moved +to and from the execution location.

    +

    This guide describes where to place files on the access +points, and how to use these files within jobs, with links to a more +detailed guide for each use case.

    +

    Always Submit From /home

    +

    Regardless of where data is placed, jobs should only be submitted with condor_submit from /home.
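    For example (a sketch; the directory and submit file names are hypothetical):

    $ cd /home/<username>/my_project
    $ condor_submit my_job.submit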

    +

    Use HTCondor File Transfer for Smaller Job Files

    +

    You should use your /home directory to stage job files where:

    +
      +
    • individual input files per job are less than 1GB per file, and if there + are multiple files, they total less than 1GB
    • +
    • output files per job are less than 1GB per file
    • +
    +

    Files can be transferred to and from the /home directory using HTCondor's file transfer mechanism. Input files are specified in the submit file and, by default, files created by your job will automatically be returned to your /home directory.
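    A minimal sketch of the relevant submit file lines (the file names are placeholders):

    executable = my_analysis.sh
    transfer_input_files = input_data.csv, helper_script.py
    # By default, files created by the job in its top-level working directory
    # are transferred back to the submit directory in /home.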

    +

    See our Transfer Files To and From /home guide +for complete details on managing your files this way.

    +

    Use OSDF for Larger Files and Containers

    +

    You should use the OSDF (Open Science Data Federation) +to stage job files where:

    +
      +
    • individual input files per job are greater than 1GB per file
    • +
    • an input file (of any size) is used by many jobs
    • +
    • output files per job are greater than 1GB per file
    • +
    +

    You should also always use the OSDF to stage Singularity/Apptainer container +files (with the ending .sif) for jobs.
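    For example, a container staged in your OSDF directory can be referenced in the submit file like this (a sketch; the image name is a placeholder and the path follows the table in this guide):

    +SingularityImage = "osdf:///ospool/apXX/data/<username>/my_software.sif"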

    +
    +

    Important Note: +Files in OSDF are cached, so it is important to use a +descriptive file name (possibly using version names or dates within the file name), or +a directory structure with unique names to +ensure you know what version of the file you are using within your job.

    +
    +

    To use the OSDF, files are placed (or returned to) a local path, and moved to +and from the job using a URL notation in the submit file.

    +

    To see where to place your files in the OSDF and how to use +OSDF URLs in transfer_input_files/transfer_output_files, +please see the OSDF guide.

    +

    Quotas

    +

    /home and OSDF origins all have quota limits. /home is usually limited to 50 GB, while OSDF limits vary. You can find out your current usage by running quota or quota -vs.

    +

    Note that jobs will go on hold if quotas are exceeded.

    +

    If you want an increase in your quota, please send a request with justification to the ticket system at support@osg-htc.org.

    +

    External Data Transfer to/from Access Point

    +

    In general, common Unix tools such as rsync, scp, PuTTY, WinSCP, gFTP, etc. can be used to upload data from your computer to the access point, or to download files from the access point.

    +

    See our Data Transfer Guide for more details.

    +

    FAQ

    +

    For additional data information, see also the "Data Storage and Transfer" section of +our FAQ.

    +

    Data Policies

    +

    Please see the OSPool Policies for important usage policies.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/managing_data/scp/index.html b/htc_workloads/managing_data/scp/index.html new file mode 100644 index 00000000..97b75fbe --- /dev/null +++ b/htc_workloads/managing_data/scp/index.html @@ -0,0 +1,2572 @@ + + + + + + + + + + + + + + + + + + Use scp To Transfer Files To and From OSG Managed Access Points - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Use scp To Transfer Files To and From Access Point

    +

    Overview

    +

    This tutorial assumes that you will be using a command line application +for performing file transfers instead of a GUI-based application such as WinSCP.

    +

    We can transfer files to and from the access point using the scp command. Note that scp is a counterpart to the secure shell command, ssh, that allows for secure, encrypted file transfers between systems using your ssh credentials.

    +

    When using scp, you will always need to specify both the source of the +content that you wish to copy and the destination of where you would like +the copy to end up. For example:

    +
    $ scp <source> <destination>
    +
    +

    Files on remote systems (like an OSG Access Point) are indicated using +username@machine:/path/to/file.

    +

    Transfer Files To Access Point

    +

    Let's say you have a file you wish to transfer named my_file.txt.

    +

    Using the terminal application on your computer, navigate to the location of my_file.txt.

    +

    Then use the following scp command to transfer my_file.txt to your /home directory on the access point. Note that you will not be logged into the access point when you perform this step.

    +
    $ scp my_file.txt username@apXX.xx.osg-htc.org:/home/username/
    +
    +

    Where apXX.xx corresponds to your assigned access point (for example, ap40.uw or ap20.uc).

    +

    Large files (>100 MB in size) can also be uploaded to your /public directory using scp:

    +
    $ scp my_large_file.gz username@apXX.xx.osg-htc.org:/public/username/
    +
    +

    Transfer Directories To Access Point

    +

    To copy directories using scp, add the -r (recursive) option to your scp command.

    +

    For example:

    +
    $ scp -r my_Dir username@apXX.xx.osg-htc.org:/home/username/
    +
    +

    Transfer Files to Another Directory on the Access Point

    +

    If you are using the OSDF to stage some of your files, you can upload files directly +to that path by replacing /home/username in the commands above. If I wanted to +upload files to the OSDF location on ap20, which is /ospool/ap20/data/username, +I would use the following command:

    +
    $ scp my_file.txt username@ap20.uc.osg-htc.org:/ospool/ap20/data/username
    +
    +

    Transfer Files From Access Point

    +

    To transfer files from the access point back to your laptop or desktop you can use the scp +command as shown above, +but with the source being the copy that is located on the access point:

    +
    $ scp username@apXX.xx.osg-htc.org:/home/username/my_file.txt ./
    +
    +

    where ./ sets the destination of the copy to your current location on your computer. +Again, you will not be logged into the access point when you perform this step.

    +

    You can download files from a different directory in the same way as described +above when uploading files.

    +

    Transfer Files Directly Between Access Point and Another Server

    +

    scp can be used to transfer files between the OSG access point and another server that you have +ssh access to. This means that files don't have to first be transferred to your +personal computer which can save a lot of time and effort! For example, to transfer +a file from another server to your access point login node /home directory:

    +
    $ scp username@serverhostname:/path/to/my_file.txt username@apXX.xx.osg-htc.org:/home/username
    +
    +

    Be sure to use the username assigned to you on the other server and to provide the +full path on the other server to your file. To transfer files from the OSG Access +Point to the other server, just reverse the order of the two server statements.
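    For example, the reversed transfer (from the Access Point to the other server) would look like the following, where the destination path is a placeholder:

    $ scp username@apXX.xx.osg-htc.org:/home/username/my_file.txt username@serverhostname:/path/to/destination/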

    +

    Other Graphical User Interface (GUI) Tools for transferring files and folders

    +

    Apart from scp, GUI software such as WinSCP, FileZilla, and Cyberduck can be used for transferring files and folders to and from the Access Point. Please remember to add your private key for the authentication method.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/specific_resource/arm64/index.html b/htc_workloads/specific_resource/arm64/index.html new file mode 100644 index 00000000..e2b32361 --- /dev/null +++ b/htc_workloads/specific_resource/arm64/index.html @@ -0,0 +1,2496 @@ + + + + + + + + + + + + + + + + + + ARM64 - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    ARM64

    +

    ARM64 (AArch64) and x86_64 are both 64-bit architectures, but they +differ in design and application. ARM64 is renowned for its energy +efficiency, making it ideal for mobile devices and other low-power +environments. In contrast, x86_64, predominantly used in Intel and AMD +processors, emphasizes raw performance and compatibility with legacy +software, establishing it as the standard for desktops, laptops, and +servers. However, ARM64's energy efficiency has increasingly driven its +adoption in high-throughput and high-performance computing environments.

    +

    A small number of sites within the OSPool now offer ARM64 resources, +though these resources currently see limited demand. The availability +of these underutilized cycles provides a strong incentive for users to +incorporate ARM64 resources when running their jobs.

    +

    Listing Available Resources

    +

    To see the ARM64 resources in the OSPool, use condor_status with a constraint for the architecture (note that on Linux and in HTCondor, the official label for ARM64 is aarch64):

    +
    condor_status -constraint 'Arch == "aarch64"'
    +
    +

    Requesting ARM64

    +

    By default, HTCondor will automatically send your job to the same architecture as the access point you are submitting from, which currently is the x86_64 architecture. If you also want to target ARM64, add the following to your requirements:

    +
    requirements = (Arch == "X86_64" || Arch == "aarch64")
    +
    +

    Software Considerations

    +

    Since ARM64 is a different architecture, x86_64 binaries and containers +are incompatible. Additionally, OSPool's container synchronization is +not yet ARM64-compatible. Therefore, the options for software on ARM64 +resources are limited to the following:

    +
      +
    • +

      Simple Python codes. If you have a simple Python script which runs + on the OSPool default images, it will probably work fine on ARM64 as + well. All you need to do in this case is update your requirements + as described in the previous section.

      +
    • +
    • +

      Pre-built binaries. If you have built binaries for multiple + architectures, you can use HTCondor's machine ad substitution + mechanism to switch between the binaries depending on what machine + the job lands on (see the sketch after this list). Please see the HTCondor documentation + for more details.

      +
    • +
    • +

      Multiarch containers. If you are able to build multiarch + containers (for example, with docker buildx build --platform linux/amd64,linux/arm64), + you can specify which container to use similar to the pre-built + binaries case. However, the image synchronization is still a + manual process, so please contact support@osg-htc.org for help + with this setup.

      +
    • +
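    As a minimal sketch of the machine ad substitution approach mentioned above (the binary names and the idea of staging both binaries next to the submit file are assumptions, not an official recipe):

    # Two pre-built binaries, analyze.X86_64 and analyze.aarch64, are assumed
    # to be staged alongside the submit file. $$(Arch) expands to the Arch
    # attribute of the machine the job matches.
    executable = analyze.$$(Arch)
    requirements = (Arch == "X86_64" || Arch == "aarch64")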
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/specific_resource/el9-transition/index.html b/htc_workloads/specific_resource/el9-transition/index.html new file mode 100644 index 00000000..8a76eab8 --- /dev/null +++ b/htc_workloads/specific_resource/el9-transition/index.html @@ -0,0 +1,2568 @@ + + + + + + + + + + + + + + + + + + EL9 Transition - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Operating System Transition to EL9

    +

    During May 2024, the OSPool will transition to be mostly EL9 based. The +access points will be upgraded, and the execution points will mostly +shift to EL9.

    +

    Note that EL9 in this context refers to Enterprise Linux 9, and is +an umbrella term for CentOS Stream 9 and derived distributions such as +AlmaLinux 9 and RockyLinux 9.

    +

    What You Need to Do

    +

    The access point transitions will be mostly transparent. You will get +an email about when the switchover will happen, and the access point +will be offline for about 8 hours. Data and jobs will be retained, so no +action is required.

    +

    If your jobs use containers (Apptainer/Singularity, Docker)

    +

    No action is needed for researchers already using an +Apptainer/Singularity or Docker software container in their jobs. Because +software containers have a small operating system installed inside of +them, these jobs carry everything they need with them and do not rely +significantly on the host operating system. By default, your jobs will +match to any operating system in the HTC pool, including the new EL9 +hosts.

    +

    All other jobs (not using containers)

    +

    Researchers not already using a Docker or Apptainer software container will need to either:

    +
      +
    • Test their software/code on an EL9 machine to see if their software needs to be rebuilt, and + then update the job requirements line to refer to RHEL 9 (see the example after this list). See + Requirements
    • +
    +

    or

    +
      +
    • Switch to using a software container (recommended). See below for additional information.
    • +
    +
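    For example, the requirements line for EL9 (taken from the Requirements guide) looks like:

    requirements = OSGVO_OS_STRING == "RHEL 9" && Arch == "X86_64"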

    If you would like to access as much computing capacity as possible, +consider using an Apptainer or Docker software container for your jobs so +that your jobs can match to a variety of operating systems.

    +

    Options For Transitioning Your Jobs

    + +

    Option 1: Use a Software Container

    Using a software container to provide a base version of Linux will allow +you to run on any nodes in the OSPool regardless of the operating +system it is running, and not limit you to a subset of nodes.

    + +

    Option 2: Transition to a New Operating System

    +

    At any time, you can require a specific operating system version +(or versions) for your jobs. Instructions for requesting a specific +operating system(s) are outlined here:

    + +

    This option is more limiting because you are restricted to operating +systems used by the OSPool, and the number of nodes running that operating +system.

    +

    Alternatively, you can make your job run in a provided base OS +container. For example, if you want your job to always run in RHEL 8, +remove the requirements and add +SingularityImage in your submit +file. Example:

    +
    +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/htc/rocky:8"
    +requirements = True
    +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/specific_resource/gpu-jobs/index.html b/htc_workloads/specific_resource/gpu-jobs/index.html new file mode 100644 index 00000000..ec350764 --- /dev/null +++ b/htc_workloads/specific_resource/gpu-jobs/index.html @@ -0,0 +1,2646 @@ + + + + + + + + + + + + + + + + + + GPU Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    GPU Jobs

    +

    GPUs (Graphical Processing Units) are a special kind of computer +processor that are optimized for running very large numbers of simple +calculations in parallel, which often can be applied to problems related +to image processing or machine learning. Well-crafted GPU programs for +suitable applications can outperform implementations running on CPUs +by a factor of ten or more, but only when the program is written and +designed explicitly to run on GPUs using special libraries like CUDA.

    +

    Requesting GPUs

    +

    To request a GPU for your HTCondor job, you can use the +HTCondor request_gpus attribute in your submit file (along +with the usual request_cpus, request_memory, and request_disk +attributes). For example:

    +
    request_gpus = 1
    +request_cpus = 1
    +request_memory = 4 GB
    +request_disk = 2 GB
    +
    +

    Users can request one or multiple GPU cores on a single GPU machine.

    +

    Specific GPU Requests

    +

    If your software or code requires a certain type of GPU, or has some +other special requirement, there is a special submit file line to +request these capabilities, require_gpus. A few attributes that may +be useful:

    +
      +
    • Capability: this is NOT the GPU library, but rather a measure of the GPU's "Compute Capability," which is related to hardware generation
    • +
    • DriverVersion: maximum version of the GPU libraries that can be supported
    • +
    • GlobalMemoryMB: amount of GPU memory available on the GPU device in megabytes (MB)
    • +
    +

    If you want a certain type or family of GPUs, we usually recommend using the GPU's +'Compute Capability', known as the Capability by HTCondor. For example, an NVIDIA A100 GPU has a +Compute Capability of 8.0, so if you wanted to run on an A100 GPU specifically, +the submit file requirement would be:

    +
    require_gpus = (Capability == 8.0)
    +
    +

    Multiple requirements can be specified by using && statements:

    +
    require_gpus = (Capability >= 7.5) && (GlobalMemoryMB >= 11000)
    +
    +

    Note that the more requirements you include, the fewer resources will be available +to you! It's always better to set the minimal possible requirements (ideally, none!) +in order to access the greatest amount of computing capacity.

    +

    Sample Submit File

    +
    universe = container
    +container_image = /cvmfs/singularity.opensciencegrid.org/htc/tensorflow:1.3
    +
    +log = job_$(Cluster)_$(Process).log
    +error = job_$(Cluster)_$(Process).err
    +output = job_$(Cluster)_$(Process).out
    +
    +executable = run_gpu_job.py
    +#arguments =
    +
    ++JobDurationCategory = "Medium"
    +
    +# specify both general requirements and gpu requirements if there are any
    +# requirements =
    +require_gpus = (Capability > 7.5)
    +
    +request_gpus = 1
    +request_cpus = 1
    +request_memory = 4GB
    +request_disk = 4GB
    +
    +queue 1
    +
    +

    Available GPUs

    +

    Capacity

    +

    There are multiple OSPool contributors providing GPUs on a regular +basis to the OSPool. Some of these contributors will make their GPUs +available only when there is demand in the job queue, so after initial +small-scale job testing, we strongly recommend submitting a significant +batch of test jobs to explore how much throughput you can get in the +system as a whole. As a reminder, because the OSPool is dynamic, the more jobs submitted requesting GPUs, the more GPU machines will be pulled into the OSPool as execution points.

    +

    GPU Types

    +

    Because the composition of the OSPool can change from day to day, we do +not know exactly what specific GPUs are available at any given time. +Based on previous GPU job executions, you might land on one of the +following types of GPUs:

    +
      +
    • GeForce GTX 1080 Ti (Capability: 6.1)
    • +
    • V100 (Capability: 7.0)
    • +
    • GeForce GTX 2080 Ti (Capability: 7.5)
    • +
    • Quadro RTX 6000 (Capability: 7.5)
    • +
    • A100 (Capability: 8.0)
    • +
    • A40 (Capability: 8.6)
    • +
    • GeForce RTX 3090 (Capability: 8.6)
    • +
    +

    Software and Data Considerations

    +

    Software for GPUs

    +

    For GPU-enabled machine learning libraries, we recommend using +software containers to set up your software for jobs:

    + +

    See our Data Staging and Transfer guide for +details and contact the Research Computing Facilitation team with questions.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/specific_resource/large-memory-jobs/index.html b/htc_workloads/specific_resource/large-memory-jobs/index.html new file mode 100644 index 00000000..bce42125 --- /dev/null +++ b/htc_workloads/specific_resource/large-memory-jobs/index.html @@ -0,0 +1,2383 @@ + + + + + + + + + + + + + + + + + + Large Memory Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Large Memory Jobs

    +

    By default, 2 GB of RAM (aka memory) will be assigned to your jobs. However, some jobs will require +additional memory to complete successfully. To request more memory, use the HTCondor request_memory +attribute in your submit file. The default unit is MB. For example, the following will request 12 GB:

    +
    request_memory = 12288
    +
    +

    You might be wondering why the above requests 12288 MB for 12 GB. That's because byte units don't +actually scale by 1000 (10^3) like the metric system, but instead scale by 1024 (2^10) due to the binary +nature of bytes, so 12 GB × 1024 MB/GB = 12288 MB.

    +

    Alternatively, you can define a memory request using standard units

    +
    request_memory = 12GB
    +
    +

    We recommend always explicitly defining the byte units in your request_memory statement.

    +

    Please note that the OSG has limited resources available for large memory jobs. Requesting jobs with +higher memory needs will result in longer than average queue times for these jobs.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/specific_resource/multicore-jobs/index.html b/htc_workloads/specific_resource/multicore-jobs/index.html new file mode 100644 index 00000000..299ea192 --- /dev/null +++ b/htc_workloads/specific_resource/multicore-jobs/index.html @@ -0,0 +1,2383 @@ + + + + + + + + + + + + + + + + + + Multicore Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Multicore Jobs

    +

    Please note, the OSG has limited support for multicore jobs. Multicore jobs +can be submitted for threaded or OpenMP applications. To request multiple cores +(aka cpus) use the HTCondor request_cpus attribute in your submit file.

    +

    Example:

    +
    request_cpus = 8
    +
    +

    We recommend requesting a maximum of 8 cpus.

    +

    Important considerations

    +

    When submitting multicore jobs please note that you will also have to tell +your code or application to use the number of cpus requested in your submit +file. Do not use core auto-detection as it might detect more cores than what +were actually assigned to your job.
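    For example (a sketch; the program name, its thread flag, and the input file are placeholders for whatever your application uses):

    request_cpus = 8
    executable = my_threaded_program
    arguments = --threads 8 input.dat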

    +

    MPI Jobs

    +

    For jobs that require MPI, see our OpenMPI Jobs guide.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/specific_resource/openmpi-jobs/index.html b/htc_workloads/specific_resource/openmpi-jobs/index.html new file mode 100644 index 00000000..9b6222ba --- /dev/null +++ b/htc_workloads/specific_resource/openmpi-jobs/index.html @@ -0,0 +1,2508 @@ + + + + + + + + + + + + + + + + + + OpenMPI Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    OpenMPI Jobs

    +

    Even though the Open Science Pool is a high throughput computing system, sometimes +there is a need to run small OpenMPI based jobs. OSG has limited support for +this, as long as the core count is small (4 is known to work well; 8 and 16 +become more difficult due to the limited number of resources).

    +

    Find an MPI-based Container

    +

    To get started, first compile your code using an OpenMPI container. You can create your own OpenMPI container or use one that is available on DockerHub. OSG also has an openmpi container that can be used for compiling. Please note that the OSG provided openmpi.sif container image is available only on the ap20.uc.osg-htc.org and ap21.uc.osg-htc.org access points. For the ap40 access point, please use your desired Docker image and do apptainer pull (see the sketch below). More information about using apptainer pull can be found here.
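    A sketch of pulling an image on ap40 (the Docker Hub image name and tag are placeholders):

    $ apptainer pull openmpi.sif docker://<dockerhub-namespace>/<openmpi-image>:<tag>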

    +

    Compile the Code

    +

    To compile your code using the OSG provided image, start running the container first. Then run mpicc to compile the code:

    +
    $ apptainer shell /ospool/uc-shared/public/OSG-Staff/openmpi.sif
    +Apptainer> mpicc -o hello hello.c
    +
    +

    hello.c is an example hello world code that can be executed using multiple processors. The code is given below:

    +
    #include <mpi.h>
    +#include <stdio.h>
    +
    +int main(int argc, char** argv) {
    +        MPI_Init(NULL, NULL);
    +        int world_size;
    +        MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    +        int world_rank;
    +        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    +        char processor_name[MPI_MAX_PROCESSOR_NAME];
    +        int name_len;
    +        MPI_Get_processor_name(processor_name, &name_len);
    +        printf("Hello world from processor %s, rank %d out of %d processors\n", processor_name, world_rank, world_size);
    +        MPI_Finalize();
    +}
    +
    +

    After compiling the code, you can test the executable locally using mpiexec:

    +
    Apptainer> mpiexec -n 4 hello
    +Hello world from processor ap21.uc.osg-htc.org, rank 0 out of 4 processors
    +Hello world from processor ap21.uc.osg-htc.org, rank 1 out of 4 processors
    +Hello world from processor ap21.uc.osg-htc.org, rank 2 out of 4 processors
    +Hello world from processor ap21.uc.osg-htc.org, rank 3 out of 4 processors
    +
    +

    When testing is done be sure to exit from the apptainer shell using exit.

    +

    Run a Job Using the MPI Container and Compiled Code

    +

    The next step is to run your code as a job on the Open Science Pool. For this, first create a wrapper.sh. Example:

    +
    #!/bin/sh
    +
    +set -e
    +
    +mpiexec -n 4 hello
    +
    +

    Then, a job submit file:

    +
    +SingularityImage = "osdf:///ospool/uc-shared/public/OSG-Staff/openmpi.sif"
    +
    +executable = wrapper.sh
    +transfer_input_files = hello
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus = 4
    +request_memory = 1 GB
    +
    +output = job.out.$(Cluster).$(Process)
    +error = job.error.$(Cluster).$(Process)
    +log = job.log.$(Cluster).$(Process)
    +
    +queue 1
    +
    +

    Note how the executable is the wrapper.sh script, and that the real executable hello is +transferred using the transfer_input_files mechanism.

    +

    Please make sure that the number of cores specified in the submit file via +request_cpus match the -n argument in the wrapper.sh file.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/specific_resource/requirements/index.html b/htc_workloads/specific_resource/requirements/index.html new file mode 100644 index 00000000..5408a678 --- /dev/null +++ b/htc_workloads/specific_resource/requirements/index.html @@ -0,0 +1,2606 @@ + + + + + + + + + + + + + + + + + + Control Where Your Jobs Run/Job Requirements - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Control Where Your Jobs Run / Job Requirements

    +

    By default, your jobs will match any available slot in the OSG. This is fine +for very generic jobs. However, in some cases a job may have one or more system +requirements in order to complete successfully. For instance, your job may need to run +on a node with a specific operating system.

    +

    HTCondor provides several options for "steering" your jobs to appropriate +nodes and system environments. The request_cpus, request_gpus, request_memory, and request_disk +submit file attributes should be used to specify the hardware needs of your jobs. +Please see our guides Multicore Jobs and Large Memory Jobs +for more details.

    +

    HTCondor also provides a requirements attribute and feature-specific +attributes that can be added to your submit files to target specific environments in +which to run your jobs.

    +

    Lastly, there are some custom attributes you can add to your submit file to +either focus on, or avoid, certain execution sites.

    +

    Requirements

    +

    The requirements attribute is formatted as an expression, so you can use logical +operators to combine multiple requirements where && is used for AND and +|| used for OR. For example, the following requirements statement will direct +jobs only to 64 bit RHEL (Red Hat Enterprise Linux) 9 nodes.

    +
    requirements = OSGVO_OS_STRING == "RHEL 9" && Arch == "X86_64"
    +
    +

    Alternatively, if you have code which can run on either RHEL 8 or 9, you can use OR:

    +
    requirements = (OSGVO_OS_STRING == "RHEL 8" || OSGVO_OS_STRING == "RHEL 9") && Arch == "X86_64"
    +
    +

    Note that parentheses placement is important for controlling how the logical operations +are interpreted by HTCondor. If you are interested in seeing a list of currently +available operating systems (these are just the default ones; you can create a custom +container image if you want something else):

    +
    $ condor_status -autoformat OSGVO_OS_STRING | sort | uniq -c
    +
    +

    Another common requirement is to land on a node which has CVMFS. +Then the requirements would be:

    +
    requirements = HAS_oasis_opensciencegrid_org == True
    +
    +

    x86_64 Micro Architecture Levels

    +

    The x86_64 set of CPUs contains a large number of different CPUs with +different capabilities. Instead of trying to match on individual attributes +like the AVX/AVX2 ones mentioned below, it can be useful to match +against a family of CPUs. There are currently 4 levels to choose from: +x86_64-v1, x86_64-v2, x86_64-v3, and x86_64-v4. A description of the levels +is available on Wikipedia.

    +

    HTCondor advertises an attribute named Microarch. An example of how to make jobs +run on the two highest levels is:

    +
    requirements = (Microarch >= "x86_64-v3")
    +
    +

    Note that in the past, it was recommended to use the HAS_AVX and HAS_AVX2 +attributes to target CPUs with those capabilities. This is no longer +recommended, with the replacement being Microarch >= "x86_64-v3".

    +

    Additional Feature-Specific Attributes

    +

    There are many attributes that you can use with requirements. To see what values +you can specify for a given attribute you can run the following command while +connected to your login node:

    +
    $ condor_status -af {ATTR_NAME} | sort -u
    +
    +

    For example, to see what values you can specify for the Microarch attribute run:

    +
    $ condor_status -af Microarch | sort -u
    +x86_64-v1
    +x86_64-v2
    +x86_64-v3
    +x86_64-v4
    +
    +

    You will find many attributes will take the boolean values true or false.

    +

    Below is a list of common attributes that you can include in your submit file requirements statement.

    +
      +
    • +

      Microarch - See above. x86_64-v1, x86_64-v2, x86_64-v3, and x86_64-v4

      +
    • +
    • +

      OSGVO_OS_NAME - The name of the operating system of the compute node. + The most common name is RHEL

      +
    • +
    • +

      OSGVO_OS_VERSION - Version of the operating system

      +
    • +
    • +

      OSGVO_OS_STRING - Combined OS name and version. Please see the + requirements string above on the recommended setup.

      +
    • +
    • +

      OSGVO_CPU_MODEL - The CPU model identifier string as presented in + /proc/cpuinfo

      +
    • +
    • +

      HAS_CVMFS_oasis_opensciencegrid_org - Attribute specifying + the need to access specific oasis /cvmfs file system repositories. Other + common CVMFS repositories are HAS_CVMFS_singularity_opensciencegrid_org + and project ones like HAS_CVMFS_xenon_opensciencegrid_org.

      +
    • +
    +

    For GPU attributes, such as GPUs' compute capability, see our +GPU guide.

    +

    Non-x86 Based Architectures

    +

    Within the computing community, there's a growing interest in exploring +non-x86 architectures, such as ARM and PowerPC. As of now, the OSPool +does not host resources based on these architectures; however, it +is designed to accommodate them once available. The OSPool operates +under a system where all tasks are configured to execute on the +same architecture as the host from which they were submitted. This +compatibility is ensured by HTCondor, which automatically adds the +appropriate architecture to the job's requirements. By inspecting the +classad of any given job, one would notice the inclusion of +(TARGET.Arch == "X86_64") among its requirements, indicating the +system's current architectural preference.

    +

    If you do wish to specify a different architecture, just add it to +your job requirements:

    +
    requirements = Arch == "PPC"
    +
    +

    You can get a list of current architectures by running:

    +
    $ condor_status -af Arch | sort | uniq
    +X86_64
    +
    +

    Specifying Sites / Avoiding Sites

    +

    To run your jobs on a list of specific execution sites, or avoid a set of +sites, use the +DESIRED_Sites/+UNDESIRED_Sites attributes in your job +submit file. These attributes should only be used as a last resort. For +example, it is much better to use feature attributes (see above) to make +your job go to nodes matching what you really require, than to broadly +allow/block whole sites. We encourage you to contact the facilitation team before taking this action, to make sure it is right for you.

    +

    To avoid certain sites, first find the site names. You can find a +current list by querying the pool:

    +
    condor_status -af GLIDEIN_Site | sort -u
    +
    +

    In your submit file, add a comma separated list of sites like:

    +
    +UNDESIRED_Sites = "ISI,SU-ITS"
    +
    +

    Those sites will now be excluded from the set of sites your job can +run at.

    +

    Similarly, you can use +DESIRED_Sites to list a subset of sites +you want to target. For example, to run your jobs at the SU-ITS site, +and only at that site, use:

    +
    +DESIRED_Sites = "ISI,SU-ITS"
    +
    +

    Note that you should only specify one of +DESIRED_Sites/+UNDESIRED_Sites +in the submit file. Using both at the same time will prevent the job from +running.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/Slurm_to_HTCondor/index.html b/htc_workloads/submitting_workloads/Slurm_to_HTCondor/index.html new file mode 100644 index 00000000..f09b656c --- /dev/null +++ b/htc_workloads/submitting_workloads/Slurm_to_HTCondor/index.html @@ -0,0 +1,2685 @@ + + + + + + + + + + + + + + + + + + Convert your workflow from Slurm to HTCondor - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    + +
    + + +
    +
    + + + + +

    Convert Your Workflow From Slurm to HTCondor

    +

    Introduction

    +

    Slurm is a common workload manager for high performance computing (HPC) systems while HTCondor +is a scheduler program developed for a high throughput computing (HTC) environment. As they are both implementations of scheduler/workload managers, they have some similarities, like needing to specify the computing resources required for a job. Some differences include the syntax for describing a job, and some of the system assumptions made by +the scheduling program. In this guide, we will go through some general similarities +and differences and provide an example of "translating" an existing Slurm submit file +into HTCondor. Skip to this example.

    +

    General Differences Between Slurm and HTCondor

    +
      +
    • HTCondor is good at managing a large quantity of single-node jobs; Slurm is suitable for scheduling multi-node and multi-core jobs, and can struggle when managing a large quantity of jobs
    • +
    • Slurm requires a shared file system to operate, HTCondor does not.
    • +
    • A Slurm script has a certain order: all the requirements at the top, then the code execution steps. An HTCondor submit file does not have any required order; the only requirement is that it ends with the queue statement.
    • +
    • Every requirement line in a Slurm script starts with #SBATCH. In HTCondor, only the system requirement lines (RAM, cores, disk space) start with request_
    • +
    • The queue statement in HTCondor can be modified (include variables) to make it behave like an array job in Slurm.
    • +
    • Basic job submission and queue checking command starts with a condor_ prefix in HTCondor; Slurm commands generally start with the letter s.
    • +
    +
    +

    To know more about Slurm please visit their website and for HTCondor take a look at the HTCondor manual page

    +
    +

    Special Considerations for the OSPool

    +
      +
    • HTCondor on the OSPool does not use modules or a shared file system. A user needs to identify every component of their jobs and transfer them from their access point to the execute node. The slides of the new user training contain more details about it.
    • +
    • Instead of relying on modules, please use the different containers available on the OSPool or make your own container. Please remember the facilitation team is here to support you.
    • +
    • By default the wall time limit on +the OSPool is 10 hours.
    • +
    +

    Comparing Slurm and HTCondor Files

    +

    A sample Slurm script is presented below with the equivalent HTCondor transformation.

    +

    Submitting One Job

    +

    The scenario here is submitting one Matlab job, requesting +8 cores and 16 GB of memory (RAM), planning +to run for 20 hours, and specifying where to save standard output and error.

    +

    Slurm Example

    +
    +#!/bin/bash
    +#SBATCH --job-name=sample_slurm  # Optional in HTCondor     
    +#SBATCH --error=job.%J.error        
    +#SBATCH --output=job.%J.out
    +#SBATCH --time=20:00:00       
    +#SBATCH --nodes=1                # HTCondor equivalent does not exist                     
    +#SBATCH --ntasks-per-node=8          
    +#SBATCH --mem-per-cpu=2gb            
    +#SBATCH --partition=batch        # HTCondor equivalent does not exist
    +
    +module load matlab/r2020a            
    +matlab -nodisplay -r "matlab_program(input_arguments),quit"
    +
    + +

    HTCondor Example

    +
    ++SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020a"
    +executable = matlab_program
    +arguments = input_arguments
    +
    +# optional
    +batch_name = sample_htcondor
    +
    +error = job.$(ClusterID).$(ProcID).error
    +output = job.$(ClusterID).$(ProcID).out
    +log = job.$(ProcID).log
    +
    +# transfer_input_files = 
    +
    ++JobDurationCategory = "Long"
    +
    +request_cpus = 8
    +request_memory = 16 GB
    +request_disk = 2 GB
    +
    +queue 1
    +
    + +

    Notice that:
    • Using a Singularity image replaces module loading
    • The Matlab command becomes executable and arguments in the submit file
    • HTCondor has its own custom "log" format in addition to saving standard output and standard error.
    • If there are additional input files, they would need to be added in the "transfer_input_files" line.
    • Note that memory is total, not per-core. We also need to request disk space for the job's working directory, as it is not running on a shared file system.

    +

    Submit Multiple Jobs

    +

    Using the same base example, what options are needed if you wanted to run multiple +copies of the same basic job?

    +

    Slurm Example

    +

    In Slurm, multiple tasks are expressed as an array job:

    +
    +%%%%%%%%%%%%%%%%%highlights for submitting an array jobs %%%%%%%%%%%%%%%%%%%%%%%%%%%
    +#SBATCH --array=0-9
    +
    +module load matlab/r2020a   
    +matlab -nodisplay -r "matlab_program(input_arguments,$SLURM_ARRAY_TASK_ID),quit"
    +
    + +

    HTCondor Example

    +

    In HTCondor, multiple tasks are submitted as many independent jobs. The +$(ProcID) variable takes the place of $SLURM_ARRAY_TASK_ID above.

    +
    +%%%%%%%%%%%%%%% equivalent changes to HTCondor for array jobs%%%%%%%%%%%%%%%%%%%%%%%%%%
    +executable = matlab_program
    +arguments = input_arguments, $(ProcID)
    +
    +queue 10
    +
    + +

    HTCondor has many more ways to submit multiple jobs beyond this simple numerical +approach. See our other HTCondor guides for more details.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/checkpointing-on-OSPool/index.html b/htc_workloads/submitting_workloads/checkpointing-on-OSPool/index.html new file mode 100644 index 00000000..e947cc10 --- /dev/null +++ b/htc_workloads/submitting_workloads/checkpointing-on-OSPool/index.html @@ -0,0 +1,2618 @@ + + + + + + + + + + + + + + + + + + Checkpointing Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Checkpointing Jobs

    +

    What is Checkpointing?

    +

    Checkpointing is a technique that provides fault tolerance for a user's analysis. It consists of saving snapshots of a job's progress so the job can be restarted without losing its progress and having to restart from the beginning. We highly encourage checkpointing as a solution for jobs that will exceed the 10 hour maximum suggested runtime on the OSPool.

    +

    This section is about jobs capable of periodically saving checkpoint information, and how to make HTCondor store that information safely, in case it's needed to continue the job on another machine or at a later time.

    +

    There are two types of checkpointing: exit driven and eviction driven. In a vast majority of cases, exit driven checkpointing is preferred over eviction driven checkpointing. Therefore, this guide will focus on how to utilize exit driven checkpointing for your analysis.

    +

    Note that not all software, programs, or code are capable of creating checkpoint files and knowing how to resume from them. Consult the manual for your software or program to determine if it supports checkpointing features. Some manuals will refer this ability as "checkpoint" features, as the ability to "resume" mid-analysis if a job is interrupted, or as "checkpoint/restart" capabilities. Contact a Research Computing Facilitator if you would like help determining if your software, program, or code is able to checkpoint.

    +

    Why Checkpoint?

    +

    Checkpointing allows a job to automatically resume from approximately where it left off instead of having to start over if interrupted. This behavior is advantageous for jobs limited by a maximum runtime policy. It is also advantageous for jobs submitted to backfill resources with no runtime guarantee (i.e. jobs on the OSPool) where the compute resources may also be more prone to hardware or networking failures.

    +

    For example, checkpointing jobs that are limited by a runtime policy can enable HTCondor to exit a job and automatically requeue it to avoid hitting the maximum runtime limit. By using checkpointing, jobs circumvent hitting the maximum runtime limit and can run for extended periods of time until the completion of the analysis. This behavior avoids costly setbacks that may be caused by losing results mid-way through an analysis due to hitting a runtime limit.

    +

    Process of Exit Driven Checkpointing

    +

    Using exit driven checkpointing, a job is specified to time out after a user-specified amount of time with an exit code value of 85 (more on this below). Upon hitting this time limit, HTCondor transfers any checkpoint files listed in the submit file attribute transfer_checkpoint_files to a directory called /spool. This directory acts as a storage location for these files in case the job is interrupted. HTCondor then knows that jobs with exit code 85 should be automatically requeued, and will transfer the checkpoint files in /spool to your job's working directory prior to restarting your executable.

    +

    The process of exit driven checkpointing relies heavily on the use of exit codes to determine the next appropriate steps for HTCondor to take with a job. In general, exit codes are used to report system responses, such as when an analysis is running, has encountered an error, or successfully completes. HTCondor recognizes exit code 85 as indicating a checkpointing job and therefore knows to handle these jobs differently than non-checkpointing jobs.

    +

    Requirements for Exit Driven Checkpointing

    +

    Requirements for your code or software:

    +
      +
    • Checkpoint: The software, program, or code you are using must be able to capture checkpoint files (i.e. snapshots of the progress made thus far) and know how to resume from them.
    • +
    • Resume: This means your code must be able to recognize checkpoint files and know to resume from them instead of the original input data when the code is restarted.
    • +
    • Exit: Jobs should exit with an exit code value of 85 after successfully creating checkpoint files. Additionally, jobs need to be able to exit with a non-85 value if they encounter an error or when writing the final outputs.
    • +
    +

    In some cases, these requirements can be achieved by using a wrapper script. This means that your executable may be a script, rather than the code that is writing the checkpoint. An example wrapper script that enables some of these behaviors is below.

    +

    Contact a Research Computing Facilitator for help determining if your job is capable of using checkpointing.

    +

    Changes to the Submit File

    +

    Several modifications to the submit file are needed to enable HTCondor's checkpointing feature.

    +
      +
    • The line checkpoint_exit_code = 85 must be added. HTCondor recognizes code 85 as a checkpoint job. This means HTCondor knows to end a job with this code but to then to requeue it repeatedly until the analysis completes.
    • +
    • The value of when_to_transfer_output should be set to ON_EXIT.
    • +
    • The name of the checkpoint files or directories to be transferred to /spool should be specified using transfer_checkpoint_files.
    • +
    +

    Optional: In some cases, it is necessary to write a wrapper script to tell a job when to time out and exit. In cases such as this, the executable will need to be changed to the name of that wrapper script. An example of a wrapper script that enables a job to checkpoint and exit with the proper exit codes can be found below.

    +

    An example submit file for an exit driven checkpointing job looks like:

    +
    # exit-driven-example.submit
    +
    +executable                  = exit-driven.sh
    +arguments                   = argument1 argument2
    +
    +checkpoint_exit_code        = 85
    +transfer_checkpoint_files   = my_output.txt, temp_dir, temp_file.txt
    +
    +should_transfer_files       = yes
    +when_to_transfer_output     = ON_EXIT
    +
    +output                      = example.out
    +error                       = example.err
    +log                         = example.log
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus                = 1
    +request_disk                = 2 GB
    +request_memory              = 2 GB
    +
    +queue 1
    +
    +

    Example Wrapper Script for Checkpointing Job

    +

    As previously described, it may be necessary to use a wrapper script to tell your job when and how to exit as it checkpoints. An example of a wrapper script that tells a job to exit every 4 hours looks like:

    +
    #!/bin/bash
    +
    +timeout 4h do_science arg1 arg2
    +
    +timeout_exit_status=$?
    +
    +if [ $timeout_exit_status -eq 124 ]; then
    +    exit 85
    +fi
    +
    +exit $timeout_exit_status
    +
    +

    Let's take a moment to understand what each section of this wrapper script is doing:

    +
    #!/bin/bash
    +
    +timeout 4h do_science argument1 argument2
    +# The `timeout` command will stop the job after 4 hours (4h). 
    +# This number can be increased or decreased depending on how frequent your code/software/program 
    +# is creating checkpoint files and how long it takes to create/resume from these files. 
    +# Replace `do_science argument1 argument2` with the execution command and arguments for your job.
    +
    +timeout_exit_status=$?
    +# Uses the bash notation of `$?` to call the exit value of the last executed command 
    +# and to save it in a variable called `timeout_exit_status`.
    +
    +
    +
    +if [ $timeout_exit_status -eq 124 ]; then
    +    exit 85
    +fi
    +
    +exit $timeout_exit_status
    +
    +# The `timeout` command exits with code `124` if the program was still running
    +# when the time limit was reached. The portion above replaces exit code `124`
    +# with code `85`. HTCondor recognizes code `85` and knows to end a job with this
    +# code once the time specified by `timeout` has been reached. Upon exiting,
    +# HTCondor saves the files from jobs with exit code `85` in the temporary
    +# directory within `/spool`. Once the files have been transferred, HTCondor
    +# automatically requeues that job and fetches the files found in `/spool`.
    +# If an exit code of `124` is not observed (for example, if the program is done
    +# running or has encountered an error), HTCondor will end the job and will not
    +# automatically requeue it.
    +
    +

    The ideal timeout frequency for a job is every 1-5 hours, with a maximum of 10 hours. For jobs that checkpoint and time out in under an hour, it is possible that a job may spend more time on checkpointing procedures than moving forward with the analysis. After 10 hours, the likelihood of a job being interrupted on the OSPool is higher.

    +

    Checking the Progress of Checkpointing Jobs

    +

    It is possible to investigate checkpoint files once they have been transferred to /spool.

    +

    You can explore the checkpointed files in /spool by navigating to /home/condor/spool on an OSPool +Access Point. The +directories in this folder are the last four digits of a job's cluster ID with leading zeros removed. Subfolders are labeled with the process ID for each job. For example, to investigate the checkpoint files for 17870068.220, the files in /spool would be found in folder 68 in a subdirectory called 220.
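    For example, following the layout described above, listing the checkpoint files for job 17870068.220 might look like this (a sketch; the exact contents depend on the job):

    $ ls /home/condor/spool/68/220/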

    +

    More Information

    +

    More information on checkpointing HTCondor jobs can be found in HTCondor's manual: https://htcondor.readthedocs.io/en/latest/users-manual/self-checkpointing-applications.html This documentation contains additional features available to checkpointing jobs, as well as additional examples such as a python checkpointing job.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/jupyter/index.html b/htc_workloads/submitting_workloads/jupyter/index.html new file mode 100644 index 00000000..32b4e234 --- /dev/null +++ b/htc_workloads/submitting_workloads/jupyter/index.html @@ -0,0 +1,2584 @@ + + + + + + + + + + + + + + + + + + Launch a JupyterLab Instance - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    OSPool Notebooks: Access the OSPool via JupyterLab

    +

The OSG team supports an OSPool Notebooks service, a JupyterLab interface that connects with an OSPool Access Point. An OSPool Notebook instance can be used to manage files, submit jobs, summarize results, and run tutorials.

    +

    Quick Start

    +

    Go to this link to start an OSPool Notebooks instance:

    +

    Launch an OSPool Notebook

    +
      +
• You will be prompted to "Sign in" using your institution credentials.
• Once logged in, you will be automatically redirected to the "Server Options" page. Several server options are listed, supporting a variety of programming environments and scientific workflows.
• Select your desired server option and click "Start" to launch your instance. This process can take several minutes to complete. You will be redirected automatically when your instance is ready.
    +

If you have an existing account on the ap40.uw.osg-htc.org Access Point, the started Jupyter instance will connect to your account on that Access Point. If you don't have an existing OSPool account, your Jupyter instance will be running on a temporary Access Point as the "jovyan" user. For more details on the differences between these instances, see Working with your OSPool Notebooks Instance.

    +

    To log out of your session, go to the top left corner of the JupyterLab interface and click the "File" tab. Under this tab, click "Log Out".

    +

    Why use OSPool Notebooks?

    +

    There are many benefits to using this service:

    +

Ease of access: All you need to access OSPool Notebooks is an internet connection and a web browser! You don't need an account, ssh keys, or anything else installed on your computer.

    +

User-friendly environment: The JupyterLab environment provides access to notebooks, terminals, and text editors in a visual environment, making it easier to use for researchers who are newer to the command line.

    +

Learn yourself, train others: We have self-serve tutorials that anyone can use by starting up an OSPool Notebook and then going through the materials. This can be used by individuals (with or without an OSPool account!) or by anyone who wants to run a training on using the OSPool.

    +

Integration with Access Point: If you have an existing OSPool account on ap40.uw.osg-htc.org, the OSPool Notebook service allows you to have the above benefits as part of your full OSPool account. If you start with a guest account and then apply for a full account, you can keep using the same interface to work with the full OSPool.

    +

    Working with your OSPool Notebooks Instance

    +

    Needed Submit File Options

    +

When submitting jobs from the terminal in the OSPool Notebooks interface, make sure to always include this option in your submit file:

    +
should_transfer_files = YES
    +
    +

    This option is needed for jobs to start and run successfully.
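For reference, a minimal sketch of a complete submit file that includes this option might look like the following (the executable name and resource requests are placeholders, not requirements of the service):

    executable = my_script.sh
    should_transfer_files = YES

    log    = job.$(Cluster).$(Process).log
    error  = job.$(Cluster).$(Process).err
    output = job.$(Cluster).$(Process).out

    request_cpus   = 1
    request_memory = 1 GB
    request_disk   = 1 GB

    queue 1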

    +

    OSPool Notebook Experience

    +

There will be slight differences in your OSPool Notebook instance, depending on whether you have an existing OSPool account and which Access Point it is on. Click on the section below that applies to you to learn more.

    +

For all users, notebooks will time out after an hour of inactivity and may run for a maximum of four hours. Timing out will not impact jobs submitted to the OSPool.

    +
    +For researchers with accounts on a uw.osg-htc.org access point +
+Working in OSPool Notebooks, your account will be tied to your account on your uw.osg-htc.org access point. This means you will be able to interact with files in your /home directory, execute code, and save files, just as you would if you were logged into your access point via a terminal. If you submit jobs to HTCondor, by default, your jobs will run on the Open Science Pool. As of right now, these HTCondor jobs will not be able to access any data you have stored in `/protected`.
    +
+Unlike logging into your access point through a terminal, when you log in through an OSPool Notebooks instance, you can run computationally intensive tasks in your /home directory. This is because each researcher has a total of 8 CPUs and 16 GB of memory available to their OSPool Notebook instance.
    +
    +If you would like your HTCondor jobs to run inside your Jupyter container and not on the OSPool, you can copy/paste these lines to your submit file: +
    +
+requirements = Machine == "CHTC-Jupyter-User-EP-$ENV(HOSTNAME)"
++FromJupyterLab = true
    +
    + The requirements = and +FromJupyterLab lines tell HTCondor to assign all jobs to run on the dedicated execute point server assigned to your instance upon launch. +
    +
    + +
    +For researchers with accounts on ap2*.uc.osg-htc.org access point +
+Working in OSPool Notebooks, your account will not be tied to your account on your ap2*.uc.osg-htc.org access point.
    +
+OSPool Notebooks are run only on our uw.osg-htc.org access points. This means your OSPool account will not be recognized. Therefore, while you are welcome to upload data to your OSPool Notebooks instance and to use the 8 CPUs and 16 GB of memory available to your instance to submit HTCondor jobs and analyze data, we recommend you request an account on a uw.osg-htc.org Access Point to be able to run full OSPool workflows and to avoid having data deleted upon logging out.
    +
    + +
    +For researchers with guest access on an OSPool access point +
    +Our OSPool Notebooks instance is a great way to see if you would like to request an account on an OSPool access point or to practice small High Throughput Computing workflows without needing an OSPool account. +
    +
    +Your instance has HTCondor pre-installed, which allows you to practice the job submission process required to use OSG resources. Your instance will have 8 CPUs and 16 GB of memory available to your computations. We encourage you to also attend our twice-a-month trainings (where you can use your OSPool Notebooks instance to follow along). At any time, you are welcome to request a full account that will allow you to submit jobs to the OSPool using a Jupyter-based interface. +
    +
    + +

    Read More

    +

    For more information about the JupyterLab interface in general, see the JupyterLab manual.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/monitor_review_jobs/index.html b/htc_workloads/submitting_workloads/monitor_review_jobs/index.html new file mode 100644 index 00000000..d516ac3f --- /dev/null +++ b/htc_workloads/submitting_workloads/monitor_review_jobs/index.html @@ -0,0 +1,2812 @@ + + + + + + + + + + + + + + + + + + Monitor and Review Jobs With condor_q and condor_history - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + + +

Monitor and Review Jobs With condor_q and condor_history

    +

    Objectives

    +

    This guide discusses how to monitor jobs in the queue with condor_q and to review jobs that have recently left the queue with condor_history.

    +

    Monitor Queued Jobs with condor_q

    +

    Default condor_q

    +

    The default behavior of condor_q is to list all of a user's jobs currently in HTCondor's queue grouped into batches. A batch consists of all jobs submitted using a single submit file. For example:

    +
    $ condor_q
    +
    +-- Schedd: ap40.uw.osg-htc.org : <192.170.227.146:9618?... @ 03/04/22 12:31:45
    +OWNER     BATCH_NAME  SUBMITTED    DONE   RUN    IDLE  TOTAL JOB_IDS
    +alice ID: 21562536   3/4  12:31      _      _      5      5 21562536.0-4
    +
    +Total for query: 5 jobs; 0 completed, 0 removed, 5 idle, 0 running, 0 held, 0 suspended 
    +Total for alice: 5 jobs; 0 completed, 0 removed, 5 idle, 0 running, 0 held, 0 suspended 
    +Total for all users: 4112 jobs; 0 completed, 0 removed, 76 idle, 904 running, 3132 held, 0 suspended
    +
    +

    Constraints for condor_q

    +

condor_q can be used to list individual jobs associated with a username <U>, cluster ID <C>, or job ID <J>, as indicated by <U/C/J>.

    +

    Additionally, the flag -nobatch can be used to list individual jobs instead of batches of jobs using the format condor_q <U/C/J> -nobatch.

    +
    $ condor_q alice -nobatch
    +
    +-- Schedd: ap40.uw.osg-htc.org : <192.170.227.146:9618?... @ 03/04/22 12:52:22
    + ID          OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD
    +21562638.0   alice            3/4  12:52   0+00:00:00 I  0    0.0 soilModel.py parameter1.csv
    +21562638.1   alice            3/4  12:52   0+00:00:00 I  0    0.0 soilModel.py parameter2.csv
    +21562638.2   alice            3/4  12:52   0+00:00:00 I  0    0.0 soilModel.py parameter3.csv
    +21562638.3   alice            3/4  12:52   0+00:00:00 I  0    0.0 soilModel.py parameter4.csv
    +21562638.4   alice            3/4  12:52   0+00:00:00 I  0    0.0 soilModel.py parameter5.csv
    +
    +21562639.0   alice            3/4  12:52   0+00:00:00 I  0    0.0 wordcount.py Alice_in_Wonderland.tx
    +21562639.1   alice            3/4  12:52   0+00:00:00 I  0    0.0 wordcount.py Dracula.txt
    +21562639.2   alice            3/4  12:52   0+00:00:00 I  0    0.0 wordcount.py Huckleberry_Finn.txt
    +21562639.3   alice            3/4  12:52   0+00:00:00 I  0    0.0 wordcount.py Pride_and_Prejudice.tx
    +21562639.4   alice            3/4  12:52   0+00:00:00 I  0    0.0 wordcount.py Ulysses.txt
    +
    +

    View All Job Attributes

    +

Information about HTCondor jobs is saved as "job attributes". Job attributes can be viewed using the -l flag, a shorthand for -long. The output of condor_q <U/C/J> -l can be used to learn more about a job and to diagnose errors.
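For example, to view the full set of attributes for one of the jobs from the earlier example, you could run:

    $ condor_q 21562638.0 -l | less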

    +

    Examples of job attributes listed when using condor_q <U/C/J> -l are as follows:

| Attribute | Description |
| --- | --- |
| MemoryUsage | Maximum memory that a job used, in MB |
| DiskUsage | Maximum disk space that a job used, in KB |
| BatchName | Job batch label |
| MATCH_EXP_JOBGLIDEIN_ResourceName | Location of the site at which a job is running |
| RemoteHost | Location of the site and slot number where a job is running |
| ExitCode | Exit code of a job upon its completion |
| HoldReason | Human-readable message as to why a job was held; can be used to determine whether a job should be released |
| HoldReasonCode | Integer value that represents why a job was put on hold |
| JobNotification | Integer indicating when the user should be emailed regarding a change of status for their job |
| RemotePool | Name of the pool in which a job is running |
| NumRestarts | Number of restarts carried out by a job |
    +

    Many additional attributes are provided by HTCondor to learn about your jobs, including attributes dedicated to workflows that utilize DAGman and containers.

    +

    For more information about these and other attributes, please see the HTCondor Manual.

    +

    Constraints for Job Attributes

    +

    To display only the output of specified attributes, it is possible to use the "auto format" flag denoted as -af with condor_q <U/C/J>. An example use case is to view the owner and location of the site where a given job, such as job ID 15244592.127, is running by using:

    +
    $ condor_q 15244592.127 -af Owner MATCH_EXP_JOBGLIDEIN_ResourceName 
    +
    +alice BNL-ATLAS
    +
    +

In the above example, the Owner is the user alice and the job is running on resources owned by Brookhaven National Laboratory, as indicated by BNL-ATLAS.

    +

    View Specific Job Attributes Across More Than One Job

    +

    It is possible to sort and filter the output for one or more job attributes across a batch of jobs. When investigating more than one job, it is advantageous to limit the print out to a certain number of jobs to avoid flooding your screen. To limit the output to a specified number of jobs, use -limit N and replace N with the number of jobs you would like to view. For example, to view the site location where 100 jobs belonging to batch 12245532 ran, you can use:

    +
    $ condor_q 12245532 -limit 100 -af MATCH_EXP_JOBGLIDEIN_ResourceName | sort | uniq -c
    +
    +      9 Crane
    +      4 LSU-DB-CE1
    +      4 ND-CAML_gpu
    +     71 Rice-RAPID-Backfill
    +      2 SDSC-PRP-CE1
    +      6 TCNJ-ELSA
    +      1 Tufts-Cluster
    +      3 WSU-GRID
    +
    +

    In this example, 71 jobs ran at Rice University (Rice-RAPID-Backfill) while only one job ran at Tufts University (Tufts-Cluster). If you would like to know which abbreviations correspond to which compute resource provider in the OSPool, contact a Research Computing Facilitator.

    +

    View Jobs that are Held

    +

To isolate and print out held jobs, use condor_q <U/C/J> -held. This command will print jobs currently in the "Held" state and will not print jobs that are in the "Run", "Done", or "Idle" states.

    +

    Using the job ads and constraints described above, it is possible to print out the reasons why a subset of a user's jobs are being held.

    +
    $ condor_q alice -held -af HoldReason | sort | uniq -c
    +      4 Error from glidein_3439920_345771664@c6-6-39-2.aglt2.org: SHADOW at 192.170.227.166 failed to send file(s) to <192.41.230.81:44309>: error reading from /home/alice/InputData.txt: (errno 2) No such file or directory; STARTER failed to receive file(s) from <192.170.227.166:9618>
    +      1 Job in status 2 put on hold by SYSTEM_PERIODIC_HOLD due to memory usage 10572684.
    +
    +

In the output above, four jobs were placed on hold due to a "missing file or directory" at the path /home/alice/InputData.txt that was specified in the transfer_input_files line of the submit file. Because HTCondor could not locate this input (possibly due to an incorrect file path), the jobs were placed on hold. Additionally, one job was placed on hold due to exceeding the memory requested in the submit file.

    +

    An in-depth guide on troubleshooting issues with held jobs on the OSPool is available on our website.

    +

    View Machine Matches for a Job

    +

    The -analyze and -better-analyze options can be used to view the number of machines that match to a job. These flags are often used to diagnose many problems, including understanding why a job has not started running.
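For example, using an illustrative job ID, the command is run as:

    $ condor_q 21607747.0 -better-analyze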

    +

    A portion of the output from these options shows the number of machines in the pool and how many of these are able to run your job:

    +
    21607747.000:  Run analysis summary ignoring user priority.  Of 2189 machines,
    +   1605 are rejected by your job's requirements
    +     53 reject your job because of their own requirements
    +      1 match and are already running your jobs
    +      0 match but are serving other users
    +    530 are able to run your job
    +
    +

    Additional output of these options include the requirements line of the job's submit file, last successful match date, hold reason messages, and other useful information.

    +

The -analyze and -better-analyze options deliver similar output; however, -better-analyze is a newer feature that provides additional information, including the number of slots matched by your job for each of the different requirements specified in the submit file.

    +

    Additional information on using -analyze and -better-analyze for troubleshooting will be available in our troubleshooting guide in the near future.

    +

    Review Job History with condor_history

    +

    Default condor_history

    +

    Somewhat similar to condor_q, which shows jobs currently in the queue, condor_history is used to show information about jobs that have recently left the queue.

    +

By default, condor_history will show every user's job that HTCondor still has a record of in its history. Because HTCondor jobs are constantly being sent to the queue on OSG-managed Access Points, HTCondor cleans its history of jobs every few days to free up space for new jobs that have recently left the queue. Once a job is cleaned from HTCondor's history, its record is removed permanently.

    +

Before a job is cleaned from HTCondor's history, condor_history can be valuable for learning about recently completed jobs.

    +

As previously stated, condor_history without any additional flags will list every user's jobs, which can be thousands of lines long. To interrupt this output, press Ctrl + C. In most cases, it is recommended to combine condor_history with one or more of the options below to help limit the output of this command to only the desired information.

    +

    Constrain Your condor_history Query

    +

Like condor_q, it is possible to limit the output of your condor_history query by user <U>, cluster ID <C>, and job ID <J>, as indicated by <U/C/J>. By default, HTCondor will search through its entire history for jobs matching the given constraint. Since HTCondor's history is extensive, your command line prompt will not be returned until HTCondor has finished searching and analyzing its entire history. To prevent this time-consuming behavior, we recommend using the -limit N flag with condor_history. This tells HTCondor to limit its search to the first N items matching the constraint. For example, condor_history alice -limit 20 will return the condor_history output for the user alice's 20 most recently submitted jobs.

    +

    Viewing and Constraining Job Attributes

    +

The -l and -af options for displaying and selecting job attributes can also be used with condor_history.

    +

    It is important to note that some attributes are renamed when a job exits the queue and enters HTCondor's history. For example, RemoteHost is renamed to LastRemoteHost and HoldReason will become LastHoldReason.
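For example, to print the last execute host for the 20 most recently completed jobs of a user (a sketch combining the -limit and -af options described above, with a placeholder username), you could run:

    $ condor_history alice -limit 20 -af LastRemoteHost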

    +

    Special Considerations

    +

    Although many options that exist for condor_q also exist for condor_history, some do not. For example, -analyze and -better-analyze cannot be used with condor_history. Additionally, -hold cannot be used with condor_history as no job in HTCondor's history can be in the held state.

    +

    More Information on Options for condor_q and condor_history

    +

A full list of the options for condor_q and condor_history can be viewed by combining these commands with the --help flag or by viewing the HTCondor manual.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/submit-multiple-jobs/index.html b/htc_workloads/submitting_workloads/submit-multiple-jobs/index.html new file mode 100644 index 00000000..f6188498 --- /dev/null +++ b/htc_workloads/submitting_workloads/submit-multiple-jobs/index.html @@ -0,0 +1,2804 @@ + + + + + + + + + + + + + + + + + + Easily Submit Multiple Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Easily Submit Multiple Jobs

    +

    Overview

    +

HTCondor has several convenient features for streamlining high-throughput job submission. This guide provides several examples of how to leverage these features to submit multiple jobs with a single submit file.

    +

    Why submit multiple jobs with a single submit file?

    +

As described in our Policies for using an OSPool Access Point, users should submit multiple jobs using a single submit file, or, where applicable, as few separate submit files as needed. Using HTCondor's multi-job submission features is more efficient for users and will help ensure reliable operation of the login nodes.

    +

Many options exist for streamlining your submission of multiple jobs, and this guide only covers a few examples of what is truly possible with HTCondor. If you are interested in a particular approach that isn't described here, please contact OSG Facilitators and we will work with you to identify options to meet the needs of your work.

    +

    Submit Multiple Jobs Using queue

    +

All HTCondor submit files require a queue attribute (which must also be the last line of the submit file). By default, queue will submit one job, but users can also configure the queue attribute to behave like a for loop that will submit multiple jobs, with each job varying as predefined by the user.

    +

Below are different HTCondor submit file examples for submitting batches of multiple jobs and, where applicable, how to indicate the differences between jobs in a batch with user-defined variables. Additional examples and use cases are provided further below:

    +
      +
1. queue <N> - will submit N number of jobs. Examples include performing replications, where the same job must be repeated N number of times, looping through files named with numbers, and looping through a matrix where each job uses information from a specific row or column.
2. queue <var> from <list> - will loop through a list of file names, parameters, etc. as defined in a separate text file (i.e., <list>). This queue option is very flexible and provides users with many options for submitting multiple jobs.
3. Organizing Jobs Into Individual Directories - another option that can be helpful in organizing multi-job submissions.
    +

These queue options are also described in the following video from HTCondor Week 2020: 2020 HTCondor Week Presentation

    +

    Submitting Multiple Jobs Using HTCondor Video

    +

What makes these queue options powerful is the ability to use user-defined variables to specify details about your jobs in the HTCondor submit file. The examples below will include the use of $(variable_name) to specify details like input file names, file locations (aka paths), etc. When selecting a variable name, users must avoid HTCondor's reserved submit file variable names, such as Cluster, Process, output, input, arguments, etc.

    +

1. Use queue N in your HTCondor submit files

    +

When using queue N, HTCondor will submit a total of N jobs, counting from 0 to N - 1, and each job will be assigned a unique Process id number spanning this range of values. Because the Process variable will be unique for each job, it can be used in the submit file to indicate unique filenames and filepaths for each job.

    +

The most straightforward example of using queue N is to submit N number of identical jobs. The example shown below demonstrates how to use the Cluster and Process variables to assign unique names for the HTCondor error, output, and log files for each job in the batch:

    +
    # 100jobs.sub
    +# submit 100 identical jobs
    +
    +log = job_$(Cluster)_$(Process).log
    +error = job_$(Cluster)_$(Process).err
    +output = job_$(Cluster)_$(Process).out
    +
    +... remaining submit details ...
    +
    +queue 100
    +
    +

For each job, the appropriate number, 0, 1, 2, ... 99, will replace $(Process). $(Cluster) will be a unique number assigned to the entire 100-job batch. Each time you run condor_submit 100jobs.sub, you will be provided with the Cluster number, which you will also see in the output produced by the command condor_q.

    +

If a uniquely named results file needs to be returned by each job, $(Process) and $(Cluster) can also be used as arguments, and anywhere else as needed, in the submit file:

    +
    arguments = $(Cluster)_$(Process).results
    +
    +... remaining submit details ...
    +
    +queue 100
    +
    +

Be sure to properly format the arguments statement according to the executable used by the job.
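As a sketch of the executable side of this pattern (the script and program names below are purely illustrative), a wrapper script could write its results to the filename passed as its first argument:

    #!/bin/bash
    # Hypothetical wrapper: the submit file's arguments line passes the
    # desired results filename as the first argument ($1).
    results_file="$1"
    ./do_science > "$results_file"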

    +

What if my jobs are not identical? queue N may still be a great option! Additional examples for using this option include:

    +

    A. Use integer numbered input files

    +
    [user@login]$ ls *.data
    +0.data   1.data   2.data   3.data
    +...      97.data  98.data  99.data
    +
    +

    In the submit file, use:

    +
    transfer_input_files = $(Process).data
    +
    +... remaining submit details ...
    +
    +queue 100
    +
    +

    B. Specify a row or column number for each job

    +

$(Process) can be used to specify a unique row or column of information in a matrix to be used by each job in the batch. The matrix then needs to be transferred with each job as input. For example:

    +
    transfer_input_files = matrix.csv
    +arguments = $(Process)
    +
    +... remaining submit details ...
    +
    +queue 100
    +
    +

The above example assumes that your job is set up to use an argument to specify the row or column to be used by your software.
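For illustration only (your software may instead read the row index directly), a small wrapper script could extract the row indicated by the argument before running the analysis:

    #!/bin/bash
    # Hypothetical wrapper: $1 is the 0-indexed row passed via arguments = $(Process)
    row=$1
    # sed numbers lines starting at 1, so shift the 0-indexed row by one
    sed -n "$((row + 1))p" matrix.csv > my_row.csv
    ./do_science my_row.csv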

    +

    C. Need N to start at 1

    +

If your input files are numbered 1 - 100 instead of 0 - 99, or your matrix row starts with 1 instead of 0, you can perform basic arithmetic in the submit file:

    +
    plusone = $(Process) + 1
    +NewProcess = $INT(plusone, %d)
    +arguments = $(NewProcess)
    +
    +... remaining submit details ...
    +
    +queue 100
    +
    +

Then use $(NewProcess) anywhere in the submit file that you would have otherwise used $(Process). Note that there is nothing special about the names plusone and NewProcess; you can use any names you want as variables.
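For instance, picking up the numbered-input example from section A above, the input transfer line would then become:

    transfer_input_files = $(NewProcess).data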

    +

    2. Submit multiple jobs with one or more distinct variables per job

    +

Think about what's different between each job that needs to be submitted. Will each job use a different input file or combination of software parameters? Do some of the jobs need more memory or disk space? Do you want to use a different software or script on a common set of input files? Using queue <var> from <list> in your submit files can make that possible! <var> can be a single user-defined variable or a comma-separated list of variables to be used anywhere in the submit file. <list> is a plain text file that defines <var> for each individual job to be submitted in the batch.

    +

Suppose you need to run a program called compare_states on the following set of input files: illinois.data, nebraska.data, and wisconsin.data, where each input file can be analyzed as a separate job.

    +

To create a submit file that will submit all three jobs, first create a text file that lists each .data file (one file per line). This step can be performed directly on the login node, for example:

    +
    [user@state-analysis]$ ls *.data > states.txt
    +[user@state-analysis]$ cat states.txt
    +illinois.data
    +nebraska.data
    +wisconsin.data
    +
    +

Then, in the submit file, following the pattern queue <var> from <list>, replace <var> with a variable name like state and replace <list> with the list of .data files saved in states.txt:

    +
    queue state from states.txt
    +
    +

For each line in states.txt, HTCondor will submit a job, and the variable $(state) can be used anywhere in the submit file to represent the name of the .data file to be used by that job. For the first job, $(state) will be illinois.data, for the second job $(state) will be nebraska.data, and so on. For example:

    +
    # run_compare_states_per_state.sub
    +
    +transfer_input_files = $(state)
    +arguments = $(state)
    +executable = compare_states
    +
    +... remaining submit details ...
    +
    +queue state from states.txt
    +
    +

    For a working example of this kind of job submission, see our Word Frequency Tutorial.

    +

    Use multiple variables for each job

    +

Let's imagine that each state .data file contains data spanning several years and that each job needs to analyze a specific year of data. Then the states.txt file can be modified to specify this information:

    +
    [user@state-analysis]$ cat states.txt
    +illinois.data, 1995
    +illinois.data, 2005
    +nebraska.data, 1999
    +nebraska.data, 2005
    +wisconsin.data, 2000
    +wisconsin.data, 2015
    +
    +

Then modify the queue statement to define two variables, state and year:

    +
    queue state,year from states.txt
    +
    +

    Then the variables $(state) and $(year) can be used in the submit file:

    +
    # run_compare_states_by_year.sub
    +arguments = $(state) $(year)
    +transfer_input_files = $(state)
    +executable = compare_states
    +
    +... remaining submit details ...
    +
    +queue state,year from states.txt
    +
    +

    3. Organizing Jobs Into Individual Directories

    +

One way to organize jobs is to assign each job to its own directory, instead of putting files in the same directory with unique names. To continue our "compare_states" example, suppose there's a directory for each state you want to analyze, and each of those directories has its own input file named input.data:

    +
    [user@state-analysis]$ ls -F
    +compare_states  illinois/  nebraska/  wisconsin/
    +
    +[user@state-analysis]$ ls -F illinois/
    +input.data
    +
    +[user@state-analysis]$ ls -F nebraska/
    +input.data
    +
    +[user@state-analysis]$ ls -F wisconsin/
    +input.data
    +
    +

The HTCondor submit file attribute initialdir can be used to define a specific directory from which each job in the batch will be submitted. The default initialdir location is the directory from which the command condor_submit myjob.sub is executed.

    +

Combining queue <var> from <list> with initialdir, each line of <list> will contain the path to a state directory, and initialdir will be set to this path for each job:

    +
    #state-per-dir-job.sub
+initialdir = $(state_dir)
    +transfer_input_files = input.data   
    +executable = compare_states
    +
    +... remaining submit details ...
    +
    +queue state_dir from state-dirs.txt
    +
    +

    Where state-dirs.txt is a list of each directory with state data:

    +
    [user@state-analysis]$ cat state-dirs.txt
    +illinois
    +nebraska
    +wisconsin
    +
    +

Notice that executable = compare_states has remained unchanged in the above example. When using initialdir, only the paths for the input and output files (including the HTCondor log, error, and output files) are changed by initialdir; the path to the executable is not.

    +

In this example, HTCondor will create a job for each directory in state-dirs.txt and use that state's directory as the initialdir from which the job will be submitted. Therefore, transfer_input_files = input.data can be used without specifying the path to this input.data file. Any output generated by the job will then be returned to the initialdir location.
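If you have many such directories, one possible way to generate state-dirs.txt (assuming the state directories are the only subdirectories present) is:

    $ ls -d */ | sed 's|/$||' > state-dirs.txt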

    +

    Get Help

    +

    For assistance or questions, please email the OSG Research Facilitation team at support@osg-htc.org or visit the help desk and community forums.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/tutorial-command/index.html b/htc_workloads/submitting_workloads/tutorial-command/index.html new file mode 100644 index 00000000..29518c9f --- /dev/null +++ b/htc_workloads/submitting_workloads/tutorial-command/index.html @@ -0,0 +1,2474 @@ + + + + + + + + + + + + + + + + + + List of Available Tutorials - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Workflow Tutorials

    +

    OSPool workflow tutorials on Github

    +

All of the OSG-provided tutorials are available as repositories on GitHub. These tutorials are tested regularly and should work as is, but if you experience any issues please contact us.

    +

    Available tutorials

    +

    The following tutorials are available and are compatible with OSG-provided Access Points:

    +
    Currently available tutorials:
    +R ...................... Estimate Pi using the R programming language
    +R-addlibSNA ............ Shows how to add R external libraries for the R jobs
    +ScalingUp-Python ....... Scaling up compute resources - Python example to optimize a function on grid points
    +blast-split ............ How to run BLAST on the OSPool by splitting a large input file
    +fastqc ................. How to run FastQC on the OSPool
    +dagman-wordfreq ........ DAGMan based wordfreq example
    +error101 ............... Use condor_q -better-analyze to analyze stuck jobs
    +matlab-HelloWorld ...... Creating standalone MATLAB application - Hello World 
    +osg-locations .......... Tutorial based on OSPool location exercise from the User School
    +pegasus ................ An introduction to the Pegasus job workflow manager
    +quickstart ............. How to run your first OSPool job
    +scaling ................ Learn to steer jobs to particular resources
    +scaling-up-resources ... A simple multi-job demonstration
    +software ............... Software access tutorial
    +tensorflow-matmul ...... Tensorflow math operations as a singularity container job on the OSPool - matrix multiplication
    +
    +

Install and set up a tutorial

    +

    On an OSPool Access Point, type the following to download a tutorial's materials:

    +
    $ git clone https://github.com/OSGConnect/<tutorial-name>
    +
    +

This command will clone the tutorial repository to your current working directory. cd to the repository directory and follow the steps described in the readme.md file. Alternatively, you can view the readme.md file at the tutorial's corresponding GitHub page.
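For example, to set up the quickstart tutorial from the list above:

    $ git clone https://github.com/OSGConnect/tutorial-quickstart
    $ cd tutorial-quickstart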

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/tutorial-error101/index.html b/htc_workloads/submitting_workloads/tutorial-error101/index.html new file mode 100644 index 00000000..82d94956 --- /dev/null +++ b/htc_workloads/submitting_workloads/tutorial-error101/index.html @@ -0,0 +1,2466 @@ + + + + + + + + + + + + + + + + + + Troubleshooting Job Errors - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Troubleshooting Job Errors

    +

    In this lesson, we'll learn how to troubleshoot jobs that never start or fail in unexpected ways.

    +

    Troubleshooting techniques

    +

    Diagnostics with condor_q

    +

The condor_q command shows the status of the jobs and it can be used to diagnose why jobs are not running. Using the -better-analyze flag with condor_q can show you detailed information about why a job isn't starting on a specific pool. Since OSG Connect sends jobs to many places, we also need to specify a pool name with the -pool flag.

    +

    Unless you know a specific pool you would like to query, checking the flock.opensciencegrid.org pool is usually a good place to start.

    +
    $ condor_q -better-analyze JOB-ID -pool POOL-NAME
    +
    +

    Let's do an example. First we'll need to login as usual, and then load the tutorial error101.

    +
    $ ssh username@login.osgconnect.net
    +
    +$ tutorial error101
    +$ cd tutorial-error101
    +$ condor_submit error101_job.submit
    +
    +

    We'll check the job status the normal way:

    +
    condor_q username
    +
    +

For some reason, our job is still idle. Why? Try using condor_q -better-analyze to find out. Remember that you will also need to specify a pool name. In this case we'll use flock.opensciencegrid.org:

    +
    $ condor_q -better-analyze JOB-ID -pool flock.opensciencegrid.org
    +
+# Produces a long output.
    +# The following lines are part of the output regarding the job requirements.
    +
    +The Requirements expression for your job reduces to these conditions:
    +
    +         Slots
    +Step    Matched  Condition
    +-----  --------  ---------
    +[0]       10674  TARGET.Arch == "X86_64"
    +[1]       10674  TARGET.OpSys == "LINUX"
    +[3]       10674  TARGET.Disk >= RequestDisk
    +[5]           0  TARGET.Memory >= RequestMemory
    +[8]       10674  TARGET.HasFileTransfer
    +
    +

    By looking through the match conditions, we see that many nodes match our requests for the Linux operating system and the x86_64 architecture, but none of them match our requirement for 51200 MB of memory.

    +

    Let's look at our submit script and see if we can find the source of this error:

    +
    $ cat error101_job.submit 
    +Universe = vanilla
    +
    +Executable = error101.sh
    +
    +# to sleep an hour
    +Arguments = 3600
    +
    +request_memory = 2 TB
    +
    +Error = job.err 
    +Output = job.out 
    +Log = job.log 
    +Queue 1
    +
    +

See the request_memory line? We are asking for 2 Terabytes of memory, when we meant to only ask for 2 Gigabytes of memory. Our job is not matching any available job slots because none of the slots offer 2 TB of memory. Let's fix that by changing that line to read request_memory = 2 GB.

    +
    $ nano error101_job.submit
    +
    +

    Let's cancel our idle job with the condor_rm command and then resubmit our edited job:

    +
    $ condor_rm JOB-ID
    +$ condor_submit error101_job.submit
    +
    +

Alternatively, you can edit the resource requirements of the idle job while it is in the queue:

    +
    condor_qedit JOB_ID RequestMemory 2048
    +
    +

    Held jobs and condor_release

    +

Occasionally, a job can fail in various ways and go into the "Held" state. Held state means that the job has encountered some error, and cannot run. This doesn't necessarily mean that your job has failed, but, for whatever reason, Condor cannot fulfill your request(s).

    +

    In this particular case, a user had this in his or her Condor submit file:

    +
    transfer_output_files = outputfile
    +
    +

    However, when the job executed, it went into Held state:

    +
    $ condor_q -analyze 372993.0
    +-- Submitter: login01.osgconnect.net : <192.170.227.195:56174> : login01.osgconnect.net
    +---
    +372993.000:  Request is held.
    +Hold reason: Error from glidein_9371@compute-6-28.tier2: STARTER at 10.3.11.39 failed to send file(s) to <192.170.227.195:40485>: error reading from /wntmp/condor/compute-6-28/execute/dir_9368/glide_J6I1HT/execute/dir_16393/outputfile: (errno 2) No such file or directory; SHADOW failed to receive file(s) from <192.84.86.100:50805>
    +
    +

    Let's break down this error message piece by piece:

    +
    Hold reason: Error from glidein_9371@compute-6-28.tier2: STARTER at 10.3.11.39 failed to send file(s) to <192.170.227.195:40485>
    +
    +

This part is quite cryptic, but it simply means that the worker node where your job executed (glidein_9371@compute-6-28.tier2 or 10.3.11.39) tried to transfer a file to the OSG Connect login node (192.170.227.195) but did not succeed. The next part explains why:

    +
    error reading from /wntmp/condor/compute-6-28/execute/dir_9368/glide_J6I1HT/execute/dir_16393/outputfile: (errno 2) No such file or directory
    +
    +

This bit has the full path of the file that Condor tried to transfer back to login.osgconnect.net. The reason the file transfer failed is that outputfile was never created on the worker node. Remember that at the beginning we said that the user specifically requested transfer_output_files = outputfile! Condor could not complete this request, and so the job went into Held state instead of finishing normally.

    +

    It's quite possible that the error was simply transient, and if we retry, the job will succeed. We can re-queue a job that is in Held state by using condor_release:

    +
    condor_release JOB-ID
    +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/tutorial-organizing/index.html b/htc_workloads/submitting_workloads/tutorial-organizing/index.html new file mode 100644 index 00000000..099295cc --- /dev/null +++ b/htc_workloads/submitting_workloads/tutorial-organizing/index.html @@ -0,0 +1,2579 @@ + + + + + + + + + + + + + + + + + + Organizing and Submitting HTC Workloads - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Organizing and Submitting HTC Workloads

    +

Imagine you have a collection of books, and you want to analyze how word usage varies from book to book or author to author.

    +

This tutorial starts with the same setup as our Wordcount Tutorial for Submitting Multiple Jobs, but focuses on how to organize that example more effectively on the Access Point, with an eye to scaling up to a larger HTC workload in the future.

    +

    Our Workload

    +

We can analyze one book by running the wordcount.py script with the name of the book we want to analyze:

    +
    $ ./wordcount.py Alice_in_Wonderland.txt
    +
    +

Try running the command to see what the output of the script is. Once you have done that, delete the output file created (rm counts.Alice_in_Wonderland.txt).

    +

    We want to run this script on all the books we have copies of.

    +
      +
1. What is the input set for this HTC workload?
2. What is the output set?
    +

    Make an Organization Plan

    +

Based on what you know about the script, inputs, and outputs, how would you organize this HTC workload in directories (folders) on the access point?

    +

There will also be system and HTCondor files produced when we submit a job -- how would you organize the log, standard error, and standard output files?

    +

    Try making those changes before moving on to the next section of the tutorial.

    +

    Organize Files

    +

There are many different ways to organize files; a simple example that works for most workloads is having a directory for your input files and a directory for your output files. We can set up this structure on the command line by running:

    +
    $ mkdir input
    +$ mv *.txt input/
    +$ mkdir output/
    +
    +

We can view our current directory and its subdirectories by using the recursive flag with the ls command:

    +
    $ ls -R
    +README.md    books.submit input        output       wordcount.py
    +
    +./input:
    +Alice_in_Wonderland.txt Huckleberry_Finn.txt    Ulysses.txt
    +Dracula.txt             Pride_and_Prejudice.txt
    +
    +./output:
    +
    +

We are also going to create directories for the HTCondor log files and the standard error and standard output files (in one directory):

    +
    $ mkdir logs
    +$ mkdir errout
    +
    +

    Submit One Job

    +

Now we want to submit a test job that uses this organizing scheme, using just one item in our input set -- in this example, we'll use the Alice_in_Wonderland.txt file from our input/ directory. The lines that need to be filled in are shown below and can be edited using the nano text editor:

    +
    $ nano books.submit
    +
    +executable    = wordcount.py
    +arguments     = Alice_in_Wonderland.txt
    +
    +transfer_input_files    = input/Alice_in_Wonderland.txt
    +transfer_output_files   = counts.Alice_in_Wonderland.txt
    +transfer_output_remaps  = "counts.Alice_in_Wonderland.txt=output/counts.Alice_in_Wonderland.txt"
    +
    +

Note that to tell HTCondor the location of the input file, we need to include the input directory. We're also using a submit file option called transfer_output_remaps that will essentially move the output file to our output/ directory by renaming or remapping it.

    +

We also want to edit the submit file lines that tell the log, error, and output files where to go:

    +
    $ nano books.submit
    +output        = logs/job.$(ClusterID).$(ProcID).out
    +error         = errout/job.$(ClusterID).$(ProcID).err
    +log           = errout/job.$(ClusterID).$(ProcID).log
    +
    +

Once you've made the above changes to the books.submit file, you can submit it and monitor its progress:

    +
    $ condor_submit books.submit
    +$ condor_watch_q
    +
    +

    (Type CTRL-C to stop the condor_watch_q command.)

    +

    Submit Multiple Jobs

    +

    We are now sufficiently organized to submit our whole workload.

    +

First, we need to create a file with our input set -- in this case, it will be a list of the book files we want to analyze. We can do this by using the shell's listing command ls and redirecting the output to a file:

    +
    $ cd input
+$ ls *.txt > booklist.txt
    +$ cat booklist.txt
    +$ mv booklist.txt ..
    +$ cd ..
    +
    +

Then, we modify our submit file to reference this input list and replace the static values from our test job (Alice_in_Wonderland.txt) with a variable -- we've chosen $(book) below:

    +
    $ nano books.submit
    +
    +executable    = wordcount.py
    +arguments     = $(book)
    +
    +transfer_input_files    = input/$(book)
    +transfer_output_files   = counts.$(book)
    +transfer_output_remaps  = "counts.$(book)=output/counts.$(book)"
    +
    +# other options
    +
    +queue book from booklist.txt
    +
    +

    Once this is done, you can submit the jobs as usual:

    +
    $ condor_submit books.submit
    +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/tutorial-osg-locations/index.html b/htc_workloads/submitting_workloads/tutorial-osg-locations/index.html new file mode 100644 index 00000000..8bf4cca9 --- /dev/null +++ b/htc_workloads/submitting_workloads/tutorial-osg-locations/index.html @@ -0,0 +1,2539 @@ + + + + + + + + + + + + + + + + + + Finding OSG Locations - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Finding OSG Locations

    +

In this section, we will learn how to quickly submit multiple jobs simultaneously using HTCondor, and we will visualize where these jobs run so we can get an idea of how jobs are distributed across the Open Science Pool.

    +

    Gathering network information from the OSG

    +

    Now to create a submit file that will run in the OSG!

    +
      +
1. Use the tutorial command to download the job submission files: tutorial osg-locations.
2. Change into the tutorial-osg-locations directory with cd tutorial-osg-locations.
    +

    Hostname fetching code

    +

    The following Python script finds the ClassAd of the machine it's running on and finds a network identity that can be used to perform lookups:

    +
    #!/bin/env python
    +
    +import re
    +import os
    +import socket
    +
    +machine_ad_file_name = os.getenv('_CONDOR_MACHINE_AD')
    +try:
    +    machine_ad_file = open(machine_ad_file_name, 'r')
    +    machine_ad = machine_ad_file.read()
    +    machine_ad_file.close()
    +except TypeError:
    +    print socket.getfqdn()
    +    exit(1)
    +
    +try:
    +    print re.search(r'GLIDEIN_Gatekeeper = "(.*):\d*/jobmanager-\w*"', machine_ad, re.MULTILINE).group(1)
    +except AttributeError:
    +    try:
    +        print re.search(r'GLIDEIN_Gatekeeper = "(\S+) \S+:9619"', machine_ad, re.MULTILINE).group(1)
    +    except AttributeError:
    +        exit(1)
    +
    +

    This script (wn-geoip.py) is contained in the zipped archive (wn-geoip.tar.gz) that is transferred to the job and unpacked by the job wrapper script location-wrapper.sh. You will be using location-wrapper.sh as your executable and wn-geoip.tar.gz as an input file.

    +

The submit file for this job, scalingup.submit, is set up to specify these files and to submit 100 jobs simultaneously. It also uses the job's process value to create unique output, error, and log files for each of the jobs.

    +
    $ cat scalingup.submit
    +
+# The following requirements ensure we land on compute nodes
    +# which have all the dependencies (modules, so we can 
    +# module load python2.7) and avoid some machines where 
    +# GeoIP does not work (such as Kubernetes containers)
    +requirements = OSG_OS_STRING == "RHEL 7" && HAS_MODULES && GLIDEIN_Gatekeeper =!= UNDEFINED
    +
+# We need the job to run our executable script and to transfer the
+#  relevant input file:
    +executable = location-wrapper.sh
    +transfer_input_files = wn-geoip.tar.gz
    +
    +# We can specify unique filenames for each job by using
    +#  the job's 'process' value.
    +error = job.$(Process).error
    +output = job.$(Process).output
    +log = job.$(Process).log
    +
    +# The below are good base requirements for first testing jobs on OSG, 
    +#  if you don't have a good idea of memory and disk usage.
    +request_cpus = 1
    +request_memory = 1 GB
    +request_disk = 1 GB
    +
    +# Queue 100 jobs with the above specifications.
    +queue 100
    +
    +

    Submit this job using the condor_submit command:

    +
    $ condor_submit scalingup.submit
    +
    +

    Wait for the results. Remember, you can use watch condor_q to monitor the status of your jobs.

    +

    Collating your results

    +

Now that you have your results, it's time to summarize them. Rather than inspecting each output file individually, you can use the cat command to print the results from all of your output files at once. If all of your output files have the format job.#.output (e.g., job.10.output), your command will look something like this:

    +
    $ cat job.*.output
    +
    +

The * is a wildcard, so the above cat command runs on all files that start with job. and end in .output. Additionally, you can use cat in combination with the sort and uniq commands to print only the unique results:

    +
    $ cat job.*.output | sort | uniq
    +
    +
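As a small extension of the command above (the same tools, just adding a count), you can also tally how many of your jobs reported each unique location:

    $ cat job.*.output | sort | uniq -c | sort -rn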

    Mapping your results

    +

    To visualize the locations of the machines that your jobs ran on, you will be using http://www.mapcustomizer.com/. Copy and paste the collated results into the text box that pops up when clicking on the 'Bulk Entry' button on the right-hand side. Where did your jobs run?

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/submitting_workloads/tutorial-quickstart/index.html b/htc_workloads/submitting_workloads/tutorial-quickstart/index.html new file mode 100644 index 00000000..1f524b52 --- /dev/null +++ b/htc_workloads/submitting_workloads/tutorial-quickstart/index.html @@ -0,0 +1,2781 @@ + + + + + + + + + + + + + + + + + + Quickstart-Submit Example HTCondor Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Quickstart - Submit Example HTCondor Jobs

    +

    Login to OSG Access Point

    +

    To begin, login to your OSG Access Point.

    +

    Pretyped Setup

    +

To save some typing, you can download the tutorial materials into your home directory on the access point. This is highly recommended to ensure that you don't encounter transcription errors during the tutorials.

    +
    $ git clone https://github.com/OSGConnect/tutorial-quickstart
    + +

    Now, let's start the quickstart tutorial:

    +
    $ cd tutorial-quickstart 
    + +

    Manual Setup

    +

Alternatively, if you want the full manual experience, create a new directory for the tutorial work:

    +
    $ mkdir tutorial-quickstart
    +$ cd tutorial-quickstart
    +
    + +

    Example Jobs

    +

    Job 1: A single discovery job

    +

Inside the tutorial directory that you created or installed previously, let's create a test script to execute as your job. For the pretyped setup, this is the short.sh file:

    +
    #!/bin/bash
    +# short.sh: a short discovery job
    +printf "Start time: "; /bin/date
    +printf "Job is running on node: "; /bin/hostname
    +printf "Job running as user: "; /usr/bin/id
    +printf "Job is running in directory: "; /bin/pwd
    +echo
    +echo "Working hard..."
    +sleep 20
    +echo "Science complete!"
    +
    + +

    Now, make the script executable.

    +
    $ chmod +x short.sh
    +
    + +

    Run the script locally

    +

When setting up a new job submission, it's important to test your job outside of HTCondor before submitting it into the Open Science Pool.

    +
    $ ./short.sh
    +Start time: Wed Aug 08 09:21:35 CDT 2023
    +Job is running on node: ap50.ux.osg-htc.org
    +Job running as user: uid=54161(alice), gid=5782(osg) groups=5782(osg),5513(osg.login-nodes),7158(osg.OSG-Staff)
    +Job is running in directory: /home/alice/tutorial-quickstart
    +Working hard...
    +Science complete!
    +
    + +

    Create an HTCondor submit file

    +

    So far, so good! Let's create a simple (if verbose) HTCondor submit file. This can be found in tutorial01.submit.

    +
    # Our executable is the main program or script that we've created
    +# to do the 'work' of a single job.
    +executable = short.sh
    +
    +# We need to name the files that HTCondor should create to save the
    +#  terminal output (stdout) and error (stderr) created by our job.
    +#  Similarly, we need to name the log file where HTCondor will save
    +#  information about job execution steps.
    +log = short.log
    +error = short.error
    +output = short.output
    +
    +# This is the default category for jobs
    ++JobDurationCategory = "Medium"
    +
    +# The below are good base requirements for first testing jobs on OSG, 
    +#  if you don't have a good idea of memory and disk usage.
    +request_cpus = 1
    +request_memory = 1 GB
    +request_disk = 1 GB
    +
    +# The last line of a submit file indicates how many jobs of the above
    +#  description should be queued. We'll start with one job.
    +queue 1
    +
    + +

    Submit the job

    +

    Submit the job using condor_submit:

    +
    $ condor_submit tutorial01.submit
    +Submitting job(s). 
    +1 job(s) submitted to cluster 144121.
    +
    + +

    Check the job status

    +

    The condor_q command tells the status of currently running jobs.

    +
     $ condor_q
    +-- Schedd: ap50.ux.osg-htc.org : <192.170.227.22:9618?... @ 08/10/23 14:19:08
    +OWNER      BATCH_NAME     SUBMITTED   DONE   RUN    IDLE  TOTAL JOB_IDS
    +alice    ID: 1441271  08/10 14:18    _  1      _      1 1441271.0
    +
    +Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
    +Total for alice: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
    +Total for all users: 3001 jobs; 0 completed, 0 removed, 2189 idle, 754 running, 58 held, 0 suspended
    +
    + +

    You can also get the status of a specific job cluster:

    +
    $ condor_q 1441271
    +-- Schedd: ap50.ux.osg-htc.org : <192.170.227.22:9618?... @ 08/10/23 14:19:08
    +OWNER      BATCH_NAME     SUBMITTED   DONE   RUN    IDLE  TOTAL JOB_IDS
    +alice    ID: 1441271  08/10 14:18    _  1      _      1 1441271.0
    +
    +Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
    +Total for alice: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
    +Total for all users: 3001 jobs; 0 completed, 0 removed, 2189 idle, 754 running, 58 held, 0 suspended
    +
    + +

Note the DONE, RUN, and IDLE columns. Your job will be listed in the IDLE column if it hasn't started yet. If it's currently scheduled and running, it will appear in the RUN column. As it finishes up, it will show in the DONE column. Once the job has finished, it will no longer appear in condor_q.

    +

Let's wait for your job to finish – that is, for condor_q not to show the job in its output. A useful tool for this is condor_watch_q – it efficiently tracks the status of your jobs by watching their corresponding log files. Let's submit the job again, and use condor_watch_q to follow the progress of your job (the status will update at two-second intervals):

    +
    $ condor_submit tutorial01.submit
    +Submitting job(s). 
    +1 job(s) submitted to cluster 1441272
    +$ condor_watch_q 
    +... 
    +
    + +

    Note: To close condor_watch_q, hold down Ctrl and press C.

    +

    Check the job output

    +

    Once your job has finished, you can look at the files that HTCondor has +returned to the working directory. The names of these files were specified in our +submit file. If everything was successful, it should have returned:

    +
• a log file from HTCondor for the job cluster: short.log
    • This file can tell you where a job ran, how long it ran, and what resources it used.
    • If a job shows up as "held" in condor_q, this file will have a message that gives a reason why (see the example just after this list).
• an output file for each job's output: short.output
    • This file can have useful messages that describe how the job progressed.
• an error file for each job's errors: short.error
    • If the job encountered any errors, they will likely be in this file.
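For instance, if a job is held, a quick way to see the hold reason (besides reading the log file) is to ask condor_q directly; this is just an illustration using the cluster id from above:

$ condor_q -hold 1441271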

    In this case, we will read the output file, which should contain the information printed by our script. It should look something like this:

    +
    $ cat short.output
    +Start time: Mon Aug 10 20:18:56 UTC 2023
    +Job is running on node: osg-84086-0-cmswn2030.fnal.gov
    +Job running as user: uid=12740(osg) gid=9652(osg) groups=9652(osg)
    +Job is running in directory: /srv
    +
    +Working hard...
    +Science complete!
    +
    + +

    Job 2: Using Inputs and Arguments in a Job

    +

    Sometimes it's useful to pass arguments to your executable from your +submit file. For example, you might want to use the same job script +for more than one run, varying only the parameters. You can do that +by adding Arguments to your submission file.

    +

First, let's edit our existing short.sh script to accept arguments. To avoid losing our original script, make a copy of the file under the name short_transfer.sh (if you downloaded the entire tutorial, this file already exists):

    +
    $ cp short.sh short_transfer.sh
    + +

    Now, edit the file to include the added lines below or use cat to view the +existing short_transfer.sh file:

    +
    #!/bin/bash
    +# short_transfer.sh: a short discovery job
    +printf "Start time: "; /bin/date
    +printf "Job is running on node: "; /bin/hostname
    +printf "Job running as user: "; /usr/bin/id
    +printf "Job is running in directory: "; /bin/pwd
    +printf "The command line argument is: "; echo $@
    +printf "Job number is: "; echo $2
    +printf "Contents of $1 is "; cat $1
    +cat $1 > output$2.txt
    +echo
    +echo "Working hard..."
    +ls -l $PWD
    +sleep 20
    +echo "Science complete!"
    +
    + +

    We need to make our new script executable just as we did before:

    +
    $ chmod +x short_transfer.sh
    + +

Notice that with our changes, the new script will now print out the contents of whatever file we pass as the first argument ($1). It will also copy the contents of that file into another file named output$2.txt, where $2 is an optional second argument (so just output.txt when no second argument is given).

    +

    Make a simple text file called input.txt that we can pass to our script:

    +
    "Hello World"
    + +
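One way to create this file from the command line (assuming you want the quotation marks included literally, as in the pre-made tutorial file) is:

$ echo '"Hello World"' > input.txt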

    Once again, before submitting our job we should test it locally to ensure it runs as we expect:

    +
    $ ./short_transfer.sh input.txt
    +Start time: Tue Dec 11 10:19:12 CST 2018
    +Job is running on node: ap50.ux.osg-htc.org
    +Job running as user: uid=54161(alice), gid=5782(osg) groups=5782(osg),5513(osg.login-nodes),7158(osg.OSG-Staff)
    +Job is running in directory: /home/alice/tutorial-quickstart
    +The command line argument is: input.txt
    +Job number is: 
    +Contents of input.txt is "Hello World"
    +Working hard
    +total 28
    +drwxrwxr-x 2 alice users   34 Aug 15 09:37 Images
    +-rw-rw-r-- 1 alice users   13 Aug 15 09:37 input.txt
    +drwxrwxr-x 2 alice users  114 Aug 11 09:50 log
    +-rw-r--r-- 1 alice users   13 Aug 11 10:19 output.txt
    +-rwxrwxr-x 1 alice users  291 Aug 15 09:37 short.sh
    +-rwxrwxr-x 1 alice users  390 Aug 11 10:18 short_transfer.sh
    +-rw-rw-r-- 1 alice users  806 Aug 15 09:37 tutorial01.submit
    +-rw-rw-r-- 1 alice users  547 Aug 11 09:49 tutorial02.submit
    +-rw-rw-r-- 1 alice users 1321 Aug 15 09:37 tutorial03.submit
    +Science complete!
    +
    + +

Now, let's edit our submit file to properly handle these new arguments and output files, and save it as tutorial02.submit:

    +
    # We need the job to run our executable script, with the
    +#  input.txt filename as an argument, and to transfer the
    +#  relevant input file.
    +executable = short_transfer.sh
    +arguments = input.txt
    +
    +transfer_input_files = input.txt
    +# output files will be transferred back automatically 
    +
    +log = job.log
    +error = job.error
    +output = job.output
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus = 1
    +request_memory = 1 GB
    +request_disk = 1 GB
    +
    +# Queue one job with the above specifications.
    +queue 1
    +
    + +

    Notice the added arguments = input.txt information. The arguments option specifies what arguments should be passed to the executable.

    +

The transfer_input_files option needs to be included as well. When jobs are executed on the Open Science Pool via HTCondor, only the files that are explicitly specified are sent along with the job. Any new files generated by the job in its working directory will be returned to the Access Point automatically.

    +
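If you want more control over which files come back (for example, to return only the generated text file), HTCondor's transfer_output_files command can list them explicitly. This is optional, since new top-level files are returned by default; the file name below simply matches what short_transfer.sh creates when no job number argument is given:

# optional: only transfer the named file back to the Access Point
transfer_output_files = output.txt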

    Submit the new submit file using condor_submit. Be sure to check your output files once the job completes.

    +
    $ condor_submit tutorial02.submit
    +Submitting job(s).
    +1 job(s) submitted to cluster 1444781.
    +
    + +

    Run the commands from the previous section to check on the job in the queue, and +view the outputs when the job completes.

    +

    Job 3: Submitting Multiple Jobs at Once

    +

What do we need to do to submit several jobs simultaneously? In the first example, HTCondor returned three files: output, error, and log. If we want to submit several jobs, we need to track these three files for each job. An easy way to do this is to add the $(Cluster) and $(Process) macros to the HTCondor submit file. Since this can make our working directory really messy with a large number of jobs, let's tell HTCondor to put the files in a directory called log.

    +

    We will also include the $(Process) value as a second argument to our +script, which will cause it to give our output files unique names. If you want to +try it out, you can do so like this:

    +
    $ ./short_transfer.sh input.txt 12
    + +

    Incorporating all these ideas, +here's what the third submit file looks like, called tutorial03.submit:

    +
    # For this example, we'll specify unique filenames for each job by using
    +#  the job's 'Process' value.
    +executable = short_transfer.sh
    +arguments = input.txt $(Process)
    +
    +transfer_input_files = input.txt
    +
    +log = log/job.$(Cluster).log
    +error = log/job.$(Cluster).$(Process).error
    +output = log/job.$(Cluster).$(Process).output
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus = 1
    +request_memory = 1 GB
    +request_disk = 1 GB
    +
    +# Let's queue ten jobs with the above specifications
    +queue 10
    +
    + +

    Before submitting, we also need to make sure the log directory exists.

    +
    $ mkdir -p log
    + +

    You'll see something like the following upon submission:

    +
    $ condor_submit tutorial03.submit
    +Submitting job(s)..........
    +10 job(s) submitted to cluster 1444786.
    + +

    Look at the output files in the log directory and notice how each job received its own separate output file:

    +
    $ ls log
    +job.1444786.0.error    job.1444786.3.error    job.1444786.6.error    job.1444786.9.error
    +job.1444786.0.output  job.1444786.3.output  job.1444786.6.output  job.1444786.9.output
+job.1444786.1.error    job.1444786.4.error    job.1444786.7.error    job.1444786.log
    +job.1444786.1.output  job.1444786.4.output  job.1444786.7.output
    +job.1444786.2.error    job.1444786.5.error    job.1444786.8.error
    +job.1444786.2.output  job.1444786.5.output  job.1444786.8.output
    +
    + +

    Removing Jobs

    +

On occasion, jobs will need to be removed for a variety of reasons (incorrect parameters, errors in submission, etc.). In these instances, the condor_rm command can be used to remove an entire job submission or just particular jobs in a submission. The condor_rm command accepts a cluster id, a job id, or a username, and will remove an entire cluster of jobs, a single job, or all the jobs belonging to a given user, respectively. For example, if a job submission generates 100 jobs and is assigned a cluster id of 103, then condor_rm 103.0 will remove the first job in the cluster. Likewise, condor_rm 103 will remove all the jobs in the job submission, and condor_rm [username] will remove all jobs belonging to the user. The condor_rm documentation has more details on using condor_rm, including ways to remove jobs based on other constraints.

    +
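Using the hypothetical cluster id 103 from above and the example user alice from earlier in this tutorial, those commands look like:

$ condor_rm 103.0      # remove only the first job in cluster 103
$ condor_rm 103        # remove every job in cluster 103
$ condor_rm alice      # remove all jobs belonging to user alice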

    Getting Your Work Running

    +

    Now that you have some practice with running HTCondor jobs, consider +reviewing our Getting Started +Roadmap +to see what next steps will get your own computational work running on the OSPool.

\ No newline at end of file
diff --git a/htc_workloads/using_software/available-containers-list/index.html b/htc_workloads/using_software/available-containers-list/index.html
new file mode 100644
index 00000000..4039d38f
--- /dev/null
+++ b/htc_workloads/using_software/available-containers-list/index.html
@@ -0,0 +1,3210 @@

    Containers - Predefined List - OSPool Documentation

    Existing OSPool-Supported Containers

    +

This is a list of commonly used containers in the Open Science Pool. These can be used directly in your jobs or as base images if you want to define your own. Please see the pages on Apptainer containers and Docker containers for detailed instructions on how to use containers.

    +
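As a rough sketch only (the container guides linked above are the authoritative reference, and the script name here is hypothetical), a container from this list is typically selected in an HTCondor submit file by pointing at one of its CVMFS or OSDF locations:

# minimal sketch of a submit file using a listed container
container_image = /cvmfs/singularity.opensciencegrid.org/htc/rocky:9
executable = myscript.sh
queue 1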

    Base

    +
    +Debian 12 (htc/debian:12) +

    Debian 12 base image +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__debian__12.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/debian:12
    +
    Project Website
    +
    Container Definition

    +
    +
    +EL 7 (htc/centos:7) +

    Enterprise Linux (CentOS) 7 base image +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__centos__7.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/centos:7
    +
    Project Website
    +
    Container Definition

    +
    +
    +Rocky 8 (htc/rocky:8) +

    Rocky Linux 8 base image +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__8.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/rocky:8
    +
    Project Website
    +
    Container Definition

    +
    +
    +Rocky 8 / CUDA 11.0.3 (htc/rocky:8-cuda-11.0.3) +

    Rocky Linux 8 / CUDA 11.0.3 image +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__8-cuda-11.0.3.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/rocky:8-cuda-11.0.3
    +
    Project Website
    +
    Container Definition

    +
    +
    +Rocky 9 (htc/rocky:9) +

    Rocky Linux 9 base image +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__9.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/rocky:9
    +
    Project Website
    +
    Container Definition

    +
    +
+Rocky 9 / CUDA 12.6.0 (htc/rocky:9-cuda-12.6.0) +

    Rocky Linux 9 / CUDA 12.6.0 image +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__9-cuda-12.6.0.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/rocky:9-cuda-12.6.0
    +
    Project Website
    +
    Container Definition

    +
    +
    +Ubuntu 20.04 (htc/ubuntu:20.04) +

    Ubuntu 20.04 (Focal) base image +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__ubuntu__20.04.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/ubuntu:20.04
    +
    Project Website
    +
    Container Definition

    +
    +
    +Ubuntu 22.04 (htc/ubuntu:22.04) +

    Ubuntu 22.04 (Jammy Jellyfish) base image +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__ubuntu__22.04.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/ubuntu:22.04
    +
    Project Website
    +
    Container Definition

    +
    +
    +Ubuntu 24.04 (htc/ubuntu:24.04) +

Ubuntu 24.04 (Noble Numbat) base image +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__ubuntu__24.04.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/ubuntu:24.04
    +
    Project Website
    +
    Container Definition

    +
    +

    AI

    +
    +Tensorflow 2.15 (htc/tensorflow:2.15) +

    Tensorflow image from the Tensorflow project, with OSG additions +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__tensorflow__2.15.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/tensorflow:2.15
    +
    Project Website
    +
    Container Definition

    +
    +
    +scikit-learn:1.3.2 (htc/scikit-learn:1.3) +

    scikit-learn, configured for execution on OSG +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__scikit-learn__1.3.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/scikit-learn:1.3
    +
    Project Website
    +
    Container Definition

    +
    +

    Languages

    +
    +Julia (opensciencegrid/osgvo-julia) +

    Ubuntu based image with Julia +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__1.0.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__1.5.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__1.7.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:1.0.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:1.5.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:1.7.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +Julia (m8zeng/julia-packages) +

    Ubuntu based image with Julia +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/m8zeng__julia-packages__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/m8zeng/julia-packages:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +Matlab Runtime (opensciencegrid/osgvo-matlab-runtime) +

    This is the Matlab runtime component you can use to execute compiled Matlab codes +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2018b.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2019a.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2019b.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2020a.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2020b.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2021b.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2022b.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2023a.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2018b
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2019a
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2019b
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020a
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020b
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2021b
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2022b
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2023a
    +
    Project Website
    +
    Container Definition

    +
    +
    +Matlab Runtime (htc/matlab-runtime:R2023a) +

    This is the Matlab runtime component you can use to execute compiled Matlab codes +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__matlab-runtime__R2023a.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/matlab-runtime:R2023a
    +
    Project Website
    +
    Container Definition

    +
    +
    +R (opensciencegrid/osgvo-r) +

    Example for building R images +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-r__3.5.0.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-r__4.0.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-r__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:4.0.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +R (clkwisconsin/spacetimer) +

    Example for building R images +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/clkwisconsin__spacetimer__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/clkwisconsin/spacetimer:latest
    +
    Project Website
    +
    Container Definition

    +
    +

    Project

    +
    +XENONnT (opensciencegrid/osgvo-xenon) +

    Base software environment for XENONnT, including Python 3.6 and data management tools +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.11.06.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.11.25.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.12.21.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.12.23.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.01.04.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.01.06.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.01.11.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.04.18.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.05.04.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.06.25.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.07.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.08.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.08.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.5.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.5.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.6.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.12.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.12.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.12.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.01.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.01.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.01.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.5.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.04.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.04.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.04.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.05.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.05.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.5.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.6.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.07.27.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.09.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.11.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__add_latex.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__gpu.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__latex_test3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__py38.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__stable.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__straxen_0-13-1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__straxen_v100.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__switch_deployhq_user.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__upgrade-boost.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.11.06
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.11.25
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.12.21
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.12.23
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.01.04
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.01.06
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.01.11
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.04.18
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.05.04
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.06.25
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.07.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.08.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.08.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.4
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.5
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.4
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.5
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.6
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.12.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.12.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.12.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.01.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.01.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.01.4
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.4
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.4
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.5
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.04.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.04.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.04.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.05.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.05.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.4
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.5
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.6
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.07.27
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.09.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.11.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:add_latex
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:gpu
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:latex_test3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:py38
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:stable
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:straxen_0-13-1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:straxen_v100
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:switch_deployhq_user
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:upgrade-boost
    +
    Project Website
    +
    Container Definition

    +
    +
    +XENONnT (xenonnt/base-environment) +

    Base software environment for XENONnT, including Python 3.6 and data management tools +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.11.06.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.11.25.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.12.21.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.12.23.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.12.24.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.01.04.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.01.06.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.01.11.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.04.18.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.05.04.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.06.25.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.07.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.08.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.08.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.5.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.5.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.6.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.12.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.12.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.12.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.01.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.01.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.01.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.5.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.04.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.04.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.04.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.05.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.05.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.5.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.6.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.07.27.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.09.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.11.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__add_latex.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__gpu.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__latex_test3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__py38.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__stable.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__straxen_v100.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__switch_deployhq_user.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__testing.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__upgrade-boost.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.11.06
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.11.25
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.12.21
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.12.23
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.12.24
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.01.04
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.01.06
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.01.11
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.04.18
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.05.04
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.06.25
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.07.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.08.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.08.2
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.2
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.3
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.4
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.5
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.2
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.3
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.4
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.5
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.6
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.12.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.12.2
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.12.3
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.01.2
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.01.3
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.01.4
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.2
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.3
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.4
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.3
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.4
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.5
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.04.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.04.2
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.04.3
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.05.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.05.2
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.2
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.3
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.4
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.5
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.6
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.07.27
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.09.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.11.1
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:add_latex
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:gpu
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:latex_test3
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:py38
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:stable
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:straxen_v100
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:switch_deployhq_user
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:testing
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:upgrade-boost
    +
    Project Website
    +
    Container Definition

    +
    +
    +XENONnT (xenonnt/osg_dev) +

    Base software environment for XENONnT, including Python 3.6 and data management tools +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__osg_dev__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/xenonnt/osg_dev:latest
    +
    Project Website
    +
    Container Definition

    +
    +

    Tools

    +
+DeepLabCut 3.0.0rc4 (htc/deeplabcut:3.0.0rc4) +

    A software package for animal pose estimation +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__deeplabcut__3.0.0rc4.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/deeplabcut:3.0.0rc4
    +
    Project Website
    +
    Container Definition

    +
    +
    +FreeSurfer (opensciencegrid/osgvo-freesurfer) +

    A software package for the analysis and visualization of structural and functional neuroimaging data from cross-sectional or longitudinal studies +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__6.0.0.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__6.0.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__7.0.0.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__7.1.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:6.0.0
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:6.0.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:7.0.0
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:7.1.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +GROMACS (opensciencegrid/osgvo-gromacs) +

    A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs__2018.4.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs__2020.2.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs:2018.4
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs:2020.2
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +GROMACS GPU (opensciencegrid/osgvo-gromacs-gpu) +

    A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a GPU enabled version. +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs-gpu__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs-gpu:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +Gromacs 2023.4 (htc/gromacs:2023.4) +

    Gromacs 2023.4 for use on OSG +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__gromacs__2023.4.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/gromacs:2023.4
    +
    Project Website
    +
    Container Definition

    +
    +
    +Gromacs 2024.2 (htc/gromacs:2024.2) +

    Gromacs 2024.2 for use on OSG +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__gromacs__2024.2.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/gromacs:2024.2
    +
    Project Website
    +
    Container Definition

    +
    +
    +Minimal (htc/minimal:0) +

    Minimal image - used for testing +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__minimal__0.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/minimal:0
    +
    Project Website
    +
    Container Definition

    +
    +
    +PyTorch 2.3.1 (htc/pytorch:2.3.1-cuda11.8) +

    A rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, NLP and more. +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__pytorch__2.3.1-cuda11.8.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/htc/pytorch:2.3.1-cuda11.8
    +
    Project Website
    +
    Container Definition

    +
    +
    +Quantum Espresso (opensciencegrid/osgvo-quantum-espresso) +

    A suite for first-principles electronic-structure calculations and materials modeling +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-quantum-espresso__6.6.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-quantum-espresso__6.8.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-quantum-espresso:6.6
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-quantum-espresso:6.8
    +
    Project Website
    +
    Container Definition

    +
    +
    +RASPA2 (opensciencegrid/osgvo-raspa2) +

    General purpose classical simulation package. It can be used for the simulation of molecules in gases, fluids, zeolites, aluminosilicates, metal-organic frameworks, carbon nanotubes and external fields. +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-raspa2__2.0.41.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-raspa2:2.0.41
    +
    Project Website
    +
    Container Definition

    +
    +
    +TensorFlow (opensciencegrid/tensorflow) +

    TensorFlow image (CPU only) +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow__2.3.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +TensorFlow (rynge/tensorflow-cowsay) +

    TensorFlow image (CPU only) +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/rynge__tensorflow-cowsay__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/rynge/tensorflow-cowsay:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +TensorFlow (jiahe58/tensorflow) +

    TensorFlow image (CPU only) +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/jiahe58__tensorflow__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/jiahe58/tensorflow:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +TensorFlow GPU (opensciencegrid/tensorflow-gpu) +

    TensorFlow image with GPU support +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow-gpu__2.2-cuda-10.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow-gpu__2.3-cuda-10.1.sif
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow-gpu__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:2.2-cuda-10.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:2.3-cuda-10.1
    +/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +TensorFlow GPU (efajardo/astroflow) +

    TensorFlow image with GPU support +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/efajardo__astroflow__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/efajardo/astroflow:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +TensorFlow GPU (ssrujanaa/catsanddogs) +

    TensorFlow image with GPU support +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/ssrujanaa__catsanddogs__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/ssrujanaa/catsanddogs:latest
    +
    Project Website
    +
    Container Definition

    +
    +
    +TensorFlow GPU (weiphy/skopt) +

    TensorFlow image with GPU support +
    +
    +OSDF Locations:
    +osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/weiphy__skopt__latest.sif
    +CVMFS Locations:
    +/cvmfs/singularity.opensciencegrid.org/weiphy/skopt:latest
    +
    Project Website
    +
    Container Definition

    +
\ No newline at end of file
diff --git a/htc_workloads/using_software/compiling-applications/index.html b/htc_workloads/using_software/compiling-applications/index.html
new file mode 100644
index 00000000..c78c3bef
--- /dev/null
+++ b/htc_workloads/using_software/compiling-applications/index.html
@@ -0,0 +1,2644 @@

    Compiling Software - OSPool Documentation

    Compiling Software

    +

    Introduction

    +

Due to the distributed nature of the Open Science Pool, you will always need to ensure that your jobs have access to the software they will execute. You have two options for using code on the OSG: transferring the code files by themselves, or putting the code files into a container. Some software is already compiled and distributed as a ready-to-run executable for UNIX or Linux systems; software like that can be used directly on the OSPool. If your software depends on various library functions and does not have a make or install step, consider using containers; to learn more, please follow the instructions in our container guide. If your code is written in C or C++ and has build instructions that include make, this guide will help you. This guide also provides general information for compiling and using your software on the OSPool. A detailed example of a specific software compilation process is additionally available at Example Compilation Guide.

    +
    +

What is compiling? The process of compiling converts human-readable source code into binary, machine-readable code that will execute the steps of the program.

    +
    +

    Get software source code

    +

The first step to compiling your software is to locate and download the source code, being sure to select the version that you want. Source code will often be made available as a compressed tar archive, which will need to be extracted before compilation.

    +
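For example, a gzip-compressed tar archive (the file and directory names here are hypothetical) can usually be extracted and entered like this:

$ tar -xzf my-software-1.2.3.tar.gz
$ cd my-software-1.2.3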

    You should also carefully review the installation instructions provided by the +software developers. The installation instructions should include important +information regarding various options for configuring and performing the compilation. +Also carefully note any system dependencies (hardware, other software, and libraries) that are +required for your software.

    +

    Select the appropriate compiler and compilation options

    +

A compiler is a program that is used to perform source code compilation. The GNU Compiler Collection (GCC) is a common, open source collection of compilers with support for C, C++, Fortran, and other languages, and it includes important libraries for supporting your compilation and sometimes software execution. Your software compilation may require certain versions of a compiler, which should be noted in the installation instructions or system dependencies documentation. Currently the Access Points have GCC 8.5.0 as the default version, but newer versions of GCC may also be available; to learn more, please contact support@osg-htc.org.

    +
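You can check which GCC version is currently the default on your Access Point with:

$ gcc --version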

    Static versus dynamic linking during compilation

    +

Binary code often depends on additional information (i.e. instructions) from other software, known as libraries, for proper execution. The default behavior when compiling is for the final binary to be "dynamically linked" to the libraries that it depends on, such that when the binary is executed, it will look for these library files on the system where it is running. Thus, a copy of the appropriate library files will need to be available to your software wherever it runs. OSPool users can transfer a copy of the necessary libraries along with their jobs to manage such dependencies if they are not provided by the execute node where your jobs run.

    +

However, the option exists to "statically link" the library dependencies of your software. By statically linking libraries during compilation, the library code is packaged directly into your software binary, meaning the libraries will always be available to your software, which allows it to run on more execute nodes.

    +

    To statically link libraries during compilation, use the -static flag when running gcc, +use --enable-static when running a configure script, or set your LD_FLAGS +environment variable to --enable-static (e.g. export LD_FLAGS="--enable-static").

    +
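As a rough sketch (the source file name is illustrative), the difference can be seen with ldd, which lists the shared libraries a binary depends on:

$ gcc -o myprog myprog.c          # default: dynamically linked
$ ldd myprog                      # prints the shared libraries the binary needs
$ gcc -static -o myprog myprog.c  # statically linked
$ ldd myprog
        not a dynamic executable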

    Get access to libraries needed for your software

    +

As described above, your software may require additional software, known as libraries, for compilation and execution. For greatest portability of your software, we recommend installing the libraries needed for your software and transferring a copy of the libraries along with your subsequent jobs. When using libraries that you have installed yourself, you will likely need to add these libraries to your LIBRARY_PATH environment variable before compiling your software. There may also be additional environment variables that will need to be defined or modified for software compilation; this information should be provided in the installation instructions of your software. For any libraries added to LIBRARY_PATH before software compilation, you'll also need to add these same libraries to your LD_LIBRARY_PATH as a step in your job's executable bash script before executing your software.

    +
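For example, if you installed a library under a directory such as $HOME/my-libs (the path is illustrative), the environment might be set up like this - LIBRARY_PATH on the Access Point before compiling, and LD_LIBRARY_PATH inside the job's executable script at run time:

# on the Access Point, before compiling
export LIBRARY_PATH=$HOME/my-libs/lib:$LIBRARY_PATH

# in the job's executable bash script, before running the software
export LD_LIBRARY_PATH=$_CONDOR_SCRATCH_DIR/my-libs/lib:$LD_LIBRARY_PATH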

    Perform your compilation

    +

Software compilation is easiest to perform interactively, and OSPool users are welcome to compile software directly on their assigned Access Point. This will ensure that your application is built in an environment that is similar to the majority of the compute nodes on the OSG. Because OSG Access Points currently use an Enterprise Linux 9 operating system (such as Alma Linux, which is similar to the more general Red Hat Enterprise Linux, or RHEL, distribution), your software will generally only be compatible for execution on RHEL 9 or similar operating systems. You can use the requirements statement of your HTCondor submit file to direct your jobs to execute nodes with specific operating systems, for instance:

    +
    requirements = (OSGVO_OS_STRING == "RHEL 9")
    +
    +

    Software installation typically includes three steps: 1.) configuration, 2.) compilation, and 3.) +"installation" which places the compiled code in a specific location. In most cases, +these steps will be achieved with the following commands:

    +
    ./configure
    +make
    +make install
    +
    +

    Most software is written to install to a default location, however your OSG Access Point +account is not authorized to write to these default system locations. Instead, you will want to +create a folder for your software installation in your home directory and use an option in the +configuration step that will install the software to this folder:

    +
    ./configure --prefix=/home/username/path
    +
    +

    where username should be replaced with your OSG username and path replaced with the +path to the directory you created for your software installation.

    +

    Watch out for hardware feature detection

    +

    Some software builds might try to optimize the software for the particular host you are +building on. In general this is a good idea (optimized code will perform better), but be +aware that not all execution endpoints on OSG are the same. If your software picks up +hardware features such as AVX/AVX2, you might have to ensure the jobs are running on +hardware with those features. For example, if your software requires AVX2:

    +
    requirements = (OSGVO_OS_STRING == "RHEL 9") && (HAS_AVX2 == True)
    +
    +

Please see Control Where Your Jobs Run / Job Requirements for more details.

    +

    Use Your Software

    +

When submitting jobs, you will need to transfer a copy of your compiled software, along with any dynamically-linked dependencies that you also installed. Our Introduction to Data Management on OSG guide is a good starting point for more information on selecting the appropriate methods for transferring your software. Depending on your job workflow, it may be possible to directly specify your executable binary as the executable in your HTCondor submit file.

    +

When using your software in subsequent job submissions, be sure to add additional commands to the executable bash script to define environment variables, such as LD_LIBRARY_PATH, that may be needed to properly execute your software.

    +
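A minimal wrapper script, assuming the software was transferred with the job as a tar archive named myprog.tar.gz (all names here are illustrative), might look like:

#!/bin/bash
# run-myprog.sh - job executable (names are hypothetical)
# unpack the software transferred with the job
tar -xzf myprog.tar.gz
# make the binary and its libraries findable
export PATH=$_CONDOR_SCRATCH_DIR/myprog/bin:$PATH
export LD_LIBRARY_PATH=$_CONDOR_SCRATCH_DIR/myprog/lib:$LD_LIBRARY_PATH
# run the program
myprog input.dat > results.dat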

    Get Additional Assistance

    +

    If you have questions or need assistance, please contact support@osg-htc.org.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/using_software/containers-docker/index.html b/htc_workloads/using_software/containers-docker/index.html new file mode 100644 index 00000000..c73fb78a --- /dev/null +++ b/htc_workloads/using_software/containers-docker/index.html @@ -0,0 +1,2757 @@ + + + + + + + + + + + + + + + + + + Containers - Docker - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Containers - Docker

    +

The OSPool uses Apptainer/Singularity to execute containers. If you are building your own custom container, it is recommended that you use the Apptainer/Singularity image definition format. However, Docker images can also be used on the OSPool, and a Docker image is sometimes the more appropriate choice. For example:

    +
      +
    • There is an existing image on Docker Hub
    • +
    • You found a Dockerfile which meets your requirements
    • +
    • You have Docker installed on your own machine and want to + develop the code/image locally before using it on the OSPool
    • +
    +

    This guide contains examples on how to build your own Docker image, how +to convert a Docker image to Apptainer/Singularity, and how to import a +Docker image from the Docker Hub.

    +

    Building Your Own Docker Image

    +

    If you already have an existing Docker container image, skip +to Preparing Docker Containers for HTCondor Jobs. Otherwise, continue reading.

    +

    Identify Components

    +

    What software do you want to install? Make sure that you have either the source +code or a command that can be used to install it through Linux (like apt-get or +yum). You'll also need to choose a "base" container, on which to add your particular +software or tools.

    +

    Building

    +

    There are two main methods for generating your own container image.

    +
      +
    1. Editing the Dockerfile
    2. +
    3. Editing the default image using local Docker
    4. +
    +

    We recommend the first option, as it is more reproducible, but the second option +can be useful for troubleshooting or especially tricky installs.

    +

    Dockerfile

    +

    Create a folder on your computer and inside it, create a blank text file +called Dockerfile.

    +

    The first line of this file should include the keyword FROM and then +the name of a Docker image (from Docker Hub) you want +to use as your starting point. If using the OSG's Ubuntu 22.04 image that +would look like this:

    +
    FROM hub.opensciencegrid.org/htc/ubuntu:22.04
    +
    +

    Then, for each command you want to run to add libraries or software, use the +keyword RUN and then the command. Sometimes it makes sense to string +commands together using the && operator and line breaks \, like so:

    +
    RUN apt-get update -y && \
+    apt-get install -y build-essential
    +
    +

    or

    +
    RUN wget https://cran.r-project.org/src/base/R-3/R-3.6.0.tar.gz && \
    +    tar -xzf R-3.6.0.tar.gz && \
    +    cd R-3.6.0 && \
    +    ./configure && \
    +    make && \
    +    make install
    +
    +

Typically it's good to group together commands installing the same kind of thing (system libraries, software packages, or an installation process) under one RUN command, and then have multiple RUN commands, one for each of the different types of software or packages you're installing.

    +

    (For all the possible Dockerfile keywords, see the Docker Documentation)

    +
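Putting these pieces together, a complete minimal Dockerfile (the R version and URL below are just the examples used above; other software may need additional system packages) might look like:

FROM hub.opensciencegrid.org/htc/ubuntu:22.04

# system libraries and build tools
RUN apt-get update -y && \
    apt-get install -y build-essential wget

# install the application from source (version and URL are illustrative)
RUN wget https://cran.r-project.org/src/base/R-3/R-3.6.0.tar.gz && \
    tar -xzf R-3.6.0.tar.gz && \
    cd R-3.6.0 && \
    ./configure && \
    make && \
    make install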

    Once your Dockerfile is ready, you can "build" the container image by running this command:

    +
    $ docker build -t namespace/repository_name .
    +
    +

Note that the naming convention for Docker images is your Docker Hub username and then a name you choose for that particular container image (repository names must be lowercase). So if my Docker Hub username is alice and I created an image with the NCBI blast tool, I might use this name:

    +
$ docker build -t alice/ncbi-blast .
    +
    +

    Editing an Image Interactively

    +

    You can also build an image interactively, without a Dockerfile. First, get +the desired starting image from Docker Hub. Again, we will +look at the OSG Ubuntu 22.04 image.

    +
    $ docker pull hub.opensciencegrid.org/htc/ubuntu:22.04
    +
    +

    We will run the image in a docker interactive session

    +
    $ docker run -it --name <docker_session_name_here> hub.opensciencegrid.org/htc/ubuntu:22.04 /bin/bash
    +
    +

    Giving the session a name is important because it will make it easier to +reattach the session later and commit the changes later on. Now you will +be greeted by a new command line prompt that will look something like this

    +
    [root@740b9db736a1 /]#
    +
    +

    You can now install the software that you need through the default package +manager, in this case apt-get.

    +
[root@740b9db736a1 /]# apt-get update && apt-get install -y build-essential
    +
    +

    Once you have installed all the software, you simply exit

    +
    [root@740b9db736a1 /]# exit
    +
    +

    Now you can commit the changes to the image and give it a name:

    +
    docker commit <docker_session_name_here> namespace/repository_name
    +
    +

    You can also use the session's hash as found in the command prompt (740b9db736a1 +in the above example) in place of the docker session name.

    +

    Preparing Docker Containers for HTCondor Jobs

    +

Once you have a Docker container image, whether created by you or found on Docker Hub, you should convert it to the "sif" image format for the best experience on the OSPool.

    +

    Convert Docker containers on Docker Hub or online

    +

    If the Docker container you want to use is online, on a site like Docker Hub, you can +log in to your Access Point and run a single command to convert it to a .sif image:

    +
    $ apptainer build my-container.sif docker://owner/repository:tag
    +
    +

    Where the path at the end of the command is customized to be the container image +you want to use.

    +

    Convert Docker containers on your computer

    +

    If you have built a Docker image on your own host, you can save it as a +tar file and then convert it to an Apptainer/Singularity SIF image. First +find the image id:

    +
    $ docker image list
    +REPOSITORY              IMAGE ID
    +awesome/science         f1e7972c55bc
    +
    +

    Using the image id, save the image to a tar file:

    +
    $ docker save f1e7972c55bc -o my-container.tar
    +
    +

    Transfer my-container.tar to the OSPool access point, and use +Apptainer to convert it to a SIF image:

    +
    $ apptainer build my-container.sif docker-archive://my-container.tar
    +
    +

    Using Containers in HTCondor Jobs

    +

    After converting the Docker image to a sif format, you can use the +image in your job as described in the +Apptainer/Singularity Guide.

    +

    Special Cases

    +

    ENTRYPOINT and ENV

    +

    Two options that can be used in the Dockerfile to set the environment or +default command are ENTRYPOINT and ENV. Unfortunately, both of these +aspects of the Docker container are deleted when it is converted to a +Singularity image in the Open Science Pool.

    +

    Apptainer/Singularity Environment

    +

One approach for setting up the environment for an image which will be converted to Apptainer/Singularity is to put a file under /.singularity.d/env/. These files will be sourced when the container gets instantiated. For example, if you have a Conda environment, add this to the end of your Dockerfile:

    +
    # set up environment for when using the container, this is for when 
    +# we invoke the container with Apptainer/Singularity
    +RUN mkdir -p /.singularity.d/env && \
    +    echo ". /opt/conda/etc/profile.d/conda.sh" >>/.singularity.d/env/91-environment.sh && \
    +    echo "conda activate" >>/.singularity.d/env/91-environment.sh
    +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/using_software/containers-singularity/index.html b/htc_workloads/using_software/containers-singularity/index.html new file mode 100644 index 00000000..55588fed --- /dev/null +++ b/htc_workloads/using_software/containers-singularity/index.html @@ -0,0 +1,2730 @@ + + + + + + + + + + + + + + + + + + Containers - Apptainer/Singularity - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Containers - Apptainer/Singularity

    +

    This guide is meant to accompany the instructions for using containers +in the Open Science Pool. You can use your own custom container to run +jobs in the Open Science Pool. This guide describes how to create your +own Apptainer/Singularity container "image" (the blueprint for the container).

    +

    Do You Need to Build a Container?

    +

If there is an existing Docker container or Apptainer/Singularity container with the software you need, you can proceed with using these options to submit a job:
• See OSPool-provided containers here
• Using an existing Docker container
• Using an existing Apptainer/Singularity container

    +

    If you can't find a good option among existing containers, you may need to +build your own. See this section of the guide for more information.

    +

    OSG-Provided Apptainer/Singularity Images

    +

    The OSG Team maintains a set of images that are already in the OSG +Apptainer/Singularity repository. A list of ready-to-use containers can be found on this page.

    +

    If the software you need isn't already supported in a listed container, +you can create your own container or use any container image in Docker Hub.

    +

    How to explore these containers is shown below.

    +

    Building Your Own Apptainer/Singularity Container

    +

    Identify Components

    +

    What software do you want to install? Make sure that you have either the source +code or a command that can be used to install it through Linux (like apt-get or +yum).

    +

You'll also need to choose a "base" container, on which to add your particular software or tools. We recommend using one of the OSG's published containers as your starting point. See the available containers on Docker Hub here: OSG Docker Containers. The best candidates for you will be containers that have "osgvo" in the name.

    +

    Apptainer/Singularity Build

    +

    If you are building an image for the first time, the temporary cache directory of the apptainer image needs to be defined. The following commands define the cache location of the apptainer image to be built. Please run the commands in the terminal of your access point.

    +
$ mkdir $HOME/tmp
+$ export TMPDIR=$HOME/tmp
+$ export APPTAINER_TMPDIR=$HOME/tmp
+$ export APPTAINER_CACHEDIR=$HOME/tmp
    +
    +

To build a custom Apptainer/Singularity image, create a folder on your access point. Inside it, create a blank text file called image.def.

    +

The first lines of this file should include where to get the base image from. If using the OSG's Ubuntu 22.04 image, that would look like this:

    +
    Bootstrap: docker
    +From: hub.opensciencegrid.org/htc/ubuntu:22.04
    +
    +

    Then there is a section called %post where you put the additional +commands to make the image just like you need it. For example:

    +
    %post
    +
    +    # system packages
    +    apt-get update -y
    +    apt-get install -y \
+            build-essential \
+            cmake \
+            g++ \
+            wget
    +
    +    # install miniconda
    +    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    +    bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda
    +    rm Miniconda3-latest-Linux-x86_64.sh
    +
    +    # install conda components - add the packages you need here
    +    . /opt/conda/etc/profile.d/conda.sh
    +    conda create -y -n "myenv" python=3.9
    +    conda activate myenv
    +    conda update --all
    +    conda install -y -n "myenv" -c conda-forge pytorch
    +
    +

    Another good section to include is %environment. This is executed before +your job and lets the container configure the environment. Example:

    +
    %environment
    +
    +    # set up environment for when using the container
    +    . /opt/conda/etc/profile.d/conda.sh
    +    conda activate myenv
    +
    +

    See the Apptainer documentation +for a full reference on how to specify build specs. Note that the %runscript +section is ignored when the container is executed on OSG.

    +

    The final image.def looks like:

    +
    Bootstrap: docker
    +From: hub.opensciencegrid.org/htc/ubuntu:22.04
    +
    +%post
    +
    +    # system packages
    +    apt-get update -y
    +    apt-get install -y \
    +            build-essential \
    +            wget
    +
    +    # install miniconda
    +    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    +    bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda
    +    rm Miniconda3-latest-Linux-x86_64.sh
    +
    +    # install conda components - add the packages you need here
    +    . /opt/conda/etc/profile.d/conda.sh
    +    conda create -y -n "myenv" python=3.9
    +    conda activate myenv
    +    conda update --all
    +    conda install -y -n "myenv" -c conda-forge pytorch
    +
    +%environment
    +
    +    # set up environment for when using the container
    +    . /opt/conda/etc/profile.d/conda.sh
    +    conda activate myenv
    +
    +

    Once your build spec is ready, you can "build" the container image by running this command:

    +
    $ apptainer build my-container.sif image.def
    +
    +

    Once the image is built, test it on an OSG-managed access point, +and use it in your HTCondor jobs.

    +

    Exploring Apptainer/Singularity Images on the Access Points

    +

    Just like it is important to test your codes and jobs at a small scale, +you should make sure that your Apptainer/Singularity container is working correctly before using it in jobs. One way +to test your container image on our system is to test it on +an OSG-managed access point.

    +

    To do so, first log in to your assigned access point. Start an interactive session with the +Apptainer/Singularity "shell" mode. The recommended command line, similar +to how containers are started for jobs, is:

    +
    apptainer shell my-container.sif
    +
    +

    If you want to test an existing container produced by OSG Staff, use the +full path provided in this guide.

    +

This example will give you an interactive shell. You can explore the container and test your code with your own inputs from your /home directory, which is automatically mounted (but note - $HOME will not be available to your jobs later). Once you are done exploring, exit the container by running exit or with CTRL+D.

    +
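For example, a short interactive check inside the container (the commands shown are just illustrative and depend on what your image contains) might look like:

$ apptainer shell my-container.sif
Apptainer> python3 --version
Apptainer> exit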

    Using Singularity or Apptainer Images in an HTCondor Job

    +

    Once you have a ".sif" container image file with all your needed software, +you can use this file as part of an HTCondor job.

    +

    Upload the Container Image to the OSDF

    +

The image will be reused for each job, and thus the preferred transfer method is the OSDF. Store the .sif file under your personal data area on your access point (see table here).

    +
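On an OSG-managed Access Point this is typically just a copy into your OSDF data directory (replace apXX and USERNAME with the values for your account):

$ cp my-container.sif /ospool/apXX/data/USERNAME/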

    Use the Container in an HTCondor Job

    +

    Once the image is placed in your OSDF space, you can use an OSDF +url directly in the +SingularityImage attribute. Note that you can not +use shell variable expansion in the submit file - be sure to replace the +username with your actual OSPool username. Example:

    +
    +SingularityImage = "osdf:///ospool/apXX/data/USERNAME/my-custom-image-v1.sif"
    +
    +<other usual submit file lines>
    +queue
    +
    +

Be aware that the OSDF aggressively caches the image based on file naming. If you need to make quick changes, please version the .sif file name so that the caches see a "new" name. In this example, replacing my-custom-image-v1.sif with new content will probably mean that some nodes get the old version and some nodes the new version. Prevent this by creating a new file with v2 in the name.

    +

    Common Issues

    +
    +FATAL: kernel too old +
+If you get a *FATAL: kernel too old* error, it means that the glibc version in the image is too new for the kernel on the host. You can work around this problem by specifying the minimum host kernel. For example, if you want to run the Ubuntu 18.04 image, specify a minimum host kernel of 3.10.0, formatted as 31000 (major * 10000 + minor * 100 + patch):
    + Requirements = HAS_SINGULARITY == True && OSG_HOST_KERNEL_VERSION >= 31000 +
    +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/using_software/example-compilation/index.html b/htc_workloads/using_software/example-compilation/index.html new file mode 100644 index 00000000..db391bfd --- /dev/null +++ b/htc_workloads/using_software/example-compilation/index.html @@ -0,0 +1,3039 @@ + + + + + + + + + + + + + + + + + + Example Software Compilation - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Example of Compiling Software For Use on the OSPool

    +

    Introduction

    +

    This guide provides a detailed example of compiling software for use from an +OSG Access Point. For this example, we will be compiling Samtools which is a very +common bioinformatics software for working with aligned sequencing data. We hope that +this specific example helps illustrate the general compilation steps that +can be applied to many other software compilations. For a general introduction to software +compilation, please see our Compiling Software guide.

    +

    Two Examples

    +

This guide provides two examples of compiling Samtools, one without CRAM file support and one with CRAM file support. Why two examples? Currently, installing Samtools with CRAM support requires additional dependencies (aka libraries) that will also need to be installed, and most Samtools users work only with BAM files, which do not require CRAM support.

    +

    Do I need CRAM support for my work? CRAM is an alternative compressed sequence alignment +file format to BAM. Learn more at https://www.sanger.ac.uk/tool/cram/.

    +

    Compile Samtools Without CRAM Support

    +

    Step 1. Acquire Samtools source code

    +

    Samtools source code is available at http://www.htslib.org/download/. The +development code is also available via GitHub at https://github.com/samtools/samtools. On the download page is some important information to make note of:

    +
    +

    "[Samtools] uses HTSlib internally [and] these source packages contain their own copies of htslib"

    +
    +

    What this means is 1.) HTSlib is a dependency of Samtools and 2.) the HTSlib source code is included +with the Samtools source code.

    +

Either download the Samtools source code to your computer and upload it to your login node, or right-click on the Samtools source code link and copy the link location. Log in to your OSG Access Point and use wget to download the source code directly, then extract the tarball:

    +
    [user@apXX ~]$ wget https://github.com/samtools/samtools/releases/download/1.10/samtools-1.10.tar.bz2
    +[user@apXX ~]$ tar -xjf samtools-1.10.tar.bz2
    +
    +

    The above two commands will create a directory named samtools-1.10 which contains all the code +and instructions needed for compiling Samtools and HTSlib. Take a moment to look at the content available +in this new directory.

    +

    Step 2. Read through installation instructions

    +

    What steps need to be performed for our compilation? What system dependencies exist for our +software? Answers to these questions, and other important information, should be available +in the installation instructions for your software which will be available online and/or +included in the source code.

    +

    The HTSlib website where the Samtools source code is hosted provides basic installation instructions +and refers users to INSTALL (which is a plain text file that can be found in samtools-1.10/) for +more information. You will also see a README file in the source code directory which will provide +important information. README files will always be included with your source code and we +recommend reviewing before compiling software. There is also a README and INSTALL file available +for HTSlib in the source code directory samtools-1.10/htslib-1.10/.

    +

    cd to samtools-1.10 and read through README and INSTALL. As described in INSTALL, +the Samtools installation will follow the common configure, make, make install process:

    +
    Basic Installation
    +==================
    +
    +To build and install Samtools, 'cd' to the samtools-1.x directory containing
    +the package's source and type the following commands:
    +
    +    ./configure
    +    make
    +    make install
    +
    +The './configure' command checks your build environment and allows various
    +optional functionality to be enabled (see Configuration below).
    +
    +

    Also described in INSTALL are a number of required and optional system dependencies +for installing Samtools and HTSlib (which is itself a dependency of Samtools):

    +
    System Requirements
    +===================
    +
    +Samtools and HTSlib depend on the following libraries:
    +
    +  Samtools:
    +    zlib       <http://zlib.net>
    +    curses or GNU ncurses (optional, for the 'tview' command)
    +               <http://www.gnu.org/software/ncurses/>
    +
    +  HTSlib:
    +    zlib       <http://zlib.net>
    +    libbz2     <http://bzip.org/>
    +    liblzma    <http://tukaani.org/xz/>
    +    libcurl    <https://curl.haxx.se/>
    +               (optional but strongly recommended, for network access)
    +    libcrypto  <https://www.openssl.org/>
    +               (optional, for Amazon S3 support; not needed on MacOS)
    +
    +...
    +
    +The bzip2 and liblzma dependencies can be removed if full CRAM support
    +is not needed - see HTSlib's INSTALL file for details.
    +
    +

    Some dependencies are needed to support certain features from Samtools (such as tview and +CRAM compression). You will not need tview as this is intended for interactive work which is +not currently supported from the OSG Access Points. For this specific compilation example, we will disable +both tview and CRAM support - see below for our +compilation example that will provide CRAM file support.

    +

    Following the suggestion in the Samtools INSTALL file, we can view the HTSlib INSTALL +file at samtools-1.10/htslib-1.10/INSTALL. Here we will find the necessary +information for disabling bzip2 and liblzma dependencies:

    +
    --disable-bz2
    +    Bzip2 is an optional compression codec format for CRAM, included
    +    in HTSlib by default.  It can be disabled with --disable-bz2, but
    +    be aware that not all CRAM files may be possible to decode.
    +
    +--disable-lzma
    +    LZMA is an optional compression codec for CRAM, included in HTSlib
    +    by default.  It can be disabled with --disable-lzma, but be aware
    +    that not all CRAM files may be possible to decode.
    +
    +

    These are two flags that will need to be used when performing our installation.

    +

    To determine what libraries are available on our OSG Access Point, we can look at /usr/lib +and /usr/lib64 for the various Samtools library dependencies, for example:

    +
    [user@apXX ~]$ ls /usr/lib* | grep libcurl
    +[user@apXX ~]$ ls /usr/lib* | grep htslib
    +
    +

Although we will find matches for libcurl, we will not find any htslib files, meaning that HTSlib is not currently installed on the login node, nor is it currently available as a module. This means that HTSlib will also need to be compiled. Luckily, the Samtools developers have conveniently included the HTSlib source code with the Samtools source code and have made it possible to compile both Samtools and HTSlib at the same time. From the Samtools INSTALL file:

    +
        By default, configure looks for an HTSlib source tree within or alongside
    +    the samtools source directory; if there are several likely candidates,
    +    you will have to choose one via this option.
    +
    +

This means that we don't have to do anything extra to get HTSlib installed because the Samtools installation will do it by default.

    +
    +

When performing your compilation, if your compiler is unable to locate the necessary libraries, or if newer versions of libraries are needed, it will result in an error - this makes trial and error an alternative method of determining whether your system has the appropriate libraries for your software, and it is in fact a common approach. However, taking a little bit of time beforehand to look for library files can save you time and frustration during software compilation.

    +
    +

    Step 3. Perform Samtools compilation

    +

    We now have all of the information needed to start our compilation of Samtools without CRAM support.

    +

    First, we will create a new directory in our home directory that will store the +Samtools compiled software. The example here will use a directory, called my-software, +for organizing all compiled software in the home directory:

    +
    [user@apXX ~]$ mkdir $HOME/my-software
    +[user@apXX ~]$ mkdir $HOME/my-software/samtools-1.10
    +
    +
    +

    As a best practice, always include the version name of your software in the directory name.

    +
    +

    Next we'll change to the Samtools source code directory that was created in +Step 1. You should see the INSTALL and README files +as well as a file called configure.

    +

    The first command we will run is ./configure - this step will execute the configure script +and allows us to modify various details about our Samtools installation. We will be executing configure +with several flags:

    +
    [user@apXX samtools-1.10]$ ./configure --prefix=$HOME/my-software/samtools-1.10 --disable-bz2 --disable-lzma --without-curses
    +
    +

Here we used --prefix to specify where we would like the final Samtools software to be installed, --disable-bz2 and --disable-lzma to disable the bzip2 and lzma dependencies for CRAM, and --without-curses to disable tview support.

    +

    Next run the final two commands:

    +
    [user@apXX samtools-1.10]$ make
    +[user@apXX samtools-1.10]$ make install
    +
    +

    Once make install has finished running, the compilation is complete. We can +also confirm this by looking at the content of ~/my-software/samtools-1.10/ where +we had Samtools installed:

    +
    [user@apXX samtools-1.10]$ cd ~
    +[user@apXX ~]$ ls -F my-software/samtools-1.10/
    +bin/ share/
    +
    +

    There will be two directories present in my-software/samtools-1.10, one named bin and +another named share. The Samtools executable will be located in bin and we can give +it a quick test to make sure it runs as expected:

    +
    [user@apXX ~]$ ./my-software/samtools-1.10/bin/samtools view
    +
    +

    which will return the Samtools view usage statement.

    +

    Step 4. Make our software portable

    +

    Our subsequent job submissions on the OSPool will need a copy of our software. For +convenience, we recommend converting your software directory to a tar archive. +First move to my-software/, then create the tar archive:

    +
    [user@apXX ~]$ cd my-software/
    +[user@apXX my-software]$ tar -czf samtools-1.10.tar.gz samtools-1.10/
    +[user@apXX my-software]$ ls samtools-1.10*
    +samtools-1.10/ samtools-1.10.tar.gz
    +[user@apXX my-software]$ du -h samtools-1.10.tar.gz
    +2.0M    samtools-1.10.tar.gz
    +
    +

The last command in the above example returns the size of our tar archive. This is important for determining the appropriate method that we should use for transferring this file along with our subsequent jobs. To learn more, please see Overview: Data Staging and Transfer to Jobs.

    +

    To clean up and clear out space in your home directory, we recommend deleting the Samtools source +code directory.

    +

    Step 5. Use Samtools in our jobs

    +

    Now that Samtools has been compiled we can submit jobs that use this software. Below is an example submit file +for a job that will use Samtools with a BAM file named my-sample.bam which is <100MB in size:

    +
    #samtools.sub
    +log = samtools.$(Cluster).log
    +error = samtools.$(Cluster)_$(Process).err
    +output = samtools.$(Cluster)_$(Process).out
    +
    +executable = samtools.sh
    +
    +transfer_input_files = /home/username/my-software/samtools-1.10.tar.gz, my-sample.bam
    +
    +should_transfer_files = YES
    +when_to_transfer_output = ON_EXIT
    +
    ++JobDurationCategory = "Medium"
    +
    +requirements = (OSGVO_OS_STRING == "RHEL 9")
    +request_memory = 1.3GB
    +request_disk = 1.5GB
    +request_cpus = 1
    +
    +queue 1
    +
    +

The above submit file will transfer a complete copy of the Samtools tar archive created in Step 4 and also includes an important requirements attribute which tells HTCondor to run our job on execute nodes running the Red Hat Enterprise Linux 9 operating system.

    +
    +

    The resource requests for your jobs may differ from what is shown in the +above example. Always run tests to determine the appropriate requests for your jobs.

    +
    +

Some additional steps are then needed in the executable bash script used by this job to "untar" the Samtools archive and add this software to the PATH environment variable:

    +
    #!/bin/bash
    +# samtools.sh
    +
    +# untar software
    +tar -xzf samtools-1.10.tar.gz
    +
    +# modify environment variables 
    +export PATH=$_CONDOR_SCRATCH_DIR/samtools-1.10/bin:$PATH
    +
    +# run samtools commands
    +...
    +
    +

    Compile Samtools With CRAM Support

    +

    This example includes steps to install and use a library and to use a module, +which are both currently needed for compiling Samtools with CRAM support.

    +

    The steps in this example assume that you have performed +Step 1 and +Step 2 in the above example for +compiling Samtools without CRAM support.

    +

    Step 2. Read through installation instructions, continued

    +

From both the Samtools and HTSlib INSTALL files, we know that both bzip2 and liblzma are required for CRAM support. We can check our system for these libraries:

    +
    [user@apXX ~]$ ls /usr/lib* | grep libz
    +[user@apXX ~]$ ls /usr/lib* | grep libbz2
    +
    +

which will reveal that both sets of libraries are available on the login node. However, if we were to attempt the Samtools installation with CRAM support right now, we would find that it results in an error during the configure step.

    +

    If the libraries are present, why do we get this error? This error is due to +differences between types of library files. For example, running +ls /usr/lib* | grep libbz2 will return two matches, libbz2.so.1 and libbz2.so.1.0.6. +But running ls /usr/lib* | grep liblz will return four matches including three +.so and one .a files. Our Samtools compilation specifically requires +the .a type of library file for both libbz2 and liblzma and the absence of +this type of library file in /usr/lib64 is why compilation +will fail without additional steps.

    +

    Step 3. Compile liblzma

    +

Compiling Samtools with CRAM support requires that we first compile liblzma. Following the same approach as we did for Samtools, we first acquire a copy of the latest liblzma source code, then review the installation instructions. From an online search we will find that liblzma is available from the XZ Utils library package.

    +
    [user@apXX ~]$ wget https://tukaani.org/xz/xz-5.2.5.tar.gz
    +[user@apXX ~]$ tar -xzf xz-5.2.5.tar.gz
    +
    +

    Then review the installation instructions and check for dependencies. Everything +that is needed for the default installation of XZ utils is currently available on the login node.

    +
    [user@apXX ~]$ cd xz-5.2.5/
    +[user@apXX xz-5.2.5]$ less INSTALL
    +
    +

    Perform the XZ Utils compilation:

    +
    [user@apXX xz-5.2.5]$ mkdir $HOME/my-software/xz-5.2.5
    +[user@apXX xz-5.2.5]$ ./configure --prefix=$HOME/my-software/xz-5.2.5
    +[user@apXX xz-5.2.5]$ make
    +[user@apXX xz-5.2.5]$ make install
    +[user@apXX xz-5.2.5]$ ls -F $HOME/my-software/xz-5.2.5
    +/bin  /include  /lib  /share
    +
    +

    Success!

    +

    Lastly we need to set some environment variables so that Samtools knows where to +find this library:

    +
    [user@apXX xz-5.2.5]$ export PATH=$HOME/my-software/xz-5.2.5/bin:$PATH
    +[user@apXX xz-5.2.5]$ export LIBRARY_PATH=$HOME/my-software/xz-5.2.5/lib:$LIBRARY_PATH
    +[user@apXX xz-5.2.5]$ export LD_LIBRARY_PATH=$LIBRARY_PATH
    +
    +

    Step 4. Load bzip2 module

    +

    After installing XZ Utils and setting our environment variable, next we will +load the bzip2 module:

    +
    [user@apXX xz-5.2.5]$ module load bzip2/1.0.6
    +
    +

    Loading this module will further modify some of your environment variables +so that Samtools is able to locate the bzip2 library files.

    +

    Step 5. Compile Samtools

    +

    After compiling XZ Utils (which provides liblzma) and loading the bzip2 1.0.6 module, +we are now ready to compile Samtools with CRAM support.

    +

    First, we will create a new directory in our home directory that will store the +Samtools compiled software. The example here will use a common directory, called my-software, +for organizing all compiled software in the home directory:

    +
    [user@apXX ~]$ mkdir $HOME/my-software
    +[user@apXX ~]$ mkdir $HOME/my-software/samtools-1.10
    +
    +
    +

    As a best practice, always include the version name of your software in the directory name.

    +
    +

    Next, we will change our directory to the Samtools source code directory that was created in +Step 1. You should see the INSTALL and README files +as well as a file called configure.

    +

    The first command we will run is ./configure - this file is a script that allows us +to modify various details about our Samtools installation and we will be executing configure +with a flag that disables tview:

    +
    [user@apXX samtools-1.10]$ ./configure --prefix=$HOME/my-software/samtools-1.10 --without-curses
    +
    +

    Here we used --prefix to specify where we would like the final Samtools software +to be installed and --without-curses to disable tview support.

    +

    Next run the final two commands:

    +
    [user@apXX samtools-1.10]$ make
    +[user@apXX samtools-1.10]$ make install
    +
    +

    Once make install has finished running, the compilation is complete. We can +also confirm this by looking at the content of ~/my-software/samtools-1.10/ where +we had Samtools installed:

    +
    [user@apXX samtools-1.10]$ cd ~
    +[user@apXX ~]$ ls -F my-software/samtools-1.10/
    +bin/ share/
    +
    +

    There will be two directories present in my-software/samtools-1.10, one named bin and +another named share. The Samtools executable will be located in bin and we can give +it a quick test to make sure it runs as expected:

    +
    [user@apXX ~]$ ./my-software/samtools-1.10/bin/samtools view
    +
    +

    which will return the Samtools view usage statement.

    +

    Step 6. Make our software portable

    +

    Our subsequent job submissions on the OSPool will need a copy of our software. For +convenience, we recommend converting your software directory to a tar archive. +First move to my-software/, then create the tar archive:

    +
    [user@apXX ~]$ cd my-software/
    +[user@apXX my-software]$ tar -czf samtools-1.10.tar.gz samtools-1.10/
    +[user@apXX my-software]$ ls samtools-1.10*
    +samtools-1.10/ samtools-1.10.tar.gz
    +[user@apXX my-software]$ du -h samtools-1.10.tar.gz
    +2.0M    samtools-1.10.tar.gz
    +
    +

The last command in the above example returns the size of our tar archive. This is important for determining the appropriate method that we should use for transferring this file along with our subsequent jobs. To learn more, please see Introduction to Data Management on OSG.

    +

Follow these same steps to create a tar archive of the xz-5.2.5 library as well.

    +

    To clean up and clear out space in your home directory, we recommend deleting the Samtools source +code directory.

    +

    Step 7. Use Samtools in our jobs

    +

Now that Samtools has been compiled, we can submit jobs that use this software. For Samtools with CRAM support we will also need to bring along a copy of XZ Utils (which includes the liblzma library) and ensure that our jobs have access to the bzip2 1.0.6 module. Below is an example submit file for a job that will use Samtools with a FASTA file named genome.fa and a CRAM file named my-sample.cram which is <100MB in size:

    +
    #samtools-cram.sub
    +log = samtools-cram.$(Cluster).log
    +error = samtools-cram.$(Cluster)_$(Process).err
    +output = samtools-cram.$(Cluster)_$(Process).out
    +
    +executable = samtools-cram.sh
    +
    +transfer_input_files = /home/username/my-software/samtools-1.10.tar.gz, /home/username/my-software/xz-5.2.5.tar.gz, genome.fa, my-sample.cram
    +
    +should_transfer_files = YES
    +when_to_transfer_output = ON_EXIT
    +
    ++JobDurationCategory = "Medium"
    +
    +requirements = (OSGVO_OS_STRING == "RHEL 9")
    +request_memory = 1.3GB
    +request_disk = 1.5GB
    +request_cpus = 1
    +
    +queue 1
    +
    +

The above submit file will transfer a complete copy of the Samtools tar archive created in Step 6 as well as a copy of the XZ Utils installation from Step 3. This submit file also includes an important requirements attribute which tells HTCondor to run our job on execute nodes running the Red Hat Enterprise Linux 9 operating system.

    +
    +

    The resource requests for your jobs may differ from what is shown in the +above example. Always run tests to determine the appropriate requests for your jobs.

    +
    +

Some additional steps are then needed in the executable bash script used by this job to "untar" the Samtools and XZ Utils tar archives, modify the PATH and LD_LIBRARY_PATH environment variables of our job, and load the bzip2 module:

    +
    #!/bin/bash
    +# samtools-cram.sh
    +
    +# untar software and libraries
    +tar -xzf samtools-1.10.tar.gz
    +tar -xzf xz-5.2.5.tar.gz
    +
    +# modify environment variables 
    +export LD_LIBRARY_PATH=$_CONDOR_SCRATCH_DIR/xz-5.2.5/lib:$LD_LIBRARY_PATH
    +export PATH=$_CONDOR_SCRATCH_DIR/samtools-1.10/bin:$_CONDOR_SCRATCH_DIR/xz-5.2.5/bin:$PATH
    +
    +# load bzip2 module
    +module load bzip2/1.0.6
    +
    +# run samtools commands
    +...
    +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/using_software/software-overview/index.html b/htc_workloads/using_software/software-overview/index.html new file mode 100644 index 00000000..46df15c1 --- /dev/null +++ b/htc_workloads/using_software/software-overview/index.html @@ -0,0 +1,2550 @@ + + + + + + + + + + + + + + + + + + Overview: Software on the Open Science Pool - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Using Software on the Open Science Pool

    +

    Overview of Software Options

    +

There are several options available for managing the software needs of your work within the Open Science Pool (OSPool). In most cases, it will be advantageous for you to install the software needed for your jobs yourself. This not only gives you the greatest control over your computing environment, but will also make your jobs more distributable, allowing you to run jobs at more locations.
• The OSPool can support most popular, open source software that fits the distributed high throughput computing model.
• We do not have or support most commercial software due to licensing issues.

    +

Here we review these options, and provide links to additional information, for using software installed by users, software available as precompiled binaries, and software provided via containers.

    +

    More details and instructions on installing software from source code, precompiled binaries/prebuilt executables, and on creating and using containers can be found on the OSPool documentation website, under the "Software" section.

    +

    Use Precompiled Binaries and Prebuilt Executables

    +

Some software may be available as a precompiled binary or prebuilt executable, which provides a quick and easy way to run a program without the need for installation from source code. Binaries and executables are software files that are ready to run as is; however, binaries should always be tested beforehand. There are several important considerations for using precompiled binaries on the OSPool:

    +

1) only binary files compiled against a Linux operating system are suitable for use on the OSPool (a quick check is sketched below),
2) some software packages have system and hardware dependencies that must be met in order to run properly, and
3) the available binaries may not have been compiled with the features or configuration needed for your work.

    +
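As a quick sanity check of the first point, the file command will tell you whether a downloaded executable is a Linux x86_64 binary (the tool name and URL below are purely illustrative):

$ wget https://example.com/mytool-linux-x86_64.tar.gz
$ tar -xzf mytool-linux-x86_64.tar.gz
$ file mytool/bin/mytool
mytool/bin/mytool: ELF 64-bit LSB executable, x86-64 ...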

    Install Software from Source Code

    +

    When installing software from source code on an OSPool Access Point, your software will be specifically compiled against +the Red Hat Enterprise Linux (RHEL) 9 operating system used on these nodes. In most cases, subsequent +jobs that use this software will also need to run on a RHEL 9 OS, which can be specified by the +requirements attribute of your HTCondor submit files as described in the guide linked above.

    +

    Use Docker and Apptainer Containers

    +

Container systems provide users with customizable and reproducible computing and software environments. The Open Science Pool is compatible with both Apptainer and Docker containers - the latter will be converted to an Apptainer image and added to the OSG container image repository.

    +

    For more information about Docker, please see:

    + +

    and Apptainer/Singularity, please see:

    + +

Apptainer/Singularity has become the preferred containerization method in scientific computing. This talk is an example of how containers are used in scientific computing.

    +

    Users can choose from a set of pre-defined containers already available within OSG, +or can use published or custom made containers.

    +

For jobs submitted to the OSPool, it does not matter whether you provide a Docker or Apptainer/Singularity image. Either is compatible with our system and can be used with little to no modification. Determining factors on when to use Apptainer/Singularity images over Docker images include whether an image already exists and whether you have experience building images in one format and not the other.

    +

    When using a container for your jobs, the container image is +automatically started up when HTCondor matches your job to a slot. The +executable provided in the submit script will be run within the context +of the container image, having access to software and libraries that +were installed to the image, as if they were already on the server where +the job is running. Job executables do not need to run any +commands to start the container.

    +
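As a sketch, a submit file that uses a custom Apptainer/Singularity image stored in the OSDF might contain lines like the following (the image path is illustrative; see the container guides for details):

+SingularityImage = "osdf:///ospool/apXX/data/USERNAME/my-image.sif"

executable = my-script.sh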

    Request Help with Installing Software

    +

    If you believe none of the options described above are applicable for your software, send an email to +support@osg-htc.org that describes: +1. the software name, version, and/or website with download and install instructions +2. what science each job does, using the software +3. what you've tried so far (if anything), and what indications of issues you've experienced

    +

    We will do our best to help you create a portable installation.

    +

    Additional Resources

    +

    Watch this video from the 2021 OSG Virtual School for more information about using software on OSG:

    +

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/using_software/software-request/index.html b/htc_workloads/using_software/software-request/index.html new file mode 100644 index 00000000..91f03bd5 --- /dev/null +++ b/htc_workloads/using_software/software-request/index.html @@ -0,0 +1,2332 @@ + + + + + + + + + + + + + + + + + + Software request - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Request Help with Your Software

    +

A large number of software packages can be used by compiling a portable installation or using a container (many community software packages are already available in authoritative containers). If you believe none of these options (described here) are applicable for your software, please get in touch with a simple email to support@osg-htc.org that describes:
1. the software name, version, and/or website with download and install instructions
2. what science each job does, using the software
3. what you've tried so far (if anything), and what indications of issues you've experienced

    +

    As long as this code is:

    +
      +
    1. available to the public in source form (e.g. open source)
    2. +
    3. licensed to all users, and does not require a license key
    4. +
5. is not better supported by another approach (other approaches, when available, are usually preferable)
    6. +
    +

    we should be able to help you create a portable installation with the 'right' solution.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/workload_planning/htcondor_job_submission/index.html b/htc_workloads/workload_planning/htcondor_job_submission/index.html new file mode 100644 index 00000000..3e4ad5e0 --- /dev/null +++ b/htc_workloads/workload_planning/htcondor_job_submission/index.html @@ -0,0 +1,2765 @@ + + + + + + + + + + + + + + + + + + Overview: Submit Jobs to the OSPool using HTCondor - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Overview: Submit Jobs to the OSPool using HTCondor

    +

    Purpose

    +

    This guide discusses the mechanics of creating and submitting jobs to the OSPool using HTCondor.

    +

    OSPool Workflow Overview

    +

The process of running computational workflows on OSG resources follows this general outline:

    +

    +

    Terminology:

    +
      +
• Access point is where you log in and stage your data, executables/scripts, and software to use in jobs.
    • +
    • HTCondor is a job scheduling software that will run your jobs out on the OSPool execution points. All jobs must be submitted to HTCondor to run out on the OSPool.
    • +
    • The Open Science Pool (OSPool) is the set of resources your job runs on. It is composed of execution points, as well as other technologies, that compose the cpus, memory, and disk space that will run the computations of your jobs.
    • +
    +

    Run Jobs on the OSPool using HTCondor

    +

We are going to run the traditional 'hello world' program with an OSPool twist. In order to demonstrate the distributed resource nature of the OSPool HTC system, we will produce a 'Hello OSPool' message 3 times, where each message is produced within its own 'job'. Since you will not run the execution commands yourself (HTCondor will do it for you), you need to tell HTCondor how to run the jobs for you in the form of a submit file, which describes the set of jobs.

    +
    +

    Note: You must be logged into an OSPool Access Point for the following example to work.

    +
    +

    1. Prepare an executable

    +

    First, create the executable script you would like HTCondor to run. For our example, copy the text below and paste it into a file called hello-ospool.sh (we recommend using a command line text editor) in your home directory.

    +
    #!/bin/bash
    +#
    +# hello-ospool.sh
    +# My very first OSPool job
    +#
    +# print a 'hello' message to the job's terminal output:
    +echo "Hello OSPool from Job $1 running on `whoami`@`hostname`"
    +#
    +# keep this job running for a few minutes so you'll see it in the queue:
    +sleep 180
    +
    +

This script could be run locally in our terminal by typing ./hello-ospool.sh <FirstArgument>. However, to run it on the OSPool, we will use our HTCondor submit file to run the hello-ospool.sh executable and to automatically pass different arguments to our script.

    +
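For example, running the script locally with a single argument (your username and hostname will differ, and the sleep command will pause for three minutes before returning) would produce output along these lines:

$ chmod +x hello-ospool.sh
$ ./hello-ospool.sh 0
Hello OSPool from Job 0 running on alice@ap40.uw.osg-htc.org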

    2. Prepare a submit file

    +

Create your HTCondor submit file, which you will use to tell HTCondor what job to run and how to run it. Copy the text below, and paste it into a file called hello-ospool.sub. This is the file you will submit to HTCondor to describe your jobs (known as the submit file).

    +
    # hello-ospool.sub
    +# My very first HTCondor submit file
    +
    +# Specify your executable (single binary or a script that runs several
    +#  commands) and arguments to be passed to jobs. 
+#  $(Process) will be an integer number for each job, starting with "0"
    +#  and increasing for the relevant number of jobs.
    +executable = hello-ospool.sh
    +arguments = $(Process)
    +
    +# Specify the name of the log, standard error, and standard output (or "screen output") files. Wherever you see $(Cluster), HTCondor will insert the 
    +#  queue number assigned to this set of jobs at the time of submission.
    +
    +log = hello-ospool_$(Cluster)_$(Process).log
    +error = hello-ospool_$(Cluster)_$(Process).err
    +output = hello-ospool_$(Cluster)_$(Process).out
    +
+# These lines *would* be used if there were any other files
    +# needed for the executable to use.
    +# transfer_input_files = file1,/absolute/pathto/file2,etc
    +
    +# Specify Job duration category as "Medium" (expected runtime <10 hr) or "Long" (expected runtime <20 hr). 
    ++JobDurationCategory = "Medium"
    +
+# Tell HTCondor the requirements (e.g., operating system) your job needs, and
    +# what amount of compute resources each job will need on the computer where it runs.
    +requirements = (OSGVO_OS_STRING == "RHEL 9")
    +request_cpus = 1
    +request_memory = 1GB
    +request_disk = 5GB
    +
    +# Tell HTCondor to run 3 instances of our job:
    +queue 3
    +
    +

    By using the "$1" variable in our hello-ospool.shexecutable, we are telling HTCondor to fetch the value of the argument in the first position in the submit file and to insert it in location of "$1" in our executable file.

    +

Therefore, when HTCondor runs this executable, it will pass the $(Process) value as the argument for each job, and hello-ospool.sh will substitute that value wherever "$1" appears.
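In other words, for the three jobs queued by this submit file, HTCondor effectively runs the equivalent of:
./hello-ospool.sh 0    # job $(Cluster).0
./hello-ospool.sh 1    # job $(Cluster).1
./hello-ospool.sh 2    # job $(Cluster).2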

    +

    More information on special variables like "$1", "$2", and "$@" can be found here.

    +

    Additionally, the JobDurationCategory must be listed anywhere prior to the final ‘queue’ statement of the submit file, as below:

    +
+JobDurationCategory = "Medium"
    +
    + + + + + + + + + + + + + + + + + + + + +
| JobDurationCategory | Expected Job Duration | Maximum Allowed Duration |
| --- | --- | --- |
| Medium (default) | <10 hrs | 20 hrs |
| Long | <20 hrs | 40 hrs |
    +

If the user does not indicate a JobDurationCategory in the submit file, the relevant job(s) will be labeled as Medium by default. Batches with jobs that individually execute for longer than 20 hours are not a good fit for the OSPool. We encourage users with long jobs to implement self-checkpointing when possible.

    +
    +Why Job Duration Categories? +
+To maximize the value of the capacity contributed by the different organizations to the OSPool, users are requested to identify a duration category for their jobs. These categories should be selected based upon test jobs (run on the OSPool) and allow for more effective scheduling of the capacity contributed to the pool.
    +
    +Every job submitted from an OSG-managed access point must be labeled with a Job Duration Category upon submission. +By knowing the expected duration, the OSG is working to be able to direct longer-running jobs to resources that are +faster and are interrupted less, while shorter jobs can run across more of the OSPool for better overall throughput. +
    +
+Jobs with single executions longer than 20 hours in tests on the OSPool should not be submitted without self-checkpointing.
    + +

    3. Submit the job

    +

    Now, submit your job to HTCondor’s queue by using the command condor_submit and providing the name of the submit file you created above:

    +
    [alice@ap40]$ condor_submit hello-ospool.sub
    +
    +

    The condor_submit command actually submits your jobs to HTCondor. If all goes well, you will see output from the condor_submit command that appears as:

    +
    Submitting job(s)...
    +3 job(s) submitted to cluster 36062145.
    +
    +

    4. Check the job status

    +

    To check on the status of your jobs in the queue, run the following command:

    +
    [alice@ap40]$ condor_q
    +
    +The output of `condor_q` should look like this:
    +-- Schedd: ap40.uw.osg-htc.org : <128.104.101.92:9618?... @ 04/14/23 15:35:17
    +OWNER     BATCH_NAME     SUBMITTED   DONE   RUN    IDLE  TOTAL JOB_IDS
+alice     ID: 36062145     4/14 12:31      2     1      _      3 36062145.0-2
    +
    +3 jobs; 2 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
    +
    +

    By default, condor_q shows jobs grouped into batches by batch name (if provided), or executable name. To show all of your jobs on individual lines, add the -nobatch option. +To see a live update of the status of your jobs, use the command condor_watch_q. (To exit the live view, use the keyboard shortcut Ctrl+C.)
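For example:
[alice@ap40]$ condor_q -nobatch
[alice@ap40]$ condor_watch_q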

    +

    5. Examine the results

    +

    When your jobs complete after a few minutes, they'll leave the queue. If you do a listing of your /home directory with the command ls -l, you should see something like:

    +
[alice@ap40]$ ls -l
    +total 28
    +-rw-r--r-- 1 alice alice    0 Apr  14 15:37 hello-ospool_36062145_0.err
    +-rw-r--r-- 1 alice alice   60 Apr  14 15:37 hello-ospool_36062145_0.out
    +-rw-r--r-- 1 alice alice    0 Apr  14 15:37 hello-ospool_36062145_0.log
    +-rw-r--r-- 1 alice alice    0 Apr  14 15:37 hello-ospool_36062145_1.err
    +-rw-r--r-- 1 alice alice   60 Apr  14 15:37 hello-ospool_36062145_1.out
    +-rw-r--r-- 1 alice alice    0 Apr  14 15:37 hello-ospool_36062145_1.log
    +-rw-r--r-- 1 alice alice    0 Apr  14 15:37 hello-ospool_36062145_2.err
    +-rw-r--r-- 1 alice alice   60 Apr  14 15:37 hello-ospool_36062145_2.out
    +-rw-r--r-- 1 alice alice    0 Apr  14 15:37 hello-ospool_36062145_2.log
    +-rw-rw-r-- 1 alice alice  241 Apr  14 15:33 hello-ospool.sh
    +-rw-rw-r-- 1 alice alice 1387 Apr  14 15:33 hello-ospool.sub
    +
    +

    Useful information is provided in the user log, standard error, and standard output files.

    +

HTCondor creates a transaction log of everything that happens to your jobs. Looking at the log file is very useful for debugging problems that may arise. Additionally, at the completion of a job, the .log file will print a table describing the amount of compute resources requested in the submit file compared to the amount the job actually used. An excerpt from hello-ospool_36062145_0.log produced by the submission of the 3 jobs will look like this:

    +
    …
    +005 (36062145.000.000) 2023-04-14 12:36:09 Job terminated.
    +    (1) Normal termination (return value 0)
    +        Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
    +        Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
    +        Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
    +        Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
    +    72  -  Run Bytes Sent By Job
    +    265  -  Run Bytes Received By Job
    +    72  -  Total Bytes Sent By Job
    +    265  -  Total Bytes Received By Job
    +    Partitionable Resources :    Usage  Request  Allocated 
    +       Cpus                 :        0        1          1 
    +       Disk (KB)            :      118     1024 1810509281 
    +       Memory (MB)          :       54     1024       1024 
    +
    +    Job terminated of its own accord at 2023-04-14T17:36:09Z with exit-code 0.
    +
    +

    And, if you look at one of the output files, you should see something like this: +Hello OSPool from Job 0 running on alice@e389.chtc.wisc.edu.

    +

    Congratulations. You've run your first jobs in the OSPool!

    +

    Important Workflow Elements

    +

A. Removing Jobs. To remove a specific job (or all jobs in a cluster, or all of your jobs), use condor_rm <JobID | ClusterID | Username>. Example:

    +

    [alice@ap40]$ condor_rm 845638.0
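condor_rm also accepts a cluster ID or your username; for example (the IDs shown here are illustrative):
[alice@ap40]$ condor_rm 845638      # removes every job in cluster 845638
[alice@ap40]$ condor_rm alice       # removes all of alice's jobs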

    +

    B. Importance of Testing & Resource Optimization

    +
      +
    1. +

Examine Job Success. Within the log file, you can see information about the completion of each job, including its exit code (as seen in "return value 0"). You can use this code, as well as information in your ".err" file and other output files, to determine what issues your job(s) may have had, if any.

      +
    2. +
    3. +

Improve Efficiency. Researchers with input and output files greater than 1 GB should store them in their /protected directory instead of /home to improve file transfer efficiency. See our data transfer guides to learn more.

      +
    4. +
    5. +

Get the Right Resource Requests. Be sure to always add or modify the following lines in your submit files, as appropriate, after running a few tests.

      +
    6. +
    + +  +    +    +  +  +    +    +  +  +    +    +  +  +    +    +  +
| Submit file entry | Resources your jobs will run on |
| --- | --- |
| request_cpus = cpus | Matches each job to a computer "slot" with at least this many CPU cores. |
| request_disk = kilobytes | Matches each job to a slot with at least this much disk space, in units of KB. |
| request_memory = megabytes | Matches each job to a slot with at least this much memory (RAM), in units of MB. |
    + +

Determining Memory and Disk Requirements. The log file also indicates how much memory and disk each job used, so that you can first test a few jobs before submitting many more with more accurate request values. When you request too little, your jobs will be terminated by HTCondor and set to "hold" status to flag that job as requiring your attention. To learn more about why a job has gone on hold, use condor_q -hold. When you request too much, your jobs may not match to as many available "slots" as they could otherwise, and your overall throughput will suffer.

    +

    You Have the Basics, Now Run Your OWN Jobs

    +

Check out the HTCondor Job Submission Intro video, which introduces various ways to specify differences between jobs (e.g., parameters, different input filenames) and ways to organize your data, as well as our full set of OSPool User Guides, to begin submitting your own jobs.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/workload_planning/jobdurationcategory/index.html b/htc_workloads/workload_planning/jobdurationcategory/index.html new file mode 100644 index 00000000..825a01c1 --- /dev/null +++ b/htc_workloads/workload_planning/jobdurationcategory/index.html @@ -0,0 +1,2423 @@ + + + + + + + + + + + + + + + + + + Jobdurationcategory - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Indicate the Duration Category of Your Jobs

    +

    Why Job Duration Categories?

    +

To maximize the value of the capacity contributed by the different organizations to the Open Science Pool (OSPool), users are requested to identify a duration category for their jobs. These categories should be selected based upon test jobs (run on the OSPool) and allow for more effective scheduling of the capacity contributed to the pool, honoring the community’s shared responsibility for efficient use of the contributed resources. As a reminder, jobs with single executions longer than 20 hours in tests on the OSPool should not be submitted without self-checkpointing (see further below).

    +

    Every job submitted from an OSG-managed access point must +be labeled with a Job Duration Category upon submission. +By knowing the expected duration, the OSG is working to be able to direct longer-running jobs to resources that are +faster and are interrupted less, while shorter jobs can run across more of the OSPool for better overall throughput.

    +

    Specify a Job Duration Category

    +

    The JobDurationCategory must be listed anywhere prior to the final ‘queue’ statement of the submit file, as below:

    +
+JobDurationCategory = "Long"
    +
    + + + + + + + + + + + + + + + + + + + + +
| JobDurationCategory | Expected Job Duration | Maximum Allowed Duration |
| --- | --- | --- |
| Medium (default) | <10 hrs | 20 hrs |
| Long | <20 hrs | 40 hrs |
    +

    If the user does not indicate a JobDurationCategory in the submit file, the relevant job(s) will be +labeled as Medium by default. Batches with jobs that individually execute for longer than 20 hours +are not a good fit for the OSPool. If your jobs are self-checkpointing, +see “Self-Checkpointing Jobs”, further below.

    +

    Test Jobs for Expected Duration

    +

    As part of the preparation for running a full-scale job batch, +users should test a subset (first ~10, then 100 or 1000) of their jobs with the Medium or Long categories, +and then review actual job execution durations in the job log files. +If the user expects potentially significant variation in job durations within a single batch, a longer JobDurationCategory may be warranted relative to the duration of test jobs. Or, if variations in job duration may be predictable, the user may choose to submit different +subsets of jobs with different Job Duration Categories.
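As a rough way to review actual durations, you can compare the timestamps of the execute (001) and terminate (005) events recorded in your job log files, for example:
[alice@ap40]$ grep -E '^(001|005)' *.log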

    +

OSG Facilitators have a lot of experience with approaches for achieving shorter jobs (e.g. breaking up work into shorter, more numerous jobs; self-checkpointing; automated sequential job submissions; etc.). Get in touch at support@osg-htc.org, and we'll help you work through a solution!

    +

    Maximum Allowed Duration

    +

    Jobs in each category will be placed on hold in the queue if they run longer than their Maximum Allowed Duration +(starting Tuesday, Nov 16, 2021). In that case, the user may remove and resubmit the jobs, identifying a longer category.

    +

    Jobs that test as longer than 20 hours are not a good fit for the OSPool resources, and should not be submitted prior to contacting +support@osg-htc.org to discuss options. The Maximum Allowed Durations +are longer than the Expected Job Durations in order to accommodate CPU speed variations across OSPool computing resources, +as well as other contributions to job duration that may not be apparent in smaller test batches. +Similarly, Long jobs held after running longer +than 40 hours represent significant wasted capacity and should never be released or resubmitted by the user without +first taking steps to modify and test the jobs to run shorter.

    +

    Self-Checkpointing Jobs

    +

    Jobs that self-checkpoint +at least every 10 hours are an excellent way for users to run jobs that would otherwise be longer in total execution time +than the durations listed above. Jobs that complete a checkpoint at least as often as allowed for their JobDurationCategory will not be held.
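As a minimal sketch of one common pattern (consult the HTCondor self-checkpointing documentation for authoritative details), the executable writes its state to disk and exits with a designated code, and the submit file tells HTCondor to treat that exit code as a checkpoint rather than a completion:
# Illustrative submit-file fragment only; the exit code value is chosen by the user
# and must match the code the executable uses when it exits after writing a checkpoint.
checkpoint_exit_code = 85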

    +

    We are excited to help you think through and implement self-checkpointing. Get in touch via support@osg-htc.org if you have questions. :)

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/workload_planning/preparing-to-scale-up/index.html b/htc_workloads/workload_planning/preparing-to-scale-up/index.html new file mode 100644 index 00000000..ed6e7ab8 --- /dev/null +++ b/htc_workloads/workload_planning/preparing-to-scale-up/index.html @@ -0,0 +1,2762 @@ + + + + + + + + + + + + + + + + + + Determining the Amount of Resources to Request in a Submit File - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Determining the Amount of Resources to Request in a Submit File

    +

    Learning Objectives

    +

This guide discusses the following:

    +
      +
    • Best practices for testing jobs and scaling up your analysis.
    • +
    • How to determine the amount of resources (CPU, memory, disk space) to request in a submit file.
    • +
    +

    Overview

    +

Much of HTCondor's power comes from the ability to run a large number of jobs simultaneously. To optimize your work with a high-throughput computing (HTC) approach, you will need to test and optimize the resource requests of those jobs to only request the amount of memory, disk, and CPUs truly needed. This is an important practice that will maximize your throughput by optimizing the number of potential 'slots' in the OSPool that your jobs can match to, reducing the overall turnaround time for completing a whole batch.

    +

    This guide will describe best practices and general tips for testing +your job resource requests before scaling up to submit your full set of jobs. +Additional information is also available from the following "Introduction to High Throughput Computing with HTCondor" 2020 OSG Virtual +Pilot School lecture video:

    +

    + 2020 VSP dHTC with HTCondor

    +

    Always Start With Test Jobs

    +

    Submitting test jobs is an important first step for optimizing +the resource requests of your jobs. We always recommend submitting a few (3-10) +test jobs first before scaling up. If you plan to submit +thousands of jobs, you may even want to run an intermediate test of 100-1,000 jobs to catch any +failures or holds that mean your jobs have additional requirements they need to specify.

    +

    Some general tips for test jobs:

    +
      +
    • +

Select smaller data sets or subsets of data for your first test jobs. Using smaller data will keep the resource needs of your jobs low, which will help test jobs start and complete sooner while you're just making sure that your submit file and other logistical aspects of job submission are as you want them.

      +
    • +
    • +

      If possible, submit test jobs that will reproduce results you've gotten +using another system. This approach can be used as a good "sanity check" as you'll be able +to compare the results of the test to those previously obtained.

      +
    • +
    • +

      After initial tests complete successfully, scale up to larger or full-size +data sets; if your jobs span a range of input file sizes, submit tests using the smallest +and largest inputs to examine the range of resources that these jobs may need.

      +
    • +
    • +

      Give your test jobs and associated HTCondor log, error, output, +and submit files meaningful names so you know which results refer to which tests.

      +
    • +
    +

    Requesting CPUs, Memory, and Disk Space in the HTCondor Submit File

    +

    In the HTCondor submit file, you must explicitly request the number of +CPUs (i.e. cores), and the amount of disk and memory that the job needs +to complete successfully, and identify a JobDurationCategory. +When you submit a job for the +first time, you may not know just how much to request and that's OK. +Below are some suggestions for making resource requests for initial test +jobs.

    +
      +
    • +

For requesting CPU cores, start by requesting a single CPU. With single-CPU jobs, you will see your jobs start sooner. Ultimately you will be able to achieve greater throughput with single-CPU jobs compared to jobs that request and use multiple CPUs.

      +
        +
      • +

Keep in mind, requesting more CPU cores for a job does not mean that your jobs will use more CPUs. Rather, you want to make sure that your CPU request matches the number of cores (i.e. 'threads' or 'processes') that you expect your software to use. (Most software only uses 1 CPU core by default.)

        +
      • +
      • +

There is limited support for multicore work in the OSG. To learn more, see our guide on Multicore Jobs.

        +
      • +
      • +

        Depending on how long you expect your test jobs to take on a single core, you may need to identify a +non-default JobDurationCategory, or consider implementing self-checkpointing.

        +
      • +
      +
    • +
    • +

To inform initial disk requests, always look at the size of your input files. At a minimum, you need to request enough disk to support all of the input files, the executable, and the output you expect, but don't forget that the standard 'error' and 'output' files you specify will capture 'terminal' output that may add up, too.

      +
        +
      • +

        If many of your input and output files are compressed +(i.e. zipped or tarballs) you will need to factor that into your +estimates for disk usage as these files will take up additional space once uncompressed +in the job.

        +
      • +
      • +

        For your initial tests it is OK to request more disk than +your job may need so that the test completes successfully. The key +is to adjust disk requests for subsequent jobs based on the results +of these test jobs.

        +
      • +
      +
    • +
    • +

      Estimating memory requests can sometimes be tricky. If you've performed the +same or similar work on another computer, consider using the amount of +memory (i.e. RAM) from that computer as a starting point. For instance, +most laptop computers these days will have 8 or 16 GB of memory, which is okay to start +with if you know a single job will succeed on your laptop.

      +
        +
      • +

        For your initial tests it is OK to request more memory than +your job may need so that the test completes successfully. The key +is to adjust memory requests for subsequent jobs based on the results +of these test jobs.

        +
      • +
      • +

        If you find that memory usage will vary greatly across a +batch of jobs, we can assist you with creating dynamic memory requests +in your submit files.

        +
      • +
      +
    • +
    +
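Putting the suggestions above together, a first-pass set of requests in a submit file might look like the following sketch (the values are placeholders to revise after reviewing the logs of your test jobs):
request_cpus   = 1
request_memory = 2GB
request_disk   = 2GB
+JobDurationCategory = "Medium"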

    Optimize Job Resource Requests For Subsequent Jobs

    +

    As always, reviewing the HTCondor log file from past jobs is +a great way to learn about the resource needs of your jobs. Optimizing the resources requested for each job may help your job run faster and achieve more throughput.

    +

HTCondor will report the memory, disk, and CPU usage of your jobs at the end of the HTCondor .log file. The amount of each resource requested in the submit file is listed under the "Request" column, and the amount of each resource actually used to complete the job is provided in the "Usage" column.

    +

    For example:

    +
            Partitionable Resources :    Usage  Request Allocated
    +           Cpus                 :                 1         1
    +           Disk (KB)            :       12  1000000  26703078
    +           Memory (MB)          :        0     1000      1000
    +
    +
      +
    • +

One quick option to query your log files is to use the Unix tool grep. For example:
[user@login]$ grep "Disk (KB)" my-job.log
The above will return all lines in my-job.log that report the disk usage, request, and allocation of all jobs reported in that log file.

      +

Alternatively, condor_history can be used to query details from recently completed job submissions (see the example after this list). HTCondor's history is continuously updated with information from new jobs, so condor_history is best run shortly after the jobs of interest enter/leave the queue.

      +
    • +
    +
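For instance, a sketch of querying a few recently completed jobs (the attribute names shown are standard HTCondor job attributes; substitute your own username):
[user@login]$ condor_history <username> -limit 5 -af ClusterId ProcId RequestMemory MemoryUsage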

    Submit Multiple Jobs Using A Single Submit File

    +

    Once you have a single test job that completes successfully, the next +step is to submit a small batch of test jobs (e.g. 5 or 10 jobs) +using a single submit file. Use this small-scale +multi-job submission test to ensure that all jobs complete successfully, produce the +desired output, and do not conflict with each other when submitted together. Once +you are confident that the jobs will complete as desired, then scale up to submitting +the entire set of jobs.
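Often the only change needed to go from one test job to a small batch is the final queue statement of the submit file, for example:
# run 5 test jobs instead of 1
queue 5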

    +

    Monitoring Job Status and Obtaining Run Information

    +

    Gathering information about how, what, and where a job ran can be important for both troubleshooting and optimizing a workflow. The following commands are a great way to learn more about your jobs:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Command | Description |
| --- | --- |
| condor_q | Shows the queue information for your jobs. Includes information such as batch name and total jobs. |
| condor_q <JobID> -l | Prints all information related to a job including attributes and run information about a job in the queue. Output includes JobDurationCategory, ServerTime, SubmitFile, etc. Also works with condor_history. |
| condor_q <JobID> -af <AttributeName1> <AttributeName2> | Prints information about an attribute or list of attributes for a single job using the autoformat -af flag. The list of possible attributes can be found using condor_q <JobID> -l. Also works with condor_history. |
| condor_q -constraint '<Attribute> == "<value>"' | The -constraint flag allows users to find all jobs with a certain value for a given parameter. This flag supports searching by more than one parameter and different operators (e.g. =!=). Also works with condor_history. |
| condor_q -better-analyze <JobID> -pool <PoolName> | Shows a list of the number of slots matching a job's requirements. For more information, see Troubleshooting Job Errors. |
    +

    Additional condor_q flags involved in optimizing and troubleshooting jobs include:

    + + + + + + + + + + + + + + + + + + + + + + + + + +
| Flag | Description |
| --- | --- |
| -nobatch | Combined with condor_q, this flag will list jobs individually and not by batch. |
| -hold | Show only jobs in the "on hold" state and the reason for that. An action from the user is expected to solve the problem. |
| -run | Show your running jobs and related info, like how much time they have been running, where they are running, etc. |
| -dag | Organize condor_q output by DAG. |
    +

    More information about the commands and flags above can be found in the HTCondor manual.
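As an illustration of combining the commands and flags above (the job ID and attribute names here are examples):
[user@login]$ condor_q -hold
[user@login]$ condor_q -nobatch
[user@login]$ condor_q 36062145.0 -af RequestMemory MemoryUsage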

    +

    Avoid Exceeding Disk Quotas in /home and /protected

    +

To prevent errors or workflow interruption, be sure to estimate the input and output needed for all of your concurrently running jobs. By default, after your job terminates, HTCondor will transfer any new or modified files from the top-level directory where the job ran back to your /home directory. Efficiently manage output by including steps to remove intermediate and/or unnecessary files as part of your job.
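For example, a job's wrapper script might delete scratch files before it exits so that they are not transferred back (the filenames here are hypothetical):
#!/bin/bash
# run the analysis (hypothetical command)
./my_analysis input.dat > results.txt
# remove intermediate files so only results.txt is transferred back to /home
rm -f scratch_*.tmp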

    +

    Workflow Management

    +

To help manage complicated workflows, consider a workflow manager such as HTCondor's built-in DAGMan or the HTCondor-compatible Pegasus workflow tool.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/htc_workloads/workload_planning/roadmap/index.html b/htc_workloads/workload_planning/roadmap/index.html new file mode 100644 index 00000000..ac18ab2a --- /dev/null +++ b/htc_workloads/workload_planning/roadmap/index.html @@ -0,0 +1,2687 @@ + + + + + + + + + + + + + + + + + + Roadmap to HTC Workload Submission - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Roadmap to HTC Workload Submission

    +

    Overview

    +

    This guide lays out the steps needed to go from logging in to an OSG Access Point to running a full scale high throughput computing +(HTC) workload on OSG's Open Science Pool (OSPool). +The steps listed here apply to any new workload +submission, whether you are a long-time OSG user or just getting +started with your first workload, with helpful links to our documentation pages.

    +

    This guide assumes that you have applied for an OSG Access Point account and +have been approved after meeting with an OSG Research Computing Facilitator. +If you don't yet have an account, you can apply for one here +or contact us with any questions you have.

    +

    Learning how to get started on the OSG does not need to end with this document or +our guides! Learn about our training opportunities and personal facilitation support +in the Getting Help section below.

    +

    1. Introduction to the OSPool and OSG Resources

    +

    The OSG's Open Science Pool is best-suited for computing work that can be run as many, independent +tasks, in an approach called "high throughput computing." For more information +on what kind of work is a good fit for the OSG, +see Is the Open Science Pool for You?.

    +

    Learn more about the services provided by the OSG in this video:

    +

    OSG Introduction

    + + +

    2. Log on to an OSG Access Point

    +

    If you have not done so, apply for an account here. A Research Computing Facilitator will contact you within one business day to arrange a meeting to discuss your computational goals and to activate your account.

    +

    Note that there are multiple classes of access points provided. +When your account was activated, you should have been told which +access point your account belongs to:

    +
    +Log In to "uw.osg-htc.org" Access Points (e.g., ap40.uw.osg-htc.org) +
    +If your account is on the uw.osg-htc.org Access Points (e.g., accounts on ap40.uw.osg-htc.org), follow instructions in this guide for logging in: +Log In to uw.osg-htc.org Access Points +
    + +
    +Log In to "OSG Connect" Access Points (e.g., ap20.uc.osg-htc.org) +
    +If your account is on the OSG Connect Access points (e.g., accounts on ap20.uc.osg-htc.org, ap21.uc.osg-htc.org), follow instructions in this guide for logging in: +Log In to OSG Connect Access Points +
    + +

    3. Learn to Submit HTCondor Jobs

    +

    Computational work is run on the OSPool by submitting it as “jobs” to the +HTCondor scheduler. Jobs submitted to HTCondor are then scheduled and +run on different resources that are part of the Open Science Pool. +Before submitting your own computational work, it is important to +understand how HTCondor job submission works. The following guides show +how to submit basic HTCondor jobs.

    + +

    4. Test a First Job

    +

    After learning about the basics of HTCondor job submission, you will +need to generate your own HTCondor job -- including the software needed +by the job and the appropriate mechanism to handle the data. We +recommend doing this using a single test job.

    +

    Prepare your software

    +

    Software is an integral part of your HTC workflow. Whether you’ve written it yourself, inherited it from your research group, or use common open-source packages, any required executables and libraries will need to be made available to your jobs if they are to run on the OSPool.

    +

    Read through this overview of Using Software to help you determine the best way to provide your software. We also have the following guides/tutorials for each major software portability approach:

    + +

    Finally, here are some additional guides specific to some of the most common scripting languages and software tools used on OSG**:

    + +

    **This is not a complete list. Feel free to search for your software in our Knowledge base.

    +

    Manage your data

    +

The data for your jobs will need to be transferred to each job that runs in the OSPool, and HTCondor has built-in features for getting data to jobs. Our Data Management guide discusses the relevant approaches, when to use them, and where to stage data for each.
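For instance, input files are commonly listed in the submit file for HTCondor to transfer to each job (the filenames below are placeholders):
transfer_input_files = my_data.csv, analysis_script.py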

    + + + + +

    Assign the Appropriate Job Duration Category

    +

    Jobs running in the OSPool may be interrupted at any time, and will be re-run by HTCondor, unless a single execution of a job exceeds the allowed duration. Jobs expected to take longer than 10 hours will need to identify themselves as 'Long' according to our Job Duration policies. Remember that jobs expected to take longer than 20 hours are not a good fit for the OSPool (see Is the Open Science Pool for You?) without implementing self-checkpointing (further below).

    +

    5. Scale Up

    +

    After you have a sample job running successfully, you’ll want to scale +up in one or two steps (first run several jobs, before running ALL of them). +HTCondor has many useful features that make it easy to submit +multiple jobs with the same submit file.

    + + + +

    6. Special Use Cases

    +

    If you think any of the below applies to you, +please get in touch +and our facilitation team will be happy to discuss your individual case.

    + +

    Getting Help

    +

    The OSG Facilitation team is here to help with questions and issues that come up as you work +through these roadmap steps. We are available via email, office hours, appointments, and offer +regular training opportunities. See our Get Help page and OSG Training page +for all the different ways you can reach us. Our purpose +is to assist you with achieving your computational goals, so we want to hear from you!

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 00000000..211dae86 --- /dev/null +++ b/index.html @@ -0,0 +1,3035 @@ + + + + + + + + + + + + + + + + + + + + Home - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + + + + + + +
    +
    +
    +
    + +
    +
    +

    OSPool Documentation

    +

Documentation and support hub for researchers using the OSPool's compute and storage resources for their research.

    + + Get started + + + Sign Up + +
    +
    +
    +
    + + + + +
    + + + + + + + + +
    +

    Support And Training Resources

    + +
    + + + +
    +

    Submit High Throughput Computing (HTC) Workloads

    +
    + +
    +

    How to Use the OSPool

    + +
    + + + + + + + +
    +

    Monitor, Review, and Troubleshoot Jobs

    + +
    + +
    +

    Considerations For Specific Resource Needs

    + +
    + + + +
    +
    + + + +
    +

    Tutorials and Software Examples

    +
    + + + +
    +

    Artificial Intelligence

    + +
    + + + + + +
    +

    Conda/Miniconda

    + +
    + +
    +

    DAGMan

    + +
    + + + +
    +

    Julia and Java

    + +
    + +
    +

    Bioinformatics

    + +
    + +
    +

    Drug Discovery

    + +
    + +
    +
    + + + +
    +

    Additional Resources

    +
    + + + +
    +
    + + +
    + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/account_setup/ap20-ap21-migration/index.html b/overview/account_setup/ap20-ap21-migration/index.html new file mode 100644 index 00000000..4355e412 --- /dev/null +++ b/overview/account_setup/ap20-ap21-migration/index.html @@ -0,0 +1,2517 @@ + + + + + + + + + + + + + + + + + + Migrating to ap20/ap21.uc.osg-htc.org - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Migrating to New Access Points From login04, login05

    +

    The login04/login05.osgconnect.net access points were replaced with +new improved access points during July and August of 2023. If you did not +migrate your account or data during this time, you can likely still access +the new access points, following these steps. Please contact the facilitation +team with any questions.

    +

    Migration Steps

    +

    Step 1: Determine Your Assigned Access Point

    +

    Your new access point assignment will be based on your former access point:

    +
      +
• If your current assignment is login04.osgconnect.net, your new access point will be ap20.uc.osg-htc.org
    • +
• If your current assignment is login05.osgconnect.net, your new access point will be ap21.uc.osg-htc.org
    • +
    +

    You can also see this information on your profile page on osgconnect.net

    +

    Step 2: Set Up Multi Factor Authentication

    +

An important change is that the new access points require multi-factor authentication. As part of the migration process, you will connect your account to a time-based one-time password (TOTP) client. When connecting to an access point via ssh, you will be asked to provide the generated 6-digit verification code when logging in. Please see detailed instructions here.

    +

    Step 3 (If Needed): Modify Workflows to Use New Data Paths

    +

OSDF locations have changed. We recommend that most data from the old /public/ or /protected/ folders transition to the new access point-specific user-only areas (/ospool/ap20/data/ or /ospool/ap21/data, based on which access point you are assigned to). This will offer the best performance. You will also need to update submit files and scripts to use these new data locations. Consult the updated Data Overview and OSDF guides for more information, and contact the Facilitation team with any questions.

    +

    Get Help

    +

    We understand transitions may raise questions or difficulties. Should you require +any assistance, please feel free to reach out to us via email, or join one of +our office hours sessions.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/account_setup/ap7-access/index.html b/overview/account_setup/ap7-access/index.html new file mode 100644 index 00000000..216149f2 --- /dev/null +++ b/overview/account_setup/ap7-access/index.html @@ -0,0 +1,2297 @@ + + + + + + + + + + + + + + + + + + Ap7 access - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    + + + + +

    Ap7 access

    + +

    The latest version of this guide is at this link

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/account_setup/comanage-access/index.html b/overview/account_setup/comanage-access/index.html new file mode 100644 index 00000000..e57b2b65 --- /dev/null +++ b/overview/account_setup/comanage-access/index.html @@ -0,0 +1,2579 @@ + + + + + + + + + + + + + + + + + + Log In to uw.osg-htc.org Access Points - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Log In to uw.osg-htc.org Access Points

    +

    This guide is for users who were notified by a member of the OSG team that they +will be using the uw.osg-htc.org Access Points.

    +

    To join and use the uw.osg-htc.org Access Points (ap40.uw.osg-htc.org), you will go through the following steps:

    +
      +
    1. Apply for a uw.osg-htc.org Access Point account
    2. +
    3. Have your account approved by an OSG Team member
    4. +
    5. Log in to ap40.uw.osg-htc.org
    6. +
    +

    Request Access to uw.osg-htc.org Access Points

    +

    To request access to ap40.uw.osg-htc.org, submit an application using the following steps:

    +
      +
    1. +

To request an OSPool account, visit this account registration page. You will be redirected to the CILogon sign-in page. Select your institution and use your institutional credentials to log in. You will use these credentials later to log in, so it is important to remember the institution you use at this step.

      +

      +
    2. +
    +

    If you have issues signing in using your institutional credentials, contact us at support@osg-htc.org.

    +
      +
    1. +

      Once you sign in, you will be redirected to the User Enrollment page. Click "Begin" and enter your name and email address in the following page. In many cases, this information will be automatically populated. If desired, it is possible to manually edit any information automatically filled in. Once you have entered your information, click "SUBMIT".

      +

      +
    2. +
    3. +

After submitting your application, you will receive an email from registry@cilogon.org to verify your email address. Click the link listed in the email to be redirected to a page to confirm your invitation details. Click the "ACCEPT" button to complete this step.

      +

      +
    4. +
    +

    Account Approval by a Research Computing Facilitator

    +

    If a meeting has not already been scheduled with a Research Computing Facilitator, one of the facilitation team will contact you about arranging a short consultation.

    +

    Following the meeting, the Facilitator will approve your account and add your profile to any relevant OSG ‘project’ names. Once your account is ready, the Facilitator will email you with your account details including the 'username' you will use to log in to the ap40.uw.osg-htc.org access point.

    +

    Log in

    +

Once your account has been added to the ap40.uw.osg-htc.org access point, you will be able to log in using a terminal or SSH program. Logging in requires authenticating your credentials using one of two options: web authentication or SSH key pair authentication. Additional information on this process will be provided during and/or following your discussion with a Research Computing Facilitator.

    +

    Option 1: Log in via Web Authentication

    +

    Logging in via web authentication requires no preparatory steps beyond having access to an internet browser.

    +

    To authenticate using this approach:

    +
      +
    1. +

      Open a terminal and type ssh username@ap40.uw.osg-htc.org, being sure to replace username with your uw.osg-htc.org username. Upon hitting enter, the following text should appear with a unique, but similar, URL:

      +
      Authenticate at
      +-----------------
      +https://cilogon.org/device/?user_code=FF4-ZX6-9LK
      +-----------------
      +Type 'Enter' when you authenticate.
      +
      +
    2. +
    3. +

      Copy the https:// link, paste it into a web browser, and hit enter.

      +
    4. +
    5. +

      You will be redirected to a new page where you will be prompted to login using your institutional credentials. Once you have done so, a new page will appear with the following text: "You have successfully approved the user code. Please return to your device for further instructions."

      +
    6. +
    7. +

      Return to your terminal, and type 'Enter' to complete the login process.

      +
    8. +
    +

    Option 2: Log in via SSH Key Pair Authentication

    +

    It is also possible to authenticate using an SSH key pair, if you prefer. Logging in using SSH keys does not require access to an internet browser to log in into the OSG Access Point, ap40.uw.osg-htc.org.

    +

    The process below describes how to upload a public key to the registration website. It assumes that a private/public key pair has already been generated. If you need to generate a key pair, see this OSG guide.

    +
      +
    1. +

      Return to the Registration Page and login using your institutional credentials if prompted.

      +
    2. +
    3. +

      Click your name at the top right. In the dropdown box, click "My Profile (OSG)" button.

      +

      +
    4. +
    5. +

      On the right hand side of your profile, click "Authenticators" link.

      +

      +
    6. +
    7. +

      On the authenticators page, click the "Manage" button.

      +

      +
    8. +
    9. +

      On the new SSH Keys page, click "Add SSH Key" and browse your computer to upload your public SSH key.

      +

      +
    10. +
    +

    You can now log in to ap40.uw.osg-htc.org from the terminal, using ssh username@ap40.uw.osg-htc.org. When you log in, instead of being prompted with a web link, you should either authenticate automatically or be asked for your ssh key passphrase to complete logging in.

    +

    Get Help

    +

    For questions regarding logging in or creating an account, contact us at support@osg-htc.org.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/account_setup/connect-access/index.html b/overview/account_setup/connect-access/index.html new file mode 100644 index 00000000..31e8f8e4 --- /dev/null +++ b/overview/account_setup/connect-access/index.html @@ -0,0 +1,2698 @@ + + + + + + + + + + + + + + + + + + Log In to OSG Connect Access Points - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Log In to "OSG Connect" Access Points

    +

This guide is for users who were notified by a member of the OSG team that they will be using the "OSG Connect" Access Points. Do not go through the steps of this guide until advised to by a Research Computing Facilitator.

    +

    To join and use the "OSG Connect" Access Points (ap20.uc.osg-htc.org, +ap21.uc.osg-htc.org), you will go through the following steps:

    +
      +
    1. Apply for an OSG Connect Access Point account
    2. +
    3. Have your account approved by an OSG Team member
    4. +
    5. Generate an ssh key and add it to your web profile
    6. +
    7. Log in to the appropriate Access Point
    8. +
    +

    Apply for an OSG Connect Access Point account

    +

    If prompted by a Research Computing Facilitator, you can apply for OSG Connect Access Points here:

    +

    OSG Connect Account Request

    +

    Account Approval by a Research Computing Facilitator

    +

    If a meeting has not already been scheduled with a Research Computing Facilitator, one of the facilitation team will contact you about arranging a short consultation.

    +

    Following the meeting, the Facilitator will approve your account and add your profile to +any relevant OSG ‘project’ names. Once your account is ready, the Facilitator will email +you with your account details.

    +

    Add a public SSH key to your web profile

    +

Login to OSG Connect Access Points is via SSH key. To generate an SSH key pair, see this guide and then proceed with the following steps.

    +

    To add your public key to the OSG Connect log in node:

    +
      +
    1. +

      Go to www.osgconnect.net and sign in with the institutional identity you used when requesting an OSG Connect account.

      +
    2. +
    3. +

      Click "Profile" in the top right corner.

      +
    4. +
    5. +

      Click the "Edit Profile" button located after the user information in the left hand box.

      +
    6. +
    7. +

      Copy/paste the public key which is found in the .pub file into the "SSH Public Key" text box. +The expected key is a single line, with three fields looking something like +ssh-rsa ASSFFSAF... user@host. If you used the first set of key-generating +instructions it is the content of ~/.ssh/id_rsa.pub and for the second (using +PuTTYgen), it is the content from step 7 above.

      +
    8. +
    9. +

      Click "Update Profile"

      +
    10. +
    +

    The key is now added to your profile in the OSG Connect website. This will automatically +be added to the login nodes within a couple hours.

    +
    +

    Can I Use Multiple Keys?

    +

    Yes! If you want to log into OSG Connect from multiple computers, you can do so by generating +a keypair on each computer you want to use, and then adding the public key to your OSG +Connect profile.

    +
    +

    Add multi factor authentication to your web profile

    +

Multi-factor authentication means that you will use 2 different methods to authenticate when you log in. The first factor is the SSH key you added above. The second factor is a 6-digit code from one of your devices. OSG Connect uses the TOTP (Time-based One-time Password) standard - any TOTP client should work. Some common clients include:

    + +
    +

    TOTP clients are most commonly used from smartphones. If you do not have +a smartphone or are otherwise struggling to access or use a TOTP client, +please contact the facilitation team: support@osg-htc.org

    +
    +

    Once you have a TOTP client, configure it to be used with OSG Connect:

    +
      +
    1. +

      Go to https://osgconnect.net and sign in with the institutional identity you used when requesting an OSG Connect account.

      +
    2. +
    3. +

      Click "Profile" in the top right corner.

      +
    4. +
    5. +

      Click the "Edit Profile" button located after the user information in the left hand box.

      +
    6. +
    7. +

      Check the "Set up Multi-Factor Authentication" at the bottom and hit Apply.

      +
    8. +
    9. +

      In the Multi-Factor Authentication box, follow the instructions (scan the QR code with your TOTP client)

      +
    10. +
    +

    Important: after setting up multi-factor authentication using your TOTP client, you will +need to wait 15 minutes before logging in.

    +

    Logging In

    +

    After following the steps above to upload your key and set up multi factor authentication, once +about fifteen minutes have passed, you should be able to log in to OSG Connect.

    +

    Determine which login node to use

    +

    Before you can connect, you will need to know which login node your account is assigned to. You can find +this information on your profile from the OSG Connect website.

    +
      +
    1. +

      Go to www.osgconnect.net and sign in with your institution credentials that you used to request an account.

      +
    2. +
    3. +

      Click "Profile" in the top right corner.

      +
    4. +
    5. +

      The assigned login nodes are listed in the left side box. Make note of the address of +your assigned login node as you will use this to connect to OSG Connect.

      +
    6. +
    +

    Identify Login Node

    +

    For Mac, Linux, or newer versions of Windows

    +

    Open a terminal and type in:

    +
    ssh <your_osg_connect_username>@<your_osg_login_node>
    +
    +

    It will ask for the passphrase for your ssh key (if you set one), then for +a "Verification code" which you should get by going to the TOTP client you +used to set up two factor authentication above. After entering the six digit +code, you should be logged in.

    +

    Note that when you are typing your passphrase and verification code, your typing will +NOT appear on the terminal, but the information is being entered!

    +

    For older versions of Windows

    +

    On older versions of Windows, you can use the Putty program to log in.

    +

    PuTTY Intructions Screenshot

    +
      +
    1. +

Open the PuTTY program. If necessary, you can download it from the PuTTY download page.

      +
    2. +
    3. +

      Type the address of your assigned login node as the hostname (see "Determine which login node to use" above).

      +
    4. +
    5. +

      In the left hand menu, click the "+" next to "SSH" to expand the menu.

      +
    6. +
    7. +

      Click "Auth" in the "SSH" menu.

      +
    8. +
    9. +

      Click "Browse" and specify the private key file you saved in step 5 above.

      +
    10. +
    11. +

      Return to "Session".
      +    a. Name your session
      +    b. Save session for future use

      +
    12. +
    13. Click "Open" to launch shell. Provide your ssh-key passphrase (created at Step 4 in PuTTYgen) when prompted to do so.
    14. +
    15. When prompted for a "Verification Code", go to the TOTP client you used to set up +two-factor authentication, above, and enter the six digit code from the client into +your PuTTY terminal prompt.
    16. +
    +

The following video demonstrates the key generation and login process using PuTTY.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/account_setup/generate-add-sshkey/index.html b/overview/account_setup/generate-add-sshkey/index.html new file mode 100644 index 00000000..20a0ffca --- /dev/null +++ b/overview/account_setup/generate-add-sshkey/index.html @@ -0,0 +1,2581 @@ + + + + + + + + + + + + + + + + + + Generate SSH Keys - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Generate SSH Keys For Login

    +

    Overview

    +

One way to connect to an OSG-managed Access Point is with an SSH key. This guide details how to create an SSH key. Once created, it needs to be added to your web profile in order to enable logging in to an Access Point.

    +

    Generate SSH Keys

    +

    We will discuss how to generate a SSH key pair for two cases:

    +
      +
    • "Unix" systems (Linux, Mac) and certain, latest versions of Windows
    • +
    • Older Windows systems
    • +
    +

Please note: The key pair consists of a private key and a public key. You will upload the public key to the OSG Connect website or COmanage, but you also need to keep a copy of the private key to log in! You should keep the private key on machines that you have direct access to, i.e. your local computer (your laptop or desktop).

    +

    Unix-based operating system (Linux/Mac) or latest Windows 10 versions

    +

    We will create a key in the .ssh directory of your computer. Open a terminal on your local computer and run the following commands:

    +
     mkdir ~/.ssh
    + chmod 700 ~/.ssh
    + ssh-keygen -t rsa
    +
    +

    For the newer OS versions the .ssh directory is already created and the first command is redundant. The last command will produce a prompt similar to

    +
     Generating public/private rsa key pair.
    + Enter file in which to save the key (/home/<local_user_name>/.ssh/id_rsa):
    +
    +

    Unless you want to change the location of the key, continue by pressing enter. +Now you will be asked for a passphrase. Enter a passphrase that you will be +able to remember and which is secure:

    +
     Enter passphrase (empty for no passphrase):
    + Enter same passphrase again:
    +
    +

    When everything has successfully completed, the output should resemble the +following:

    +
     Your identification has been saved in /home/<local_user_name>/.ssh/id_rsa.
    + Your public key has been saved in /home/<local_user_name>/.ssh/id_rsa.pub.
    + The key fingerprint is:
    + ...
    +
    +

The part you want to upload is the content of the .pub file (~/.ssh/id_rsa.pub). The following video demonstrates the key generation process from the terminal.
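To display the public key so you can copy it, you can print the file to the terminal:
cat ~/.ssh/id_rsa.pub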

    +

    Windows, using Putty to log in

    +

    If you can connect using the ssh command within the Command Prompt (Windows 10 build version 1803 and later), please follow the Mac/Linux directions above. If not, +continue with the directions below.

    +
      +
    1. +

Open the PuTTYgen program. You can download PuTTYgen from the PuTTYgen Download Page; scroll down until you see the puttygen.exe file.

      +
    2. +
    3. +

      For Type of key to generate, select RSA or SSH-2 RSA.

      +
    4. +
    5. +

      Click the "Generate" button.

      +
    6. +
    7. +

      Move your mouse in the area below the progress bar. +When the progress bar is full, PuTTYgen generates your key pair.

      +
    8. +
    9. +

      Type a passphrase in the "Key passphrase" field. Type the same passphrase in the "Confirm passphrase" field. You +can use a key without a passphrase, but this is not recommended.

      +
    10. +
    11. +

      Click the "Save private key" button to save the private key. You must save the private key. You will need it to connect to your machine.

      +
    12. +
    13. +

      Right-click in the text field labeled "Public key for pasting into OpenSSH authorized_keys file" and choose Select All.

      +
    14. +
    15. +

      Right-click again in the same text field and choose Copy.

      +
    16. +
    +


    +

    Next Steps

    +

    After generating the key, you will need to upload it to a web profile to use it +for log in.

    +
      +
    • If you have an account on an uw.osg-htc.org Access Point (account created through https://registry.cilogon.org/registry/) follow the instructions here: Log In to uw.osg-htc.org Access Points
    • +
    • If you have an account on "OSG Connect" Access Points (account created through https://www.osgconnect.net/), follow the instructions here: Log In to OSG Connect Access Points
    • +
    +

    Getting Help

    +

    For assistance or questions, please email the OSG Research Facilitation team at support@osg-htc.org or visit the help desk and community forums.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/account_setup/is-it-for-you/index.html b/overview/account_setup/is-it-for-you/index.html new file mode 100644 index 00000000..3022121a --- /dev/null +++ b/overview/account_setup/is-it-for-you/index.html @@ -0,0 +1,2513 @@ + + + + + + + + + + + + + + + + + + Computation on the Open Science Pool - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Computation on the Open Science Pool

    +

    The OSG is a nationally-funded consortium of computing resources at more than one hundred institutional partners that, together, offer a strategic advantage for computing work that can be run as numerous short tasks that execute independently of one another. For researchers who are not part of an organization with their own pool in the OSG, we offer the Open Science Pool (OSPool), with dozens of campuses contributing excess computing capacity in support of open science. The OSPool is available to US-affiliated academic, government, and non-profit research projects and groups for their High Throughput Computing (HTC) workflows.

    +

    Learn more about the services provided by the OSG that can support your HTC workload:

    +

    OSG Introduction

    +

    For problems that can be run as numerous independent jobs (a high-throughput approach) and have requirements represented in the first two columns of the table below, the significant capacity of the OSPool can transform the types of questions that researchers are able to tackle. Importantly, many compute tasks that may appear not to be a good fit can be modified in simple ways to take advantage of the OSPool, and we'd love to discuss options with you!

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    |                       | Ideal jobs!                        | Still advantageous    | Maybe not, but get in touch!  |
    | Expected Throughput:  | 1000s concurrent jobs              | 100s concurrent jobs  | let's discuss!                |
    | Per-Job Requirements  |                                    |                       |                               |
    | CPU cores             | 1                                  | < 8                   | > 8 (or MPI)                  |
    | GPUs                  | 0                                  | 1                     | > 1                           |
    | Walltime              | < 10 hrs*                          | < 20 hrs*             | > 20 hrs (not a good fit)     |
    | RAM                   | < few GB                           | < 40 GB               | > 40 GB                       |
    | Input                 | < 500 MB                           | < 10 GB               | > 10 GB**                     |
    | Output                | < 1 GB                             | < 10 GB               | > 10 GB**                     |
    | Software              | pre-compiled binaries, containers  | Most other than --->  | Licensed software, non-Linux  |
    +

    * or checkpointable

    +

    ** per job; you can work with a multi-TB dataset on the OSPool if it can be split into pieces!

    +

    Some examples of work that has been a good fit for the OSPool and has benefited from using its resources include:

    +
      +
    • image analysis (including MRI, GIS, etc.)
    • +
    • text-based analysis, including DNA read mapping and other bioinformatics
    • +
    • hyper/parameter sweeps
    • +
    • Monte Carlo methods and other model optimization
    • +
    +

    Resources to Quickly Learn More

    +

    Introduction to OSG and distributed high throughput computing, from the annual OSG User School:

    +

    +

    Full OSG User Documentation including our Roadmap to HTC Workload Submission

    +

    OSG User Training materials. Any researcher affiliated with an academic, non-profit, or government US-based research project is welcome to attend our trainings.

    +

    Learn more and chat with a Research Computing Facilitator by signing up for an OSPool account

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/account_setup/registration-and-login/index.html b/overview/account_setup/registration-and-login/index.html new file mode 100644 index 00000000..7ab92cc7 --- /dev/null +++ b/overview/account_setup/registration-and-login/index.html @@ -0,0 +1,2501 @@ + + + + + + + + + + + + + + + + + + Start Here: Overview of Requesting OSPool Access - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Start Here: Overview of Requesting OSPool Access

    +

    The major steps to get started on the OSPool are:

    +
      +
    • apply for access to the OSPool
    • +
    • meet with a facilitation team member for a short consultation and orientation.
    • +
    • register for a specific OSPool Access Point
    • +
    • log in to your designated Access Point
    • +
    +

    Each of these is detailed in the guide below. +Once you've gone through these steps, you should be able to begin running work!

    +

    Apply for OSPool Access

    +

    To start, fill out the interest form on this OSG Portal site:

    +

    OSPool Consultation Request

    +

    This will send the Research Facilitation team an email. We will be in +touch to set up an orientation meeting, and confirm if you are joining +an existing project on the OSPool or starting a new one.

    +

    Orientation Meeting

    +

    The orientation meeting generally takes about 20-30 minutes and is a chance to +talk about your work, how it will +fit on the OSPool, and some practical next steps for getting started.

    +

    Register for an Access Point

    +

    +

    Before or during the orientation meeting, you will be prompted to register for an account on a specific OSPool Access Point. The current default is the uw.osg-htc.org Access Points.

    +

    You will be directed to follow instructions on this page to register +for an account.

    +

    Log In

    +

    Once you've gone through the steps above, you should have an account on an OSPool Access Point!

    +

    Follow the instructions below to learn how to log in to your OSPool Access Point.

    +

    Accounts for all new users are created on uw.osg-htc.org Access Points unless otherwise specified.

    +
    +Log In to "uw.osg-htc.org" Access Points (e.g., ap40.uw.osg-htc.org) +
    +If your account is on the uw.osg-htc.org Access Points (e.g., accounts on ap40.uw.osg-htc.org), follow instructions in this guide for logging in: +Log In to uw.osg-htc.org Access Points +
    + +
    +Log In to "OSG Connect" Access Points (e.g., ap20.uc.osg-htc.org) +
    +If your account is on the OSG Connect Access points (e.g., accounts on ap20.uc.osg-htc.org, ap21.uc.osg-htc.org), follow instructions in this guide for logging in: +Log In to OSG Connect Access Points +
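    +In either case, once your account is active, logging in is done over SSH from your local terminal. A minimal sketch (the username alice is a placeholder; use your own username and the Access Point you were assigned):

     ssh alice@ap40.uw.osg-htc.org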
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/account_setup/starting-project/index.html b/overview/account_setup/starting-project/index.html new file mode 100644 index 00000000..8914652c --- /dev/null +++ b/overview/account_setup/starting-project/index.html @@ -0,0 +1,2485 @@ + + + + + + + + + + + + + + + + + + Set and View Project Usage - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Set and View Project Usage

    +

    Background

    +

    The OSG team assigns individual user accounts to "projects". These projects +are a way to track usage hours and capture information about the types of +research using the OSPool.

    +

    A project typically corresponds to a research group headed by a single PI, but can +sometimes represent a long-term multi-institutional project or some other grouping.

    +

    You must be a member of a project before you can use an OSPool Access Point to submit jobs. +The next section of this guide describes the process for joining a project.

    +

    Default Behavior (one project)

    +

    By default, you are added to a project when your OSG account is created. This +project will be automatically added to your job submissions for tracking usage.

    +

    Choose a Project (multiple projects)

    +

    If you are affiliated with multiple groups using the OSPool and are a member of +multiple projects, you will want to set the project name in your submit file.

    +

    Run the following command to see a list of projects you belong to:

    +
    grep $USER /etc/condor/UserToProjectMap.txt
    + +

    You can manually set the project for a set of jobs by putting this option in +the submit file:

    +
    +ProjectName="ProjectName"
    + +
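    For context, here is a minimal sketch of where this line sits in a submit file (the executable name and the project name "MyLab" are placeholders; use one of the project names returned by the grep command above):

     # example submit file fragment with an explicit project
     executable     = analyze.sh
     +ProjectName   = "MyLab"
     request_cpus   = 1
     request_memory = 1GB
     request_disk   = 1GB
     queue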

    View Metrics For Your Project

    +

    The project's resource usage appears in the OSG accounting system, GRACC; specifically, in this OSPool Usage Dashboard.

    +

    At the top of that dashboard, there is a set of filters that you can use to examine the number of hours used by your project or your institution. You can adjust the time range displayed in the top right corner.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/references/acknowledgeOSG/index.html b/overview/references/acknowledgeOSG/index.html new file mode 100644 index 00000000..f7fae70a --- /dev/null +++ b/overview/references/acknowledgeOSG/index.html @@ -0,0 +1,2370 @@ + + + + + + + + + + + + + + + + + + Acknowledge the OSG - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Acknowledge the OSG

    +

    This page has been moved to the OSG Website.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/references/contact-information/index.html b/overview/references/contact-information/index.html new file mode 100644 index 00000000..ebb3d9d1 --- /dev/null +++ b/overview/references/contact-information/index.html @@ -0,0 +1,2376 @@ + + + + + + + + + + + + + + + + + + Contact OSG for non-Support Inquiries - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Contact OSG for non-Support Inquiries

    +

    For media contact, leadership, or general questions about OSG, please see our +main website or send an email to webmaster@osg-htc.org.

    +

    For OSG policies and executive information, email Frank Wuerthwein (OSG Executive Director).

    +

    For help managing an OSG Mailing list membership, please refer to our managing mailing list membership document.

    +

    To get started using OSG resources, for support or operational issues, or to request OSPool account information, email support@osg-htc.org.

    +

    For any assistance or technical questions regarding jobs or data, please see our page on how to Get Help +and/or contact the OSG Research Facilitation team at support@osg-htc.org

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/references/frequently-asked-questions/index.html b/overview/references/frequently-asked-questions/index.html new file mode 100644 index 00000000..92376ed2 --- /dev/null +++ b/overview/references/frequently-asked-questions/index.html @@ -0,0 +1,2698 @@ + + + + + + + + + + + + + + + + + + Frequently Asked Questions - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Frequently Asked Questions

    +

    Getting Started

    +
    +Who is eligible to request an OSG account? +
    +Any researcher affiliated with a U.S. institution (college, university, national laboratory or research foundation) is eligible to use OSG resources for their work. Researchers outside of the U.S. with affiliations to U.S. groups may be eligible for membership if they are sponsored by a collaborator within the U.S. +
    +
    + +
    +How do I request an OSG account? +
    +Please visit our website for the most up-to-date information on requesting an account. Once your account request has been received, a Research Computing Facilitator will contact you within one business day to arrange a meeting to learn about your computational goals and to create your account. +
    +
    + +
    +How do I change the project my jobs are affiliated with? +
    +The OSG team assigns individual user accounts to "projects" upon account creation. These projects are a way to track usage hours and capture information about the types of research running on OSG resources. A project typically corresponds to a research group headed by a single PI, but can sometimes represent a long-term multi-institutional project or some other grouping. If you only belong to a single project, that project will be charged automatically when you submit jobs. Run the following command to see a list of projects you belong to: +
    +
    +$ grep $USER /etc/condor/UserToProjectMap.txt +
    +
    +If you need to run jobs under a different project that you are a member of, you can manually set the project for those jobs by putting this option in the submit file: +
    +
    ++ProjectName="ProjectName" +
    +
    + +
    +Can I use my ACCESS allocation? +
    +There are two ways OSG interfaces with ACCESS: +
      +
    1. You can get an allocation for the OSPool. This will allow you to run + OSPool jobs and have the usage charged to your ACCESS credits, and can + be useful if you already have an allocation. If you only need to use + OSG resources, we recommend you come directly to our system.
    2. +
    3. You can manage your workloads on the OSPool access points, and run those + jobs on other ACCESS resources. This is a capability still in + development.
    4. +
    +
    +
    + +

    Workshops and Training

    +
    +Do you offer training sessions and workshops? +
    +We offer virtual trainings twice-a-month, as well as an annual, week-long summer school for OSG users. We also participate in additional external conferences and events throughout the year. Information about upcoming and past events, including workshop dates and locations, is available on our website. +
    +
    + +
    +Who may attend OSG workshops? +
    +Workshops are available to any researcher affiliated with a U.S. academic, non-profit, or government institution. +
    +
    + +
    +How to cite or acknowledge OSG? +
    +Whenever you make use of OSG resources, services or tools, we request you acknowledge OSG in your presentations and publications using the information provided on the Acknowledging the OSG Consortium page. +
    +
    + +

    Software

    +
    +What software packages are available? +
    +In general, we support most software that fits the distributed high throughput computing model (e.g., open source). Users are encouraged to download and install their own software on our Access Points. +
    +
    +Additionally, users may install their software into a Docker container which can run on OSG as an Apptainer image or use one of our existing containers. See the Software guides on the OSPool documentation website for more information. +
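    +As one illustration (a minimal sketch, not the only supported approach; the image shown is only an example, and your jobs still need the usual resource requests), a submit file can point HTCondor at a container image to run the job in:

     container_image = docker://ubuntu:22.04

    +See the Software guides mentioned above for the exact container mechanism recommended on the OSPool.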
    +
    + +
    +Are there any restrictions on installing commercial software? +
    +We can only *directly* support software that is freely distributable. At present, we do not have or support most commercial software due to licensing issues. (One exception is running MATLAB standalone executables which have been compiled with the MATLAB Compiler Runtime). Software that is licensed to individual users (and not to be shared between users) can be staged within the user's `/home` or `/protected` directories, but should not be staged in OSG's `/public` data staging locations. See OSPool policies for more information. Please get in touch with any questions about licensed software. +
    +
    + +
    +Can I request a system-wide installation of open source software useful for my research? +
    +We recommend users use Docker or Apptainer containers if jobs require system wide installations of software. Visit the OSPool Documentation website to learn more about creating your own container. +
    +
    + +

    Running Jobs

    +
    +What type of computation is a good match or NOT a good match for the OSG's Open Science Pool? +
    +The OSG provides computing resources through the Open Science Pool for high throughput computing workloads. You can get the most out of OSG resources by breaking up a single large computational task into many smaller tasks for the fastest overall turnaround. This approach can be invaluable in accelerating your computational work and thus your research. Please see our Computation on the Open Science Pool page for more details on how to determine if your work matches up well with OSG's high throughput computing model. +
    +
    + +
    +What job scheduler is being used on the Open Science Pool? +
    +We use a task scheduling software called HTCondor to schedule and run jobs. +
    +
    + +
    +How do I submit a computing job? +
    +Jobs are submitted via the HTCondor scheduler. Please see our Roadmap to HTC Workload Submission guide for more details on submitting and managing jobs. +
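    +As a minimal sketch (the file and script names are placeholders), a basic submit file and the command to submit it look like:

     # hello.sub -- a minimal example submit file
     executable     = hello.sh
     log            = hello.log
     output         = hello.out
     error          = hello.err
     +JobDurationCategory = "Medium"
     request_cpus   = 1
     request_memory = 1GB
     request_disk   = 1GB
     queue

    +Save this as hello.sub and submit it with:

     condor_submit hello.sub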
    +
    + +
    +How many jobs can I have in the queue? +
    +The number of jobs that are submitted to the queue by any one user cannot exceed 10,000 without adding a special statement to the submit file. If you have more jobs than that, we ask that you include the following statement in your submit file: +
    +
    +max_idle = 2000 +
    +
    +This is the maximum number of jobs that you will have in the "Idle" or "Held" state for the submitted batch of jobs at any given time. Using a value of 2000 will ensure that your jobs continue to apply a constant pressure on the queue, but will not fill up the queue unnecessarily (which helps the scheduler to perform optimally). +
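    +For example (a sketch with placeholder names), a submit file that queues many jobs while limiting how many sit idle at once might include:

     executable = analyze.sh
     max_idle   = 2000
     queue 50000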
    +
    + +
    +How do I view usage metrics for my project? +
    +The project's resource usage appears in the OSG accounting system, GRid ACcounting Collector (GRACC). Additional dashboards are available to help filter information of interest. +
    +
    +At the top of that dashboard, there is a set of filters that you can use to examine the number of hours used by your project, or your institution. +
    +
    + +
    +Why specify +JobDurationCategory in the HTCondor submit file? +
    +To maximize the value of the capacity contributed by the different organizations to the OSPool, users are requested to identify a duration category for their jobs. These categories should be selected based upon test jobs (run on the OSPool) and allow for more effective scheduling of the capacity contributed to the pool. +
    +
    +Every job submitted from an OSG-managed access point must be labeled with a Job Duration Category upon submission. By knowing the expected job duration, OSG will be able to direct longer-running jobs to resources that are faster and are interrupted less, while shorter jobs can run across more of the OSPool for better overall throughput. +
    +
    +Jobs with single executions longer than 20 hours in tests on the OSPool should not be submitted without self-checkpointing. +
    +
    +Details on how to specify +JobDurationCategory can be found in our Overview: Submit Jobs to the OSPool using HTCondor and Roadmap to HTC Workload Submission guides. +
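    +In short (a sketch; see the guides above for the full list of category names and their duration cutoffs), the label is a single line in the submit file, for example:

     +JobDurationCategory = "Medium"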
    +
    + +

    Data Storage and Transfer

    +
    +What is the best way to process large volume of data? +
    +There may be more than one solution available to researchers for processing large amounts of data. Contact a Facilitator for a free, individual consultation to learn about your options. +
    +
    + +
    +How do I transfer my data to and from OSG Access Points? +
    +You can transfer data using `scp`, `rsync`, or other common Unix tools. See Using scp To Transfer Files for more details. +
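    +For example (a sketch; the username alice and the file names are placeholders, and ap40.uw.osg-htc.org is just one possible Access Point), from your local machine you could run:

     # copy a local file to your home directory on the Access Point
     scp input_data.tar.gz alice@ap40.uw.osg-htc.org:~/
     # copy results back to your local machine
     scp alice@ap40.uw.osg-htc.org:~/results.tar.gz .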
    +
    + +
    +Is there any support for private data? +
    +Data stored in `/protected` and in `/home` is not publicly accessible. Sensitive data, such as HIPAA data, is not allowed to be uploaded or analyzed using OSG resources. +
    +
    + +
    +Is data backed up on OSG resources? +
    +Our data storage locations are not backed up nor are they intended for long-term storage. If the data is not being used for active computing work, it should not be stored on OSG systems. +
    +
    + +
    +Can I get a quota increase? +
    +Yes. Contact support@osg-htc.org if you think you'll need a quota increase for `/home`, `/public`, or `/protected` to accommodate a set of concurrently-running jobs. We can support very large amounts of data; the default quotas are just a starting point. +
    +
    + +
    +Will I get notified about hitting quota limits? +
    +The best place to see your quota status is in the login message. +
    +
    + +

    Workflow Management

    +
    +How do I run and manage complex workflows? +
    +For workflows that have multiple steps and/or multiple files, we advise using a workflow management system. A workflow management system allows you to define different computational steps in your workflow and indicate how inputs and outputs should be transferred between these steps. Once you define a workflow, the workflow management system will then run your workflow, automatically retrying failed jobs and transferring files between different steps. +
    +
    + +
    +What workflow management systems are recommended on OSG? +
    +We support DAGMan and Pegasus for workflow management. +
    +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/references/gracc/index.html b/overview/references/gracc/index.html new file mode 100644 index 00000000..d9b416b3 --- /dev/null +++ b/overview/references/gracc/index.html @@ -0,0 +1,2366 @@ + + + + + + + + + + + + + + + + + + OSG Accounting (GRACC) - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    OSG Accounting (GRACC)

    +

    GRACC is the Open Science Pool's accounting system. If you need graphs or high level statistics +on your OSG usage, please go to:

    +

    https://gracc.opensciencegrid.org/

    +

    GRACC contains an overwhelming amount of data. As an OSPool user, you are most +likely interested in seeing your own usage over time. This can be found under +the Open Science Pool - All Usage dashboard here

    +

    Under the Project drop-down, find your project. You can select multiple ones.

    +

    In the upper right corner, you can select a different time period, and you can also choose a different Bin size. For example, if you want data for the last year grouped monthly, select "Last 1 year" for the Period, and "1M" for the Bin size.

    +

    Here is an example of what the information provided will look like:

    +

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/overview/references/policy/index.html b/overview/references/policy/index.html new file mode 100644 index 00000000..6e141a48 --- /dev/null +++ b/overview/references/policy/index.html @@ -0,0 +1,2404 @@ + + + + + + + + + + + + + + + + + + Policies for Using OSG Services and the OSPool - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Policies for Using OSG Services and the OSPool

    +

    Access to OSG services and the Open Science Pool (OSPool) is contingent on compliance with the below and with any requests from OSG staff to change practices that cause issues for OSG systems and/or users. Please contact us if you have any questions! We can often help with exceptions to default policies and/or identify available alternative approaches to help you with a perceived barrier.

    +

    As the items below do not cover every possible scenario of potentially disruptive practices, OSG staff reserve the right to take any necessary corrective actions to ensure performance and resource availability for all users from OSG-managed Access Points. This may include the hold or removal of jobs, deletion of user data, deactivation of accounts, etc. In some cases, these actions may need to be taken without notifying the user.

    +
      +
    1. +

      By using the OSG resources, users are expected to follow the Open Science Pool acceptable use policy, which includes appropriate scope of use and common user security practices. OSG resources are only available to individuals affiliated with a US-based academic, government, or non-profit organization, or with a research project led by an affiliated sponsor.

      +
    2. +
    3. +

      Users can have up to 10,000 jobs queued, without taking additional steps, and should submit multiple jobs via a single submit file, according to our online guides. Please write to us if you’d like to easily submit more!

      +
    4. +
    5. +

      Do not run computationally-intensive or persistent processes on the Access Points (login nodes). Exceptions include single-threaded software compilation and data management tasks (transfer to/from the Access Point, directory creation, file moving/renaming, untar-ing, etc.). The execution of multi-threaded tasks for job setup, post-processing, or software testing will almost certainly cause performance issues and may result in loss of access. Software testing should be executed from within submitted jobs, where job scheduling also provides a more accurate test environment to the user without compromising performance of the Access Points. OSG staff reserve the right to kill any tasks running on the login nodes, in order to ensure performance for all users. Similarly, please contact us to discuss appropriate features and options, rather than running scripts (including cron) to automate job submission, throttling, resubmission, or ordered execution (e.g. workflows), even if these are executed remotely to coordinate work on OSG-managed Access Points. These almost always end up causing significant issues and/or wasted computing capacity, and we're happy to help you implement automation tools that integrate with HTCondor.

      +
    6. +
    7. +

      Data Policies: OSG-managed filesystems are not backed up and should be treated as temporary (“scratch”-like) space for active work only, following OSG policies for data storage and per-job transfers. Some OSG-managed storage spaces are truly ‘open’ with data available to be downloaded publicly. Of note:

      +
        +
      • Users should keep copies of essential data and software in non-OSG locations, as OSG staff reserve the right to remove data at any time in order to ensure and/or restore system availability, and without prior notice to users.
      • +
      • Proprietary data, HIPAA, and data with any other privacy concerns should not be stored on any OSG-managed filesystems or computed on using OSG-managed resources. Similarly, users should follow all licensing requirements when storing and executing software via OSG-managed Access Points.
      • +
      • Users should keep their /home directory privileges restricted to their user or group, and should not add ‘global’ permissions, which could allow other users to make your data public (see the permissions example following this policy list).
      • +
      • User-created ‘open’ network ports are disallowed, unless explicitly permitted following an accepted justification to support@osg-htc.org. (If you’re not sure whether something you want to do will open a port, just get in touch!)
      • +
      +
    8. +
    9. +

      The following actions may be taken automatically or by OSG staff to stop or prevent jobs from causing problems. Please contact us if you’d like help understanding why your jobs were held or removed, and so we can help you avoid problems in the future.

      +
        +
      • Jobs using more memory or disk than requested may be automatically held (see Scaling Up after Test Jobs for tips on requesting the ‘right’ amount of job resources in your submit file).
      • +
      • Jobs running longer than their JobDurationCategory allows for will be held (see Indicate the Job Duration Category of Your Jobs).
      • +
      • Jobs that have executed more than 30 times without completing may be automatically held (likely because they’re too long for OSG).
      • +
      • Jobs that have been held more than 14 days may be automatically removed.
      • +
      • Jobs queued for more than three months may be automatically removed.
      • +
      • Jobs otherwise causing known problems may be held or removed, without prior notification to the user.
      • +
      • Held jobs may also be edited to prevent automated release/retry.
      • +
      • NOTE: in order to respect user email clients, job holds and removals do not come with specific notification to the user, unless configured by the user at the time of submission using HTCondor’s ‘notification’ feature.
      • +
      +
    10. +
    + + +
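    Regarding the home-directory permissions mentioned in the data policies above, here is a minimal sketch for checking and tightening them from the Access Point (the exact mode you choose may differ):

     ls -ld ~      # show the current permissions on your home directory
     chmod 750 ~   # allow only you (and your group) to access it; no 'global' permissions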
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..8e683801 --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"","title":"Home"},{"location":"hpc_administration/test-document/","text":"Header 1 \u00b6 Header 2 \u00b6 Header 3 \u00b6 Header 4 \u00b6 Header 5 \u00b6 Header 6 \u00b6","title":"Header 1"},{"location":"hpc_administration/test-document/#header-1","text":"","title":"Header 1"},{"location":"hpc_administration/test-document/#header-2","text":"","title":"Header 2"},{"location":"hpc_administration/test-document/#header-3","text":"","title":"Header 3"},{"location":"hpc_administration/test-document/#header-4","text":"","title":"Header 4"},{"location":"hpc_administration/test-document/#header-5","text":"","title":"Header 5"},{"location":"hpc_administration/test-document/#header-6","text":"","title":"Header 6"},{"location":"hpc_administration/administrators/osg-flock/","text":"Submit Node Flocking to OSG \u00b6 This page has moved to https://opensciencegrid.org/docs/submit/osg-flock/","title":"Osg flock"},{"location":"hpc_administration/administrators/osg-flock/#submit-node-flocking-to-osg","text":"This page has moved to https://opensciencegrid.org/docs/submit/osg-flock/","title":"Submit Node Flocking to OSG"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/","text":"Simple Example of a DAGMan Workflow \u00b6 This guide walks you step-by-step through the construction and submission of a simple DAGMan workflow. We recommend this guide if you are interested in automating your job submissions. Overview \u00b6 In this guide: Introduction Structure of the DAG The Minimal DAG Input File The Submit Files Running the Simple DAG Monitoring the Simple DAG Wrapping Up For the full details on various DAGMan features, see the HTCondor manual pages: HTCondor's DAGMan Documentation 1. Introduction \u00b6 Consider the case of two HTCondor jobs that use the submit files A.sub and B.sub . Let's say that A.sub generates an output file ( output.txt ) that B.sub will analyze. To run this workflow manually, we would Submit the first HTCondor job with condor_submit A.sub . Wait for the first HTCondor job to complete successfully. Submit the second HTCondor job with condor_submit B.sub . If the first HTCondor job using A.sub is fairly short, then manually running this workflow is not a big deal. But if the first HTCondor job takes a long time to complete (maybe takes several hours to run, or has to wait for special resources), this can be very inconvenient. Instead, we can use DAGMan to automatically submit B.sub once the first HTCondor job using A.sub has completed successfully. This guide walks through the process of creating such a DAGMan workflow. 2. Structure of the DAG \u00b6 In this scenario, our workflow could be described as a DAG consisting of two nodes ( A.sub and B.sub ) connected by a single edge ( output.txt ). To represent this relationship, we will define nodes A and B - corresponding to A.sub and B.sub , respectively - and connect them with a line pointing from A and B , like in this figure: In order to use DAGMan to run this workflow, we need to communicate this structure to DAGMan via the .dag input file. 3. The Minimal DAG Input File \u00b6 Let's call the input file simple.dag . 
At minimum, the contents of the simple.dag input file are # simple.dag # Define the DAG jobs JOB A A.sub JOB B B.sub # Define the connections PARENT A CHILD B In a DAGMan input file, a node is defined using the JOB keyword, followed by the name of the node and the name of the corresponding submit file. In this case, we have created a node named A and instructed DAGMan to use the submit file A.sub for executing that node. We have similarly created node B and instructed DAGMan to use the submit file B.sub . (While there is no requirement that the name of the node match the name of the corresponding submit file, it is convenient to use a consistent naming scheme.) To connect the nodes, we use the PARENT .. CHILD .. syntax. Since node B requires that node A has completed successfully, we say that node A is the PARENT while node B is the CHILD . Note that we do not need to define why node B is dependent on node A , only that it is. 4. The Submit Files \u00b6 Now let's define simple examples of the submit files A.sub and B.sub . Node A \u00b6 First, the submit file A.sub uses the executable A.sh , which will generate the file called output.txt . We have explicitly told HTCondor to transfer back this file by using the transfer_output_files command. # A.sub executable = A.sh log = A.log output = A.out error = A.err transfer_output_files = output.txt +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1GB request_disk = 1GB queue The executable file simply saves the hostname of the machine running the script: #!/bin/bash # A.sh hostname > output.txt sleep 1m # so we can see the job in \"running\" status Node B \u00b6 Second, the submit file B.sub uses the executable B.sh to print a message using the contents of the output.txt file generated by A.sh . We have explicitly told HTCondor to transfer output.txt as an input file for this job, using the transfer_input_files command. Thus we have finally defined the \"edge\" that connects nodes A and B : the use of output.txt . # B.sub executable = B.sh log = B.log output = B.out error = B.err transfer_input_files = output.txt +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1GB request_disk = 1GB queue The executable file contains the command for printing the desired message, which will be printed to B.out . #!/bin/bash # B.sh echo \"The previous job was executed on the following machine:\" cat output.txt sleep 1m # so we can see the job in \"running\" status The directory structure \u00b6 Based on the contents of simple.dag , DAGMan is expecting that the submit files A.sub and B.sub are in the same directory as simple.dag . The submit files in turn are expecting A.sh and B.sh be in the same directory as A.sub and B.sub . Thus, we have the following directory structure: DAG_simple/ |-- A.sh |-- A.sub |-- B.sh |-- B.sub |-- simple.dag It is possible to organize each job into its own directory, but for now we will use this simple, flat organization. 5. Running the Simple DAG \u00b6 To run the DAG workflow described by simple.dag , we use the HTCondor command condor_submit_dag : condor_submit_dag simple.dag The DAGMan utility will then parse the input file and generate an assortment of related files that it will use for monitoring and managing your workflow. 
Here is the output of running the above command: [user@ap40 DAG_simple]$ condor_submit_dag simple.dag Loading classad userMap 'checkpoint_destination_map' ts=1699037029 from /etc/condor/checkpoint-destination-mapfile ----------------------------------------------------------------------- File for submitting this DAG to HTCondor : simple.dag.condor.sub Log of DAGMan debugging messages : simple.dag.dagman.out Log of HTCondor library output : simple.dag.lib.out Log of HTCondor library error messages : simple.dag.lib.err Log of the life of condor_dagman itself : simple.dag.dagman.log Submitting job(s). 1 job(s) submitted to cluster 562265. ----------------------------------------------------------------------- The output shows the list of standard files that are created with every DAG submission along with brief descriptions. A couple of additional files, some of them temporary, will be created during the lifetime of the DAG. 6. Monitoring the Simple DAG \u00b6 You can see the status of the DAG in your queue just like with any other HTCondor job submission. [user@ap40 DAG_simple]$ condor_q -- Schedd: ap40.uw.osg-htc.org : <128.105.68.92:9618?... @ 12/14/23 11:26:51 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS user simple.dag+562265 12/14 11:26 _ _ 1 2 562279.0 There are a couple of things to note about the condor_q output: The BATCH_NAME for the DAGMan job is the name of the input DAG file, simple.dag , plus the Job ID of the DAGMan scheduler job ( 562265 in this case): simple.dag+562265 . The total number of jobs for simple.dag+562265 corresponds to the total number of nodes in the DAG (2). Only 1 node is listed as \"Idle\", meaning that DAGMan has only submitted 1 job so far. This is consistent with the fact that node A has to complete before DAGMan can submit the job for node B . Note that if you are very quick to run your condor_q command after running your condor_submit_dag command, then you may see only the DAGMan scheduler job. It may take a few seconds for DAGMan to start up and submit the HTCondor job associated with the first node. To see more detailed information about the DAG workflow, use condor_q -nob -dag . For example, [user@ap40 DAG_simple]$ condor_q -dag -nob -- Schedd: ap40.uw.osg-htc.org : <128.105.68.92:9618?... @ 12/14/23 11:27:03 ID OWNER/NODENAME SUBMITTED RUN_TIME ST PRI SIZE CMD 562265.0 user 12/14 11:26 0+00:00:37 R 0 0.5 condor_dagman -p 0 -f -l . -Loc 562279.0 |-A 12/14 11:26 0+00:00:00 I 0 0.0 A.sh In this case, the first entry is the DAGMan scheduler job that you created when you first submitted the DAG. The following entries correspond to the nodes whose jobs are currently in the queue. Nodes that have not yet been submitted by DAGMan or that have completed and thus left the queue will not show up in your condor_q output. 7. Wrapping Up \u00b6 After waiting enough time, this simple DAG workflow should complete without any issues. But of course, that will not be the case for every DAG, especially as you start to create your own. DAGMan has a lot more features for managing and submitting DAG workflows, ranging from how to handle errors, combining DAG workflows, and restarting failed DAG workflows. For now, we recommend that you continue exploring DAGMan by going through our Intermediate DAGMan Tutorial . There is also our guide Overview: Submit Workflows with HTCondor's DAGMan , which contains links to more resources in the More Resources section. 
Finally, the definitive guide to DAGMan and DAG workflows is HTCondor's DAGMan Documentation .","title":"Simple Example of a DAGMan Workflow"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#simple-example-of-a-dagman-workflow","text":"This guide walks you step-by-step through the construction and submission of a simple DAGMan workflow. We recommend this guide if you are interested in automating your job submissions.","title":"Simple Example of a DAGMan Workflow"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#overview","text":"In this guide: Introduction Structure of the DAG The Minimal DAG Input File The Submit Files Running the Simple DAG Monitoring the Simple DAG Wrapping Up For the full details on various DAGMan features, see the HTCondor manual pages: HTCondor's DAGMan Documentation","title":"Overview"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#1-introduction","text":"Consider the case of two HTCondor jobs that use the submit files A.sub and B.sub . Let's say that A.sub generates an output file ( output.txt ) that B.sub will analyze. To run this workflow manually, we would Submit the first HTCondor job with condor_submit A.sub . Wait for the first HTCondor job to complete successfully. Submit the second HTCondor job with condor_submit B.sub . If the first HTCondor job using A.sub is fairly short, then manually running this workflow is not a big deal. But if the first HTCondor job takes a long time to complete (maybe takes several hours to run, or has to wait for special resources), this can be very inconvenient. Instead, we can use DAGMan to automatically submit B.sub once the first HTCondor job using A.sub has completed successfully. This guide walks through the process of creating such a DAGMan workflow.","title":"1. Introduction"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#2-structure-of-the-dag","text":"In this scenario, our workflow could be described as a DAG consisting of two nodes ( A.sub and B.sub ) connected by a single edge ( output.txt ). To represent this relationship, we will define nodes A and B - corresponding to A.sub and B.sub , respectively - and connect them with a line pointing from A and B , like in this figure: In order to use DAGMan to run this workflow, we need to communicate this structure to DAGMan via the .dag input file.","title":"2. Structure of the DAG"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#3-the-minimal-dag-input-file","text":"Let's call the input file simple.dag . At minimum, the contents of the simple.dag input file are # simple.dag # Define the DAG jobs JOB A A.sub JOB B B.sub # Define the connections PARENT A CHILD B In a DAGMan input file, a node is defined using the JOB keyword, followed by the name of the node and the name of the corresponding submit file. In this case, we have created a node named A and instructed DAGMan to use the submit file A.sub for executing that node. We have similarly created node B and instructed DAGMan to use the submit file B.sub . (While there is no requirement that the name of the node match the name of the corresponding submit file, it is convenient to use a consistent naming scheme.) To connect the nodes, we use the PARENT .. CHILD .. syntax. Since node B requires that node A has completed successfully, we say that node A is the PARENT while node B is the CHILD . Note that we do not need to define why node B is dependent on node A , only that it is.","title":"3. 
The Minimal DAG Input File"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#4-the-submit-files","text":"Now let's define simple examples of the submit files A.sub and B.sub .","title":"4. The Submit Files"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#node-a","text":"First, the submit file A.sub uses the executable A.sh , which will generate the file called output.txt . We have explicitly told HTCondor to transfer back this file by using the transfer_output_files command. # A.sub executable = A.sh log = A.log output = A.out error = A.err transfer_output_files = output.txt +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1GB request_disk = 1GB queue The executable file simply saves the hostname of the machine running the script: #!/bin/bash # A.sh hostname > output.txt sleep 1m # so we can see the job in \"running\" status","title":"Node A"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#node-b","text":"Second, the submit file B.sub uses the executable B.sh to print a message using the contents of the output.txt file generated by A.sh . We have explicitly told HTCondor to transfer output.txt as an input file for this job, using the transfer_input_files command. Thus we have finally defined the \"edge\" that connects nodes A and B : the use of output.txt . # B.sub executable = B.sh log = B.log output = B.out error = B.err transfer_input_files = output.txt +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1GB request_disk = 1GB queue The executable file contains the command for printing the desired message, which will be printed to B.out . #!/bin/bash # B.sh echo \"The previous job was executed on the following machine:\" cat output.txt sleep 1m # so we can see the job in \"running\" status","title":"Node B"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#the-directory-structure","text":"Based on the contents of simple.dag , DAGMan is expecting that the submit files A.sub and B.sub are in the same directory as simple.dag . The submit files in turn are expecting A.sh and B.sh be in the same directory as A.sub and B.sub . Thus, we have the following directory structure: DAG_simple/ |-- A.sh |-- A.sub |-- B.sh |-- B.sub |-- simple.dag It is possible to organize each job into its own directory, but for now we will use this simple, flat organization.","title":"The directory structure"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#5-running-the-simple-dag","text":"To run the DAG workflow described by simple.dag , we use the HTCondor command condor_submit_dag : condor_submit_dag simple.dag The DAGMan utility will then parse the input file and generate an assortment of related files that it will use for monitoring and managing your workflow. Here is the output of running the above command: [user@ap40 DAG_simple]$ condor_submit_dag simple.dag Loading classad userMap 'checkpoint_destination_map' ts=1699037029 from /etc/condor/checkpoint-destination-mapfile ----------------------------------------------------------------------- File for submitting this DAG to HTCondor : simple.dag.condor.sub Log of DAGMan debugging messages : simple.dag.dagman.out Log of HTCondor library output : simple.dag.lib.out Log of HTCondor library error messages : simple.dag.lib.err Log of the life of condor_dagman itself : simple.dag.dagman.log Submitting job(s). 1 job(s) submitted to cluster 562265. 
----------------------------------------------------------------------- The output shows the list of standard files that are created with every DAG submission along with brief descriptions. A couple of additional files, some of them temporary, will be created during the lifetime of the DAG.","title":"5. Running the Simple DAG"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#6-monitoring-the-simple-dag","text":"You can see the status of the DAG in your queue just like with any other HTCondor job submission. [user@ap40 DAG_simple]$ condor_q -- Schedd: ap40.uw.osg-htc.org : <128.105.68.92:9618?... @ 12/14/23 11:26:51 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS user simple.dag+562265 12/14 11:26 _ _ 1 2 562279.0 There are a couple of things to note about the condor_q output: The BATCH_NAME for the DAGMan job is the name of the input DAG file, simple.dag , plus the Job ID of the DAGMan scheduler job ( 562265 in this case): simple.dag+562265 . The total number of jobs for simple.dag+562265 corresponds to the total number of nodes in the DAG (2). Only 1 node is listed as \"Idle\", meaning that DAGMan has only submitted 1 job so far. This is consistent with the fact that node A has to complete before DAGMan can submit the job for node B . Note that if you are very quick to run your condor_q command after running your condor_submit_dag command, then you may see only the DAGMan scheduler job. It may take a few seconds for DAGMan to start up and submit the HTCondor job associated with the first node. To see more detailed information about the DAG workflow, use condor_q -nob -dag . For example, [user@ap40 DAG_simple]$ condor_q -dag -nob -- Schedd: ap40.uw.osg-htc.org : <128.105.68.92:9618?... @ 12/14/23 11:27:03 ID OWNER/NODENAME SUBMITTED RUN_TIME ST PRI SIZE CMD 562265.0 user 12/14 11:26 0+00:00:37 R 0 0.5 condor_dagman -p 0 -f -l . -Loc 562279.0 |-A 12/14 11:26 0+00:00:00 I 0 0.0 A.sh In this case, the first entry is the DAGMan scheduler job that you created when you first submitted the DAG. The following entries correspond to the nodes whose jobs are currently in the queue. Nodes that have not yet been submitted by DAGMan or that have completed and thus left the queue will not show up in your condor_q output.","title":"6. Monitoring the Simple DAG"},{"location":"htc_workloads/automated_workflows/dagman-simple-example/#7-wrapping-up","text":"After waiting enough time, this simple DAG workflow should complete without any issues. But of course, that will not be the case for every DAG, especially as you start to create your own. DAGMan has a lot more features for managing and submitting DAG workflows, ranging from how to handle errors, combining DAG workflows, and restarting failed DAG workflows. For now, we recommend that you continue exploring DAGMan by going through our Intermediate DAGMan Tutorial . There is also our guide Overview: Submit Workflows with HTCondor's DAGMan , which contains links to more resources in the More Resources section. Finally, the definitive guide to DAGMan and DAG workflows is HTCondor's DAGMan Documentation .","title":"7. Wrapping Up"},{"location":"htc_workloads/automated_workflows/dagman-workflows/","text":"Overview: Submit Workflows with HTCondor's DAGMan \u00b6 If you want to automate job submission, keep reading to learn about HTCondor's DAGMan utility. Overview \u00b6 In this guide: Introduction What is DAGMan? 
The Basics of the DAG Input File Running a DAG Workflow DAGMan Features More Resources Introduction \u00b6 If your work requires jobs that run in a particular sequence, you may benefit from a workflow tool that submits and monitors jobs for you in the correct order. HTCondor has a built in utility called \"DAGMan\" that automates the job submission of such a workflow. This talk (originally presented at HTCondor Week 2020) gives a good introduction to DAGMan and its most useful features: DAGMan can be a powerful tool for creating large and complex HTCondor workflows. What is DAGMan? \u00b6 DAGMan is short for \"DAG Manager\", and is a utility built into HTCondor for automatically running a workflow (DAG) of jobs, where the results of an earlier job are required for running a later job. This workflow is similar to a flowchart with a definite beginning and ending. More specificially, \"DAG\" is an acronym for Directed Acyclic Graph , a concept from the mathematic field of graph theory: Graph: a collection of points (\"nodes\" or \"vertices\") connected to each other by lines (\"edges\"). Directed: the edges between nodes have direction, that is, each edge begins on one node and ends on a different node. Acyclic: the graph does not have a cycle - or loop - where the graph returns to a previous node. By using a directed acyclic graph, we can guarantee that the workflow has a defined 'start' and 'end'. In DAGMan, each node in the workflow corresponds to a job submission (i.e., condor_submit ). Each edge in the workflow corresponds to a set of files that are the output of one job submission and the input of another job submission. For convenience, we refer to such a workflow and the files necessary to execute it as \"the DAG\". The Basics of the DAG Input File \u00b6 The purpose of the DAG input file (typically .dag ) is to instruct DAGMan on the structure of the workflow you want to run. Additional instructions can be included in the DAG input file about how to manage the job submissions, rerun jobs (nodes), or to run pre- or post-processing scripts. In general, the structure of the .dag input file consists of one instruction per line, with each line starting with a keyword defining the type of instruction. 1. Defining the DAG jobs \u00b6 To define a DAG job, we begin a new line with JOB then provide the name, the submit file, and any additional options. The syntax is JOB JobName JobSubmitFile [additional options] where you need to replace JobName with the name you would like the DAG job to have, and JobSubmitFile with the name or path of the corresponding submit file. Both JobName and JobSubmitFile need to be specified. Every node in your workflow must have a JOB entry in the .dag input file. While there are other instructions that can reference a particular node, they will only work if the node in question has a corresponding JOB entry. 2. Defining the connections \u00b6 To define the relationship between DAG jobs in a workflow, we begin a new line with PARENT then the name of the first DAG job, followed by CHILD and the name of the second DAG job. That is, the PARENT DAG job must complete successfully before DAGMan will submit the CHILD DAG job. In fact, you can define such relationship for many DAG jobs (nodes) at the same time. Thus, the syntax is PARENT p1 [p2 ...] CHILD c1 [c2 ...] where you replace p# with the JobName for each parent DAG job, and c# with the JobName for each child DAG job. The child DAG jobs will only be submitted if all of the parent DAG jobs are completed successfully. 
Each JobName you provide must have a corresponding JOB entry elsewhere in the .dag input file. Technically, DAGMan does not require that each DAG job in a workflow is connected to another DAG job. This allows you to submit many unrelated DAG jobs at one time using DAGMan. Note that in defining the PARENT - CHILD relationship, there is no definition of how they are related. Effectively, DAGMan does not need to know the reason why the PARENT DAG jobs must complete successfully in order to submit the CHILD DAG jobs. There can be many reasons why you might want to execute the DAG jobs in this order, although the most common reason is that the PARENT DAG jobs create files that are required by the CHILD DAG jobs. In that case, it is up to you to organize the submit files of those DAG jobs in such a way that the output of the PARENT DAG jobs can be used as the input of the CHILD DAG jobs. In the DAGMan Features section, we will discuss tools that can assist you with this endeavor. Running a DAG Workflow \u00b6 1. Submitting the DAG \u00b6 Because the DAG workflow represents a special type of job, a special command is used to submit it. To submit the DAG workflow, use condor_submit_dag example.dag where example.dag is the name of your DAG input file containing the JOB and PARENT - CHILD definitions for your workflow. This will create and submit a \"DAGMan job\" that will in turn be responsible for submitting and monitoring the job nodes described in your DAG input file. A set of files is created for every DAG submission, and the output of the condor_submit_dag lists the files with a brief description. For the above submit command, the output will look like: ------------------------------------------------------------------------ File for submitting this DAG to HTCondor : example.dag.condor.sub Log of DAGMan debugging messages : example.dag.dagman.out Log of HTCondor library output : example.dag.lib.out Log of HTCondor library error messages : example.dag.lib.err Log of the life of condor_dagman itself : example.dag.dagman.log Submitting job(s). 1 job(s) submitted to cluster ######. ------------------------------------------------------------------------ 2. Monitoring the DAG \u00b6 The DAGMan job is actually a \"scheduler\" job (described by example.dag.condor.sub ) and the status and progress of the DAGMan job is saved to example.dag.dagman.out . Using condor_q or condor_watch_q , the DAGMan job will be under the name example.dag+###### , where ###### is the Cluster ID of the DAGMan scheduler job. Each job submitted by DAGMan, however, will be assigned a separate Cluster ID. For a more detailed status display, you can use condor_q -dag -nobatch If you want to see the status of just the DAGMan job proper, use condor_q -dag -nobatch -constr 'JobUniverse == 7' (Technically, this shows all \"scheduler\" type HTCondor jobs, but for most users this will only include DAGMan jobs.) For even more details about the execution of the DAG workflow, you can examine the contents of the example.dag.dagman.out file. The file contains timestamped log information of the execution and status of nodes in the DAG, along with statistics. As the DAG progresses, it will also create the files example.dag.metrics and example.dag.nodes.log , where the metrics file contains the current statistics of the DAG and the log file is an aggregate of the individual nodes' user log files. 
If you want to see the status of a specific node, use condor_q -dag -nobatch -constr 'DAGNodeName == \"YourNodeName\"' where YourNodeName should be replaced with the name of the node you want to know the status of. Note that this works only for jobs that are currently in the queue; if the node has not yet been submitted, or if it has completed and thus exited the queue, then you will not see the node using this command. To see if the node has completed, you should examine the contents of the .dagman.out file. A simple way to see the relevant log messages is to use a command like grep \"Node YourNodeName\" example.dag.dagman.out If you'd like to monitor the status of the individual nodes in your DAG workflow using condor_watch_q , then wait long enough for the .nodes.log file to be generated. Then run condor_watch_q -file example.dag.nodes.log Now condor_watch_q will update when DAGMan submits another job. 3. Removing the DAG \u00b6 To remove the DAG, you need to condor_rm the Cluster ID corresponding to the DAGMan scheduler job. This will also remove the jobs that the DAGMan scheduler job submitted as part of executing the DAG workflow. A removed DAG is almost always marked as a failed DAG, and as such will generate a rescue DAG (see below). DAGMan Features \u00b6 1. Pre- and post-processing for DAG jobs \u00b6 You can tell DAGMan to execute a script before or after it submits the HTCondor job for a particular node. Such a script will be executed on the submit server itself and can be used to set up the files needed for the HTCondor job, or to clean up or validate the files after a successful HTCondor job. The instructions for executing these scripts are placed in the input .dag file. You must specify the name of the node the script is attached to and whether the script is to be executed before ( PRE ) or after ( POST ) the HTCondor job. Here is a simple example: # Define the node (required) (example node named \"my_node\") JOB my_node run.sub # Define the script for executing before submitting run.sub (optional) SCRIPT PRE my_node setup.sh # Define a script for executing after run.sub has completed (optional) SCRIPT POST my_node cleanup.sh In this example, when it is time for DAGMan to execute the node my_node , it will take the following steps: Execute setup.sh (the PRE script) Submit the HTCondor job run.sub (the node's JOB ) Wait for the HTCondor job to complete Execute cleanup.sh (the POST script) All of these steps count as part of DAGMan's attempt to execute the node my_node and may affect whether DAGMan considers the node to have succeeded or failed. For more information on PRE and POST scripts as well as other scripts that DAGMan can use, see the HTCondor documentation . 2. Retrying failed nodes \u00b6 You can tell DAGMan to automatically retry a node if it fails. This way you don't have to manually restart the DAG if the job failed due to a transient issue. The instructions for how many times to retry a node go in the input .dag file. You must specify the node and the maximum number of times that DAGMan should attempt to retry that node. Here is a simple example: # Define the node (required) (example node named \"my_node\") JOB my_node run.sub # Define the number of times to retry \"my_node\" RETRY my_node 2 In this example, if the job associated with node my_node fails for some reason, then DAGMan will resubmit run.sub up to 2 more times. You can also apply the retry for statement to all nodes in the DAG by specifying ALL_NODES instead of a specific node name. 
For example, RETRY ALL_NODES 2 As a general rule, you should not set the number of retry attempts to more than 1 or 2 times. If a job is failing repeatedly, it is better to troubleshoot the cause of that failure. This is especially true when applying the RETRY statement to all of the nodes in your DAG. DAGMan considers the exit code of the last executed step when determining the success or failure of the node overall. There are various possible combinations that can determine the success or failure of the node itself, as discussed in the HTCondor documentation here . DAGMan only considers the success/failure of the node as a whole when deciding if it needs to attempt a retry. Importantly, if the .sub file for a node submits multiple HTCondor jobs, when any one of those jobs fails, DAGMan considers all of the jobs to have failed and will remove them from the queue. Finally, note that DAGMan does not consider an HTCondor job with a \"hold\" status as being completed. In that case, you can include a command in the submit file to automatically remove a held job from the queue. When a job is removed from the queue, DAGMan considers that job to be failed (though as noted above, failure of the HTCondor job does not necessarily mean the node has failed). For more information on the RETRY statement, see the HTCondor documentation . 3. Restarting a failed DAG \u00b6 Generally, a DAG is considered failed if any one of its component nodes has failed. That does not mean, however, that DAGMan immediately stops the DAG. Instead, when DAGMan encounters a failed node, it will attempt to complete as much of the DAG as possible that does not require that node. Only then will DAGMan stop running the workflow. When the DAGMan job exits from a failed DAG, it generates a report of the status of the nodes in a file called a \"Rescue DAG\" with the extension .rescue### , starting from .rescue001 and counting up each time a Rescue DAG is generated. The Rescue DAG can then be used by DAGMan to restart the DAG, skipping over nodes that are marked as completed successfully and jumping directly to the failed nodes that need to be resubmitted. The power of this feature is that DAGMan will not duplicate the work of already completed nodes, which is especially useful when there is an issue at the end of a large DAG. DAGMan will automatically use a Rescue DAG if it exists when you use condor_submit_dag to submit the original .dag input file. If more than one Rescue DAG exists for a given .dag input file, then DAGMan will use the most recent Rescue DAG (the one with the highest number at the end of .rescue### ). # Automatically use the Rescue DAG if it exists condor_submit_dag example.dag If you do NOT want DAGMan to use an existing Rescue DAG, then you can use the -force option to start the DAG completely from scratch: # Do NOT use the Rescue DAG if it exists condor_submit_dag -force example.dag For more information on Rescue DAGs and how to explicitly control them, see the HTCondor documentation . If the DAGMan scheduler job itself crashes (or is placed on hold) and is unable to write a Rescue DAG, then when the DAGMan job is resubmitted (or released), DAGMan will go into \"recovery mode\". Essentially this involves DAGMan reconstructing the Rescue DAG that should have been written, but wasn't due to the job interruption. DAGMan will then resume the DAG based on its analysis of the files that do exist.
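Putting these features together, the DAGMan-specific parts of a small workflow's input file might look like the following sketch (the node, submit file, and script names are illustrative):

JOB my_node run.sub
SCRIPT PRE my_node setup.sh
SCRIPT POST my_node cleanup.sh
RETRY my_node 2
JOB final_node final.sub
PARENT my_node CHILD final_node

If my_node still fails after its two retries, final_node is never submitted, and a Rescue DAG is generated when the DAGMan job exits.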
More Resources \u00b6 Tutorials \u00b6 If you are interested in using DAGMan to automatically run a workflow, we highly recommend that you first go through our tutorial Simple Example of a DAG Workflow . This tutorial takes you step by step through the mechanics of creating and submitting a DAG. Once you've understood the basics from the simple tutorial, you are ready to explore more examples and scenarios in our Intermediate DAGMan Tutorial . Trainings & Videos \u00b6 A recent live training covering the materials in the Intermediate DAGMan Tutorial was held by the current lead developer for HTCondor's DAGMan utility: DAGMan: HTCondor's Workflow Manager . An introductory tutorial to DAGMan previously presented at HTCondor Week was recorded and is available on YouTube: HTCondor DAGMan Workflows tutorial . More recently, the current lead developer of HTCondor's DAGMan utility gave an intermediate tutorial: HTC23 DAGMan intermediate . Documentation \u00b6 HTCondor's DAGMan Documentation The HTCondor documentation is the definitive guide to DAGMan and contains a wealth of information about DAGMan, its features, and its behaviors.","title":"Overview: Submit Workflows with HTCondor's DAGMan"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#overview-submit-workflows-with-htcondors-dagman","text":"If you want to automate job submission, keep reading to learn about HTCondor's DAGMan utility.","title":"Overview: Submit Workflows with HTCondor's DAGMan"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#overview","text":"In this guide: Introduction What is DAGMan? The Basics of the DAG Input File Running a DAG Workflow DAGMan Features More Resources","title":"Overview"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#introduction","text":"If your work requires jobs that run in a particular sequence, you may benefit from a workflow tool that submits and monitors jobs for you in the correct order. HTCondor has a built in utility called \"DAGMan\" that automates the job submission of such a workflow. This talk (originally presented at HTCondor Week 2020) gives a good introduction to DAGMan and its most useful features: DAGMan can be a powerful tool for creating large and complex HTCondor workflows.","title":"Introduction"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#what-is-dagman","text":"DAGMan is short for \"DAG Manager\", and is a utility built into HTCondor for automatically running a workflow (DAG) of jobs, where the results of an earlier job are required for running a later job. This workflow is similar to a flowchart with a definite beginning and ending. More specificially, \"DAG\" is an acronym for Directed Acyclic Graph , a concept from the mathematic field of graph theory: Graph: a collection of points (\"nodes\" or \"vertices\") connected to each other by lines (\"edges\"). Directed: the edges between nodes have direction, that is, each edge begins on one node and ends on a different node. Acyclic: the graph does not have a cycle - or loop - where the graph returns to a previous node. By using a directed acyclic graph, we can guarantee that the workflow has a defined 'start' and 'end'. In DAGMan, each node in the workflow corresponds to a job submission (i.e., condor_submit ). Each edge in the workflow corresponds to a set of files that are the output of one job submission and the input of another job submission. 
For convenience, we refer to such a workflow and the files necessary to execute it as \"the DAG\".","title":"What is DAGMan?"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#the-basics-of-the-dag-input-file","text":"The purpose of the DAG input file (typically .dag ) is to instruct DAGMan on the structure of the workflow you want to run. Additional instructions can be included in the DAG input file about how to manage the job submissions, rerun jobs (nodes), or to run pre- or post-processing scripts. In general, the structure of the .dag input file consists of one instruction per line, with each line starting with a keyword defining the type of instruction.","title":"The Basics of the DAG Input File"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#1-defining-the-dag-jobs","text":"To define a DAG job, we begin a new line with JOB then provide the name, the submit file, and any additional options. The syntax is JOB JobName JobSubmitFile [additional options] where you need to replace JobName with the name you would like the DAG job to have, and JobSubmitFile with the name or path of the corresponding submit file. Both JobName and JobSubmitFile need to be specified. Every node in your workflow must have a JOB entry in the .dag input file. While there are other instructions that can reference a particular node, they will only work if the node in question has a corresponding JOB entry.","title":"1. Defining the DAG jobs"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#2-defining-the-connections","text":"To define the relationship between DAG jobs in a workflow, we begin a new line with PARENT then the name of the first DAG job, followed by CHILD and the name of the second DAG job. That is, the PARENT DAG job must complete successfully before DAGMan will submit the CHILD DAG job. In fact, you can define such relationship for many DAG jobs (nodes) at the same time. Thus, the syntax is PARENT p1 [p2 ...] CHILD c1 [c2 ...] where you replace p# with the JobName for each parent DAG job, and c# with the JobName for each child DAG job. The child DAG jobs will only be submitted if all of the parent DAG jobs are completed successfully. Each JobName you provide must have a corresponding JOB entry elsewhere in the .dag input file. Technically, DAGMan does not require that each DAG job in a workflow is connected to another DAG job. This allows you to submit many unrelated DAG jobs at one time using DAGMan. Note that in defining the PARENT - CHILD relationship, there is no definition of how they are related. Effectively, DAGMan does not need to know the reason why the PARENT DAG jobs must complete successfully in order to submit the CHILD DAG jobs. There can be many reasons why you might want to execute the DAG jobs in this order, although the most common reason is that the PARENT DAG jobs create files that are required by the CHILD DAG jobs. In that case, it is up to you to organize the submit files of those DAG jobs in such a way that the output of the PARENT DAG jobs can be used as the input of the CHILD DAG jobs. In the DAGMan Features section, we will discuss tools that can assist you with this endeavor.","title":"2. 
Defining the connections"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#running-a-dag-workflow","text":"","title":"Running a DAG Workflow"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#1-submitting-the-dag","text":"Because the DAG workflow represents a special type of job, a special command is used to submit it. To submit the DAG workflow, use condor_submit_dag example.dag where example.dag is the name of your DAG input file containing the JOB and PARENT - CHILD definitions for your workflow. This will create and submit a \"DAGMan job\" that will in turn be responsible for submitting and monitoring the job nodes described in your DAG input file. A set of files is created for every DAG submission, and the output of the condor_submit_dag lists the files with a brief description. For the above submit command, the output will look like: ------------------------------------------------------------------------ File for submitting this DAG to HTCondor : example.dag.condor.sub Log of DAGMan debugging messages : example.dag.dagman.out Log of HTCondor library output : example.dag.lib.out Log of HTCondor library error messages : example.dag.lib.err Log of the life of condor_dagman itself : example.dag.dagman.log Submitting job(s). 1 job(s) submitted to cluster ######. ------------------------------------------------------------------------","title":"1. Submitting the DAG"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#2-monitoring-the-dag","text":"The DAGMan job is actually a \"scheduler\" job (described by example.dag.condor.sub ) and the status and progress of the DAGMan job is saved to example.dag.dagman.out . Using condor_q or condor_watch_q , the DAGMan job will be under the name example.dag+###### , where ###### is the Cluster ID of the DAGMan scheduler job. Each job submitted by DAGMan, however, will be assigned a separate Cluster ID. For a more detailed status display, you can use condor_q -dag -nobatch If you want to see the status of just the DAGMan job proper, use condor_q -dag -nobatch -constr 'JobUniverse == 7' (Technically, this shows all \"scheduler\" type HTCondor jobs, but for most users this will only include DAGMan jobs.) For even more details about the execution of the DAG workflow, you can examine the contents of the example.dag.dagman.out file. The file contains timestamped log information of the execution and status of nodes in the DAG, along with statistics. As the DAG progresses, it will also create the files example.dag.metrics and example.dag.nodes.log , where the metrics file contains the current statistics of the DAG and the log file is an aggregate of the individual nodes' user log files. If you want to see the status of a specific node, use condor_q -dag -nobatch -constr 'DAGNodeName == \"YourNodeName\"' where YourNodeName should be replaced with the name of the node you want to know the status of. Note that this works only for jobs that are currently in the queue; if the node has not yet been submitted, or if it has completed and thus exited the queue, then you will not see the node using this command. To see if the node has completed, you should examine the contents of the .dagman.out file. A simple way to see the relevant log messages is to use a command like grep \"Node YourNodeName\" example.dag.dagman.out If you'd like to monitor the status of the individual nodes in your DAG workflow using condor_watch_q , then wait long enough for the .nodes.log file to be generated. 
Then run condor_watch_q -file example.dag.nodes.log Now condor_watch_q will update when DAGMan submits another job.","title":"2. Monitoring the DAG"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#3-removing-the-dag","text":"To remove the DAG, you need to condor_rm the Cluster ID corresponding to the DAGMan scheduler job. This will also remove the jobs that the DAGMan scheduler job submitted as part of executing the DAG workflow. A removed DAG is almost always marked as a failed DAG, and as such will generate a rescue DAG (see below).","title":"3. Removing the DAG"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#dagman-features","text":"","title":"DAGMan Features"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#1-pre-and-post-processing-for-dag-jobs","text":"You can tell DAGMan to execute a script before or after it submits the HTCondor job for a particular node. Such a script will be executed on the submit server itself and can be used to set up the files needed for the HTCondor job, or to clean up or validate the files after a successful HTCondor job. The instructions for executing these scripts are placed in the input .dag file. You must specify the name of the node the script is attached to and whether the script is to be executed before ( PRE ) or after ( POST ) the HTCondor job. Here is a simple example: # Define the node (required) (example node named \"my_node\") JOB my_node run.sub # Define the script for executing before submitting run.sub (optional) SCRIPT PRE my_node setup.sh # Define a script for executing after run.sub has completed (optional) SCRIPT POST my_node cleanup.sh In this example, when it is time for DAGMan to execute the node my_node , it will take the following steps: Execute setup.sh (the PRE script) Submit the HTCondor job run.sub (the node's JOB ) Wait for the HTCondor job to complete Execute cleanup.sh (the POST script) All of these steps count as part of DAGMan's attempt to execute the node my_node and may affect whether DAGMan considers the node to have succeeded or failed. For more information on PRE and POST scripts as well as other scripts that DAGMan can use, see the HTCondor documentation .","title":"1. Pre- and post-processing for DAG jobs"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#2-retrying-failed-nodes","text":"You can tell DAGMan to automatically retry a node if it fails. This way you don't have to manually restart the DAG if the job failed due to a transient issue. The instructions for how many times to retry a node go in the input .dag file. You must specify the node and the maximum number of times that DAGMan should attempt to retry that node. Here is a simple example: # Define the node (required) (example node named \"my_node\") JOB my_node run.sub # Define the number of times to retry \"my_node\" RETRY my_node 2 In this example, if the job associated with node my_node fails for some reason, then DAGMan will resubmit run.sub up to 2 more times. You can also apply the retry for statement to all nodes in the DAG by specifying ALL_NODES instead of a specific node name. For example, RETRY ALL_NODES 2 As a general rule, you should not set the number of retry attempts to more than 1 or 2 times. If a job is failing repeatedly, it is better to troubleshoot the cause of that failure. This is especially true when you applying the RETRY statement to all of the nodes in your DAG. 
DAGMan considers the exit code of the last executed step when it considers the success or failure of the node overall. There are various possible combinations that can determine the success or failure of the node itself, as discussed in the HTCondor documentation here . DAGMan only considers the success/failure of the node as a whole when deciding if it needs to attempt a retry. Importantly, if the .sub file for a node submits multiple HTCondor jobs, when any one of those jobs fails, DAGMan considers all of the jobs to have failed and will remove them from queue. Finally, note that DAGMan does not consider an HTCondor job with a \"hold\" status as being completed. In that case, you can include a command in the submit file to automatically remove a held job from the queue. When a job is removed from the queue, DAGMan considers that job to be failed (though as noted above, failure of the HTCondor job does not necessarily mean the node has failed). For more information on the RETRY statement, see the HTCondor documentation .","title":"2. Retrying failed nodes"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#3-restarting-a-failed-dag","text":"Generally, a DAG is considered failed if any one of its component nodes has failed. That does not mean, however, that DAGMan immediately stops the DAG. Instead, when DAGMan encounters a failed node, it will attempt to complete as much of the DAG as possible that does not require that node. Only then will DAGMan stop running the workflow. When the DAGMan job exits from a failed DAG, it generates a report of the status of the nodes in a file called a \"Rescue DAG\" with the extension .rescue### , starting from .rescue001 and counting up each time a Rescue DAG is generated. The Rescue DAG can then be used by DAGMan to restart the DAG, skipping over nodes that are marked as completed successfully and jumping directly to the failed nodes that need to be resubmitted. The power of this feature is that DAGMan will not duplicate the work of already completed nodes, which is especially useful when there is an issue at the end of a large DAG. DAGMan will automatically use a Rescue DAG if it exists when you use condor_submit_dag to submit the original .dag input file. If more than one Rescue DAG exists for a given .dag input file, then DAGMan will use the most recent Rescue DAG (the one with the highest number at the end of .rescue### ). # Automatically use the Rescue DAG if it exists condor_submit_dag example.dag If you do NOT want DAGMan to use an existing Rescue DAG, then you can use the `-force` option to start the DAG completely from scratch: # Do NOT use the Rescue DAG if it exists condor_submit_dag -force example.dag For more information on Rescue DAGs and how to explicitly control them, see the HTCondor documentation . If the DAGMan scheduler job itself crashes (or is placed on hold) and is unable to write a Rescue DAG, then when the DAGMan job is resubmitted (or released), DAGMan will go into \"recovery mode\". Essentially this involves DAGMan reconstructing the Rescue DAG that should have been written, but wasn't due to the job interruption. DAGMan will then resume the DAG based on its analysis of the files that do exist.","title":"3. 
Restarting a failed DAG"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#more-resources","text":"","title":"More Resources"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#tutorials","text":"If you are interested in using DAGMan to automatically run a workflow, we highly recommend that you first go through our tutorial Simple Example of a DAG Workflow . This tutorial takes you step by step through the mechanics of creating and submitting a DAG. Once you've understood the basics from the simple tutorial, you are ready to explore more examples and scenarios in our Intermediate DAGMan Tutorial .","title":"Tutorials"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#trainings-videos","text":"A recent live training covering the materials in the Intermediate DAGMan Tutorial was held by the current lead developer for HTCondor's DAGMan utility: DAGMan: HTCondor's Workflow Manager . An introductory tutorial to DAGMan previously presented at HTCondor Week was recorded and is available on YouTube: HTCondor DAGMan Workflows tutorial . More recently, the current lead developer of HTCondor's DAGMan utility gave an intermediate tutorial: HTC23 DAGMan intermediate .","title":"Trainings & Videos"},{"location":"htc_workloads/automated_workflows/dagman-workflows/#documentation","text":"HTCondor's DAGMan Documentation The HTCondor documentation is the definitive guide to DAGMan and contains a wealth of information about DAGMan, its features, and its behaviors.","title":"Documentation"},{"location":"htc_workloads/automated_workflows/tutorial-dagman-intermediate/","text":"Intermediate DAGMan: Uses and Features \u00b6 This tutorial helps you explore HTCondor's DAGMan and its many features. You can download the tutorial materials with the following command: $ git clone https://github.com/OSGConnect/tutorial-dagman-intermediate Now move into the new directory to see the contents of the tutorial: $ cd tutorial-dagman-intermediate At the top level is a worked example of a \"Diamond DAG\" that summarizes the basic components of creating, submitting, and managing DAGMan workflows. In the lower-level additional_examples directory are more worked examples with their own READMEs highlighting specific features that can be used with DAGMan. Brief descriptions of these examples are provided in the Additional Examples section at the end of this tutorial. Before working on this tutorial, we recommend that you read through our other DAGMan guides: Overview: Submit Workflows with HTCondor's DAGMan Simple Example of a DAGMan Workflow The definitive guide to DAGMan is HTCondor's DAGMan Documentation . Types of DAGs \u00b6 While any workflow that satisfies the definition of a \"Directed Acyclic Graph\" (DAG) can be executed using DAGMan, there are certain types that are most commonly used: Sequential DAG : all the nodes are connected in a sequence of one after the other, with no branching or splitting. This is good for conducting increasingly refined analyses of a dataset or initial result, or chaining together a long-running calculation. The simplest example of this type is used in the guide Simple Example of a DAGMan Workflow . Split and recombine DAG : the first node is connected to many nodes of the same layer (split) which then all connect back to the final node (recombine). Here, you can set up the shared environment in the first node and use it to parallelize the work into many individual jobs, then finally combine/analyze the results in the final node.
The simplest example of this type is the \"Diamond DAG\" - the subject of this tutorial. Collection DAG : no node is connected to any other node. This is good for the situation where you need to run a bunch of otherwise unrelated jobs, perhaps ones that are competing for a limited resource. The simplest example of this type is a DAG consisting of a single node. These types are by no means \"official\", nor are they the only types of structure that a DAG can take. Rather, they serve as starting points from which you can build your own DAG workflow, which will likely consist of some combination of the above elements. The Diamond DAG \u00b6 As mentioned above, the \"Diamond DAG\" is the simplest example of a \"split and recombine\" DAG. In this case, the first node TOP is connected to two nodes LEFT and RIGHT (the \"split\"), which are then connected to the final node BOTTOM (the \"recombine\"). To describe the flow of the DAG and the parts needed to execute it, DAGMan uses a custom description language in an input file, typically named .dag . The two most important commands in the DAG description language are: JOB - Describes a node and the submit file it will use to run the node. PARENT CHILD - Describes the edge starting from and pointing to . These commands have been used to construct the Diamond DAG and are saved in the file diamond.dag . To view the contents of diamond.dag , run $ cat diamond.dag Before you continue, we recommend that you closely examine the contents of diamond.dag and identify its components. Furthermore, try to identify the submit file for each node, and use that submit file to determine the nature of the HTCondor job that will be submitted for each node. Submitting a DAG \u00b6 To submit a DAGMan workflow to HTCondor, you can use one of the following commands: $ condor_submit_dag diamond.dag or $ htcondor dag submit diamond.dag What Happens? \u00b6 When a DAG is submitted to HTCondor a special job is created to run DAGMan on behalf of you the user. This job runs the provided HTCSS DAGMan executable in the AP job queue. This is an actual job that can be queried and acted upon. You may also notice that lots of files are created. These files are all part of DAGMan and have various purposes. In general, the files that should always exist are as follows: DAGMan job proper files .condor.sub - Submit file for the DAGMan job proper .dagman.log - Job event log file for the DAGMan job proper .lib.err - Standard error stream file for the DAGMan job proper .lib.out - Standard output stream file for the DAGMan job proper Informational DAGMan files .dagman.out - General DAGMan process logging file .nodes.log - Collective job event log file for all managed jobs (Heart of DAGMan) .metrics - JSON formatted information about the DAG Of these files, the two most important are the .dagman.out and .nodes.log . The .dagman.out file contains the entire history and status of DAGMan's execution of your workflow. The .nodes.log file on the other hand is the accumulated log entries for every HTCondor job that DAGMan submitted, and DAGMan monitors the contents of this file to generate the contents of the .dagman.out file. Note: these are not all the files that DAGMan can produce. Depending on the options and features you employ in your DAG input file, more files with different purposes can be created. Monitoring DAGMan \u00b6 The DAGMan job and the jobs in the DAG workflow can be found in the AP job queue and so the normal methods of job monitoring work. 
That also means that you can interact with these jobs, though in a more limited fashion than a regular job (see Running and Managing DAGMan for more details). A plain condor_q command will show a condensed batch view of the jobs submitted, running, and managed by the DAGMan job proper. For more information about jobs running under DAGMan, use the -nobatch and -dag flags: # Basic job query (Batched/Condensed) $ condor_q # Non-Batched query $ condor_q -nobatch # Increased information $ condor_q -nobatch -dag You can also watch the progress of the DAG and the jobs running under it by running: $ condor_watch_q Note that condor_watch_q works by monitoring the log files of jobs that are in the queue, but only at the time of its execution. Additional jobs submitted by DAGMan while condor_watch_q is running will not appear in condor_watch_q . To see additional jobs as they are submitted, wait for DAGMan to create the .nodes.log file, then run $ condor_watch_q -files *.log For more detail about the status and progress of your DAG workflow, you can use the noun-verb command: $ htcondor dag status DAGManJobID where DAGManJobID is the ID for the DAGMan job proper. Note that the information in the output of this command does not update frequently, and so it is not suited for short-lived DAG workflows such as the current example. When your DAG workflow has completed, the DAGMan job proper will disappear from the queue. If the DAG workflow completed successfully, then the .dag.dagman.out file should have a message that All jobs Completed! , though it may be difficult to find manually (try using grep \"All jobs Completed!\" *.dag.dagman.out instead). If the DAG workflow was aborted due to an error, then the .dag.dagman.out file should have the message Aborting DAG... . Assuming that the DAGMan job proper did not crash, then regardless the final line of the .dag.dagman.out file should contain (condor_DAGMAN) pid ####### EXITING WITH STATUS # , where the number after STATUS is the exit code (0 if success, not 0 if failure). How DAGMan Handles Relative Paths \u00b6 By default, the directory that DAGMan submits all jobs from is the same directory you are in when you run condor_submit_dag . This directory (let's call it the submit directory) is the starting directory for any relative path in the .dag input file or in the node .sub files that DAGMan submits . This can be observed by inspecting the sleep.sub submit file in the SleepJob sub-directory and by inspecting the diamond.dag input file. In the diamond.dag file, the jobs are declared using a relative path. For example: JOB TOP ./SleepJob/sleep.sub This tells DAGMan that the submit file for the JOB TOP is sleep.sub , located in the SleepJob in the submit directory ( . ). Similarly, the submit file sleep.sub uses paths relative to the submit directory for defining the save locations for the .log , .out , and .err files, i.e., log = ./SleepJob/$(JOB).log This behavior is consistent with submission of regular (non-DAGMan) jobs, e.g. condor_submit SleepJob/sleep.sub . Contrary to the above behavior, the .dag.* log/output files generated by the DAGMan job proper will always be in the same directory as the .dag input file. This is just the default behavior, and there are ways to make the location of job submission/management more obvious. See the HTCondor documentation for more details: File Paths in DAGs . 
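For reference, a diamond.dag using such relative paths would look something like the following sketch; this assumes all four nodes reuse the same ./SleepJob/sleep.sub submit file, so check the actual diamond.dag in the tutorial for the exact contents:

JOB TOP ./SleepJob/sleep.sub
JOB LEFT ./SleepJob/sleep.sub
JOB RIGHT ./SleepJob/sleep.sub
JOB BOTTOM ./SleepJob/sleep.sub
PARENT TOP CHILD LEFT RIGHT
PARENT LEFT RIGHT CHILD BOTTOM

Because every path is relative to the submit directory, this DAG should be submitted from the directory that contains the SleepJob/ sub-directory (the top level of the tutorial).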
Additional Examples \u00b6 Additional examples that cover various topics related to DAGMan are provided in the folder additional_examples with corresponding READMEs. The following order of the examples is recommended: RescueDag - Example for DAGs that don't exit successfully PreScript - Example using a pre-script for a node PostScript - Example using a post-script for a node Retry - Example for retrying a failed node VARS - Example of reusing a single submit file for multiple nodes with differing variables SubDAG (advanced) - Example using a subDAG Splice (advanced) - Example of using DAG splices","title":"Intermediate DAGMan: Uses and Features"},{"location":"htc_workloads/automated_workflows/tutorial-dagman-intermediate/#intermediate-dagman-uses-and-features","text":"This tutorial helps you explore HTCondor's DAGMan its many features. You can download the tutorial materials with the following command: $ git clone https://github.com/OSGConnect/tutorial-dagman-intermediate Now move into the new directory to see the contents of the tutorial: $ cd tutorial-dagman-intermediate At the top level is a worked example of a \"Diamond DAG\" that summarizes the basic components of a creating, submitting, and managing DAGMan workflows. In the lower level additional_examples directory are more worked examples with their own README s highlighting specific features that can be used with DAGMan. Brief descriptions of these examples are provided in the Additional Examples section at the end of this tutorial. Before working on this tutorial, we recommend that you read through our other DAGMan guides: Overview: Submit Workflows with HTCondor's DAGMan Simple Example of a DAGMan Workflow The definitive guide to DAGMan is HTCondor's DAGMan Documentation .","title":"Intermediate DAGMan: Uses and Features"},{"location":"htc_workloads/automated_workflows/tutorial-dagman-intermediate/#types-of-dags","text":"While any workflow that satisfies the definition of a \"Directed Acyclic Graph\" (DAG) can be executed using DAGMan, there are certain types that are the most commonly used: Sequential DAG : all the nodes are connected in a sequence of one after the other, with no branching or splitting. This is good for conducting increasingly refined analyses of a dataset or initial result, or chaining together a long-running calculation. The simplest example of this type is used in the guide Simple Example of a DAGMan Workflow . Split and recombine DAG : the first node is connected to many nodes of the same layer (split) which then all connect back to the final node (recombine). Here, you can set up the shared environment in the first node and use it to parallelize the work into many individual jobs, then finally combine/analyze the results in the final node. The simplest example of this type is the \"Diamond DAG\" - the subject of this tutorial. Collection DAG : no node is connected to any other node. This is good for the situation where you need to run a bunch of otherwise unrelated jobs, perhaps ones that are competing for a limited resource. The simplest example of this type is a DAG consisting of a single node. These types are by no means \"official\", nor are they the only types of structure that a DAG can take. 
Rather, they serve as starting points from which you can build your own DAG workflow, which will likely consist of some combination of the above elements.","title":"Types of DAGs"},{"location":"htc_workloads/automated_workflows/tutorial-dagman-intermediate/#the-diamond-dag","text":"As mentioned above, the \"Diamond DAG\" is the simplest example of a \"split and recombine\" DAG. In this case, the first node TOP is connected to two nodes LEFT and RIGHT (the \"split\"), which are then connected to the final node BOTTOM (the \"recombine\"). To describe the flow of the DAG and the parts needed to execute it, DAGMan uses a custom description language in an input file, typically named .dag . The two most important commands in the DAG description language are: JOB - Describes a node and the submit file it will use to run the node. PARENT CHILD - Describes the edge starting from and pointing to . These commands have been used to construct the Diamond DAG and are saved in the file diamond.dag . To view the contents of diamond.dag , run $ cat diamond.dag Before you continue, we recommend that you closely examine the contents of diamond.dag and identify its components. Furthermore, try to identify the submit file for each node, and use that submit file to determine the nature of the HTCondor job that will be submitted for each node.","title":"The Diamond DAG"},{"location":"htc_workloads/automated_workflows/tutorial-dagman-intermediate/#submitting-a-dag","text":"To submit a DAGMan workflow to HTCondor, you can use one of the following commands: $ condor_submit_dag diamond.dag or $ htcondor dag submit diamond.dag","title":"Submitting a DAG"},{"location":"htc_workloads/automated_workflows/tutorial-dagman-intermediate/#what-happens","text":"When a DAG is submitted to HTCondor a special job is created to run DAGMan on behalf of you the user. This job runs the provided HTCSS DAGMan executable in the AP job queue. This is an actual job that can be queried and acted upon. You may also notice that lots of files are created. These files are all part of DAGMan and have various purposes. In general, the files that should always exist are as follows: DAGMan job proper files .condor.sub - Submit file for the DAGMan job proper .dagman.log - Job event log file for the DAGMan job proper .lib.err - Standard error stream file for the DAGMan job proper .lib.out - Standard output stream file for the DAGMan job proper Informational DAGMan files .dagman.out - General DAGMan process logging file .nodes.log - Collective job event log file for all managed jobs (Heart of DAGMan) .metrics - JSON formatted information about the DAG Of these files, the two most important are the .dagman.out and .nodes.log . The .dagman.out file contains the entire history and status of DAGMan's execution of your workflow. The .nodes.log file on the other hand is the accumulated log entries for every HTCondor job that DAGMan submitted, and DAGMan monitors the contents of this file to generate the contents of the .dagman.out file. Note: these are not all the files that DAGMan can produce. Depending on the options and features you employ in your DAG input file, more files with different purposes can be created.","title":"What Happens?"},{"location":"htc_workloads/automated_workflows/tutorial-dagman-intermediate/#monitoring-dagman","text":"The DAGMan job and the jobs in the DAG workflow can be found in the AP job queue and so the normal methods of job monitoring work. 
That also means that you can interact with these jobs, though in a more limited fashion than a regular job (see Running and Managing DAGMan for more details). A plain condor_q command will show a condensed batch view of the jobs submitted, running, and managed by the DAGMan job proper. For more information about jobs running under DAGMan, use the -nobatch and -dag flags: # Basic job query (Batched/Condensed) $ condor_q # Non-Batched query $ condor_q -nobatch # Increased information $ condor_q -nobatch -dag You can also watch the progress of the DAG and the jobs running under it by running: $ condor_watch_q Note that condor_watch_q works by monitoring the log files of jobs that are in the queue, but only at the time of its execution. Additional jobs submitted by DAGMan while condor_watch_q is running will not appear in condor_watch_q . To see additional jobs as they are submitted, wait for DAGMan to create the .nodes.log file, then run $ condor_watch_q -files *.log For more detail about the status and progress of your DAG workflow, you can use the noun-verb command: $ htcondor dag status DAGManJobID where DAGManJobID is the ID for the DAGMan job proper. Note that the information in the output of this command does not update frequently, and so it is not suited for short-lived DAG workflows such as the current example. When your DAG workflow has completed, the DAGMan job proper will disappear from the queue. If the DAG workflow completed successfully, then the .dag.dagman.out file should have a message that All jobs Completed! , though it may be difficult to find manually (try using grep \"All jobs Completed!\" *.dag.dagman.out instead). If the DAG workflow was aborted due to an error, then the .dag.dagman.out file should have the message Aborting DAG... . Assuming that the DAGMan job proper did not crash, then regardless the final line of the .dag.dagman.out file should contain (condor_DAGMAN) pid ####### EXITING WITH STATUS # , where the number after STATUS is the exit code (0 if success, not 0 if failure).","title":"Monitoring DAGMan"},{"location":"htc_workloads/automated_workflows/tutorial-dagman-intermediate/#how-dagman-handles-relative-paths","text":"By default, the directory that DAGMan submits all jobs from is the same directory you are in when you run condor_submit_dag . This directory (let's call it the submit directory) is the starting directory for any relative path in the .dag input file or in the node .sub files that DAGMan submits . This can be observed by inspecting the sleep.sub submit file in the SleepJob sub-directory and by inspecting the diamond.dag input file. In the diamond.dag file, the jobs are declared using a relative path. For example: JOB TOP ./SleepJob/sleep.sub This tells DAGMan that the submit file for the JOB TOP is sleep.sub , located in the SleepJob in the submit directory ( . ). Similarly, the submit file sleep.sub uses paths relative to the submit directory for defining the save locations for the .log , .out , and .err files, i.e., log = ./SleepJob/$(JOB).log This behavior is consistent with submission of regular (non-DAGMan) jobs, e.g. condor_submit SleepJob/sleep.sub . Contrary to the above behavior, the .dag.* log/output files generated by the DAGMan job proper will always be in the same directory as the .dag input file. This is just the default behavior, and there are ways to make the location of job submission/management more obvious. 
See the HTCondor documentation for more details: File Paths in DAGs .","title":"How DAGMan Handles Relative Paths"},{"location":"htc_workloads/automated_workflows/tutorial-dagman-intermediate/#additional-examples","text":"Additional examples that cover various topics related to DAGMan are provided in the folder additional_examples with corresponding READMEs. The following order of the examples is recommended: RescueDag - Example for DAGs that don't exit successfully PreScript - Example using a pre-script for a node PostScript - Example using a post-script for a node Retry - Example for retrying a failed node VARS - Example of reusing a single submit file for multiple nodes with differing variables SubDAG (advanced) - Example using a subDAG Splice (advanced) - Example of using DAG splices","title":"Additional Examples"},{"location":"htc_workloads/automated_workflows/tutorial-pegasus/","text":"Pegasus Workflows \u00b6 Introduction \u00b6 The Pegasus project encompasses a set of technologies that help workflow-based applications execute in a number of different environments including desktops, campus clusters, grids, and clouds. Pegasus bridges the scientific domain and the execution environment by automatically mapping high-level workflow descriptions onto distributed resources. It automatically locates the necessary input data and computational resources necessary for workflow execution. Pegasus enables scientists to construct workflows in abstract terms without worrying about the details of the underlying execution environment or the particulars of the low-level specifications required by the middleware. Some of the advantages of using Pegasus include: Portability / Reuse - User created workflows can easily be run in different environments without alteration. Pegasus currently runs workflows on compute systems scheduled via HTCondor, including the OSPool, and other other systems or via other schedulers (e.g. XSEDE resources, Amazon EC2, Google Cloud, and many campus clusters). The same workflow can run on a single system or across a heterogeneous set of resources. Performance - The Pegasus mapper can reorder, group, and prioritize tasks in order to increase the overall workflow performance. Scalability - Pegasus can easily scale both the size of the workflow, and the resources that the workflow is distributed over. Pegasus runs workflows ranging from just a few computational tasks up to 1 million tasks. The number of resources involved in executing a workflow can scale as needed without any impediments to performance. Provenance - By default, all jobs in Pegasus are launched via the kickstart process that captures runtime provenance of the job and helps in debugging. The provenance data is collected in a database, and the data can be summarized with tools such as pegasus-statistics or directly with SQL queries. Data Management - Pegasus handles replica selection, data transfers and output registrations in data catalogs. These tasks are added to a workflow as auxiliary jobs by the Pegasus planner. Reliability - Jobs and data transfers are automatically retried in case of failures. Debugging tools such as pegasus-analyzer help the user to debug the workflow in case of non-recoverable failures. 
Error Recovery - When errors occur, Pegasus tries to recover when possible by retrying tasks, retrying the entire workflow, providing workflow-level checkpointing, re-mapping portions of the workflow, trying alternative data sources for staging data, and, when all else fails, providing a rescue workflow containing a description of only the work that remains to be done. Pegasus keeps track of what has been done (provenance) including the locations of data used and produced, and which software was used with which parameters. As mentioned earlier in this book, OSG has no read/write enabled shared file system across the resources. Jobs are required to either bring inputs along with the job, or as part of the job stage the inputs from a remote location. The following examples highlight how Pegasus can be used to manage workloads in such an environment by providing an abstraction layer around things like data movements and job retries, enabling users to run larger workloads while spending less time developing job management tools and babysitting their computations. Pegasus workflows have 4 components: Site Catalog - Describes the execution environment in which the workflow will be executed. Transformation Catalog - Specifies locations of the executables used by the workflow. Replica Catalog - Specifies locations of the input data to the workflow. Workflow Description - An abstract workflow description containing compute steps and dependencies between the steps. We refer to this workflow as abstract because it does not contain data locations and available software. When developing a Pegasus Workflow using the Python API , all four components may be defined in the same file. For details, please refer to the Pegasus documentation . Wordfreq Workflow \u00b6 wordfreq is an example application and workflow that can be used to introduce Pegasus tools and concepts. The application is available on the OSG Access Points. This example uses a custom container to run jobs. The container capability is provided by OSG ( Containers - Apptainer/Singularity ) and is used by setting HTCondor properties when defining your workflow. Exercise 1: create a copy of the Pegasus tutorial and change the working directory to the wordfreq workflow by running the following commands: $ git clone https://github.com/OSGConnect/tutorial-pegasus $ cd tutorial-pegasus/wordfreq In the wordfreq directory, you will find: wordfreq/ \u251c\u2500\u2500 bin | \u251c\u2500\u2500 summarize | \u2514\u2500\u2500 wordfreq \u251c\u2500\u2500 inputs | \u251c\u2500\u2500 Alices_Adventures_in_Wonderland_by_Lewis_Carroll.txt | \u251c\u2500\u2500 Dracula_by_Bram_Stoker.txt | \u251c\u2500\u2500 Pride_and_Prejudice_by_Jane_Austen.txt | \u251c\u2500\u2500 The_Adventures_of_Huckleberry_Finn_by_Mark_Twain.txt | \u251c\u2500\u2500 Ulysses_by_James_Joyce.txt | \u2514\u2500\u2500 Visual_Signaling_By_Signal_Corps_United_States_Army.txt \u251c\u2500\u2500 many-more-inputs | \u2514\u2500\u2500 ... \u2514\u2500\u2500 workflow.py The inputs/ directory contains 6 public domain ebooks. The wordfreq workflow uses the two executables in the bin/ directory. bin/wordfreq takes a text file as input and produces a summary output file containing the counts and names of the top five most frequently used words from the input file. A wordfreq job is created for each file in inputs/ . bin/summarize concatenates the output of each wordfreq job into a single output file called summary.txt.
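Before running the full workflow, you may find it helpful to try a single task by hand on the Access Point. This assumes bin/wordfreq is directly executable and takes an input file and an output file as its two arguments, which matches how the workflow invokes it:

$ ./bin/wordfreq inputs/Dracula_by_Bram_Stoker.txt Dracula_by_Bram_Stoker.txt.out
$ head Dracula_by_Bram_Stoker.txt.out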
This workflow structure, which is a set of independent tasks joining into a single summary or analysis type of task, is a common use case on OSG and therefore this workflow can be thought of as a template for such problems. For example, instead of using wordfreq on ebooks, the application could be protein folding on a set of input structures. When invoked, the workflow script ( workflow.py ) does the following major steps: Generates a site catalog, which describes the execution environment in which the workflow will be run. def generate_site_catalog(self): username = getpass.getuser() local = ( Site(\"local\") .add_directories( Directory( Directory.SHARED_STORAGE, self.output_dir ).add_file_servers( FileServer(f\"file://{self.output_dir}\", Operation.ALL) ) ) .add_directories( Directory( Directory.SHARED_SCRATCH, self.scratch_dir ).add_file_servers( FileServer(f\"file://{self.scratch_dir}\", Operation.ALL) ) ) ) condorpool = ( Site(\"condorpool\") .add_pegasus_profile(style=\"condor\") .add_condor_profile( universe=\"vanilla\", requirements=\"HAS_SINGULARITY == True\", request_cpus=1, request_memory=\"1 GB\", request_disk=\"1 GB\", ) .add_profiles( Namespace.CONDOR, key=\"+SingularityImage\", value='\"/cvmfs/singularity.opensciencegrid.org/htc/rocky:9\"' ) ) self.sc.add_sites(local, condorpool) In order for the workflow to use the container capability provided by OSG ( Containers - Apptainer/Singularity ), the following HTCondor profiles must be added to the condorpool execution site: +SingularityImage='\"/cvmfs/singularity.opensciencegrid.org/htc/rocky:9\"' . Generates the transformation catalog, which specifies the executables used in the workflow and contains the locations where they are physically located. In this example, we have two entries: wordfreq and summarize . def generate_transformation_catalog(self): wordfreq = Transformation( name=\"wordfreq\", site=\"local\", pfn=self.TOP_DIR / \"bin/wordfreq\", is_stageable=True ).add_pegasus_profile(clusters_size=1) summarize = Transformation( name=\"summarize\", site=\"local\", pfn=self.TOP_DIR / \"bin/summarize\", is_stageable=True ) self.tc.add_transformations(wordfreq, summarize) Generates the replica catalog, which specifies the physical locations of any input files used by the workflow. In this example, there is an entry for each file in the inputs/ directory. def generate_replica_catalog(self): input_files = [File(f.name) for f in (self.TOP_DIR / \"inputs\").iterdir() if f.name.endswith(\".txt\")] for f in input_files: self.rc.add_replica(site=\"local\", lfn=f, pfn=self.TOP_DIR / \"inputs\" / f.lfn) Builds the wordfreq workflow. Note that in this step there is no mention of data movement and job details as these are added by Pegasus when the workflow is planned into an executable workflow. As part of the planning process, additional jobs which handle scratch directory creation, data staging, and data cleanup are added to the workflow. def generate_workflow(self): # last job, child of all others summarize_job = ( Job(\"summarize\") .add_outputs(File(\"summary.txt\")) ) self.wf.add_jobs(summarize_job) input_files = [File(f.name) for f in (self.TOP_DIR / \"inputs\").iterdir() if f.name.endswith(\".txt\")] for f in input_files: out_file = File(f.lfn + \".out\") wordfreq_job = ( Job(\"wordfreq\") .add_args(f, out_file) .add_inputs(f) .add_outputs(out_file) ) self.wf.add_jobs(wordfreq_job) # establish the relationship between the jobs summarize_job.add_inputs(out_file) Exercise 2: Submit the workflow by executing workflow.py . 
$ ./workflow.py Note that when Pegasus plans/submits a workflow, a workflow directory is created and presented in the output. In the following example output, the workflow directory is /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014 . 2020.12.18 14:33:07.059 CST: ----------------------------------------------------------------------- 2020.12.18 14:33:07.064 CST: File for submitting this DAG to HTCondor : wordfreq-workflow-0.dag.condor.sub 2020.12.18 14:33:07.070 CST: Log of DAGMan debugging messages : wordfreq-workflow-0.dag.dagman.out 2020.12.18 14:33:07.075 CST: Log of HTCondor library output : wordfreq-workflow-0.dag.lib.out 2020.12.18 14:33:07.080 CST: Log of HTCondor library error messages : wordfreq-workflow-0.dag.lib.err 2020.12.18 14:33:07.086 CST: Log of the life of condor_dagman itself : wordfreq-workflow-0.dag.dagman.log 2020.12.18 14:33:07.091 CST: 2020.12.18 14:33:07.096 CST: -no_submit given, not submitting DAG to HTCondor. You can do this with: 2020.12.18 14:33:07.107 CST: ----------------------------------------------------------------------- 2020.12.18 14:33:10.381 CST: Your database is compatible with Pegasus version: 5.1.0dev 2020.12.18 14:33:11.347 CST: Created Pegasus database in: sqlite:////home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014/wordfreq-workflow-0.replicas.db 2020.12.18 14:33:11.352 CST: Your database is compatible with Pegasus version: 5.1.0dev 2020.12.18 14:33:11.404 CST: Output replica catalog set to jdbc:sqlite:/home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014/wordfreq-workflow-0.replicas.db [WARNING] Submitting to condor wordfreq-workflow-0.dag.condor.sub 2020.12.18 14:33:12.060 CST: Time taken to execute is 5.818 seconds Your workflow has been started and is running in the base directory: /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014 *** To monitor the workflow you can run *** pegasus-status -l /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014 *** To remove your workflow run *** pegasus-remove /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014 This directory is the handle to the workflow instance and is used by Pegasus command line tools. Some useful tools to know about: pegasus-status -v [wfdir] Provides status on a currently running workflow. ( more ) pegasus-analyzer [wfdir] Provides debugging clues why a workflow failed. Run this after a workflow has failed. ( more ) pegasus-statistics [wfdir] Provides statistics, such as walltimes, on a workflow after it has completed. ( more ) pegasus-remove [wfdir] Removes a workflow from the system. ( more ) Exercise 3: Check the status of the workflow: $ pegasus-status [wfdir] You can keep checking the status periodically to see that the workflow is making progress. Exercise 4: Examine a submit file and the *.dag.dagman.out files. Do these look familiar to you from previous modules in the book? Pegasus is based on HTCondor and DAGMan. $ cd [wfdir] $ cat 00/00/summarize_ID0000001.sub ... $ cat *.dag.dagman.out ... Exercise 5: Keep checking progress with pegasus-status . Once the workflow is done, display statistics with pegasus-statistics : $ pegasus-status [wfdir] $ pegasus-statistics [wfdir] ... Exercise 6: cd to the output directory and look at the outputs. Which is the most common word used in the 6 books? Hint: $ cd $HOME/workflows/outputs $ head -n 5 *.out Exercise 7: Want to try something larger? 
Copy the additional 994 ebooks from the many-more-inputs/ directory to the inputs/ directory: $ cp many-more-inputs/* inputs/ As these tasks are really short, let's tell Pegasus to cluster multiple tasks together into jobs. If you do not do this step, the jobs will still run, but not very efficiently. This is because every job has a small scheduling overhead. For short jobs, the overhead is obvious. If we make the jobs longer, the scheduling overhead becomes negligible. To enable the clustering feature, edit the workflow.py script. Find the section under Transformations : wordfreq = Transformation( name=\"wordfreq\", site=\"local\", pfn=self.TOP_DIR / \"bin/wordfreq\", is_stageable=True ).add_pegasus_profile(clusters_size=1) Change clusters_size=1 to clusters_size=50 . This informs Pegasus that it is ok to cluster up to 50 of the jobs which use the wordfreq executable. Save the file and re-run the script: $ ./workflow.py Use pegasus-status and pegasus-statistics to monitor your workflow. Using pegasus-statistics , determine how many jobs ended up in your workflow and see how this compares with our initial workflow run. Variant Calling Workflow \u00b6 This workflow is based on the Data Carpentry lesson Data Wrangling and Processing for Genomics . This workflow downloads and aligns SRA data to the E. coli REL606 reference genome, and checks what differences exist in our reads versus the genome. The workflow also performs variant calling to see how the population changed over time. The inputs are controlled by the recipe.json file. With 3 SRA inputs, the structure of the workflow becomes: Rendering the workflow with data: Compared to the wordfreq example, a difference is the use of OSDF ( https://osg-htc.org/services/osdf.html ) for intermediate data transfers/storage. Note the extra site in the site catalog: osdf = ( Site(\"osdf\") .add_directories( Directory( Directory.SHARED_SCRATCH, f\"{osdf_local_base}/staging\" ).add_file_servers( FileServer(f\"osdf://{osdf_local_base}/staging\", Operation.ALL) ) ) ) Which is then referenced when planning the workflow: self.wf.plan( dir=str(self.runs_dir), output_dir=str(self.output_dir), sites=[\"condorpool\"], staging_sites={\"condorpool\": \"osdf\"}, OSDF is recommended for data sizes over 1 GB. To plan the workflow: $ ./workflow.py --recipe recipe.json Getting Help \u00b6 For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the user documentation .","title":"Use Pegasus to Manage Workflows on OSPool Access Points"},{"location":"htc_workloads/automated_workflows/tutorial-pegasus/#pegasus-workflows","text":"","title":"Pegasus Workflows"},{"location":"htc_workloads/automated_workflows/tutorial-pegasus/#introduction","text":"The Pegasus project encompasses a set of technologies that help workflow-based applications execute in a number of different environments including desktops, campus clusters, grids, and clouds. Pegasus bridges the scientific domain and the execution environment by automatically mapping high-level workflow descriptions onto distributed resources. It automatically locates the input data and computational resources necessary for workflow execution. Pegasus enables scientists to construct workflows in abstract terms without worrying about the details of the underlying execution environment or the particulars of the low-level specifications required by the middleware. 
Some of the advantages of using Pegasus include: Portability / Reuse - User created workflows can easily be run in different environments without alteration. Pegasus currently runs workflows on compute systems scheduled via HTCondor, including the OSPool, and other other systems or via other schedulers (e.g. XSEDE resources, Amazon EC2, Google Cloud, and many campus clusters). The same workflow can run on a single system or across a heterogeneous set of resources. Performance - The Pegasus mapper can reorder, group, and prioritize tasks in order to increase the overall workflow performance. Scalability - Pegasus can easily scale both the size of the workflow, and the resources that the workflow is distributed over. Pegasus runs workflows ranging from just a few computational tasks up to 1 million tasks. The number of resources involved in executing a workflow can scale as needed without any impediments to performance. Provenance - By default, all jobs in Pegasus are launched via the kickstart process that captures runtime provenance of the job and helps in debugging. The provenance data is collected in a database, and the data can be summarized with tools such as pegasus-statistics or directly with SQL queries. Data Management - Pegasus handles replica selection, data transfers and output registrations in data catalogs. These tasks are added to a workflow as auxiliary jobs by the Pegasus planner. Reliability - Jobs and data transfers are automatically retried in case of failures. Debugging tools such as pegasus-analyzer help the user to debug the workflow in case of non-recoverable failures. Error Recovery - When errors occur, Pegasus tries to recover when possible by retrying tasks, retrying the entire workflow, providing workflow-level checkpointing, re-mapping portions of the workflow, trying alternative data sources for staging data, and, when all else fails, providing a rescue workflow containing a description of only the work that remains to be done. Pegasus keeps track of what has been done (provenance) including the locations of data used and produced, and which software was used with which parameters. As mentioned earlier in this book, OSG has no read/write enabled shared file system across the resources. Jobs are required to either bring inputs along with the job, or as part of the job stage the inputs from a remote location. The following examples highlight how Pegasus can be used to manage workloads in such an environment by providing an abstraction layer around things like data movements and job retries, enabling the users to run larger workloads, spending less time developing job management tools and babysitting their computations. Pegasus workflows have 4 components: Site Catalog - Describes the execution environment in which the workflow will be executed. Transformation Catalog - Specifies locations of the executables used by the workflow. Replica Catalog - Specifies locations of the input data to the workflow. Workflow Description - An abstract workflow description containing compute steps and dependencies between the steps. We refer to this workflow as abstract because it does not contain data locations and available software. When developing a Pegasus Workflow using the Python API , all four components may be defined in the same file. 
For details, please refer to the Pegasus documentation .","title":"Introduction"},{"location":"htc_workloads/automated_workflows/tutorial-pegasus/#wordfreq-workflow","text":"wordfreq is an example application and workflow that can be used to introduce Pegasus tools and concepts. The application is available on the OSG Access Points. This example uses a custom container to run jobs. The container capability is provided by OSG ( Containers - Apptainer/Singularity ) and is used by setting HTCondor properties when defining your workflow. Exercise 1 : create a copy of the Pegasus tutorial and change the working directory to the wordfreq workflow by running the following commands: $ git clone https://github.com/OSGConnect/tutorial-pegasus $ cd tutorial-pegasus/wordfreq In the wordfreq directory, you will find: wordfreq/ \u251c\u2500\u2500 bin | \u251c\u2500\u2500 summarize | \u2514\u2500\u2500 wordfreq \u251c\u2500\u2500 inputs | \u251c\u2500\u2500 Alices_Adventures_in_Wonderland_by_Lewis_Carroll.txt | \u251c\u2500\u2500 Dracula_by_Bram_Stoker.txt | \u251c\u2500\u2500 Pride_and_Prejudice_by_Jane_Austen.txt | \u251c\u2500\u2500 The_Adventures_of_Huckleberry_Finn_by_Mark_Twain.txt | \u251c\u2500\u2500 Ulysses_by_James_Joyce.txt | \u2514\u2500\u2500 Visual_Signaling_By_Signal_Corps_United_States_Army.txt \u251c\u2500\u2500 many-more-inputs | \u2514\u2500\u2500 ... \u2514\u2500\u2500 workflow.py The inputs/ directory contains 6 public domain ebooks. The wordfreq workflow uses the two executables in the bin/ directory. bin/wordfreq takes a text file as input and produces a summary output file containing the counts and names of the top five most frequently used words from the input file. A wordfreq job is created for each file in inputs/ . bin/summarize concatenates the output of each wordfreq job into a single output file called summary.txt . This workflow structure, which is a set of independent tasks joining into a single summary or analysis type of task, is a common use case on OSG and therefore this workflow can be thought of as a template for such problems. For example, instead of using wordfreq on ebooks, the application could be protein folding on a set of input structures. When invoked, the workflow script ( workflow.py ) does the following major steps: Generates a site catalog, which describes the execution environment in which the workflow will be run. def generate_site_catalog(self): username = getpass.getuser() local = ( Site(\"local\") .add_directories( Directory( Directory.SHARED_STORAGE, self.output_dir ).add_file_servers( FileServer(f\"file://{self.output_dir}\", Operation.ALL) ) ) .add_directories( Directory( Directory.SHARED_SCRATCH, self.scratch_dir ).add_file_servers( FileServer(f\"file://{self.scratch_dir}\", Operation.ALL) ) ) ) condorpool = ( Site(\"condorpool\") .add_pegasus_profile(style=\"condor\") .add_condor_profile( universe=\"vanilla\", requirements=\"HAS_SINGULARITY == True\", request_cpus=1, request_memory=\"1 GB\", request_disk=\"1 GB\", ) .add_profiles( Namespace.CONDOR, key=\"+SingularityImage\", value='\"/cvmfs/singularity.opensciencegrid.org/htc/rocky:9\"' ) ) self.sc.add_sites(local, condorpool) In order for the workflow to use the container capability provided by OSG ( Containers - Apptainer/Singularity ), the following HTCondor profiles must be added to the condorpool execution site: +SingularityImage='\"/cvmfs/singularity.opensciencegrid.org/htc/rocky:9\"' . 
Generates the transformation catalog, which specifies the executables used in the workflow and contains the locations where they are physically located. In this example, we have two entries: wordfreq and summarize . def generate_transformation_catalog(self): wordfreq = Transformation( name=\"wordfreq\", site=\"local\", pfn=self.TOP_DIR / \"bin/wordfreq\", is_stageable=True ).add_pegasus_profile(clusters_size=1) summarize = Transformation( name=\"summarize\", site=\"local\", pfn=self.TOP_DIR / \"bin/summarize\", is_stageable=True ) self.tc.add_transformations(wordfreq, summarize) Generates the replica catalog, which specifies the physical locations of any input files used by the workflow. In this example, there is an entry for each file in the inputs/ directory. def generate_replica_catalog(self): input_files = [File(f.name) for f in (self.TOP_DIR / \"inputs\").iterdir() if f.name.endswith(\".txt\")] for f in input_files: self.rc.add_replica(site=\"local\", lfn=f, pfn=self.TOP_DIR / \"inputs\" / f.lfn) Builds the wordfreq workflow. Note that in this step there is no mention of data movement and job details as these are added by Pegasus when the workflow is planned into an executable workflow. As part of the planning process, additional jobs which handle scratch directory creation, data staging, and data cleanup are added to the workflow. def generate_workflow(self): # last job, child of all others summarize_job = ( Job(\"summarize\") .add_outputs(File(\"summary.txt\")) ) self.wf.add_jobs(summarize_job) input_files = [File(f.name) for f in (self.TOP_DIR / \"inputs\").iterdir() if f.name.endswith(\".txt\")] for f in input_files: out_file = File(f.lfn + \".out\") wordfreq_job = ( Job(\"wordfreq\") .add_args(f, out_file) .add_inputs(f) .add_outputs(out_file) ) self.wf.add_jobs(wordfreq_job) # establish the relationship between the jobs summarize_job.add_inputs(out_file) Exercise 2: Submit the workflow by executing workflow.py . $ ./workflow.py Note that when Pegasus plans/submits a workflow, a workflow directory is created and presented in the output. In the following example output, the workflow directory is /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014 . 2020.12.18 14:33:07.059 CST: ----------------------------------------------------------------------- 2020.12.18 14:33:07.064 CST: File for submitting this DAG to HTCondor : wordfreq-workflow-0.dag.condor.sub 2020.12.18 14:33:07.070 CST: Log of DAGMan debugging messages : wordfreq-workflow-0.dag.dagman.out 2020.12.18 14:33:07.075 CST: Log of HTCondor library output : wordfreq-workflow-0.dag.lib.out 2020.12.18 14:33:07.080 CST: Log of HTCondor library error messages : wordfreq-workflow-0.dag.lib.err 2020.12.18 14:33:07.086 CST: Log of the life of condor_dagman itself : wordfreq-workflow-0.dag.dagman.log 2020.12.18 14:33:07.091 CST: 2020.12.18 14:33:07.096 CST: -no_submit given, not submitting DAG to HTCondor. 
You can do this with: 2020.12.18 14:33:07.107 CST: ----------------------------------------------------------------------- 2020.12.18 14:33:10.381 CST: Your database is compatible with Pegasus version: 5.1.0dev 2020.12.18 14:33:11.347 CST: Created Pegasus database in: sqlite:////home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014/wordfreq-workflow-0.replicas.db 2020.12.18 14:33:11.352 CST: Your database is compatible with Pegasus version: 5.1.0dev 2020.12.18 14:33:11.404 CST: Output replica catalog set to jdbc:sqlite:/home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014/wordfreq-workflow-0.replicas.db [WARNING] Submitting to condor wordfreq-workflow-0.dag.condor.sub 2020.12.18 14:33:12.060 CST: Time taken to execute is 5.818 seconds Your workflow has been started and is running in the base directory: /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014 *** To monitor the workflow you can run *** pegasus-status -l /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014 *** To remove your workflow run *** pegasus-remove /home/ryantanaka/workflows/runs/ryantanaka/pegasus/wordfreq-workflow/run0014 This directory is the handle to the workflow instance and is used by Pegasus command line tools. Some useful tools to know about: pegasus-status -v [wfdir] Provides status on a currently running workflow. ( more ) pegasus-analyzer [wfdir] Provides debugging clues why a workflow failed. Run this after a workflow has failed. ( more ) pegasus-statistics [wfdir] Provides statistics, such as walltimes, on a workflow after it has completed. ( more ) pegasus-remove [wfdir] Removes a workflow from the system. ( more ) Exercise 3: Check the status of the workflow: $ pegasus-status [wfdir] You can keep checking the status periodically to see that the workflow is making progress. Exercise 4: Examine a submit file and the *.dag.dagman.out files. Do these look familiar to you from previous modules in the book? Pegasus is based on HTCondor and DAGMan. $ cd [wfdir] $ cat 00/00/summarize_ID0000001.sub ... $ cat *.dag.dagman.out ... Exercise 5: Keep checking progress with pegasus-status . Once the workflow is done, display statistics with pegasus-statistics : $ pegasus-status [wfdir] $ pegasus-statistics [wfdir] ... Exercise 6: cd to the output directory and look at the outputs. Which is the most common word used in the 6 books? Hint: $ cd $HOME/workflows/outputs $ head -n 5 *.out Exercise 7: Want to try something larger? Copy the additional 994 ebooks from \\ the many-more-inputs/ directory to the inputs/ directory: $ cp many-more-inputs/* inputs/ As these tasks are really short, let's tell Pegasus to cluster multiple tasks together into jobs. If you do not do this step, the jobs will still run, but not very efficiently. This is because every job has a small scheduling overhead. For short jobs, the overhead is obvious. If we make the jobs longer, the scheduling overhead becomes negligible. To enable the clustering feature, edit the workflow.py script. Find the section under Transformations : wordfreq = Transformation( name=\"wordfreq\", site=\"local\", pfn=self.TOP_DIR / \"bin/wordfreq\", is_stageable=True ).add_pegasus_profile(clusters_size=1) Change clusters_size=1 to clusters_size=50 . This informs Pegasus that it is ok to cluster up to 50 of the jobs which use the wordfreq executable. Save the file and re-run the script: $ ./workflow.py Use pegasus-status and pegasus-statistics to monitor your workflow. 
Using pegasus-statistics , determine how many jobs ended up in your workflow and see how this compares with our initial workflow run.","title":"Wordfreq Workflow"},{"location":"htc_workloads/automated_workflows/tutorial-pegasus/#variant-calling-workflow","text":"This workflow is based on the Data Carpentry lesson Data Wrangling and Processing for Genomics . This workflow downloads and aligns SRA data to the E. coli REL606 reference genome, and checks what differences exist in our reads versus the genome. The workflow also performs variant calling to see how the population changed over time. The inputs are controlled by the recipe.json file. With 3 SRA inputs, the structure of the workflow becomes: Rendering the workflow with data: Compared to the wordfreq example, a difference is the use of OSDF ( https://osg-htc.org/services/osdf.html ) for intermediate data transfers/storage. Note the extra site in the site catalog: osdf = ( Site(\"osdf\") .add_directories( Directory( Directory.SHARED_SCRATCH, f\"{osdf_local_base}/staging\" ).add_file_servers( FileServer(f\"osdf://{osdf_local_base}/staging\", Operation.ALL) ) ) ) Which is then referenced when planning the workflow: self.wf.plan( dir=str(self.runs_dir), output_dir=str(self.output_dir), sites=[\"condorpool\"], staging_sites={\"condorpool\": \"osdf\"}, OSDF is recommended for data sizes over 1 GB. To plan the workflow: $ ./workflow.py --recipe recipe.json","title":"Variant Calling Workflow"},{"location":"htc_workloads/automated_workflows/tutorial-pegasus/#getting-help","text":"For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the user documentation .","title":"Getting Help"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/","text":"Transfer Smaller Job Files To and From /home \u00b6 As described in the Overview: Data Staging and Transfer to Jobs any data, files, or even software that is <1GB should be staged in your /home directory on your Access Point. Files in your /home directory can be transferred to jobs via your HTCondor submit file. Transfer Files From /home Using HTCondor \u00b6 Transfer Input Files from /home \u00b6 To transfer input files from /home , list the files by name in the transfer_input_files submit file option. You can use either absolute or relative paths to your input files. Multiple files can be specified using a comma-separated list. To transfer files from your /home directory use the transfer_input_files statement in your HTCondor submit file. For example: # submit file example # transfer small file from /home transfer_input_files = my_data.csv Multiple files can be specified using a comma-separated list, for example: # transfer multiple files from /home transfer_input_files = my_data.csv, my_software.tar.gz, my_script.py When using transfer_input_files to transfer files located in /home , keep in mind that the path to the file is relative to the location of the submit file. If you have files located in a different /home subdirectory, we recommend specifying the full path to those files, which is also a matter of good practice, for example: transfer_input_files = /home/username/path/to/my_software.tar.gz Note that the path is not replicated on the remote side. The job will only see my_software.tar.gz in the top level job directory. Above, username refers to your access point username. 
Use HTCondor To Transfer Outputs \u00b6 By default, HTCondor will transfer any new or modified files in the job's top-level directory back to your /home directory location from which the condor_submit command was performed. This behavior only applies to files in the top-level directory of where your job executes, meaning HTCondor will ignore any files created in subdirectories of the job's top-level directory. Several options exist for modifying this default output file transfer behavior, including those described in this guide. What is the top-level directory of a job? \u00b6 Before executing a job, HTCondor will create a new directory on the execute node just for your job - this is the top-level directory of the job and the path is stored in the environment variable _CONDOR_SCRATCH_DIR . All of the input files transferred via transfer_input_files will first be written to this directory and it is from this path that a job starts to execute. After a job has completed, the top-level directory and all of its contents are deleted. Select Specific Output Files To Transfer to /home Using HTCondor \u00b6 As described above, HTCondor will, by default, transfer any files that are generated during the execution of your job(s) back to your /home directory. If your job(s) will produce multiple output files but you only need to retain a subset of these output files, you can use a submit file option to only transfer back this file: transfer_output_files = output.svg Alternatively, you can delete the unrequired output files or move them to a subdirectory as a step in the bash executable script of your job - only the output files that remain in the top-level directory will be transferred back to your /home directory. Organize Output Files in /home \u00b6 By default, output files will be copied back to the directory in /home where you ran the condor_submit command. To modify this behavior, you can use the transfer_output_remaps option in the HTCondor submit file. The syntax for transfer_output_remaps is: transfer_output_remaps = \"Output1.txt = path/to/save/file/under/output.txt; Output2.txt = path/to/save/file/under/RenamedOutput.txt\" What if my output file(s) are not written to the top-level directory? \u00b6 If your output files are written to a subdirectory, use the steps described below to convert the output directory to a \"tarball\" that is written to the top-level directory. Alternatively, you can include steps in the executable bash script of your job to move (i.e. mv ) output files from a subdirectory to the top-level directory. For example, if there is an output file that needs to be transferred back to the login node named job_output.txt written to job_output/ : #! /bin/bash # various commands needed to run your job # move csv files to scratch dir mv job_output/job_output.txt $_CONDOR_SCRATCH_DIR Group Multiple Output Files For Convenience \u00b6 If your jobs will generate multiple output files, we recommend combining all output into a compressed tar archive for convenience, particularly when transferring your results to your local computer from your login node. To create a compressed tar archive, include commands in your bash executable script to create a new subdirectory, move all of the output to this new subdirectory, and create a tar archive. For example: #! 
/bin/bash # various commands needed to run your job # create output tar archive mkdir my_output mv my_job_output.csv my_job_output.svg my_output/ tar -czf my_job.output.tar.gz my_output/ The example above will create a file called my_job.output.tar.gz that contains all the output that was moved to my_output . Be sure to create my_job.output.tar.gz in the top-level directory of where your job executes and HTCondor will automatically transfer this tar archive back to your /home directory.","title":"Transfer Smaller Job Files to and from /home"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/#transfer-smaller-job-files-to-and-from-home","text":"As described in the Overview: Data Staging and Transfer to Jobs any data, files, or even software that is <1GB should be staged in your /home directory on your Access Point. Files in your /home directory can be transferred to jobs via your HTCondor submit file.","title":"Transfer Smaller Job Files To and From /home"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/#transfer-files-from-home-using-htcondor","text":"","title":"Transfer Files From /home Using HTCondor"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/#transfer-input-files-from-home","text":"To transfer input files from /home , list the files by name in the transfer_input_files submit file option. You can use either absolute or relative paths to your input files. Multiple files can be specified using a comma-separated list. To transfer files from your /home directory use the transfer_input_files statement in your HTCondor submit file. For example: # submit file example # transfer small file from /home transfer_input_files = my_data.csv Multiple files can be specified using a comma-separated list, for example: # transfer multiple files from /home transfer_input_files = my_data.csv, my_software.tar.gz, my_script.py When using transfer_input_files to transfer files located in /home , keep in mind that the path to the file is relative to the location of the submit file. If you have files located in a different /home subdirectory, we recommend specifying the full path to those files, which is also a matter of good practice, for example: transfer_input_files = /home/username/path/to/my_software.tar.gz Note that the path is not replicated on the remote side. The job will only see my_software.tar.gz in the top level job directory. Above, username refers to your access point username.","title":"Transfer Input Files from /home"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/#use-htcondor-to-transfer-outputs","text":"By default, HTCondor will transfer any new or modified files in the job's top-level directory back to your /home directory location from which the condor_submit command was performed. This behavior only applies to files in the top-level directory of where your job executes, meaning HTCondor will ignore any files created in subdirectories of the job's top-level directory. Several options exist for modifying this default output file transfer behavior, including those described in this guide.","title":"Use HTCondor To Transfer Outputs"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/#what-is-the-top-level-directory-of-a-job","text":"Before executing a job, HTCondor will create a new directory on the execute node just for your job - this is the top-level directory of the job and the path is stored in the environment variable _CONDOR_SCRATCH_DIR . 
All of the input files transferred via transfer_input_files will first be written to this directory and it is from this path that a job starts to execute. After a job has completed, the top-level directory and all of its contents are deleted.","title":"What is the top-level directory of a job?"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/#select-specific-output-files-to-transfer-to-home-using-htcondor","text":"As described above, HTCondor will, by default, transfer any files that are generated during the execution of your job(s) back to your /home directory. If your job(s) will produce multiple output files but you only need to retain a subset of these output files, you can use a submit file option to only transfer back this file: transfer_output_files = output.svg Alternatively, you can delete the unrequired output files or move them to a subdirectory as a step in the bash executable script of your job - only the output files that remain in the top-level directory will be transferred back to your /home directory.","title":"Select Specific Output Files To Transfer to /home Using HTCondor"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/#organize-output-files-in-home","text":"By default, output files will be copied back to the directory in /home where you ran the condor_submit command. To modify this behavior, you can use the transfer_output_remaps option in the HTCondor submit file. The syntax for transfer_output_remaps is: transfer_output_remaps = \"Output1.txt = path/to/save/file/under/output.txt; Output2.txt = path/to/save/file/under/RenamedOutput.txt\"","title":"Organize Output Files in /home"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/#what-if-my-output-files-are-not-written-to-the-top-level-directory","text":"If your output files are written to a subdirectory, use the steps described below to convert the output directory to a \"tarball\" that is written to the top-level directory. Alternatively, you can include steps in the executable bash script of your job to move (i.e. mv ) output files from a subdirectory to the top-level directory. For example, if there is an output file that needs to be transferred back to the login node named job_output.txt written to job_output/ : #! /bin/bash # various commands needed to run your job # move csv files to scratch dir mv job_output/job_output.txt $_CONDOR_SCRATCH_DIR","title":"What if my output file(s) are not written to the top-level directory?"},{"location":"htc_workloads/managing_data/file-transfer-via-htcondor/#group-multiple-output-files-for-convenience","text":"If your jobs will generate multiple output files, we recommend combining all output into a compressed tar archive for convenience, particularly when transferring your results to your local computer from your login node. To create a compressed tar archive, include commands in your bash executable script to create a new subdirectory, move all of the output to this new subdirectory, and create a tar archive. For example: #! /bin/bash # various commands needed to run your job # create output tar archive mkdir my_output mv my_job_output.csv my_job_output.svg my_output/ tar -czf my_job.output.tar.gz my_output/ The example above will create a file called my_job.output.tar.gz that contains all the output that was moved to my_output . 
Be sure to create my_job.output.tar.gz in the top-level directory of where your job executes and HTCondor will automatically transfer this tar archive back to your /home directory.","title":"Group Multiple Output Files For Convenience"},{"location":"htc_workloads/managing_data/file-transfer-via-http/","text":"Transfer HTTP-available Files up to 1GB In Size \u00b6 Overview \u00b6 If some of the data or software your jobs depend on is available via the web, you can have such files transferred by HTCondor using the appropriate HTTP address! Important Considerations \u00b6 While our Overview of Data Management on the OSPool describes how you can stage data, files, or even software on OSG data locations, any web-accessible file can be transferred directly to your jobs IF : the file is accessible via an HTTP address the file is less than 1GB in size (if larger, you'll need to pre-stage it for OSDF ) the server or website they're on can handle large numbers of your jobs accessing them simultaneously Importantly, you'll also want to make sure your job executable knows how to handle the file (un-tar, etc.) from within the working directory of the job, just like it would for any other input file. Transfer Files via HTTP \u00b6 To download a file available by HTTP into a job, use an HTTP URL in combination with the transfer_input_files statement in your HTCondor submit file. For example: # submit file example # transfer software tarball from public via http transfer_input_files = http://www.website.com/path/file.tar.gz ...other submit file details... Multiple URLs can be specified using a comma-separated list, and a combination of URLs and files from the /home directory can be provided in a comma-separated list. For example, # transfer software tarball from public via http # transfer additional data from AP /home via htcondor file transfer transfer_input_files = http://www.website.com/path/file1.tar.gz, http://www.website.com/path/file2.tar.gz, my_data.csv","title":"Transfer HTTP-available Files up to 1GB In Size"},{"location":"htc_workloads/managing_data/file-transfer-via-http/#transfer-http-available-files-up-to-1gb-in-size","text":"","title":"Transfer HTTP-available Files up to 1GB In Size"},{"location":"htc_workloads/managing_data/file-transfer-via-http/#overview","text":"If some of the data or software your jobs depend on is available via the web, you can have such files transferred by HTCondor using the appropriate HTTP address!","title":"Overview"},{"location":"htc_workloads/managing_data/file-transfer-via-http/#important-considerations","text":"While our Overview of Data Management on the OSPool describes how you can stage data, files, or even software on OSG data locations, any web-accessible file can be transferred directly to your jobs IF : the file is accessible via an HTTP address the file is less than 1GB in size (if larger, you'll need to pre-stage it for OSDF ) the server or website they're on can handle large numbers of your jobs accessing them simultaneously Importantly, you'll also want to make sure your job executable knows how to handle the file (un-tar, etc.) from within the working directory of the job, just like it would for any other input file.","title":"Important Considerations"},{"location":"htc_workloads/managing_data/file-transfer-via-http/#transfer-files-via-http","text":"To download a file available by HTTP into a job, use an HTTP URL in combination with the transfer_input_files statement in your HTCondor submit file. 
For example: # submit file example # transfer software tarball from public via http transfer_input_files = http://www.website.com/path/file.tar.gz ...other submit file details... Multiple URLs can be specified using a comma-separated list, and a combination of URLs and files from the /home directory can be provided in a comma-separated list. For example, # transfer software tarball from public via http # transfer additional data from AP /home via htcondor file transfer transfer_input_files = http://www.website.com/path/file1.tar.gz, http://www.website.com/path/file2.tar.gz, my_data.csv","title":"Transfer Files via HTTP"},{"location":"htc_workloads/managing_data/osdf/","text":"Transfer Larger Job Files and Containers Using OSDF \u00b6 For input files >1GB and output files >1GB in size, the default HTCondor file transfer mechanisms run the risk of over-taxing the Access Point and its network capacity. And this is exactly why the OSDF ( Open Science Data Federation ) exists for researchers with larger per-job data! The OSDF is a network of data origins and caches for data distribution. If you have an account on an OSG Access Point, you have access to an OSDF data origin, specifically a directory that can be used to stage input and output data for jobs, accessible via the OSDF. This guide describes general tips for using the OSDF, where to stage your files, and how to access files from jobs. Important Considerations and Best Practices \u00b6 Use OSDF locations for larger files and containers : We recommend using the OSDF for files larger than 1GB (input or output) and all container files. OSDF files are cached across the Open Science Pool, so any changes or modifications that you make might not be propagated. This means that if you add a new version of a file to the OSDF directory, it must first be given a unique name (or directory path) to distinguish it from previous versions of that file. Adding a date or version number to directories or file names is strongly encouraged to keep your file names unique. This is especially important when using the OSDF for software and containers. Never submit jobs from the OSDF locations; always submit jobs from within the /home directory. All log , error , output files and any other files smaller than the above values should ONLY ever exist within the user's /home directory. Files placed within a public OSDF directory are publicly accessible , discoverable and readable by anyone, via the web. At the moment, most default OSDF locations are not public. Where to Put Your Files \u00b6 Data origins and local mount points vary between the different access points. See the list below for the \"Local Path\" to use, based on your access point. 
Access Point OSDF Origin ap40.uw.osg-htc.org Accessible to user only: Local Path: /ospool/ap40/data/[USERNAME] Base OSDF URL: osdf:///ospool/ap40/data/[USERNAME] ap20.uc.osg-htc.org Accessible to user only: Local Path: /ospool/ap20/data/[USERNAME] Base OSDF URL: osdf:///ospool/ap20/data/[USERNAME] Accessible to project group only: Local Path: /ospool/uc-shared/projects/[PROJECT] Base OSDF URL: osdf:///ospool/uc-shared/projects/[PROJECT] Public space for projects: Local Path: /ospool/uc-shared/public/[PROJECT] Base OSDF URL: osdf:///ospool/uc-shared/public/[PROJECT] ap21.uc.osg-htc.org Accessible to user only: Local Path: /ospool/ap21/data/[USERNAME] Base OSDF URL: osdf:///ospool/ap21/data/[USERNAME] Accessible to project group only: Local Path: /ospool/uc-shared/project/[PROJECT] Base OSDF URL: osdf:///ospool/uc-shared/project/[PROJECT] Public space for projects: Local Path: /ospool/uc-shared/public/[PROJECT] Base OSDF URL: osdf:///ospool/uc-shared/public/[PROJECT] Transfer Files To/From Jobs Using the OSDF \u00b6 Use an 'osdf://' URL to Transfer Large Input Files and Containers \u00b6 Jobs will transfer data from the OSDF directory when files are indicated with an appropriate osdf:// URL (or the older stash:// ) in the transfer_input_files line of the submit file. Make sure to customize the base URL based on your Access Point, as described in the table above . Some examples: Transferring one file from /ospool/apXX/data/ transfer_input_files = osdf:///ospool/apXX/data//InFile.txt When using multiple files from /ospool/apXX/data/ , it can be useful to use HTCondor submit file variables to make your list of files more readable: # Define a variable (example: OSDF_LOCATION) equal to the # path you would like files transferred to, and call this # variable using $(variable) OSDF_LOCATION = osdf:///ospool/apXX/data/ transfer_input_files = $(OSDF_LOCATION)/InputFile.txt, $(OSDF_LOCATION)/database.sql Transferring a folder from /ospool/apXX/data/ transfer_input_files = osdf:///ospool/apXX/data//?recursive Please note that for transferring a folder using OSDF ?recursive needs to added after the folder name. Use transfer_output_remaps and 'osdf://' URL for Large Output Files \u00b6 To move output files into an OSDF directory, users should use the transfer_output_remaps option within their job's submit file, which will transfer the user's specified file to the specific location in the data origin. By using transfer_output_remaps , it is possible to specify what path to save a file to and what name to save it under. Using this approach, it is possible to save files back to specific locations in your OSDF directory (as well as your /home directory, if desired). The general syntax for transfer_output_remaps is: transfer_output_remaps = \"Output1.txt = path/to/save/file/under/output.txt; Output2.txt = path/to/save/file/under/RenamedOutput.txt\" When saving large output files back to /ospool/apXX/data/ , the path provided will look like: transfer_output_remaps = \"Output.txt = osdf:///ospool/apXX/data//Output.txt\" Some examples: Transferring one output file ( OutFile.txt ) back to /ospool/apXX/data/ : transfer_output_remaps = \"OutFile.txt=osdf:///ospool/apXX/data//OutFile.txt\" When using multiple files from /ospool/apXX/data/ , it can be useful to use HTCondor submit file variables to make your list of files more readable. Also note the semi-colon separator in the list of output files. 
# Define a variable (example: OSDF_LOCATION) equal to the # path you would like files transferred to, and call this # variable using $(variable) OSDF_LOCATION = osdf:///ospool/apXX/data/ transfer_output_remaps = \"file1.txt = $(OSDF_LOCATION)/file1.txt; file2.txt = $(OSDF_LOCATION)/file2.txt; file3.txt = $(OSDF_LOCATION)/file3.txt\" Phase out of stash:/// and stashcp command \u00b6 Historically, output files could be transferred from a job to an OSDF location using the stashcp command within the job's executable. However, this mechanism is no longer encouraged for OSPool users. Instead, jobs should use transfer_output_remaps (an HTCondor feature) to transfer output files to your assigned OSDF origin. By using transfer_output_remaps , HTCondor will manage the output data transfer for your jobs. Data transferred via HTCondor is more likely to be transferred successfully and errors with transfer are more likely to be reported to the user. osdf:// is the new format for this kind of transfer, and is the equivalent of the old stash:// format (which will keep on being supported for the short term).","title":"Transfer Larger Job Files and Containers Using OSDF"},{"location":"htc_workloads/managing_data/osdf/#transfer-larger-job-files-and-containers-using-osdf","text":"For input files >1GB and output files >1GB in size, the default HTCondor file transfer mechanisms run the risk of over-taxing the Access Point and its network capacity. And this is exactly why the OSDF ( Open Science Data Federation ) exists for researchers with larger per-job data! The OSDF is a network of data origins and caches for data distribution. If you have an account on an OSG Access Point, you have access to an OSDF data origin, specifically a directory that can be used to stage input and output data for jobs, accessible via the OSDF. This guide describes general tips for using the OSDF, where to stage your files, and how to access files from jobs.","title":"Transfer Larger Job Files and Containers Using OSDF"},{"location":"htc_workloads/managing_data/osdf/#important-considerations-and-best-practices","text":"Use OSDF locations for larger files and containers : We recommend using the OSDF for files larger than 1GB (input or output) and all container files. OSDF files are cached across the Open Science Pool, so any changes or modifications that you make might not be propagated. This means that if you add a new version of a file to the OSDF directory, it must first be given a unique name (or directory path) to distinguish it from previous versions of that file. Adding a date or version number to directories or file names is strongly encouraged to keep your file names unique. This is especially important when using the OSDF for software and containers. Never submit jobs from the OSDF locations; always submit jobs from within the /home directory. All log , error , output files and any other files smaller than the above values should ONLY ever exist within the user's /home directory. Files placed within a public OSDF directory are publicly accessible , discoverable and readable by anyone, via the web. At the moment, most default OSDF locations are not public.","title":"Important Considerations and Best Practices"},{"location":"htc_workloads/managing_data/osdf/#where-to-put-your-files","text":"Data origins and local mount points vary between the different access points. See the list below for the \"Local Path\" to use, based on your access point. 
Access Point OSDF Origin ap40.uw.osg-htc.org Accessible to user only: Local Path: /ospool/ap40/data/[USERNAME] Base OSDF URL: osdf:///ospool/ap40/data/[USERNAME] ap20.uc.osg-htc.org Accessible to user only: Local Path: /ospool/ap20/data/[USERNAME] Base OSDF URL: osdf:///ospool/ap20/data/[USERNAME] Accessible to project group only: Local Path: /ospool/uc-shared/projects/[PROJECT] Base OSDF URL: osdf:///ospool/uc-shared/projects/[PROJECT] Public space for projects: Local Path: /ospool/uc-shared/public/[PROJECT] Base OSDF URL: osdf:///ospool/uc-shared/public/[PROJECT] ap21.uc.osg-htc.org Accessible to user only: Local Path: /ospool/ap21/data/[USERNAME] Base OSDF URL: osdf:///ospool/ap21/data/[USERNAME] Accessible to project group only: Local Path: /ospool/uc-shared/project/[PROJECT] Base OSDF URL: osdf:///ospool/uc-shared/project/[PROJECT] Public space for projects: Local Path: /ospool/uc-shared/public/[PROJECT] Base OSDF URL: osdf:///ospool/uc-shared/public/[PROJECT]","title":"Where to Put Your Files"},{"location":"htc_workloads/managing_data/osdf/#transfer-files-tofrom-jobs-using-the-osdf","text":"","title":"Transfer Files To/From Jobs Using the OSDF"},{"location":"htc_workloads/managing_data/osdf/#use-an-osdf-url-to-transfer-large-input-files-and-containers","text":"Jobs will transfer data from the OSDF directory when files are indicated with an appropriate osdf:// URL (or the older stash:// ) in the transfer_input_files line of the submit file. Make sure to customize the base URL based on your Access Point, as described in the table above . Some examples: Transferring one file from /ospool/apXX/data/ transfer_input_files = osdf:///ospool/apXX/data//InFile.txt When using multiple files from /ospool/apXX/data/ , it can be useful to use HTCondor submit file variables to make your list of files more readable: # Define a variable (example: OSDF_LOCATION) equal to the # path you would like files transferred to, and call this # variable using $(variable) OSDF_LOCATION = osdf:///ospool/apXX/data/ transfer_input_files = $(OSDF_LOCATION)/InputFile.txt, $(OSDF_LOCATION)/database.sql Transferring a folder from /ospool/apXX/data/ transfer_input_files = osdf:///ospool/apXX/data//?recursive Please note that for transferring a folder using OSDF ?recursive needs to added after the folder name.","title":"Use an 'osdf://' URL to Transfer Large Input Files and Containers"},{"location":"htc_workloads/managing_data/osdf/#use-transfer_output_remaps-and-osdf-url-for-large-output-files","text":"To move output files into an OSDF directory, users should use the transfer_output_remaps option within their job's submit file, which will transfer the user's specified file to the specific location in the data origin. By using transfer_output_remaps , it is possible to specify what path to save a file to and what name to save it under. Using this approach, it is possible to save files back to specific locations in your OSDF directory (as well as your /home directory, if desired). 
The general syntax for transfer_output_remaps is: transfer_output_remaps = \"Output1.txt = path/to/save/file/under/output.txt; Output2.txt = path/to/save/file/under/RenamedOutput.txt\" When saving large output files back to /ospool/apXX/data/ , the path provided will look like: transfer_output_remaps = \"Output.txt = osdf:///ospool/apXX/data//Output.txt\" Some examples: Transferring one output file ( OutFile.txt ) back to /ospool/apXX/data/ : transfer_output_remaps = \"OutFile.txt=osdf:///ospool/apXX/data//OutFile.txt\" When using multiple files from /ospool/apXX/data/ , it can be useful to use HTCondor submit file variables to make your list of files more readable. Also note the semi-colon separator in the list of output files. # Define a variable (example: OSDF_LOCATION) equal to the # path you would like files transferred to, and call this # variable using $(variable) OSDF_LOCATION = osdf:///ospool/apXX/data/ transfer_output_remaps = \"file1.txt = $(OSDF_LOCATION)/file1.txt; file2.txt = $(OSDF_LOCATION)/file2.txt; file3.txt = $(OSDF_LOCATION)/file3.txt\"","title":"Use transfer_output_remaps and 'osdf://' URL for Large Output Files"},{"location":"htc_workloads/managing_data/osdf/#phase-out-of-stash-and-stashcp-command","text":"Historically, output files could be transferred from a job to an' OSDF location using the stashcp command within the job's executable. However, this mechanism is no longer encouraged for OSPool users. Instead, jobs should use transfer_output_remaps (an HTCondor feature) to transfer output files to your assigned OSDF origin. By using transfer_output_remaps , HTCondor will manage the output data transfer for your jobs. Data transferred via HTCondor is more likely to be transferred successfully and errors with transfer are more likely to be reported to the user. osdf:// is the new format for these kind of transfers, and is equivalent of the old stash:// format (which will keep on being supported for the short term).","title":"Phase out of stash:/// and stashcp command"},{"location":"htc_workloads/managing_data/overview/","text":"Overview: Data Staging and Transfer to Jobs \u00b6 Overview \u00b6 As a distributed system, jobs on the OSPool will run in different physical locations, where the computers that are executing jobs don't have direct access to the files placed on the Access Point (e.g. in a /home directory). In order to run on this kind of distributed system, jobs need to \"bring along\" the data, code, packages, and other files from the access point (where the job is submitted) to the execute points (where the job will run). HTCondor's file transfer tools and plugins make this possible; input and output files are specified as part of the job submission and then moved to and from the execution location. This guide describes where to place files on the access points, and how to use these files within jobs, with links to a more detailed guide for each use case. Always Submit From /home \u00b6 Regardless of where data is placed, jobs should only be submitted with condor_submit from /home Use HTCondor File Transfer for Smaller Job Files \u00b6 You should use your /home directory to stage job files where: individual input files per job are less than 1GB per file, and if there are multiple files, they total less than 1GB output files per job are less than 1GB per file Files can to be transferred to and from the /home directory using HTCondor's file transfer mechanism. 
Input files can be specified in the submit file and by default, files created by your job will automatically be returned to your /home directory. See our Transfer Files To and From /home guide for complete details on managing your files this way. Use OSDF for Larger Files and Containers \u00b6 You should use the OSDF ( Open Science Data Federation ) to stage job files where: individual input files per job are greater than 1GB per file an input file (of any size) is used by many jobs output files per job are greater than 1GB per file You should also always use the OSDF to stage Singularity/Apptainer container files (with the ending .sif ) for jobs. Important Note: Files in OSDF are cached, so it is important to use a descriptive file name (possibly using version names or dates within the file name), or a directory structure with unique names to ensure you know what version of the file you are using within your job. To use the OSDF, files are placed (or returned to) a local path, and moved to and from the job using a URL notation in the submit file. To see where to place your files in the OSDF and how to use OSDF URLs in transfer_input_files / transfer_output_files , please see the OSDF guide. Quotas \u00b6 /home and OSDF origins all have quota limits. /home is usually limited to 50 GBs, while OSDF limits vary. You can find out your current usage by running quota or quota -vs Note that jobs will go on hold if quotas are exceeded. If you want an increase in your quota, please send a request with justification to the ticket system support@osg-htc.org External Data Transfer to/from Access Point \u00b6 In general, common Unix tools such as rsync , scp , Putty, WinSCP, gFTP , etc. can be used to upload data from your computer to access point, or to download files from the access point. See our Data Transfer Guide for more details. FAQ \u00b6 For additional data information, see also the \"Data Storage and Transfer\" section of our FAQ . Data Policies \u00b6 Please see the OSPool Polices for important usage polices.","title":"Overview: Data Staging and Transfer to Jobs"},{"location":"htc_workloads/managing_data/overview/#overview-data-staging-and-transfer-to-jobs","text":"","title":"Overview: Data Staging and Transfer to Jobs"},{"location":"htc_workloads/managing_data/overview/#overview","text":"As a distributed system, jobs on the OSPool will run in different physical locations, where the computers that are executing jobs don't have direct access to the files placed on the Access Point (e.g. in a /home directory). In order to run on this kind of distributed system, jobs need to \"bring along\" the data, code, packages, and other files from the access point (where the job is submitted) to the execute points (where the job will run). HTCondor's file transfer tools and plugins make this possible; input and output files are specified as part of the job submission and then moved to and from the execution location. 
This guide describes where to place files on the access points, and how to use these files within jobs, with links to a more detailed guide for each use case.","title":"Overview"},{"location":"htc_workloads/managing_data/overview/#always-submit-from-home","text":"Regardless of where data is placed, jobs should only be submitted with condor_submit from /home","title":"Always Submit From /home"},{"location":"htc_workloads/managing_data/overview/#use-htcondor-file-transfer-for-smaller-job-files","text":"You should use your /home directory to stage job files where: individual input files per job are less than 1GB per file, and if there are multiple files, they total less than 1GB output files per job are less than 1GB per file Files can to be transferred to and from the /home directory using HTCondor's file transfer mechanism. Input files can be specified in the submit file and by default, files created by your job will automatically be returned to your /home directory. See our Transfer Files To and From /home guide for complete details on managing your files this way.","title":"Use HTCondor File Transfer for Smaller Job Files"},{"location":"htc_workloads/managing_data/overview/#use-osdf-for-larger-files-and-containers","text":"You should use the OSDF ( Open Science Data Federation ) to stage job files where: individual input files per job are greater than 1GB per file an input file (of any size) is used by many jobs output files per job are greater than 1GB per file You should also always use the OSDF to stage Singularity/Apptainer container files (with the ending .sif ) for jobs. Important Note: Files in OSDF are cached, so it is important to use a descriptive file name (possibly using version names or dates within the file name), or a directory structure with unique names to ensure you know what version of the file you are using within your job. To use the OSDF, files are placed (or returned to) a local path, and moved to and from the job using a URL notation in the submit file. To see where to place your files in the OSDF and how to use OSDF URLs in transfer_input_files / transfer_output_files , please see the OSDF guide.","title":"Use OSDF for Larger Files and Containers"},{"location":"htc_workloads/managing_data/overview/#quotas","text":"/home and OSDF origins all have quota limits. /home is usually limited to 50 GBs, while OSDF limits vary. You can find out your current usage by running quota or quota -vs Note that jobs will go on hold if quotas are exceeded. If you want an increase in your quota, please send a request with justification to the ticket system support@osg-htc.org","title":"Quotas"},{"location":"htc_workloads/managing_data/overview/#external-data-transfer-tofrom-access-point","text":"In general, common Unix tools such as rsync , scp , Putty, WinSCP, gFTP , etc. can be used to upload data from your computer to access point, or to download files from the access point. 
See our Data Transfer Guide for more details.","title":"External Data Transfer to/from Access Point"},{"location":"htc_workloads/managing_data/overview/#faq","text":"For additional data information, see also the \"Data Storage and Transfer\" section of our FAQ .","title":"FAQ"},{"location":"htc_workloads/managing_data/overview/#data-policies","text":"Please see the OSPool Policies for important usage policies.","title":"Data Policies"},{"location":"htc_workloads/managing_data/scp/","text":"Use scp To Transfer Files To and From Access Point \u00b6 Overview \u00b6 This tutorial assumes that you will be using a command line application for performing file transfers instead of a GUI-based application such as WinSCP. We can transfer files to and from the access point using the scp command. Note: scp is a counterpart to the secure shell command, ssh , that allows for secure, encrypted file transfers between systems using your ssh credentials. When using scp , you will always need to specify both the source of the content that you wish to copy and the destination of where you would like the copy to end up. For example: $ scp [source] [destination] Files on remote systems (like an OSG Access Point) are indicated using username@machine:/path/to/file . Transfer Files To Access Point \u00b6 Let's say you have a file you wish to transfer named my_file.txt . Using the terminal application on your computer, navigate to the location of my_file.txt . Then use the following scp command to transfer my_file.txt to your /home on the access point. Note that you will not be logged into the access point when you perform this step. $ scp my_file.txt username@apXX.xx.osg-htc.org:/home/username/ Where XX is the specific number of your assigned access point (e.g. 20 or 40 ). Large files (>100MB in size) can be uploaded to your /public directory also using scp : $ scp my_large_file.gz username@apXX.xx.osg-htc.org:/public/username/ Transfer Directories To Access Point \u00b6 To copy directories using scp , add the (recursive) -r option to your scp command. For example: $ scp -r my_Dir username@apXX.xx.osg-htc.org:/home/username/ Transfer Files to Another Directory on the Access Point \u00b6 If you are using the OSDF to stage some of your files, you can upload files directly to that path by replacing /home/username in the commands above. If I wanted to upload files to the OSDF location on ap20 , which is /ospool/ap20/data/username , I would use the following command: $ scp my_file.txt username@ap20.uc.osg-htc.org:/ospool/ap20/data/username Transfer Files From Access Point \u00b6 To transfer files from the access point back to your laptop or desktop you can use the scp command as shown above, but with the source being the copy that is located on the access point: $ scp username@apXX.xx.osg-htc.org:/home/username/my_file.txt ./ where ./ sets the destination of the copy to your current location on your computer. Again, you will not be logged into the access point when you perform this step. You can download files from a different directory in the same way as described above when uploading files. Transfer Files Directly Between Access Point and Another Server \u00b6 scp can be used to transfer files between the OSG access point and another server that you have ssh access to. This means that files don't have to first be transferred to your personal computer which can save a lot of time and effort! 
For example, to transfer a file from another server to your access point login node /home directory: $ scp username@serverhostname:/path/to/my_file.txt username@apXX.xx.osg-htc.org:/home/username Be sure to use the username assigned to you on the other server and to provide the full path on the other server to your file. To transfer files from the OSG Access Point to the other server, just reverse the order of the two server statements. Other Graphical User Interface (GUI) Tools for transferring files and folders \u00b6 Apart from scp, other GUI software such as WinSCP , FileZilla , and Cyberduck can be used for transferring files and folders from and to the Access Point. Please remember to add your private key for the authentication method.","title":"Use scp To Transfer Files To and From OSG Managed Access Points"},{"location":"htc_workloads/managing_data/scp/#use-scp-to-transfer-files-to-and-from-access-point","text":"","title":"Use scp To Transfer Files To and From Access Point"},{"location":"htc_workloads/managing_data/scp/#overview","text":"This tutorial assumes that you will be using a command line application for performing file transfers instead of a GUI-based application such as WinSCP. We can transfer files to and from the access point using the scp command. Note scp is a counterpart to the secure shell command, ssh , that allows for secure, encrypted file transfers between systems using your ssh credentials. When using scp , you will always need to specify both the source of the content that you wish to copy and the destination of where you would like the copy to end up. For example: $ scp <source> <destination> Files on remote systems (like an OSG Access Point) are indicated using username@machine:/path/to/file .","title":"Overview"},{"location":"htc_workloads/managing_data/scp/#transfer-files-to-access-point","text":"Let's say you have a file you wish to transfer named my_file.txt . Using the terminal application on your computer, navigate to the location of my_file.txt . Then use the following scp command to transfer my_file.txt to your /home on the access point. Note that you will not be logged into the access point when you perform this step. $ scp my_file.txt username@apXX.xx.osg-htc.org:/home/username/ Where XX is the specific number of your assigned login node (e.g. 20 or 21 ). Large files (>100MB in size) can be uploaded to your /public directory also using scp : $ scp my_large_file.gz username@apXX.xx.osg-htc.org:/public/username/","title":"Transfer Files To Access Point"},{"location":"htc_workloads/managing_data/scp/#transfer-directories-to-access-point","text":"To copy directories using scp , add the (recursive) -r option to your scp command. For example: $ scp -r my_Dir username@apXX.xx.osg-htc.org:/home/username/","title":"Transfer Directories To Access Point"},{"location":"htc_workloads/managing_data/scp/#transfer-files-to-another-directory-on-the-access-point","text":"If you are using the OSDF to stage some of your files, you can upload files directly to that path by replacing /home/username in the commands above. 
If I wanted to upload files to the OSDF location on ap20 , which is /ospool/ap20/data/username , I would use the following command: $ scp my_file.txt username@ap20.uc.osg-htc.org:/ospool/ap20/data/username","title":"Transfer Files to Another Directory on the Access Point"},{"location":"htc_workloads/managing_data/scp/#transfer-files-from-access-point","text":"To transfer files from the access point back to your laptop or desktop you can use the scp command as shown above, but with the source being the copy that is located on the access point: $ scp username@apXX.xx.osg-htc.org:/home/username/my_file.txt ./ where ./ sets the destination of the copy to your current location on your computer. Again, you will not be logged into the access point when you perform this step. You can download files from a different directory in the same way as described above when uploading files.","title":"Transfer Files From Access Point"},{"location":"htc_workloads/managing_data/scp/#transfer-files-directly-between-access-point-and-another-server","text":"scp can be used to transfer files between the OSG access point and another server that you have ssh access to. This means that files don't have to first be transferred to your personal computer which can save a lot of time and effort! For example, to transfer a file from another server to your access point login node /home directory: $ scp username@serverhostname:/path/to/my_file.txt username@apXX.xx.osg-htc.org:/home/username Be sure to use the username assigned to you on the other server and to provide the full path on the other server to your file. To transfer files from the OSG Access Point to the other server, just reverse the order of the two server statements.","title":"Transfer Files Directly Between Access Point and Another Server"},{"location":"htc_workloads/managing_data/scp/#other-graphical-user-interface-gui-tools-for-transferring-files-and-folders","text":"Apart from scp, other GUI software such as WinSCP , FileZilla , and Cyberduck can be used for transferring files and folders from and to the Access Point. Please remember to add your private key for the authentication method.","title":"Other Graphical User Interface (GUI) Tools for transferring files and folders"},{"location":"htc_workloads/specific_resource/arm64/","text":"ARM64 \u00b6 ARM64 (AArch64) and x86_64 are both 64-bit architectures, but they differ in design and application. ARM64 is renowned for its energy efficiency, making it ideal for mobile devices and other low-power environments. In contrast, x86_64, predominantly used in Intel and AMD processors, emphasizes raw performance and compatibility with legacy software, establishing it as the standard for desktops, laptops, and servers. However, ARM64's energy efficiency has increasingly driven its adoption in high-throughput and high-performance computing environments. A small number of sites within the OSPool now offer ARM64 resources, though these resources currently see limited demand. The availability of these underutilized cycles provides a strong incentive for users to incorporate ARM64 resources when running their jobs. 
Listing Available Resources \u00b6 To see the ARM64 resources in the OSPool, use condor_status with a constraint for the architecture (note that on Linux and HTCondor, the official label for ARM64 is aarch64 ): condor_status -constraint 'Arch == \"aarch64\"' Requesting ARM64 \u00b6 By default, HTCondor will automatically send your job to the same architecture as the access point you are submitting from, which currently is the x86_64 architecture. If you also want to target ARM64, add the following to your requirements . requirements = (Arch == \"X86_64\" || Arch == \"aarch64\") Software Considerations \u00b6 Since ARM64 is a different architecture, x86_64 binaries and containers are incompatible. Additionally, OSPool's container synchronization is not yet ARM64-compatible. Therefore, the options for software on ARM64 resources are limited to the following: Simple Python codes. If you have a simple Python script which runs on the OSPool default images, it will probably work fine on ARM64 as well. All you need to do in this case is update your requirements as described in the previous section. Pre-built binaries. If you have built binaries for multiple architectures, you can use HTCondor's machine ad substitution mechanism to switch between the binaries depending on what machine the job lands on. Please see the HTCondor documentation for more details. Multiarch containers. If you are able to build multiarch containers (for example, with docker buildx build --platform linux/amd64,linux/arm64 ), you can specify which container to use similar to the pre-built binaries case. However, the image synchronization is still a manual process, so please contact support@osg-htc.org for help with this setup.","title":"ARM64"},{"location":"htc_workloads/specific_resource/arm64/#arm64","text":"ARM64 (AArch64) and x86_64 are both 64-bit architectures, but they differ in design and application. ARM64 is renowned for its energy efficiency, making it ideal for mobile devices and other low-power environments. In contrast, x86_64, predominantly used in Intel and AMD processors, emphasizes raw performance and compatibility with legacy software, establishing it as the standard for desktops, laptops, and servers. However, ARM64's energy efficiency has increasingly driven its adoption in high-throughput and high-performance computing environments. A small number of sites within the OSPool now offer ARM64 resources, though these resources currently see limited demand. The availability of these underutilized cycles provides a strong incentive for users to incorporate ARM64 resources when running their jobs.","title":"ARM64"},{"location":"htc_workloads/specific_resource/arm64/#listing-available-resources","text":"To see the ARM64 resources in the OSPool, use condor_status with a constraint for the architecture (note that on Linux and HTCondor, the official label for ARM64 is aarch64 ): condor_status -constraint 'Arch == \"aarch64\"'","title":"Listing Available Resources"},{"location":"htc_workloads/specific_resource/arm64/#requesting-arm64","text":"By default, HTCondor will automatically send your job to the same architecture as the access point you are submitting from, which currently is the x86_64 architecture. If you also want to target ARM64, add the following to your requirements . 
requirements = (Arch == \"X86_64\" || Arch == \"aarch64\")","title":"Requesting ARM64"},{"location":"htc_workloads/specific_resource/arm64/#software-considerations","text":"Since ARM64 is a different architecture, x86_64 binaries and containers are incompatible. Additionally, OSPool's container synchronization is not yet ARM64-compatible. Therefore, the options for software on ARM64 resources are limited to the following: Simple Python codes. If you have a simple Python script which runs on the OSPool default images, it will probably work fine on ARM64 as well. All you need to do in this case is update your requirements as described in the previous section. Pre-built binaries. If you have built binaries for multiple architectures, you can use HTCondor's machine ad substitution mechanism to switch between the binaries depending on what machine the job lands on. Please see the HTCondor documentation for more details. Multiarch containers. If you are able to build multiarch containers (for example, with docker buildx build --platform linux/amd64,linux/arm64 ), you can specify which container to use similar to the pre-built binaries case. However, the image synchronization is still a manual process, so please contact support@osg-htc.org for help with this setup.","title":"Software Considerations"},{"location":"htc_workloads/specific_resource/el9-transition/","text":"Operating System Transition to EL9 \u00b6 During May 2024, the OSPool will transition to be mostly EL9 based. The access points will be upgraded, and the execution points will mostly shift to EL9. Note that EL9 in this context refers to Enterprise Linux 9, and is an umbrella term for CentOS Stream 9 and derived distributions such as AlmaLinux 9 and RockyLinux 9. What You Need to Do \u00b6 The access point transitions will be mostly transparent. You will get an email about when the switchover will happen, and the access point will be offline for about 8 hours. Data and jobs will be retained, so no action is required. If your jobs use containers (Apptainer/Singularity, Docker) \u00b6 No action is needed for researchers already using an Apptainer/Singularity or Docker software container in their jobs. Because software containers have a small operating system installed inside of them, these jobs carry everything they need with them and do not rely significantly on the host operating system. By default, your jobs will match to any operating system in the HTC pool, including the new EL9 hosts. All other jobs (not using containers) \u00b6 Researchers not already using a Docker or Apptainer software container will need to either: Test their software/code on an EL9 machine to see if their software needs to be rebuilt, and then update the job requirements line to refer to RHEL 9 . See Requirements or Switch to using a software container (recommended). See below for additional information. If you would like to access as much computing capacity as possible, consider using an Apptainer or Docker software container for your jobs so that your jobs can match to a variety of operating systems. Options For Transitioning Your Jobs \u00b6 Option 1: Use a Software Container (Recommended) \u00b6 Using a software container to provide a base version of Linux will allow you to run on any nodes in the OSPool regardless of the operating system it is running, and not limit you to a subset of nodes. 
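As a minimal sketch of what this can look like in a submit file (assuming one of the OSPool-provided base images under /cvmfs/singularity.opensciencegrid.org/htc/ suits your code; the rocky:9 tag here is illustrative): universe = container container_image = /cvmfs/singularity.opensciencegrid.org/htc/rocky:9 The container guides linked below cover building and using your own images. 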
Apptainer/Singularity Docker Option 2: Transition to a New Operating System \u00b6 At any time, you can require a specific operating system version (or versions) for your jobs. Instructions for requesting a specific operating system(s) are outlined here: Requirements This option is more limiting because you are restricted to operating systems used by the OSPool, and the number of nodes running that operating system. Alternatively, you can make your job run in a provided base OS container. For example, if you want your job to always run in RHEL 8, remove the requirements and add +SingularityImage in your submit file. Example: +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/htc/rocky:8\" requirements = True","title":"EL9 Transition"},{"location":"htc_workloads/specific_resource/el9-transition/#operating-system-transition-to-el9","text":"During May 2024, the OSPool will transition to be mostly EL9 based. The access points will be upgraded, and the execution points will mostly shift to EL9. Note that EL9 in this context refers to Enterprise Linux 9, and is an umbrella term for CentOS Stream 9 and derived distributions such as AlmaLinux 9 and RockyLinux 9.","title":"Operating System Transition to EL9"},{"location":"htc_workloads/specific_resource/el9-transition/#what-you-need-to-do","text":"The access point transitions will be mostly transparent. You will get an email about when the switchover will happen, and the access point will be offline for about 8 hours. Data and jobs will be retained, so no action is required.","title":"What You Need to Do"},{"location":"htc_workloads/specific_resource/el9-transition/#if-your-jobs-use-containers-apptainersingularity-docker","text":"No action is needed for researchers already using an Apptainer/Singularity or Docker software container in their jobs. Because software containers have a small operating system installed inside of them, these jobs carry everything they need with them and do not rely significantly on the host operating system. By default, your jobs will match to any operating system in the HTC pool, including the new EL9 hosts.","title":"If your jobs use containers (Apptainer/Singularity, Docker)"},{"location":"htc_workloads/specific_resource/el9-transition/#all-other-jobs-not-using-containers","text":"Researchers not already using a Docker or Apptainer software container will need to either: Test their software/code on an EL9 machine to see if their software needs to be rebuilt, and then update the job requirements line to refer to RHEL 9 . See Requirements or Switch to using a software container (recommended). See below for additional information. If you would like to access as much computing capacity as possible, consider using an Apptainer or Docker software container for your jobs so that your jobs can match to a variety of operating systems.","title":"All other jobs (not using containers)"},{"location":"htc_workloads/specific_resource/el9-transition/#options-for-transitioning-your-jobs","text":"","title":"Options For Transitioning Your Jobs"},{"location":"htc_workloads/specific_resource/el9-transition/#option-1-use-a-software-container-recommended","text":"Using a software container to provide a base version of Linux will allow you to run on any nodes in the OSPool regardless of the operating system it is running, and not limit you to a subset of nodes. 
Apptainer/Singularity Docker","title":"Option 1: Use a Software Container (Recommended)"},{"location":"htc_workloads/specific_resource/el9-transition/#option-2-transition-to-a-new-operating-system","text":"At any time, you can require a specific operating system version (or versions) for your jobs. Instructions for requesting a specific operating system(s) are outlined here: Requirements This option is more limiting because you are restricted to operating systems used by OSPool, and the number of nodes running that operating system. Alternativly, you can make your job run in a provided base OS container. For example, if you want your job to always run in RHEL 8, remove the requirements and add +SingularityImage in your submit file. Example: +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/htc/rocky:8\" requirements = True","title":"Option 2: Transition to a New Operating System"},{"location":"htc_workloads/specific_resource/gpu-jobs/","text":"GPU Jobs \u00b6 GPUs (Graphical Processing Units) are a special kind of computer processor that are optimized for running very large numbers of simple calculations in parallel, which often can be applied to problems related to image processing or machine learning. Well-crafted GPU programs for suitable applications can outperform implementations running on CPUs by a factor of ten or more, but only when the program is written and designed explicitly to run on GPUs using special libraries like CUDA. Requesting GPUs \u00b6 To request a GPU for your HTCondor job, you can use the HTCondor request_gpus attribute in your submit file (along with the usual request_cpus , request_memory , and request_disk attributes). For example: request_gpus = 1 request_cpus = 1 request_memory = 4 GB request_disk = 2 GB Users can request one or multiple GPU cores on a single GPU machine. Specific GPU Requests \u00b6 If your software or code requires a certain type of GPU, or has some other special requirement, there is a special submit file line to request these capabilities, require_gpus . A few attributes that may be useful: Capability : this is NOT the GPU library, but rather a measure of the GPU's \"Compute Capability,\" which is relted to hardware generation DriverVersion : maximum version of the GPU libraries that can be supported GlobalMemoryMB : amount of GPU memory available on the GPU device in megabytes (MB) If you want a certain type or family of GPUs, we usually recommend using the GPU's 'Compute Capability', known as the Capability by HTCondor. For example, an NVIDIA A100 GPU has a Compute Capability of 8.0, so if you wanted to run on an A100 GPU specifically, the submit file requirement would be: require_gpus = (Capability == 8.0) Multiple requirements can be specified by using && statements: require_gpus = (Capability >= 7.5) && (GlobalMemoryMB >= 11000) Note that the more requirements you include, the fewer resources will be available to you! It's always better to set the minimal possible requirements (ideally, none!) in order to access the greatest amount of computing capacity. 
Sample Submit File \u00b6 universe = container container_image = /cvmfs/singularity.opensciencegrid.org/htc/tensorflow:1.3 log = job_$(Cluster)_$(Process).log error = job_$(Cluster)_$(Process).err output = job_$(Cluster)_$(Process).out executable = run_gpu_job.py #arguments = +JobDurationCategory = \"Medium\" # specify both general requirements and gpu requirements if there are any # requirements = require_gpus = (Capability > 7.5) request_gpus = 1 request_cpus = 1 request_memory = 4GB request_disk = 4GB queue 1 Available GPUs \u00b6 Capacity \u00b6 There are multiple OSPool contributors providing GPUs on a regular basis to the OSPool. Some of these contributors will make their GPUs available only when there is demand in the job queue, so after initial small-scale job testing, we strongly recommend submitting a significant batch of test jobs to explore how much throughput you can get in the system as a whole. As a reminder, because the OSPool is dynamic, the more jobs submitted requesting GPUs, the more GPU machines will be pulled into the OSPool as execution points. GPU Types \u00b6 Because the composition of the OSPool can change from day to day, we do not know exactly what specific GPUs are available at any given time. Based on previous GPU job executions, you might land on one of the following types of GPUs: GeForce GTX 1080 Ti (Capability: 6.1) V100 (Capability: 7.0) GeForce RTX 2080 Ti (Capability: 7.5) Quadro RTX 6000 (Capability: 7.5) A100 (Capability: 8.0) A40 (Capability: 8.6) GeForce RTX 3090 (Capability: 8.6) Software and Data Considerations \u00b6 Software for GPUs \u00b6 For GPU-enabled machine learning libraries, we recommend using software containers to set up your software for jobs: Containers - Apptainer/Singularity Sample TensorFlow GPU Container Image Definition TensorFlow Example Job See our Data Staging and Transfer guide for details and contact the Research Computing Facilitation team with questions.","title":"GPU Jobs"},{"location":"htc_workloads/specific_resource/gpu-jobs/#gpu-jobs","text":"GPUs (Graphical Processing Units) are a special kind of computer processor that are optimized for running very large numbers of simple calculations in parallel, which often can be applied to problems related to image processing or machine learning. Well-crafted GPU programs for suitable applications can outperform implementations running on CPUs by a factor of ten or more, but only when the program is written and designed explicitly to run on GPUs using special libraries like CUDA.","title":"GPU Jobs"},{"location":"htc_workloads/specific_resource/gpu-jobs/#requesting-gpus","text":"To request a GPU for your HTCondor job, you can use the HTCondor request_gpus attribute in your submit file (along with the usual request_cpus , request_memory , and request_disk attributes). For example: request_gpus = 1 request_cpus = 1 request_memory = 4 GB request_disk = 2 GB Users can request one or multiple GPU cores on a single GPU machine.","title":"Requesting GPUs"},{"location":"htc_workloads/specific_resource/gpu-jobs/#specific-gpu-requests","text":"If your software or code requires a certain type of GPU, or has some other special requirement, there is a special submit file line to request these capabilities, require_gpus . 
A few attributes that may be useful: Capability : this is NOT the GPU library, but rather a measure of the GPU's \"Compute Capability,\" which is related to hardware generation DriverVersion : maximum version of the GPU libraries that can be supported GlobalMemoryMB : amount of GPU memory available on the GPU device in megabytes (MB) If you want a certain type or family of GPUs, we usually recommend using the GPU's 'Compute Capability', known as the Capability by HTCondor. For example, an NVIDIA A100 GPU has a Compute Capability of 8.0, so if you wanted to run on an A100 GPU specifically, the submit file requirement would be: require_gpus = (Capability == 8.0) Multiple requirements can be specified by using && statements: require_gpus = (Capability >= 7.5) && (GlobalMemoryMB >= 11000) Note that the more requirements you include, the fewer resources will be available to you! It's always better to set the minimal possible requirements (ideally, none!) in order to access the greatest amount of computing capacity.","title":"Specific GPU Requests"},{"location":"htc_workloads/specific_resource/gpu-jobs/#sample-submit-file","text":"universe = container container_image = /cvmfs/singularity.opensciencegrid.org/htc/tensorflow:1.3 log = job_$(Cluster)_$(Process).log error = job_$(Cluster)_$(Process).err output = job_$(Cluster)_$(Process).out executable = run_gpu_job.py #arguments = +JobDurationCategory = \"Medium\" # specify both general requirements and gpu requirements if there are any # requirements = require_gpus = (Capability > 7.5) request_gpus = 1 request_cpus = 1 request_memory = 4GB request_disk = 4GB queue 1","title":"Sample Submit File"},{"location":"htc_workloads/specific_resource/gpu-jobs/#available-gpus","text":"","title":"Available GPUs"},{"location":"htc_workloads/specific_resource/gpu-jobs/#capacity","text":"There are multiple OSPool contributors providing GPUs on a regular basis to the OSPool. Some of these contributors will make their GPUs available only when there is demand in the job queue, so after initial small-scale job testing, we strongly recommend submitting a significant batch of test jobs to explore how much throughput you can get in the system as a whole. As a reminder, because the OSPool is dynamic, the more jobs submitted requesting GPUs, the more GPU machines will be pulled into the OSPool as execution points.","title":"Capacity"},{"location":"htc_workloads/specific_resource/gpu-jobs/#gpu-types","text":"Because the composition of the OSPool can change from day to day, we do not know exactly what specific GPUs are available at any given time. 
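One way to take a current snapshot (a hedged sketch, assuming your access point's HTCondor version advertises the GPUs_DeviceName machine attribute) is to ask the pool for the GPU device names it currently sees: $ condor_status -constraint 'GPUs_DeviceName =!= undefined' -af GPUs_DeviceName | sort | uniq -c 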
Based on previous GPU job executions, you might land on one of the following types of GPUs: GeForce GTX 1080 Ti (Capability: 6.1) V100 (Capability: 7.0) GeForce RTX 2080 Ti (Capability: 7.5) Quadro RTX 6000 (Capability: 7.5) A100 (Capability: 8.0) A40 (Capability: 8.6) GeForce RTX 3090 (Capability: 8.6)","title":"GPU Types"},{"location":"htc_workloads/specific_resource/gpu-jobs/#software-and-data-considerations","text":"","title":"Software and Data Considerations"},{"location":"htc_workloads/specific_resource/gpu-jobs/#software-for-gpus","text":"For GPU-enabled machine learning libraries, we recommend using software containers to set up your software for jobs: Containers - Apptainer/Singularity Sample TensorFlow GPU Container Image Definition TensorFlow Example Job See our Data Staging and Transfer guide for details and contact the Research Computing Facilitation team with questions.","title":"Software for GPUs"},{"location":"htc_workloads/specific_resource/large-memory-jobs/","text":"Large Memory Jobs \u00b6 By default, 2 GB of RAM (aka memory) will be assigned to your jobs. However, some jobs will require additional memory to complete successfully. To request more memory, use the HTCondor request_memory attribute in your submit file. The default unit is MB. For example, the following will request 12 GB: request_memory = 12288 You might be wondering why the above is requesting 12288 MB for 12 GB. That's because byte units don't actually scale by 1000 (10^3) like the metric system, but instead scale by 1024 (2^10) due to the binary nature of bytes. Alternatively, you can define a memory request using standard units request_memory = 12GB We recommend always explicitly defining the byte units in your request_memory statement. Please note that the OSG has limited resources available for large memory jobs. Requesting jobs with higher memory needs will result in longer than average queue times for these jobs.","title":"Large Memory Jobs"},{"location":"htc_workloads/specific_resource/large-memory-jobs/#large-memory-jobs","text":"By default, 2 GB of RAM (aka memory) will be assigned to your jobs. However, some jobs will require additional memory to complete successfully. To request more memory, use the HTCondor request_memory attribute in your submit file. The default unit is MB. For example, the following will request 12 GB: request_memory = 12288 You might be wondering why the above is requesting 12288 MB for 12 GB. That's because byte units don't actually scale by 1000 (10^3) like the metric system, but instead scale by 1024 (2^10) due to the binary nature of bytes. Alternatively, you can define a memory request using standard units request_memory = 12GB We recommend always explicitly defining the byte units in your request_memory statement. Please note that the OSG has limited resources available for large memory jobs. Requesting jobs with higher memory needs will result in longer than average queue times for these jobs.","title":"Large Memory Jobs"},{"location":"htc_workloads/specific_resource/multicore-jobs/","text":"Multicore Jobs \u00b6 Please note, the OSG has limited support for multicore jobs. Multicore jobs can be submitted for threaded or OpenMP applications. To request multiple cores (aka cpus) use the HTCondor request_cpus attribute in your submit file. Example: request_cpus = 8 We recommend requesting a maximum of 8 cpus. 
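As a sketch of keeping your program's thread count in step with the request (the program name and its --threads flag are hypothetical; HTCondor submit macros let you reuse the value defined earlier in the file): request_cpus = 8 executable = my_threaded_program arguments = --threads $(request_cpus) 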
Important considerations When submitting multicore jobs, please note that you will also have to tell your code or application to use the number of cpus requested in your submit file. Do not use core auto-detection as it might detect more cores than what were actually assigned to your job. MPI Jobs For jobs that require MPI, see our OpenMPI Jobs guide.","title":"Multicore Jobs"},{"location":"htc_workloads/specific_resource/multicore-jobs/#multicore-jobs","text":"Please note, the OSG has limited support for multicore jobs. Multicore jobs can be submitted for threaded or OpenMP applications. To request multiple cores (aka cpus) use the HTCondor request_cpus attribute in your submit file. Example: request_cpus = 8 We recommend requesting a maximum of 8 cpus. Important considerations When submitting multicore jobs, please note that you will also have to tell your code or application to use the number of cpus requested in your submit file. Do not use core auto-detection as it might detect more cores than what were actually assigned to your job. MPI Jobs For jobs that require MPI, see our OpenMPI Jobs guide.","title":"Multicore Jobs"},{"location":"htc_workloads/specific_resource/openmpi-jobs/","text":"OpenMPI Jobs \u00b6 Even though the Open Science Pool is a high throughput computing system, sometimes there is a need to run small OpenMPI based jobs. OSG has limited support for this, as long as the core count is small (4 is known to work well, 8 and 16 become more difficult due to the limited number of resources). Find an MPI-based Container \u00b6 To get started, first compile your code using an OpenMPI container. You can create your own OpenMPI container or use the one that is available on DockerHub. OSG also has an openmpi container that can be used for compiling. Please note that the OSG-provided openmpi.sif container image is available only on the ap20.uc.osg-htc.org and ap21.uc.osg-htc.org access points. For the ap40 access point, please use your desired docker image and do apptainer pull . More information about using apptainer pull can be found here . Compile the Code \u00b6 To compile your code using the OSG provided image, start running the container first. Then run mpicc to compile the code: $ apptainer shell /ospool/uc-shared/public/OSG-Staff/openmpi.sif Apptainer> mpicc -o hello hello.c The hello.c file is an example hello world code that can be executed using multiple processors. The code is given below: #include <mpi.h> #include <stdio.h> int main(int argc, char** argv) { MPI_Init(NULL, NULL); int world_size; MPI_Comm_size(MPI_COMM_WORLD, &world_size); int world_rank; MPI_Comm_rank(MPI_COMM_WORLD, &world_rank); char processor_name[MPI_MAX_PROCESSOR_NAME]; int name_len; MPI_Get_processor_name(processor_name, &name_len); printf(\"Hello world from processor %s, rank %d out of %d processors\\n\", processor_name, world_rank, world_size); MPI_Finalize(); } After compiling the code, you can test the executable locally using mpiexec : Apptainer> mpiexec -n 4 hello Hello world from processor ap21.uc.osg-htc.org, rank 0 out of 4 processors Hello world from processor ap21.uc.osg-htc.org, rank 1 out of 4 processors Hello world from processor ap21.uc.osg-htc.org, rank 2 out of 4 processors Hello world from processor ap21.uc.osg-htc.org, rank 3 out of 4 processors When testing is done, be sure to exit from the apptainer shell using exit . Run a Job Using the MPI Container and Compiled Code \u00b6 The next step is to run your code as a job on the Open Science Pool. For this, first create a wrapper.sh . 
Example: #!/bin/sh set -e mpiexec -n 4 hello Then, a job submit file: +SingularityImage = \"osdf:///ospool/uc-shared/public/OSG-Staff/openmpi.sif\" executable = wrapper.sh transfer_input_files = hello +JobDurationCategory = \"Medium\" request_cpus = 4 request_memory = 1 GB output = job.out.$(Cluster).$(Process) error = job.error.$(Cluster).$(Process) log = job.log.$(Cluster).$(Process) queue 1 Note how the executable is the wrapper.sh script, and that the real executable hello is transferred using the transfer_input_files mechanism. Please make sure that the number of cores specified in the submit file via request_cpus matches the -n argument in the wrapper.sh file.","title":"OpenMPI Jobs"},{"location":"htc_workloads/specific_resource/openmpi-jobs/#openmpi-jobs","text":"Even though the Open Science Pool is a high throughput computing system, sometimes there is a need to run small OpenMPI based jobs. OSG has limited support for this, as long as the core count is small (4 is known to work well, 8 and 16 become more difficult due to the limited number of resources).","title":"OpenMPI Jobs"},{"location":"htc_workloads/specific_resource/openmpi-jobs/#find-an-mpi-based-container","text":"To get started, first compile your code using an OpenMPI container. You can create your own OpenMPI container or use the one that is available on DockerHub. OSG also has an openmpi container that can be used for compiling. Please note that the OSG-provided openmpi.sif container image is available only on the ap20.uc.osg-htc.org and ap21.uc.osg-htc.org access points. For the ap40 access point, please use your desired docker image and do apptainer pull . More information about using apptainer pull can be found here .","title":"Find an MPI-based Container"},{"location":"htc_workloads/specific_resource/openmpi-jobs/#compile-the-code","text":"To compile your code using the OSG provided image, start running the container first. Then run mpicc to compile the code: $ apptainer shell /ospool/uc-shared/public/OSG-Staff/openmpi.sif Apptainer> mpicc -o hello hello.c The hello.c file is an example hello world code that can be executed using multiple processors. The code is given below: #include <mpi.h> #include <stdio.h> int main(int argc, char** argv) { MPI_Init(NULL, NULL); int world_size; MPI_Comm_size(MPI_COMM_WORLD, &world_size); int world_rank; MPI_Comm_rank(MPI_COMM_WORLD, &world_rank); char processor_name[MPI_MAX_PROCESSOR_NAME]; int name_len; MPI_Get_processor_name(processor_name, &name_len); printf(\"Hello world from processor %s, rank %d out of %d processors\\n\", processor_name, world_rank, world_size); MPI_Finalize(); } After compiling the code, you can test the executable locally using mpiexec : Apptainer> mpiexec -n 4 hello Hello world from processor ap21.uc.osg-htc.org, rank 0 out of 4 processors Hello world from processor ap21.uc.osg-htc.org, rank 1 out of 4 processors Hello world from processor ap21.uc.osg-htc.org, rank 2 out of 4 processors Hello world from processor ap21.uc.osg-htc.org, rank 3 out of 4 processors When testing is done, be sure to exit from the apptainer shell using exit .","title":"Compile the Code"},{"location":"htc_workloads/specific_resource/openmpi-jobs/#run-a-job-using-the-mpi-container-and-compiled-code","text":"The next step is to run your code as a job on the Open Science Pool. For this, first create a wrapper.sh . 
Example: #!/bin/sh set -e mpiexec -n 4 hello Then, a job submit file: +SingularityImage = \"osdf:///ospool/uc-shared/public/OSG-Staff/openmpi.sif\" executable = wrapper.sh transfer_input_files = hello +JobDurationCategory = \"Medium\" request_cpus = 4 request_memory = 1 GB output = job.out.$(Cluster).$(Process) error = job.error.$(Cluster).$(Process) log = job.log.$(Cluster).$(Process) queue 1 Note how the executable is the wrapper.sh script, and that the real executable hello is transferred using the transfer_input_files mechanism. Please make sure that the number of cores specified in the submit file via request_cpus matches the -n argument in the wrapper.sh file.","title":"Run a Job Using the MPI Container and Compiled Code"},{"location":"htc_workloads/specific_resource/requirements/","text":"Control Where Your Jobs Run / Job Requirements \u00b6 By default, your jobs will match any available slot in the OSG. This is fine for very generic jobs. However, in some cases a job may have one or more system requirements in order to complete successfully. For instance, your job may need to run on a node with a specific operating system. HTCondor provides several options for \"steering\" your jobs to appropriate nodes and system environments. The request_cpus , request_gpus , request_memory , and request_disk submit file attributes should be used to specify the hardware needs of your jobs. Please see our guides Multicore Jobs and Large Memory Jobs for more details. HTCondor also provides a requirements attribute and feature-specific attributes that can be added to your submit files to target specific environments in which to run your jobs. Lastly, there are some custom attributes you can add to your submit file to either focus on, or avoid, certain execution sites. Requirements \u00b6 The requirements attribute is formatted as an expression, so you can use logical operators to combine multiple requirements where && is used for AND and || is used for OR. For example, the following requirements statement will direct jobs only to 64 bit RHEL (Red Hat Enterprise Linux) 9 nodes. requirements = OSGVO_OS_STRING == \"RHEL 9\" && Arch == \"X86_64\" Alternatively, if you have code which can run on either RHEL 8 or 9, you can use OR: requirements = (OSGVO_OS_STRING == \"RHEL 8\" || OSGVO_OS_STRING == \"RHEL 9\") && Arch == \"X86_64\" Note that parentheses placement is important for controlling how the logical operations are interpreted by HTCondor. If you are interested in seeing a list of currently available operating systems (these are just the default ones, you can create a custom container image if you want something else): $ condor_status -autoformat OSGVO_OS_STRING | sort | uniq -c Another common requirement is to land on a node which has CVMFS. Then the requirements would be: requirements = HAS_oasis_opensciencegrid_org == True x86_64 Micro Architecture Levels \u00b6 The x86_64 set of CPUs contains a large number of different CPUs with different capabilities. Instead of trying to match on individual attributes like the AVX/AVX2 ones in the previous section, it can be useful to match against a family of CPUs. There are currently 4 levels to choose from: x86_64-v1, x86_64-v2, x86_64-v3, and x86_64-v4. A description of the levels is available on Wikipedia . HTCondor advertises an attribute named Microarch . 
An example of how to make jobs run on the two highest levels is: requirements = (Microarch >= \"x86_64-v3\") Note that in the past, it was recommended to use the HAS_AVX and HAS_AVX2 attributes to target CPUs with those capabilities. This is no longer recommended, with the replacement being Microarch >= \"x86_64-v3\" . Additional Feature-Specific Attributes \u00b6 There are many attributes that you can use with requirements . To see what values you can specify for a given attribute, you can run the following command while connected to your login node: $ condor_status -af {ATTR_NAME} | sort -u For example, to see what values you can specify for the Microarch attribute run: $ condor_status -af Microarch | sort -u x86_64-v1 x86_64-v2 x86_64-v3 x86_64-v4 You will find many attributes will take the boolean values true or false . Below is a list of common attributes that you can include in your submit file requirements statement. Microarch - See above. x86_64-v1, x86_64-v2, x86_64-v3, and x86_64-v4 OSGVO_OS_NAME - The name of the operating system of the compute node. The most common name is RHEL OSGVO_OS_VERSION - Version of the operating system OSGVO_OS_STRING - Combined OS name and version. Please see the requirements string above on the recommended setup. OSGVO_CPU_MODEL - The CPU model identifier string as presented in /proc/cpuinfo HAS_CVMFS_oasis_opensciencegrid_org - Attribute specifying the need to access specific oasis /cvmfs file system repositories. Other common CVMFS repositories are HAS_CVMFS_singularity_opensciencegrid_org and project ones like HAS_CVMFS_xenon_opensciencegrid_org . For GPU attributes, such as GPUs' compute capability, see our GPU guide . Non-x86 Based Architectures \u00b6 Within the computing community, there's a growing interest in exploring non-x86 architectures, such as ARM and PowerPC. As of now, the OSPool does not host resources based on these architectures; however, it is designed to accommodate them once available. The OSPool operates under a system where all tasks are configured to execute on the same architecture as the host from which they were submitted. This compatibility is ensured by HTCondor, which automatically adds the appropriate architecture to the job's requirements. By inspecting the classad of any given job, one would notice the inclusion of (TARGET.Arch == \"X86_64\") among its requirements, indicating the system's current architectural preference. If you do wish to specify a different architecture, just add it to your job requirements: requirements = Arch == \"PPC\" You can get a list of current architectures by running: $ condor_status -af Arch | sort | uniq X86_64 Specifying Sites / Avoiding Sites \u00b6 To run your jobs on a list of specific execution sites, or avoid a set of sites, use the +DESIRED_Sites / +UNDESIRED_Sites attributes in your job submit file. These attributes should only be used as a last resort. For example, it is much better to use feature attributes (see above) to make your job go to nodes matching what you really require, than to broadly allow/block whole sites. We encourage you to contact the facilitation team before taking this action, to make sure it is right for you. To avoid certain sites, first find the site names. You can find a current list by querying the pool: condor_status -af GLIDEIN_Site | sort -u In your submit file, add a comma-separated list of sites like: +UNDESIRED_Sites = \"ISI,SU-ITS\" Those sites will now be excluded from the set of sites your job can run at. 
Similarly, you can use +DESIRED_Sites to list a subset of sites you want to target. For example, to run your jobs at the SU-ITS site, and only at that site, use: +DESIRED_Sites = \"SU-ITS\" Note that you should only specify one of +DESIRED_Sites / +UNDESIRED_Sites in the submit file. Using both at the same time will prevent the job from running.","title":"Control Where Your Jobs Run/Job Requirements"},{"location":"htc_workloads/specific_resource/requirements/#control-where-your-jobs-run-job-requirements","text":"By default, your jobs will match any available slot in the OSG. This is fine for very generic jobs. However, in some cases a job may have one or more system requirements in order to complete successfully. For instance, your job may need to run on a node with a specific operating system. HTCondor provides several options for \"steering\" your jobs to appropriate nodes and system environments. The request_cpus , request_gpus , request_memory , and request_disk submit file attributes should be used to specify the hardware needs of your jobs. Please see our guides Multicore Jobs and Large Memory Jobs for more details. HTCondor also provides a requirements attribute and feature-specific attributes that can be added to your submit files to target specific environments in which to run your jobs. Lastly, there are some custom attributes you can add to your submit file to either focus on, or avoid, certain execution sites.","title":"Control Where Your Jobs Run / Job Requirements"},{"location":"htc_workloads/specific_resource/requirements/#requirements","text":"The requirements attribute is formatted as an expression, so you can use logical operators to combine multiple requirements where && is used for AND and || is used for OR. For example, the following requirements statement will direct jobs only to 64 bit RHEL (Red Hat Enterprise Linux) 9 nodes. requirements = OSGVO_OS_STRING == \"RHEL 9\" && Arch == \"X86_64\" Alternatively, if you have code which can run on either RHEL 8 or 9, you can use OR: requirements = (OSGVO_OS_STRING == \"RHEL 8\" || OSGVO_OS_STRING == \"RHEL 9\") && Arch == \"X86_64\" Note that parentheses placement is important for controlling how the logical operations are interpreted by HTCondor. If you are interested in seeing a list of currently available operating systems (these are just the default ones, you can create a custom container image if you want something else): $ condor_status -autoformat OSGVO_OS_STRING | sort | uniq -c Another common requirement is to land on a node which has CVMFS. Then the requirements would be: requirements = HAS_oasis_opensciencegrid_org == True","title":"Requirements"},{"location":"htc_workloads/specific_resource/requirements/#x86_64-micro-architecture-levels","text":"The x86_64 set of CPUs contains a large number of different CPUs with different capabilities. Instead of trying to match on individual attributes like the AVX/AVX2 ones in the previous section, it can be useful to match against a family of CPUs. There are currently 4 levels to choose from: x86_64-v1, x86_64-v2, x86_64-v3, and x86_64-v4. A description of the levels is available on Wikipedia . HTCondor advertises an attribute named Microarch . An example of how to make jobs run on the two highest levels is: requirements = (Microarch >= \"x86_64-v3\") Note that in the past, it was recommended to use the HAS_AVX and HAS_AVX2 attributes to target CPUs with those capabilities. 
This is no longer recommended, with the replacement being Microarch >= \"x86_64-v3\" .","title":"x86_64 Micro Architecture Levels"},{"location":"htc_workloads/specific_resource/requirements/#additional-feature-specific-attributes","text":"There are many attributes that you can use with requirements . To see what values you can specify for a given attribute, you can run the following command while connected to your login node: $ condor_status -af {ATTR_NAME} | sort -u For example, to see what values you can specify for the Microarch attribute run: $ condor_status -af Microarch | sort -u x86_64-v1 x86_64-v2 x86_64-v3 x86_64-v4 You will find many attributes will take the boolean values true or false . Below is a list of common attributes that you can include in your submit file requirements statement. Microarch - See above. x86_64-v1, x86_64-v2, x86_64-v3, and x86_64-v4 OSGVO_OS_NAME - The name of the operating system of the compute node. The most common name is RHEL OSGVO_OS_VERSION - Version of the operating system OSGVO_OS_STRING - Combined OS name and version. Please see the requirements string above on the recommended setup. OSGVO_CPU_MODEL - The CPU model identifier string as presented in /proc/cpuinfo HAS_CVMFS_oasis_opensciencegrid_org - Attribute specifying the need to access specific oasis /cvmfs file system repositories. Other common CVMFS repositories are HAS_CVMFS_singularity_opensciencegrid_org and project ones like HAS_CVMFS_xenon_opensciencegrid_org . For GPU attributes, such as GPUs' compute capability, see our GPU guide .","title":"Additional Feature-Specific Attributes"},{"location":"htc_workloads/specific_resource/requirements/#non-x86-based-architectures","text":"Within the computing community, there's a growing interest in exploring non-x86 architectures, such as ARM and PowerPC. As of now, the OSPool does not host resources based on these architectures; however, it is designed to accommodate them once available. The OSPool operates under a system where all tasks are configured to execute on the same architecture as the host from which they were submitted. This compatibility is ensured by HTCondor, which automatically adds the appropriate architecture to the job's requirements. By inspecting the classad of any given job, one would notice the inclusion of (TARGET.Arch == \"X86_64\") among its requirements, indicating the system's current architectural preference. If you do wish to specify a different architecture, just add it to your job requirements: requirements = Arch == \"PPC\" You can get a list of current architectures by running: $ condor_status -af Arch | sort | uniq X86_64","title":"Non-x86 Based Architectures"},{"location":"htc_workloads/specific_resource/requirements/#specifying-sites-avoiding-sites","text":"To run your jobs on a list of specific execution sites, or avoid a set of sites, use the +DESIRED_Sites / +UNDESIRED_Sites attributes in your job submit file. These attributes should only be used as a last resort. For example, it is much better to use feature attributes (see above) to make your job go to nodes matching what you really require, than to broadly allow/block whole sites. We encourage you to contact the facilitation team before taking this action, to make sure it is right for you. To avoid certain sites, first find the site names. 
You can find a current list by querying the pool: condor_status -af GLIDEIN_Site | sort -u In your submit file, add a comma-separated list of sites like: +UNDESIRED_Sites = \"ISI,SU-ITS\" Those sites will now be excluded from the set of sites your job can run at. Similarly, you can use +DESIRED_Sites to list a subset of sites you want to target. For example, to run your jobs at the SU-ITS site, and only at that site, use: +DESIRED_Sites = \"SU-ITS\" Note that you should only specify one of +DESIRED_Sites / +UNDESIRED_Sites in the submit file. Using both at the same time will prevent the job from running.","title":"Specifying Sites / Avoiding Sites"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/","text":"Convert Your Workflow From Slurm to HTCondor \u00b6 Introduction \u00b6 Slurm is a common workload manager for high performance computing (HPC) systems while HTCondor is a scheduler program developed for a high throughput computing (HTC) environment. As they are both implementations of scheduler/workload managers, they have some similarities, like needing to specify the computing resources required for a job. Some differences include the syntax for describing a job, and some of the system assumptions made by the scheduling program. In this guide, we will go through some general similarities and differences and provide an example of \"translating\" an existing Slurm submit file into HTCondor. Skip to this example . General Differences Between Slurm and HTCondor \u00b6 HTCondor is good at managing a large quantity of single-node jobs; Slurm is suitable for scheduling multi-node and multi-core jobs, and can struggle when managing a large quantity of jobs Slurm requires a shared file system to operate, HTCondor does not. A Slurm script has a certain order - all the requirements at the top, then the code execution step; an HTCondor submit file does not have any required order. The only requirement is that it ends with the queue statement. Every requirement line in the Slurm script starts with #SBATCH . In HTCondor only the system requirements (RAM, Cores, Disk space) line starts with request_ The queue statement in HTCondor can be modified (include variables) to make it behave like an array job in Slurm. Basic job submission and queue checking commands start with a condor_ prefix in HTCondor; Slurm commands generally start with the letter s . To know more about Slurm please visit their website and for HTCondor take a look at the HTCondor manual page Special Considerations for the OSPool \u00b6 HTCondor on the OSPool does not use modules or a shared file system . A user needs to identify every component of their jobs and transfer them from their access point to the execute node. The slides of the new user training contain more details about it. Instead of relying on modules, please use the different containers available on the OSPool or make your own container . Please remember the facilitation team is here to support you . By default, the wall time limit on the OSPool is 10 hours. Comparing Slurm and HTCondor Files \u00b6 A sample Slurm script is presented below with the equivalent HTCondor transformation. 
Submitting One Job \u00b6 The scenario here is submitting one Matlab job, requesting 8 cores, 16GB of memory (or RAM), planning to run for 20 hours, specifying where to save standard output and error Slurm Example \u00b6 #!/bin/bash #SBATCH --job-name=sample_slurm # Optional in HTCondor #SBATCH --error=job.%J.error #SBATCH --output=job.%J.out #SBATCH --time=20:00:00 #SBATCH --nodes=1 # HTCondor equivalent does not exist #SBATCH --ntasks-per-node=8 #SBATCH --mem-per-cpu=2gb #SBATCH --partition=batch # HTCondor equivalent does not exist module load matlab/r2020a matlab -nodisplay -r \"matlab_program(input_arguments),quit\" HTCondor Example \u00b6 +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020a\" executable = matlab_program arguments = input_arguments # optional batch_name = sample_htcondor error = job.$(ClusterID).$(ProcID).error output = job.$(ClusterID).$(ProcID).out log = job.$(ProcID).log # transfer_input_files = +JobDurationCategory = \"Long\" request_cpus = 8 request_memory = 16 GB request_disk = 2 GB queue 1 Notice that: - Using a Singularity image replaces module loading - The Matlab command becomes executable and arguments in the submit file - HTCondor has its own custom \"log\" format in addition to saving standard output and standard error. - If there are additional input files, they would need to be added in the \"transfer_input_files\" line. - Note that memory is total, not per-core. We also need to request disk space for the job's working directory, as it is not running on a shared file system. Submit Multiple Jobs \u00b6 Using the same base example, what options are needed if you wanted to run multiple copies of the same basic job? Slurm Example \u00b6 In Slurm, multiple tasks are expressed as an array job: %%%%%%%%%%%%%%%%%highlights for submitting an array jobs %%%%%%%%%%%%%%%%%%%%%%%%%%% #SBATCH --array=0-9 module load matlab/r2020a matlab -nodisplay -r \"matlab_program(input_arguments,$SLURM_ARRAY_TASK_ID),quit\" HTCondor Example \u00b6 In HTCondor, multiple tasks are submitted as many independent jobs. The $(ProcID) variable takes the place of $SLURM_ARRAY_TASK_ID above. %%%%%%%%%%%%%%% equivalent changes to HTCondor for array jobs%%%%%%%%%%%%%%%%%%%%%%%%%% executable = matlab_program arguments = input_arguments, $(ProcID) queue 10 HTCondor has many more ways to submit multiple jobs, beyond this simple numerical approach. See our other HTCondor guides for more details.","title":"Convert your workflow from Slurm to HTCondor"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#convert-your-workflow-from-slurm-to-htcondor","text":"","title":"Convert Your Workflow From Slurm to HTCondor"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#introduction","text":"Slurm is a common workload manager for high performance computing (HPC) systems while HTCondor is a scheduler program developed for a high throughput computing (HTC) environment. As they are both implementations of scheduler/workload managers, they have some similarities, like needing to specify the computing resources required for a job. Some differences include the syntax for describing a job, and some of the system assumptions made by the scheduling program. In this guide, we will go through some general similarities and differences and provide an example of \"translating\" an existing Slurm submit file into HTCondor. 
Skip to this example .","title":"Introduction"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#general-diffences-between-slurm-and-htcondor","text":"HTCondor is good at managing a large quantity of single-node jobs; Slurm is suitable for scheduling multi-node and multi-core jobs, and can struggle when managing a large quantity of jobs Slurm requires a shared file system to operate, HTCondor does not. A Slurm script has a certain order - all the requirements at the top, then the code execution step; an HTCondor submit file does not have any required order. The only requirement is that it ends with the queue statement. Every requirement line in the Slurm script starts with #SBATCH . In HTCondor only the system requirements (RAM, Cores, Disk space) line starts with request_ The queue statement in HTCondor can be modified (include variables) to make it behave like an array job in Slurm. Basic job submission and queue checking commands start with a condor_ prefix in HTCondor; Slurm commands generally start with the letter s . To know more about Slurm please visit their website and for HTCondor take a look at the HTCondor manual page","title":"General Differences Between Slurm and HTCondor"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#special-considerations-for-the-ospool","text":"HTCondor on the OSPool does not use modules or a shared file system . A user needs to identify every component of their jobs and transfer them from their access point to the execute node. The slides of the new user training contain more details about it. Instead of relying on modules, please use the different containers available on the OSPool or make your own container . Please remember the facilitation team is here to support you . By default, the wall time limit on the OSPool is 10 hours.","title":"Special Considerations for the OSPool"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#comparing-slurm-and-htcondor-files","text":"A sample Slurm script is presented below with the equivalent HTCondor transformation.","title":"Comparing Slurm and HTCondor Files"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#submitting-one-job","text":"The scenario here is submitting one Matlab job, requesting 8 cores, 16GB of memory (or RAM), planning to run for 20 hours, specifying where to save standard output and error","title":"Submitting One Job"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#slurm-example","text":"#!/bin/bash #SBATCH --job-name=sample_slurm # Optional in HTCondor #SBATCH --error=job.%J.error #SBATCH --output=job.%J.out #SBATCH --time=20:00:00 #SBATCH --nodes=1 # HTCondor equivalent does not exist #SBATCH --ntasks-per-node=8 #SBATCH --mem-per-cpu=2gb #SBATCH --partition=batch # HTCondor equivalent does not exist module load matlab/r2020a matlab -nodisplay -r \"matlab_program(input_arguments),quit\"","title":"Slurm Example"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#htcondor-example","text":"+SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020a\" executable = matlab_program arguments = input_arguments # optional batch_name = sample_htcondor error = job.$(ClusterID).$(ProcID).error output = job.$(ClusterID).$(ProcID).out log = job.$(ProcID).log # transfer_input_files = +JobDurationCategory = \"Long\" request_cpus = 8 request_memory = 16 GB request_disk = 2 GB queue 1 Notice that: - Using a Singularity image replaces module loading - The Matlab command becomes 
executable and arguments in the submit file - HTCondor has its own custom \"log\" format in addition to saving standard output and standard error. - If there are additional input files, they would need to be added in the \"transfer_input_files\" line. - Note that memory is total, not per-core. We also need to request disk space for the job's working directory, as it is not running on a shared file system.","title":"HTCondor Example"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#submit-multiple-jobs","text":"Using the same base example, what options are needed if you wanted to run multiple copies of the same basic job?","title":"Submit Multiple Jobs"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#slurm-example_1","text":"In Slurm, multiple tasks are expressed as an array job: %%%%%%%%%%%%%%%%% highlights for submitting an array job %%%%%%%%%%%%%%%%%%%%%%%%%%% #SBATCH --array=0-9 module load matlab/r2020a matlab -nodisplay -r \"matlab_program(input_arguments,$SLURM_ARRAY_TASK_ID),quit\"","title":"Slurm Example"},{"location":"htc_workloads/submitting_workloads/Slurm_to_HTCondor/#htcondor-example_1","text":"In HTCondor, multiple tasks are submitted as many independent jobs. The $(ProcID) variable takes the place of $SLURM_ARRAY_TASK_ID above. %%%%%%%%%%%%%%% equivalent changes to HTCondor for array jobs %%%%%%%%%%%%%%%%%%%%%%%%%% executable = matlab_program arguments = input_arguments, $(ProcID) queue 10 HTCondor has many more ways to submit multiple jobs, beyond this simple numerical approach. See our other HTCondor guides for more details.","title":"HTCondor Example"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/","text":"Checkpointing Jobs \u00b6 What is Checkpointing? \u00b6 Checkpointing is a technique that provides fault tolerance for a user's analysis. It consists of saving snapshots of a job's progress so the job can be restarted without losing its progress and having to restart from the beginning. We highly encourage checkpointing as a solution for jobs that will exceed the 10 hour maximum suggested runtime on the OSPool. This section is about jobs capable of periodically saving checkpoint information, and how to make HTCondor store that information safely, in case it's needed to continue the job on another machine or at a later time. There are two types of checkpointing: exit driven and eviction driven. In the vast majority of cases, exit driven checkpointing is preferred over eviction driven checkpointing. Therefore, this guide will focus on how to utilize exit driven checkpointing for your analysis. Note that not all software, programs, or code are capable of creating checkpoint files and knowing how to resume from them. Consult the manual for your software or program to determine if it supports checkpointing features. Some manuals will refer to this ability as \"checkpoint\" features, as the ability to \"resume\" mid-analysis if a job is interrupted, or as \"checkpoint/restart\" capabilities. Contact a Research Computing Facilitator if you would like help determining if your software, program, or code is able to checkpoint. Why Checkpoint? \u00b6 Checkpointing allows a job to automatically resume from approximately where it left off instead of having to start over if interrupted. This behavior is advantageous for jobs limited by a maximum runtime policy. It is also advantageous for jobs submitted to backfill resources with no runtime guarantee (i.e. 
jobs on the OSPool) where the compute resources may also be more prone to hardware or networking failures. For example, checkpointing jobs that are limited by a runtime policy can enable HTCondor to exit a job and automatically requeue it to avoid hitting the maximum runtime limit. By using checkpointing, jobs circumvent hitting the maximum runtime limit and can run for extended periods of time until the completion of the analysis. This behavior avoids costly setbacks that may be caused by losing results mid-way through an analysis due to hitting a runtime limit. Process of Exit Driven Checkpointing \u00b6 Using exit driven checkpointing, a job is specified to time out after a user-specified amount of time with an exit code value of 85 (more on this below). Upon hitting this time limit, HTCondor transfers any checkpoint files listed in the submit file attribute transfer_checkpoint_files to a directory called /spool . This directory acts as a storage location for these files in case the job is interrupted. HTCondor then knows that jobs with exit code 85 should be automatically requeued, and will transfer the checkpoint files in /spool to your job's working directory prior to restarting your executable. The process of exit driven checkpointing relies heavily on the use of exit codes to determine the next appropriate steps for HTCondor to take with a job. In general, exit codes are used to report system responses, such as whether an analysis is running, encountered an error, or completed successfully. HTCondor recognizes exit code 85 as indicating a checkpointing job and therefore knows to handle these jobs differently than non-checkpointing jobs. Requirements for Exit Driven Checkpointing \u00b6 Requirements for your code or software: Checkpoint : The software, program, or code you are using must be able to capture checkpoint files (i.e. snapshots of the progress made thus far) and know how to resume from them. Resume : This means your code must be able to recognize checkpoint files and know to resume from them instead of the original input data when the code is restarted. Exit : Jobs should exit with an exit code value of 85 after successfully creating checkpoint files. Additionally, jobs need to be able to exit with a non- 85 value if they encounter an error or after writing the final outputs. In some cases, these requirements can be achieved by using a wrapper script. This means that your executable may be a script, rather than the code that is writing the checkpoint. An example wrapper script that enables some of these behaviors is below. Contact a Research Computing Facilitator for help determining if your job is capable of using checkpointing. Changes to the Submit File \u00b6 Several modifications to the submit file are needed to enable HTCondor's checkpointing feature. The line checkpoint_exit_code = 85 must be added. HTCondor recognizes code 85 as indicating a checkpointing job. This means HTCondor knows to end a job with this code but to then requeue it repeatedly until the analysis completes. The value of when_to_transfer_output should be set to ON_EXIT . The name of the checkpoint files or directories to be transferred to /spool should be specified using transfer_checkpoint_files . Optional In some cases, it is necessary to write a wrapper script to tell a job when to time out and exit. In cases such as this, the executable will need to be changed to the name of that wrapper script. An example of a wrapper script that enables a job to checkpoint and exit with the proper exit codes can be found below. 
An example submit file for an exit driven checkpointing job looks like: # exit-driven-example.submit executable = exit-driven.sh arguments = argument1 argument2 checkpoint_exit_code = 85 transfer_checkpoint_files = my_output.txt, temp_dir, temp_file.txt should_transfer_files = yes when_to_transfer_output = ON_EXIT output = example.out error = example.err log = example.log +JobDurationCategory = \"Medium\" request_cpus = 1 request_disk = 2 GB request_memory = 2 GB queue 1 Example Wrapper Script for Checkpointing Job \u00b6 As previously described, it may be necessary to use a wrapper script to tell your job when and how to exit as it checkpoints. An example of a wrapper script that tells a job to exit every 4 hours looks like: #!/bin/bash timeout 4h do_science arg1 arg2 timeout_exit_status=$? if [ $timeout_exit_status -eq 124 ]; then exit 85 fi exit $timeout_exit_status Let's take a moment to understand what each section of this wrapper script is doing: #!/bin/bash timeout 4h do_science argument1 argument2 # The `timeout` command will stop the job after 4 hours (4h). # This number can be increased or decreased depending on how frequently your code/software/program # creates checkpoint files and how long it takes to create/resume from these files. # Replace `do_science argument1 argument2` with the execution command and arguments for your job. timeout_exit_status=$? # Uses the bash notation of `$?` to capture the exit value of the last executed command # and to save it in a variable called `timeout_exit_status`. if [ $timeout_exit_status -eq 124 ]; then exit 85 fi exit $timeout_exit_status # The `timeout` command exits with code `124` if the program was still running when the time limit # was reached. The portion above replaces exit code `124` with code `85`. HTCondor recognizes # code `85` and knows to end a job with this code once the time specified by `timeout` # has been reached. Upon exiting, HTCondor saves the files from jobs with exit code `85` # in the temporary directory within `/spool`. Once the files have been transferred, # HTCondor automatically requeues that job and fetches the files found in `/spool`. # If an exit code of `124` is not observed (for example if the program is done running # or has encountered an error), HTCondor will end the job and will not automatically requeue it. The ideal timeout frequency for a job is every 1-5 hours with a maximum of 10 hours. For jobs that checkpoint and time out in under an hour, it is possible that a job may spend more time with checkpointing procedures than moving forward with the analysis. After 10 hours, the likelihood of a job being interrupted on the OSPool is higher. Checking the Progress of Checkpointing Jobs \u00b6 It is possible to investigate checkpoint files once they have been transferred to /spool . You can explore the checkpointed files in /spool by navigating to /home/condor/spool on an OSPool Access Point. The directories in this folder are the last four digits of a job's cluster ID with leading zeros removed. Subfolders are labeled with the process ID for each job. For example, to investigate the checkpoint files for 17870068.220 , the files in /spool would be found in folder 68 in a subdirectory called 220 .
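As a quick sketch (reusing the hypothetical job ID 17870068.220 from the example above), those files could be listed directly from the Access Point: cd /home/condor/spool/68/220 ls -lh # shows whatever was named in transfer_checkpoint_files, e.g. my_output.txt from the earlier submit file example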
More Information \u00b6 More information on checkpointing HTCondor jobs can be found in HTCondor's manual: https://htcondor.readthedocs.io/en/latest/users-manual/self-checkpointing-applications.html This documentation contains additional features available to checkpointing jobs, as well as additional examples such as a python checkpointing job.","title":"Checkpointing Jobs"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/#checkpointing-jobs","text":"","title":"Checkpointing Jobs"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/#what-is-checkpointing","text":"Checkpointing is a technique that provides fault tolerance for a user's analysis. It consists of saving snapshots of a job's progress so the job can be restarted without losing its progress and having to restart from the beginning. We highly encourage checkpointing as a solution for jobs that will exceed the 10 hour maximum suggested runtime on the OSPool. This section is about jobs capable of periodically saving checkpoint information, and how to make HTCondor store that information safely, in case it's needed to continue the job on another machine or at a later time. There are two types of checkpointing: exit driven and eviction driven. In a vast majority of cases, exit driven checkpointing is preferred over eviction driven checkpointing. Therefore, this guide will focus on how to utilize exit driven checkpointing for your analysis. Note that not all software, programs, or code are capable of creating checkpoint files and knowing how to resume from them. Consult the manual for your software or program to determine if it supports checkpointing features. Some manuals will refer this ability as \"checkpoint\" features, as the ability to \"resume\" mid-analysis if a job is interrupted, or as \"checkpoint/restart\" capabilities. Contact a Research Computing Facilitator if you would like help determining if your software, program, or code is able to checkpoint.","title":"What is Checkpointing?"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/#why-checkpoint","text":"Checkpointing allows a job to automatically resume from approximately where it left off instead of having to start over if interrupted. This behavior is advantageous for jobs limited by a maximum runtime policy. It is also advantageous for jobs submitted to backfill resources with no runtime guarantee (i.e. jobs on the OSPool) where the compute resources may also be more prone to hardware or networking failures. For example, checkpointing jobs that are limited by a runtime policy can enable HTCondor to exit a job and automatically requeue it to avoid hitting the maximum runtime limit. By using checkpointing, jobs circumvent hitting the maximum runtime limit and can run for extended periods of time until the completion of the analysis. This behavior avoids costly setbacks that may be caused by loosing results mid-way through an analysis due to hitting a runtime limit.","title":"Why Checkpoint?"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/#process-of-exit-driven-checkpointing","text":"Using exit driven checkpointing, a job is specified to time out after a user-specified amount of time with an exit code value of 85 (more on this below). Upon hitting this time limit, HTCondor transfers any checkpoint files listed in the submit file attribute transfer_checkpoint_files to a directory called /spool . 
This directory acts as a storage location for these files in case the job is interrupted. HTCondor then knows that jobs with exit code 85 should be automatically requeued, and will transfer the checkpoint files in /spool to your job's working directory prior to restarting your executable. The process of exit driven checkpointing relies heavily on the use of exit codes to determine the next appropriate steps for HTCondor to take with a job. In general, exit codes are used to report system responses, such as whether an analysis is running, encountered an error, or completed successfully. HTCondor recognizes exit code 85 as indicating a checkpointing job and therefore knows to handle these jobs differently than non-checkpointing jobs.","title":"Process of Exit Driven Checkpointing"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/#requirements-for-exit-driven-checkpointing","text":"Requirements for your code or software: Checkpoint : The software, program, or code you are using must be able to capture checkpoint files (i.e. snapshots of the progress made thus far) and know how to resume from them. Resume : This means your code must be able to recognize checkpoint files and know to resume from them instead of the original input data when the code is restarted. Exit : Jobs should exit with an exit code value of 85 after successfully creating checkpoint files. Additionally, jobs need to be able to exit with a non- 85 value if they encounter an error or after writing the final outputs. In some cases, these requirements can be achieved by using a wrapper script. This means that your executable may be a script, rather than the code that is writing the checkpoint. An example wrapper script that enables some of these behaviors is below. Contact a Research Computing Facilitator for help determining if your job is capable of using checkpointing.","title":"Requirements for Exit Driven Checkpointing"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/#changes-to-the-submit-file","text":"Several modifications to the submit file are needed to enable HTCondor's checkpointing feature. The line checkpoint_exit_code = 85 must be added. HTCondor recognizes code 85 as indicating a checkpointing job. This means HTCondor knows to end a job with this code but to then requeue it repeatedly until the analysis completes. The value of when_to_transfer_output should be set to ON_EXIT . The name of the checkpoint files or directories to be transferred to /spool should be specified using transfer_checkpoint_files . Optional In some cases, it is necessary to write a wrapper script to tell a job when to time out and exit. In cases such as this, the executable will need to be changed to the name of that wrapper script. An example of a wrapper script that enables a job to checkpoint and exit with the proper exit codes can be found below. 
An example submit file for an exit driven checkpointing job looks like: # exit-driven-example.submit executable = exit-driven.sh arguments = argument1 argument2 checkpoint_exit_code = 85 transfer_checkpoint_files = my_output.txt, temp_dir, temp_file.txt should_transfer_files = yes when_to_transfer_output = ON_EXIT output = example.out error = example.err log = example.log +JobDurationCategory = \"Medium\" request_cpus = 1 request_disk = 2 GB request_memory = 2 GB queue 1","title":"Changes to the Submit File"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/#example-wrapper-script-for-checkpointing-job","text":"As previously described, it may be necessary to use a wrapper script to tell your job when and how to exit as it checkpoints. An example of a wrapper script that tells a job to exit every 4 hours looks like: #!/bin/bash timeout 4h do_science arg1 arg2 timeout_exit_status=$? if [ $timeout_exit_status -eq 124 ]; then exit 85 fi exit $timeout_exit_status Let's take a moment to understand what each section of this wrapper script is doing: #!/bin/bash timeout 4h do_science argument1 argument2 # The `timeout` command will stop the job after 4 hours (4h). # This number can be increased or decreased depending on how frequently your code/software/program # creates checkpoint files and how long it takes to create/resume from these files. # Replace `do_science argument1 argument2` with the execution command and arguments for your job. timeout_exit_status=$? # Uses the bash notation of `$?` to capture the exit value of the last executed command # and to save it in a variable called `timeout_exit_status`. if [ $timeout_exit_status -eq 124 ]; then exit 85 fi exit $timeout_exit_status # The `timeout` command exits with code `124` if the program was still running when the time limit # was reached. The portion above replaces exit code `124` with code `85`. HTCondor recognizes # code `85` and knows to end a job with this code once the time specified by `timeout` # has been reached. Upon exiting, HTCondor saves the files from jobs with exit code `85` # in the temporary directory within `/spool`. Once the files have been transferred, # HTCondor automatically requeues that job and fetches the files found in `/spool`. # If an exit code of `124` is not observed (for example if the program is done running # or has encountered an error), HTCondor will end the job and will not automatically requeue it. The ideal timeout frequency for a job is every 1-5 hours with a maximum of 10 hours. For jobs that checkpoint and time out in under an hour, it is possible that a job may spend more time with checkpointing procedures than moving forward with the analysis. After 10 hours, the likelihood of a job being interrupted on the OSPool is higher.","title":"Example Wrapper Script for Checkpointing Job"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/#checking-the-progress-of-checkpointing-jobs","text":"It is possible to investigate checkpoint files once they have been transferred to /spool . You can explore the checkpointed files in /spool by navigating to /home/condor/spool on an OSPool Access Point. The directories in this folder are the last four digits of a job's cluster ID with leading zeros removed. Subfolders are labeled with the process ID for each job.
For example, to investigate the checkpoint files for 17870068.220 , the files in /spool would be found in folder 68 in a subdirectory called 220 .","title":"Checking the Progress of Checkpointing Jobs"},{"location":"htc_workloads/submitting_workloads/checkpointing-on-OSPool/#more-information","text":"More information on checkpointing HTCondor jobs can be found in HTCondor's manual: https://htcondor.readthedocs.io/en/latest/users-manual/self-checkpointing-applications.html This documentation contains additional features available to checkpointing jobs, as well as additional examples such as a python checkpointing job.","title":"More Information"},{"location":"htc_workloads/submitting_workloads/jupyter/","text":"OSPool Notebooks: Access the OSPool via JupyterLab \u00b6 The OSG team supports an OSPool Notebooks service, a JupyterLab interface that connects with an OSPool Access Point. An OSPool Notebook instance can be used to manage files, submit jobs, summarize results, and run tutorials. Quick Start \u00b6 Go to this link to start an OSPool Notebooks instance: Launch an OSPool Notebook You will be prompted to \"Sign in\" using your institution credentials. Once logged in, you will be automatically redirected to the \"Server Options\" page. Several server options are listed, supporting a variety of programming environment and scientific workflows. Select your desired server option and click \"Start\" to launch your instance. This process can take several minutes to complete. You will be redirected automatically when your instance is ready. If you have an existing account on the ap40.uw.osg-htc.org Access Point, the started Jupyter instance will connect to your account on that Access Point. If you don't have an existing OSPool account, your Jupyter instance will be running on a temporary Access Point as the \"joyvan\" user. For more details on the differences between these instances, see Working with your OSPool Notebooks Instance . To log out of your session, go to the top left corner of the JupyterLab interface and click the \"File\" tab. Under this tab, click \"Log Out\". Why use OSPool Notebooks? \u00b6 There are many benefits to using this service: Ease of access : All you need to access OSPool Notebooks is an internet connection and web browser! You don't need an account, ssh keys, or anything else installed on your computer. User-friendly environment : The JupyterLab environment provides access to notebooks, terminals, and text editors in a visual environment, making it easier to use for researchers with newer command line skills. Learn yourself, train others : We have self-serve tutorials that anyone can use by starting up an OSPool Notebook and then going through the materials. This can be used by individuals (with or without an OSPool account!) or by anyone who wants to run a training on using the OSPool. Integration with Access Point : If you have an existing OSPool account, on ap40.uw.osg-htc.org , the OSPool Notebook service allows you to have the above benefits as part of your full OSPool account. If you start with a guest account, and then apply for a full account, you can keep using the same interface to work with the full OSPool. Working with your OSPool Notebooks Instance \u00b6 Needed Submit File Options \u00b6 When submitting jobs from the terminal in the OSPool Notebooks interface, make sure to always include this option in your submit file: should_transfer_input = YES This option is needed for jobs to start and run successfully. 
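For example, a minimal submit file sketch that includes this option (the script name my_script.sh is only a placeholder) might look like: # notebook-example.sub executable = my_script.sh should_transfer_input = YES output = job.$(Cluster).$(Process).out error = job.$(Cluster).$(Process).err log = job.$(Cluster).$(Process).log request_cpus = 1 request_memory = 1 GB request_disk = 1 GB queue 1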
OSPool Notebook Experience \u00b6 There will be slight differences in your OSPool Notebook instance, depending on whether you have an existing OSPool account and what Access Point it is on. Click on the section below that applies to you to learn more. For all users, notebooks will time out after an hour of inactivity and may run for a maximum of four hours. Timing out will not impact jobs submitted to the OSPool. For researchers with accounts on a uw.osg-htc.org access point Working in OSPool Notebooks, your account will be tied to your account on your uw.osg-htc.org access point. This means you will be able to interact with files in your /home directory, execute code, and save files, just as you would if you were logged into your access point via a terminal. If you submit jobs to HTCondor, by default, your jobs will run on the Open Science Pool. As of right now, these HTCondor jobs will not be able to access any data you have stored in `/protected`. Unlike logging into your access point through a terminal, when you log in through an OSPool Notebooks instance, you can run computationally intensive tasks in your /home directory. This is because each researcher has a total of 8 CPUs and 16 GB memory available to their OSPool Notebook instance's /home directory. If you would like your HTCondor jobs to run inside your Jupyter container and not on the OSPool, you can copy/paste these lines into your submit file: requirements = Machine == \"CHTC-Jupyter-User-EP-$ENV(HOSTNAME)\" +FromJupyterLab = true The requirements = and +FromJupyterLab lines tell HTCondor to assign all jobs to run on the dedicated execute point server assigned to your instance upon launch. For researchers with accounts on an ap2*.uc.osg-htc.org access point Working in OSPool Notebooks, your account will not be tied to your account on your ap2*.uc.osg-htc.org access point. OSPool Notebooks run only on our uw.osg-htc.org access points . This means your OSPool account will not be recognized. Therefore, while you are welcome to upload data to your OSPool Notebooks instance and to use the 8 CPUs and 16 GB memory available to your instance to submit HTCondor jobs and analyze data, we recommend you request an account on a uw.osg-htc.org access point to be able to run full OSPool workflows and to avoid having data deleted upon logging out. For researchers with guest access on an OSPool access point Our OSPool Notebooks instance is a great way to see if you would like to request an account on an OSPool access point or to practice small High Throughput Computing workflows without needing an OSPool account. Your instance has HTCondor pre-installed, which allows you to practice the job submission process required to use OSG resources. Your instance will have 8 CPUs and 16 GB of memory available to your computations. We encourage you to also attend our twice-a-month trainings (where you can use your OSPool Notebooks instance to follow along). At any time, you are welcome to request a full account that will allow you to submit jobs to the OSPool using a Jupyter-based interface. Read More \u00b6 For more information about the JupyterLab interface in general, see the JupyterLab manual .","title":"Launch a JupyterLab Instance"},{"location":"htc_workloads/submitting_workloads/jupyter/#ospool-notebooks-access-the-ospool-via-jupyterlab","text":"The OSG team supports an OSPool Notebooks service, a JupyterLab interface that connects with an OSPool Access Point.
An OSPool Notebook instance can be used to manage files, submit jobs, summarize results, and run tutorials.","title":"OSPool Notebooks: Access the OSPool via JupyterLab"},{"location":"htc_workloads/submitting_workloads/jupyter/#quick-start","text":"Go to this link to start an OSPool Notebooks instance: Launch an OSPool Notebook You will be prompted to \"Sign in\" using your institution credentials. Once logged in, you will be automatically redirected to the \"Server Options\" page. Several server options are listed, supporting a variety of programming environment and scientific workflows. Select your desired server option and click \"Start\" to launch your instance. This process can take several minutes to complete. You will be redirected automatically when your instance is ready. If you have an existing account on the ap40.uw.osg-htc.org Access Point, the started Jupyter instance will connect to your account on that Access Point. If you don't have an existing OSPool account, your Jupyter instance will be running on a temporary Access Point as the \"joyvan\" user. For more details on the differences between these instances, see Working with your OSPool Notebooks Instance . To log out of your session, go to the top left corner of the JupyterLab interface and click the \"File\" tab. Under this tab, click \"Log Out\".","title":"Quick Start"},{"location":"htc_workloads/submitting_workloads/jupyter/#why-use-ospool-notebooks","text":"There are many benefits to using this service: Ease of access : All you need to access OSPool Notebooks is an internet connection and web browser! You don't need an account, ssh keys, or anything else installed on your computer. User-friendly environment : The JupyterLab environment provides access to notebooks, terminals, and text editors in a visual environment, making it easier to use for researchers with newer command line skills. Learn yourself, train others : We have self-serve tutorials that anyone can use by starting up an OSPool Notebook and then going through the materials. This can be used by individuals (with or without an OSPool account!) or by anyone who wants to run a training on using the OSPool. Integration with Access Point : If you have an existing OSPool account, on ap40.uw.osg-htc.org , the OSPool Notebook service allows you to have the above benefits as part of your full OSPool account. If you start with a guest account, and then apply for a full account, you can keep using the same interface to work with the full OSPool.","title":"Why use OSPool Notebooks?"},{"location":"htc_workloads/submitting_workloads/jupyter/#working-with-your-ospool-notebooks-instance","text":"","title":"Working with your OSPool Notebooks Instance"},{"location":"htc_workloads/submitting_workloads/jupyter/#needed-submit-file-options","text":"When submitting jobs from the terminal in the OSPool Notebooks interface, make sure to always include this option in your submit file: should_transfer_input = YES This option is needed for jobs to start and run successfully.","title":"Needed Submit File Options"},{"location":"htc_workloads/submitting_workloads/jupyter/#ospool-notebook-experience","text":"There will be slight differences in your OSPool Notebook instance, depending on whether you have an existing OSPool account and what Access Point it is on. Click on the section below that applies to you to learn more. For all users, notebooks will time out after an hour an inactivity and may run for a maximum of four hours. Timing out will not impact jobs submitted to the OSPool. 
For researchers with accounts on a uw.osg-htc.org access point Working in OSPool Notebooks, your account will be tied to your account on your uw.osg-htc.org access point. This means you will be able to interact with files in your /home directory, execute code, and save files, similar to like you would if you were logged into your access point via a terminal. If you submit jobs to HTCondor, by default, your jobs will run on the Open Science Pool. As of right now, these HTCondor jobs will not be able to access any data you have stored in `/protected`. Unlike logging into your access point through a terminal, when you log in through an OSPool Notebooks instance, you can run computionally intensive tasks in your /home directory. This is because each researcher has a total of 8 CPUs and 16 GB memory available to their OSPool Notebook instance's /home directory. If you would like your HTCondor jobs to run inside your Jupyter container and not on the OSPool, you can copy/paste these lines to your submit file: requirements = Machine == \"CHTC-Jupyter-User-EP-$ENV(HOSTNAME)\" +FromJupyterLab = true The requirements = and +FromJupyterLab lines tell HTCondor to assign all jobs to run on the dedicated execute point server assigned to your instance upon launch. For researchers with accounts on ap2*.uc.osg-htc.org access point Working in OSPool Notebooks, your account will not be tied to your account on your ap2*.uc.osg-htc.org access point. OSPool Notebooks are run on only our uw.osg-htc.org access points . This means your OSPool account will not be recognized. Therefore, while you are welcome to upload data to your OSPool Notebooks instance and to use the 8 CPUs and 16 GB memory available to your instance to submit HTCondor jobs and analyze data, we recommend you request an account on a uw.osg-htc.org access points access point to be able to run full OSPool workflows and to avoid having data deleted upon logging out. For researchers with guest access on an OSPool access point Our OSPool Notebooks instance is a great way to see if you would like to request an account on an OSPool access point or to practice small High Throughput Computing workflows without needing an OSPool account. Your instance has HTCondor pre-installed, which allows you to practice the job submission process required to use OSG resources. Your instance will have 8 CPUs and 16 GB of memory available to your computations. We encourage you to also attend our twice-a-month trainings (where you can use your OSPool Notebooks instance to follow along). 
At any time, you are welcome to request a full account that will allow you to submit jobs to the OSPool using a Jupyter-based interface.","title":"OSPool Notebook Experience"},{"location":"htc_workloads/submitting_workloads/jupyter/#read-more","text":"For more information about the JupyterLab interface in general, see the JupyterLab manual .","title":"Read More"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/","text":"Monitor and Review Jobs With condor_q and condor_history\" Objectives Monitor Queued Jobs with condor_q Default condor_q Constraints for condor_q View All Job Attributes Constraints for Job Attributes View Specific Job Attributes Across More Than One Job View Jobs that are Held View Machine Matches for a Job Review Job History with condor_history Default condor_history Constrain Your condor_history Query Viewing and Constraining Job Attributes Special Considerations More Information on Options for condor_q and condor_history Monitor and Review Jobs With condor_q and condor_history\" \u00b6 Objectives \u00b6 This guide discusses how to monitor jobs in the queue with condor_q and to review jobs that have recently left the queue with condor_history . Monitor Queued Jobs with condor_q \u00b6 Default condor_q \u00b6 The default behavior of condor_q is to list all of a user's jobs currently in HTCondor's queue grouped into batches. A batch consists of all jobs submitted using a single submit file. For example: $ condor_q -- Schedd: ap40.uw.osg-htc.org : <192.170.227.146:9618?... @ 03/04/22 12:31:45 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS alice ID: 21562536 3/4 12:31 _ _ 5 5 21562536.0-4 Total for query: 5 jobs; 0 completed, 0 removed, 5 idle, 0 running, 0 held, 0 suspended Total for alice: 5 jobs; 0 completed, 0 removed, 5 idle, 0 running, 0 held, 0 suspended Total for all users: 4112 jobs; 0 completed, 0 removed, 76 idle, 904 running, 3132 held, 0 suspended Constraints for condor_q \u00b6 condor_q can be used to list individual jobs associated with a username , cluster ID , or job ID as indicated by . Additionally, the flag -nobatch can be used to list individual jobs instead of batches of jobs using the format condor_q -nobatch . $ condor_q alice -nobatch -- Schedd: ap40.uw.osg-htc.org : <192.170.227.146:9618?... @ 03/04/22 12:52:22 ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD 21562638.0 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter1.csv 21562638.1 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter2.csv 21562638.2 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter3.csv 21562638.3 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter4.csv 21562638.4 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter5.csv 21562639.0 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Alice_in_Wonderland.tx 21562639.1 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Dracula.txt 21562639.2 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Huckleberry_Finn.txt 21562639.3 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Pride_and_Prejudice.tx 21562639.4 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Ulysses.txt View All Job Attributes \u00b6 Information about HTCondor jobs are saved as \"job attributes\". Job attributes can be viewed using the -l flag, a shorthand for -long . The output of condor_q -l can be used to learn more about a job and to diagnose errors. 
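For instance, to dump every attribute of a single job (using the hypothetical job ID 21562536.0 from the condor_q output above): $ condor_q 21562536.0 -l Because the output is long, it can help to save it to a file for review, e.g. condor_q 21562536.0 -l > job_attributes.txt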
Examples of job attributes listed when using condor_q -l are as follows: Attribute Description MemoryUsage Maximum memory that a job used in MB DiskUsage Maximum disk space that a job used in KB BatchName Job batch label MATCH_EXP_JOBGLIDEIN_ResourceName Location of site at which a job is running RemoteHost Location of site and slot number where a job is running ExitCode Exit code of a job upon its completion HoldReason Human-readable message as to why a job was held. It can be used to determine if a job should be released or not. HoldReasonCode Integer value that represents why a job was put on hold JobNotification Integer indicating when the user should be emailed regarding a change of status for their job RemotePool Name of the pool in which a job is running NumRestarts Number of restarts carried out by a job Many additional attributes are provided by HTCondor to learn about your jobs, including attributes dedicated to workflows that utilize DAGMan and containers. For more information about these and other attributes, please see the HTCondor Manual . Constraints for Job Attributes \u00b6 To display only the output of specified attributes, it is possible to use the \"auto format\" flag denoted as -af with condor_q . An example use case is to view the owner and location of the site where a given job, such as job ID 15244592.127 , is running by using: $ condor_q 15244592.127 -af Owner MATCH_EXP_JOBGLIDEIN_ResourceName alice BNL-ATLAS In the above example, the Owner is the user alice and the job is running on resources owned by the Brookhaven National Laboratory as indicated by BNL-ATLAS . View Specific Job Attributes Across More Than One Job \u00b6 It is possible to sort and filter the output for one or more job attributes across a batch of jobs. When investigating more than one job, it is advantageous to limit the printout to a certain number of jobs to avoid flooding your screen. To limit the output to a specified number of jobs, use -limit N and replace N with the number of jobs you would like to view. For example, to view the site location where 100 jobs belonging to batch 12245532 ran, you can use: $ condor_q 12245532 -limit 100 -af MATCH_EXP_JOBGLIDEIN_ResourceName | sort | uniq -c 9 Crane 4 LSU-DB-CE1 4 ND-CAML_gpu 71 Rice-RAPID-Backfill 2 SDSC-PRP-CE1 6 TCNJ-ELSA 1 Tufts-Cluster 3 WSU-GRID In this example, 71 jobs ran at Rice University (Rice-RAPID-Backfill) while only one job ran at Tufts University (Tufts-Cluster). If you would like to know which abbreviations correspond to which compute resource provider in the OSPool, contact a Research Computing Facilitator . View Jobs that are Held \u00b6 To isolate and print out held jobs, use condor_q -held . This command will print jobs currently in the \"Held\" state and will not print jobs that are in the \"Run\", \"Done\", or \"Idle\" states. Using the job ads and constraints described above, it is possible to print out the reasons why a subset of a user's jobs are being held. $ condor_q alice -held -af HoldReason | sort | uniq -c 4 Error from glidein_3439920_345771664@c6-6-39-2.aglt2.org: SHADOW at 192.170.227.166 failed to send file(s) to <192.41.230.81:44309>: error reading from /home/alice/InputData.txt: (errno 2) No such file or directory; STARTER failed to receive file(s) from <192.170.227.166:9618> 1 Job in status 2 put on hold by SYSTEM_PERIODIC_HOLD due to memory usage 10572684. 
In the output above, four jobs were placed on hold due to a \"missing file or directory\" in the path of /home/alice/InputData.txt that was specified in the transfer_input_files line of the submit file. Because HTCondor could not locate this input (possibly due to an incorrect file path), the job was placed on hold. Additionally, one job was placed on hold due to exceeding the requested memory specified in the submit file. An in-depth guide on troubleshooting issues with held jobs on the OSPool is available on our website. View Machine Matches for a Job \u00b6 The -analyze and -better-analyze options can be used to view the number of machines that match to a job. These flags are often used to diagnose many problems, including understanding why a job has not started running. A portion of the output from these options shows the number of machines in the pool and how many of these are able to run your job: 21607747.000: Run analysis summary ignoring user priority. Of 2189 machines, 1605 are rejected by your job's requirements 53 reject your job because of their own requirements 1 match and are already running your jobs 0 match but are serving other users 530 are able to run your job Additional output of these options includes the requirements line of the job's submit file, last successful match date, hold reason messages, and other useful information. The -analyze and -better-analyze options deliver similar output; however, -better-analyze is a newer feature that provides additional information including the number of slots matched by your job given the different requirements specified in the submit file. Additional information on using -analyze and -better-analyze for troubleshooting will be available in our troubleshooting guide in the near future. Review Job History with condor_history \u00b6 Default condor_history \u00b6 Somewhat similar to condor_q , which shows jobs currently in the queue, condor_history is used to show information about jobs that have recently left the queue. By default, condor_history will show every user's job that HTCondor still has a record of in its history. Because HTCondor jobs are constantly being sent to the queue on OSG-managed Access Points, HTCondor cleans its history of jobs every few days to free up space for new jobs that have recently left the queue. Once a job is cleaned from HTCondor's history, its record is removed permanently. Before a job is cleaned from HTCondor's history, condor_history can be valuable for learning about recently completed jobs. As previously stated, condor_history without any additional flags will list every user's job, which can be thousands of lines long. To exit this behavior, use control + C . In most cases, it is recommended to combine condor_history with one or more of the options below to help limit the output of this command to only the desired information. Constrain Your condor_history Query \u00b6 Like condor_q , it is possible to limit the output of your condor_history query by user , cluster ID , and job ID as indicated by ( ). By default, HTCondor will continue to search through its history of jobs by the option it is constrained by. Since HTCondor's history is extensive, this means your command line prompt will not be returned to you until HTCondor has finished its search and analysis of its entire history. To prevent this time-consuming behavior from occurring, we recommend using the -limit N flag with condor_history .
This will tell HTCondor to limit its search to the first N items that appear matching its constraint. For example, condor_history alice -limit 20 will return the condor_history output of the user alice's 20 most recently submitted jobs. Viewing and Constraining Job Attributes \u00b6 Displaying the list of job attributes using -l and -af can also be used with condor_history . It is important to note that some attributes are renamed when a job exits the queue and enters HTCondor's history. For example, RemoteHost is renamed to LastRemoteHost and HoldReason will become LastHoldReason . Special Considerations \u00b6 Although many options that exist for condor_q also exist for condor_history , some do not. For example, -analyze and -better-analyze cannot be used with condor_history . Additionally, -hold cannot be used with condor_history as no job in HTCondor's history can be in the held state. More Information on Options for condor_q and condor_history \u00b6 A full list of the options for condor_q and condor_history may be listed by using combining them with the \u2013-help flag or by viewing the HTCondor manual .","title":"Monitor and Review Jobs With condor_q and condor_history"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#monitor-and-review-jobs-with-condor_q-and-condor_history","text":"","title":"Monitor and Review Jobs With condor_q and condor_history\""},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#objectives","text":"This guide discusses how to monitor jobs in the queue with condor_q and to review jobs that have recently left the queue with condor_history .","title":"Objectives"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#monitor-queued-jobs-with-condor_q","text":"","title":"Monitor Queued Jobs with condor_q"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#default-condor_q","text":"The default behavior of condor_q is to list all of a user's jobs currently in HTCondor's queue grouped into batches. A batch consists of all jobs submitted using a single submit file. For example: $ condor_q -- Schedd: ap40.uw.osg-htc.org : <192.170.227.146:9618?... @ 03/04/22 12:31:45 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS alice ID: 21562536 3/4 12:31 _ _ 5 5 21562536.0-4 Total for query: 5 jobs; 0 completed, 0 removed, 5 idle, 0 running, 0 held, 0 suspended Total for alice: 5 jobs; 0 completed, 0 removed, 5 idle, 0 running, 0 held, 0 suspended Total for all users: 4112 jobs; 0 completed, 0 removed, 76 idle, 904 running, 3132 held, 0 suspended","title":"Default condor_q"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#constraints-for-condor_q","text":"condor_q can be used to list individual jobs associated with a username , cluster ID , or job ID as indicated by . Additionally, the flag -nobatch can be used to list individual jobs instead of batches of jobs using the format condor_q -nobatch . $ condor_q alice -nobatch -- Schedd: ap40.uw.osg-htc.org : <192.170.227.146:9618?... 
@ 03/04/22 12:52:22 ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD 21562638.0 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter1.csv 21562638.1 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter2.csv 21562638.2 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter3.csv 21562638.3 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter4.csv 21562638.4 alice 3/4 12:52 0+00:00:00 I 0 0.0 soilModel.py parameter5.csv 21562639.0 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Alice_in_Wonderland.tx 21562639.1 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Dracula.txt 21562639.2 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Huckleberry_Finn.txt 21562639.3 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Pride_and_Prejudice.tx 21562639.4 alice 3/4 12:52 0+00:00:00 I 0 0.0 wordcount.py Ulysses.txt","title":"Constraints for condor_q"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#view-all-job-attributes","text":"Information about HTCondor jobs are saved as \"job attributes\". Job attributes can be viewed using the -l flag, a shorthand for -long . The output of condor_q -l can be used to learn more about a job and to diagnose errors. Examples of job attributes listed when using condor_q -l are as follows: Attribute Description MemoryUsage Maximum memory that a job used in MB DiskUsage Maximum disk space that a job used in KB BatchName Job batch label MATCH_EXP_JOBGLIDEIN_ResourceName Location of site at which a job is running RemoteHost Location of ite and slot number where a job is running ExitCode Exit code of a job upon its completion HoldReason Human-readable message as to why a job was held. It can be used to determine if a job should be released or not. HoldReasonCode Integer value that represents why a job was put on hold JobNotification Integer indicating when the user should be emailed regarding a change of status for their job RemotePool Name of the pool in which a job is running NumRestarts Number of restarts carried out by a job Many additional attributes are provided by HTCondor to learn about your jobs, including attributes dedicated to workflows that utilize DAGman and containers. For more information about these and other attributes, please see the HTCondor Manual .","title":"View All Job Attributes"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#constraints-for-job-attributes","text":"To display only the output of specified attributes, it is possible to use the \"auto format\" flag denoted as -af with condor_q . An example use case is to view the owner and location of the site where a given job, such as job ID 15244592.127 , is running by using: $ condor_q 15244592.127 -af Owner MATCH_EXP_JOBGLIDEIN_ResourceName alice BNL-ATLAS In the above example, the Owner is the user alice and the job is running on resources owned by the Brookhaven National Laboratory as indicated by BNL_ATLAS .","title":"Constraints for Job Attributes"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#view-specific-job-attributes-across-more-than-one-job","text":"It is possible to sort and filter the output for one or more job attributes across a batch of jobs. When investigating more than one job, it is advantageous to limit the print out to a certain number of jobs to avoid flooding your screen. To limit the output to a specified number of jobs, use -limit N and replace N with the number of jobs you would like to view. 
For example, to view the site location where 100 jobs belonging to batch 12245532 ran, you can use: $ condor_q 12245532 -limit 100 -af MATCH_EXP_JOBGLIDEIN_ResourceName | sort | uniq -c 9 Crane 4 LSU-DB-CE1 4 ND-CAML_gpu 71 Rice-RAPID-Backfill 2 SDSC-PRP-CE1 6 TCNJ-ELSA 1 Tufts-Cluster 3 WSU-GRID In this example, 71 jobs ran at Rice University (Rice-RAPID-Backfill) while only one job ran at Tufts University (Tufts-Cluster). If you would like to know which abbreviations correspond to which compute resource provider in the OSPool, contact a Research Computing Facilitator .","title":"View Specific Job Attributes Across More Than One Job"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#view-jobs-that-are-held","text":"To isolate and print out held jobs, use condor_q -held . The this command will print jobs currently in the \"Held\" state and will not print jobs that are in the \"Run\", \"Done\", or \"Idle\" states. Using the job ads and constraints described above, it is possible to print out the reasons why a subset of a user's jobs are being held. $ condor_q alice -held -af HoldReason | sort | uniq -c 4 Error from glidein_3439920_345771664@c6-6-39-2.aglt2.org: SHADOW at 192.170.227.166 failed to send file(s) to <192.41.230.81:44309>: error reading from /home/alice/InputData.txt: (errno 2) No such file or directory; STARTER failed to receive file(s) from <192.170.227.166:9618> 1 Job in status 2 put on hold by SYSTEM_PERIODIC_HOLD due to memory usage 10572684. In the output above, four jobs were place on hold due to a \"missing file or directory\" in the path of /home/alice/InputData.txt that was specified in the transfer_input_files line of the submit file. Because HTCondor could not locate this input (possibly due to an incorrect file path), the job was placed on hold. Additionally, one job was placed on hold due to exceeding the requested memory specified in the submit file. An in-depth guide on troubleshooting issues with held jobs on the OSPool is available on our website.","title":"View Jobs that are Held"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#view-machine-matches-for-a-job","text":"The -analyze and -better-analyze options can be used to view the number of machines that match to a job. These flags are often used to diagnose many problems, including understanding why a job has not started running. A portion of the output from these options shows the number of machines in the pool and how many of these are able to run your job: 21607747.000: Run analysis summary ignoring user priority. Of 2189 machines, 1605 are rejected by your job's requirements 53 reject your job because of their own requirements 1 match and are already running your jobs 0 match but are serving other users 530 are able to run your job Additional output of these options include the requirements line of the job's submit file, last successful match date, hold reason messages, and other useful information. The -analyze and -better-analyze options deliver similar output, however, -better-analyze is a newer feature that provides additional information including the number of slots matched by your job given the different requirements specified in the submit file. 
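For example, either option can be pointed at a single queued job (here using the hypothetical job ID 21607747.0 from the summary above): $ condor_q 21607747.0 -better-analyze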
Additional information on using -analyze and -better-analyze for troubleshooting will be available in our troubleshooting guide in the near future.","title":"View Machine Matches for a Job"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#review-job-history-with-condor_history","text":"","title":"Review Job History with condor_history"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#default-condor_history","text":"Somewhat similar to condor_q , which shows jobs currently in the queue, condor_history is used to show information about jobs that have recently left the queue. By default, condor_history will show every user's job that HTCondor still has a record of in its history. Because HTCondor jobs are constantly being sent to the queue on OSG-managed Access Points, HTCondor cleans its history of jobs every few days to free up space for new jobs that have recently left the queue. Once a job is cleaned from HTCondor's history, it is removed permanently from the queue. Before a job is cleaned from HTCondor's queue, condor_history can be valuable for learning about recently completed jobs. As previously stated, condor_history without any additional flags will list every user's job, which can be thousands of lines long. To exit this behavior, use control + C . In most cases, it is recommended to combine condor_history with one or more of the options below to help limit the output of this command to only the desired information.","title":"Default condor_history"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#constrain-your-condor_history-query","text":"Like condor_q , it is possible to limit the output of your condor_history query by user , cluster ID , and job ID as indicated by ( ). By default, HTCondor will continue to search through its history of jobs by the option it is constrained by. Since HTCondor's history is extensive, this means your command line prompt will not be returned to you until HTCondor has finished its search and analysis of its entire history. To prevent this time-consuming behavior from occurring, we recommend using the -limit N flag with condor_history . This will tell HTCondor to limit its search to the first N items that appear matching its constraint. For example, condor_history alice -limit 20 will return the condor_history output of the user alice's 20 most recently submitted jobs.","title":"Constrain Your condor_history Query"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#viewing-and-constraining-job-attributes","text":"Displaying the list of job attributes using -l and -af can also be used with condor_history . It is important to note that some attributes are renamed when a job exits the queue and enters HTCondor's history. For example, RemoteHost is renamed to LastRemoteHost and HoldReason will become LastHoldReason .","title":"Viewing and Constraining Job Attributes"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#special-considerations","text":"Although many options that exist for condor_q also exist for condor_history , some do not. For example, -analyze and -better-analyze cannot be used with condor_history . 
Additionally, -hold cannot be used with condor_history as no job in HTCondor's history can be in the held state.","title":"Special Considerations"},{"location":"htc_workloads/submitting_workloads/monitor_review_jobs/#more-information-on-options-for-condor_q-and-condor_history","text":"A full list of the options for condor_q and condor_history may be listed by using combining them with the \u2013-help flag or by viewing the HTCondor manual .","title":"More Information on Options for condor_q and condor_history"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/","text":"Easily Submit Multiple Jobs \u00b6 Overview \u00b6 HTCondor has several convenient features for streamlining high-throughput job submission. This guide provides several examples of how to leverage these features to submit multiple jobs with a single submit file. Why submit multiple jobs with a single submit file? As described in our Policies for using an OSPool Access Point , users should submit multiple jobs using a single submit file, or where applicable, as few separate submit files as needed. Using HTCondor multi-job submission features is more efficient for users and will help ensure reliable operation of the the login nodes. Many options exist for streamlining your submission of multiple jobs, and this guide only covers a few examples of what is truly possible with HTCondor. If you are interested in a particular approach that isn't described here, please contact OSG Facilitators and we will work with you to identify options to meet the needs of your work. Submit Multiple Jobs Using queue \u00b6 All HTCondor submit files require a queue attribute (which must also be the last line of the submit file). By default, queue will submit one job, but users can also configure the queue attribute to behave like a for loop that will submit multiple jobs, with each job varying as predefined by the user. Below are different HTCondor submit file examples for submitting batches of multiple jobs and, where applicable, how to indicate the differences between jobs in a batch with user-defined variables. Additional examples and use cases are provided further below: queue - will submit N number of jobs. Examples include performing replications, where the same job must be repeated N number of times, looping through files named with numbers, and looping through a matrix where each job uses information from a specific row or column. queue from - will loop through a list of file names, parameters, etc. as defined in separate text file (i.e. ). This queue option is very flexible and provides users with many options for submitting multiple jobs. Organizing Jobs Into Individual Directories - another option that can be helpful in organizing multi-job submissions. These queue options are also described in the following video from HTCondor Week 2020: Submitting Multiple Jobs Using HTCondor Video What makes these queue options powerful is the ability to use user-defined variables to specify details about your jobs in the HTCondor submit file. The examples below will include the use of $(variable_name) to specify details like input file names, file locations (aka paths), etc. When selecting a variable name, users must avoid bespoke HTCondor submit file variables such as Cluster , Process , output , and input , arguments , etc. 1. 
Use queue N in your HTCondor submit files \u00b6 When using queue N , HTCondor will submit a total of N jobs, counting from 0 to N - 1, and each job will be assigned a unique Process id number spanning this range of values. Because the Process variable will be unique for each job, it can be used in the submit file to indicate unique filenames and filepaths for each job. The most straightforward example of using queue N is to submit N number of identical jobs. The example shown below demonstrates how to use the Cluster and Process variables to assign unique names for the HTCondor error , output , and log files for each job in the batch: # 100jobs.sub # submit 100 identical jobs log = job_$(Cluster)_$(Process).log error = job_$(Cluster)_$(Process).err output = job_$(Cluster)_$(Process).out ... remaining submit details ... queue 100 For each job, the appropriate number, 0, 1, 2, ... 99 will replace $(Process) . $(Cluster) will be a unique number assigned to the entire 100 job batch. Each time you run condor_submit job.sub , you will be provided with the Cluster number which you will also see in the output produced by the command condor_q . If a uniquely named results file needs to be returned by each job, $(Process) and $(Cluster) can also be used as arguments , and anywhere else as needed, in the submit file: arguments = $(Cluster)_$(Process).results ... remaining submit details ... queue 100 Be sure to properly format the arguments statement according to the executable used by the job. What if my jobs are not identical? queue N may still be a great option! Additional examples for using this option include: A. Use integer numbered input files \u00b6 [user@login]$ ls *.data 0.data 1.data 2.data 3.data ... 97.data 98.data 99.data In the submit file, use: transfer_input_files = $(Process).data ... remaining submit details ... queue 100 B. Specify a row or column number for each job \u00b6 $(Process) can be used to specify a unique row or column of information in a matrix to be used by each job in the batch. The matrix then needs to be transferred with each job as input. For example: transfer_input_files = matrix.csv arguments = $(Process) ... remaining submit details ... queue 100 The above example assumes that your job is set up to use an argument to specify the row or column to be used by your software. C. Need N to start at 1 \u00b6 If your input files are numbered 1 - 100 instead of 0 - 99, or your matrix row starts with 1 instead of 0, you can perform basic arithmetic in the submit file: plusone = $(Process) + 1 NewProcess = $INT(plusone, %d) arguments = $(NewProcess) ... remaining submit details ... queue 100 Then use $(NewProcess) anywhere in the submit file that you would have otherwise used $(Process) . Note that there is nothing special about the names plusone and NewProcess ; you can use any names you want as variables. 2. Submit multiple jobs with one or more distinct variables per job \u00b6 Think about what's different between each job that needs to be submitted. Will each job use a different input file or combination of software parameters? Do some of the jobs need more memory or disk space? Do you want to use different software or scripts on a common set of input files? Using queue from in your submit files can make that possible! can be a single user-defined variable or comma-separated list of variables to be used anywhere in the submit file. is a plain text file that defines for each individual job to be submitted in the batch.
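Before the worked example that follows, here is a minimal sketch of how the same queue ... from list mechanism can also vary resource requests per job, which the questions above hint at. The file names analyze.sh and job_list.txt and the mem column are hypothetical, not part of this guide's tutorials:

```
# vary_resources.sub -- illustrative sketch only
executable = analyze.sh
arguments = $(infile)
transfer_input_files = $(infile)
request_memory = $(mem)
... remaining submit details ...
queue infile,mem from job_list.txt
```

Here each line of job_list.txt pairs an input file with a memory request, for example sample01.dat, 2GB.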
Suppose you need to run a program called compare_states that will run on the following set of input files: illinois.data , nebraska.data , and wisconsin.data , and each input file can be analyzed as a separate job. To create a submit file that will submit all three jobs, first create a text file that lists each .data file (one file per line). This step can be performed directly on the login node, for example: [user@state-analysis]$ ls *.data > states.txt [user@state-analysis]$ cat states.txt illinois.data nebraska.data wisconsin.data Then, in the submit file, following the pattern queue from , replace with a variable name like state and replace with the list of .data files saved in states.txt : queue state from states.txt For each line in states.txt , HTCondor will submit a job and the variable $(state) can be used anywhere in the submit file to represent the name of the .data file to be used by that job. For the first job, $(state) will be illinois.data , for the second job $(state) will be nebraska.data , and so on. For example: # run_compare_states_per_state.sub transfer_input_files = $(state) arguments = $(state) executable = compare_states ... remaining submit details ... queue state from states.txt For a working example of this kind of job submission, see our Word Frequency Tutorial . Use multiple variables for each job \u00b6 Let's imagine that each state .data file contains data spanning several years and that each job needs to analyze a specific year of data. Then the states.txt file can be modified to specify this information: [user@state-analysis]$ cat states.txt illinois.data, 1995 illinois.data, 2005 nebraska.data, 1999 nebraska.data, 2005 wisconsin.data, 2000 wisconsin.data, 2015 Then modify the queue statement to define two variables named state and year : queue state,year from states.txt Then the variables $(state) and $(year) can be used in the submit file: # run_compare_states_by_year.sub arguments = $(state) $(year) transfer_input_files = $(state) executable = compare_states ... remaining submit details ... queue state,year from states.txt 3. Organizing Jobs Into Individual Directories \u00b6 One way to organize jobs is to assign each job to its own directory, instead of putting files in the same directory with unique names. To continue our \\\"compare_states\\\" example, suppose there\\'s a directory for each state you want to analyze, and each of those directories has its own input file named input.data : [user@state-analysis]$ ls -F compare_states illinois/ nebraska/ wisconsin/ [user@state-analysis]$ ls -F illinois/ input.data [user@state-analysis]$ ls -F nebraska/ input.data [user@state-analysis]$ ls -F wisconsin/ input.data The HTCondor submit file attribute initialdir can be used to define a specific directory from which each job in the batch will be submitted. The default initialdir location is the directory from which the command condor_submit myjob.sub is executed. Combining queue var from list with initialdir , each line of the list will include the path to each state directory, and initialdir will be set to this path for each job: #state-per-dir-job.sub initial_dir = $(state_dir) transfer_input_files = input.data executable = compare_states ... remaining submit details ... queue state_dir from state-dirs.txt Where state-dirs.txt is a list of each directory with state data: [user@state-analysis]$ cat state-dirs.txt illinois nebraska wisconsin Notice that executable = compare_states has remained unchanged in the above example.
When using initialdir , only the input and output file path (including the HTCondor log, error, and output files) will be changed by initialdir . In this example, HTCondor will create a job for each directory in state-dirs.txt and use that state\\'s directory as the initialdir from which the job will be submitted. Therefore, transfer_input_files = input.data can be used without specifying the path to this input.data file. Any output generated by the job will then be returned to the initialdir location. Get Help \u00b6 For assistance or questions, please email the OSG Research Facilitation team at support@osg-htc.org or visit the help desk and community forums .","title":"Easily Submit Multiple Jobs"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#easily-submit-multiple-jobs","text":"","title":"Easily Submit Multiple Jobs"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#overview","text":"HTCondor has several convenient features for streamlining high-throughput job submission. This guide provides several examples of how to leverage these features to submit multiple jobs with a single submit file. Why submit multiple jobs with a single submit file? As described in our Policies for using an OSPool Access Point , users should submit multiple jobs using a single submit file, or where applicable, as few separate submit files as needed. Using HTCondor multi-job submission features is more efficient for users and will help ensure reliable operation of the the login nodes. Many options exist for streamlining your submission of multiple jobs, and this guide only covers a few examples of what is truly possible with HTCondor. If you are interested in a particular approach that isn't described here, please contact OSG Facilitators and we will work with you to identify options to meet the needs of your work.","title":"Overview"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#submit-multiple-jobs-using-queue","text":"All HTCondor submit files require a queue attribute (which must also be the last line of the submit file). By default, queue will submit one job, but users can also configure the queue attribute to behave like a for loop that will submit multiple jobs, with each job varying as predefined by the user. Below are different HTCondor submit file examples for submitting batches of multiple jobs and, where applicable, how to indicate the differences between jobs in a batch with user-defined variables. Additional examples and use cases are provided further below: queue - will submit N number of jobs. Examples include performing replications, where the same job must be repeated N number of times, looping through files named with numbers, and looping through a matrix where each job uses information from a specific row or column. queue from - will loop through a list of file names, parameters, etc. as defined in separate text file (i.e. ). This queue option is very flexible and provides users with many options for submitting multiple jobs. Organizing Jobs Into Individual Directories - another option that can be helpful in organizing multi-job submissions. These queue options are also described in the following video from HTCondor Week 2020: Submitting Multiple Jobs Using HTCondor Video What makes these queue options powerful is the ability to use user-defined variables to specify details about your jobs in the HTCondor submit file. 
The examples below will include the use of $(variable_name) to specify details like input file names, file locations (aka paths), etc. When selecting a variable name, users must avoid built-in HTCondor submit file variables such as Cluster , Process , output , input , arguments , etc.","title":"Submit Multiple Jobs Using queue"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#1-use-queue-n-in-you-htcondor-submit-files","text":"When using queue N , HTCondor will submit a total of N jobs, counting from 0 to N - 1, and each job will be assigned a unique Process id number spanning this range of values. Because the Process variable will be unique for each job, it can be used in the submit file to indicate unique filenames and filepaths for each job. The most straightforward example of using queue N is to submit N number of identical jobs. The example shown below demonstrates how to use the Cluster and Process variables to assign unique names for the HTCondor error , output , and log files for each job in the batch: # 100jobs.sub # submit 100 identical jobs log = job_$(Cluster)_$(Process).log error = job_$(Cluster)_$(Process).err output = job_$(Cluster)_$(Process).out ... remaining submit details ... queue 100 For each job, the appropriate number, 0, 1, 2, ... 99 will replace $(Process) . $(Cluster) will be a unique number assigned to the entire 100 job batch. Each time you run condor_submit job.sub , you will be provided with the Cluster number which you will also see in the output produced by the command condor_q . If a uniquely named results file needs to be returned by each job, $(Process) and $(Cluster) can also be used as arguments , and anywhere else as needed, in the submit file: arguments = $(Cluster)_$(Process).results ... remaining submit details ... queue 100 Be sure to properly format the arguments statement according to the executable used by the job. What if my jobs are not identical? queue N may still be a great option! Additional examples for using this option include:","title":"1. Use queue N in your HTCondor submit files"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#a-use-integer-numbered-input-files","text":"[user@login]$ ls *.data 0.data 1.data 2.data 3.data ... 97.data 98.data 99.data In the submit file, use: transfer_input_files = $(Process).data ... remaining submit details ... queue 100","title":"A. Use integer numbered input files"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#b-specify-a-row-or-column-number-for-each-job","text":"$(Process) can be used to specify a unique row or column of information in a matrix to be used by each job in the batch. The matrix then needs to be transferred with each job as input. For example: transfer_input_files = matrix.csv arguments = $(Process) ... remaining submit details ... queue 100 The above example assumes that your job is set up to use an argument to specify the row or column to be used by your software.","title":"B. Specify a row or column number for each job"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#c-need-n-to-start-at-1","text":"If your input files are numbered 1 - 100 instead of 0 - 99, or your matrix row starts with 1 instead of 0, you can perform basic arithmetic in the submit file: plusone = $(Process) + 1 NewProcess = $INT(plusone, %d) arguments = $(NewProcess) ... remaining submit details ... queue 100 Then use $(NewProcess) anywhere in the submit file that you would have otherwise used $(Process) .
Note that there is nothing special about the names plusone and NewProcess ; you can use any names you want as variables.","title":"C. Need N to start at 1"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#2-submit-multiple-jobs-with-one-or-more-distinct-variables-per-job","text":"Think about what's different between each job that needs to be submitted. Will each job use a different input file or combination of software parameters? Do some of the jobs need more memory or disk space? Do you want to use different software or scripts on a common set of input files? Using queue from in your submit files can make that possible! can be a single user-defined variable or comma-separated list of variables to be used anywhere in the submit file. is a plain text file that defines for each individual job to be submitted in the batch. Suppose you need to run a program called compare_states that will run on the following set of input files: illinois.data , nebraska.data , and wisconsin.data , and each input file can be analyzed as a separate job. To create a submit file that will submit all three jobs, first create a text file that lists each .data file (one file per line). This step can be performed directly on the login node, for example: [user@state-analysis]$ ls *.data > states.txt [user@state-analysis]$ cat states.txt illinois.data nebraska.data wisconsin.data Then, in the submit file, following the pattern queue from , replace with a variable name like state and replace with the list of .data files saved in states.txt : queue state from states.txt For each line in states.txt , HTCondor will submit a job and the variable $(state) can be used anywhere in the submit file to represent the name of the .data file to be used by that job. For the first job, $(state) will be illinois.data , for the second job $(state) will be nebraska.data , and so on. For example: # run_compare_states_per_state.sub transfer_input_files = $(state) arguments = $(state) executable = compare_states ... remaining submit details ... queue state from states.txt For a working example of this kind of job submission, see our Word Frequency Tutorial .","title":"2. Submit multiple jobs with one or more distinct variables per job"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#use-multiple-variables-for-each-job","text":"Let's imagine that each state .data file contains data spanning several years and that each job needs to analyze a specific year of data. Then the states.txt file can be modified to specify this information: [user@state-analysis]$ cat states.txt illinois.data, 1995 illinois.data, 2005 nebraska.data, 1999 nebraska.data, 2005 wisconsin.data, 2000 wisconsin.data, 2015 Then modify the queue statement to define two variables named state and year : queue state,year from states.txt Then the variables $(state) and $(year) can be used in the submit file: # run_compare_states_by_year.sub arguments = $(state) $(year) transfer_input_files = $(state) executable = compare_states ... remaining submit details ... queue state,year from states.txt","title":"Use multiple variables for each job"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#3-organizing-jobs-into-individual-directories","text":"One way to organize jobs is to assign each job to its own directory, instead of putting files in the same directory with unique names.
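As a hedged illustration only (the copy commands and the flat-file-to-directory layout are assumed, not part of the tutorial), per-job directories like the ones used below can be created in a few commands on the Access Point:

```
$ mkdir -p illinois nebraska wisconsin
$ cp illinois.data illinois/input.data
$ cp nebraska.data nebraska/input.data
$ cp wisconsin.data wisconsin/input.data
```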
To continue our \\\"compare_states\\\" example, suppose there\\'s a directory for each state you want to analyze, and each of those directories has its own input file named input.data : [user@state-analysis]$ ls -F compare_states illinois/ nebraska/ wisconsin/ [user@state-analysis]$ ls -F illinois/ input.data [user@state-analysis]$ ls -F nebraska/ input.data [user@state-analysis]$ ls -F wisconsin/ input.data The HTCondor submit file attribute initialdir can be used to define a specific directory from which each job in the batch will be submitted. The default initialdir location is the directory from which the command condor_submit myjob.sub is executed. Combining queue var from list with initiadir , each line of will include the path to each state directory and initialdir set to this path for each job: #state-per-dir-job.sub initial_dir = $(state_dir) transfer_input_files = input.data executable = compare_states ... remaining submit details ... queue state_dir from state-dirs.txt Where state-dirs.txt is a list of each directory with state data: [user@state-analysis]$ cat state-dirs.txt illinois nebraska wisconsin Notice that executable = compare_states has remained unchanged in the above example. When using initialdir , only the input and output file path (including the HTCondor log, error, and output files) will be changed by initialdir . In this example, HTCondor will create a job for each directory in state-dirs.txt and use that state\\'s directory as the initialdir from which the job will be submitted. Therefore, transfer_input_files = input.data can be used without specifying the path to this input.data file. Any output generated by the job will then be returned to the initialdir location.","title":"3. Organizing Jobs Into Individual Directories"},{"location":"htc_workloads/submitting_workloads/submit-multiple-jobs/#get-help","text":"For assistance or questions, please email the OSG Research Facilitation team at support@osg-htc.org or visit the help desk and community forums .","title":"Get Help"},{"location":"htc_workloads/submitting_workloads/tutorial-command/","text":"Workflow Tutorials \u00b6 OSPool workflow tutorials on Github \u00b6 All of the OSG provided tutorials are available as repositories on Github . These tutorials are tested regularly and should work as is, but if you experience any issues please contact us. Available tutorials \u00b6 The following tutorials are available and are compatible with OSG-provided Access Points: Currently available tutorials: R ...................... Estimate Pi using the R programming language R-addlibSNA ............ Shows how to add R external libraries for the R jobs ScalingUp-Python ....... Scaling up compute resources - Python example to optimize a function on grid points blast-split ............ How to run BLAST on the OSPool by splitting a large input file fastqc ................. How to run FastQC on the OSPool dagman-wordfreq ........ DAGMan based wordfreq example error101 ............... Use condor_q -better-analyze to analyze stuck jobs matlab-HelloWorld ...... Creating standalone MATLAB application - Hello World osg-locations .......... Tutorial based on OSPool location exercise from the User School pegasus ................ An introduction to the Pegasus job workflow manager quickstart ............. How to run your first OSPool job scaling ................ Learn to steer jobs to particular resources scaling-up-resources ... A simple multi-job demonstration software ............... Software access tutorial tensorflow-matmul ...... 
Tensorflow math operations as a singularity container job on the OSPool - matrix multiplication Install and setup a tutorial \u00b6 On an OSPool Access Point, type the following to download a tutorial's materials: $ git clone https://github.com/OSGConnect/ This command will clone the tutorial repository to your current working directory. cd to the repository directory and follow the steps described in the readme.md file. Alternatively, you can view the readme.md file at the tutorial's corresponding GitHub page.","title":"List of Available Tutorials"},{"location":"htc_workloads/submitting_workloads/tutorial-command/#workflow-tutorials","text":"","title":"Workflow Tutorials"},{"location":"htc_workloads/submitting_workloads/tutorial-command/#ospool-workflow-tutorials-on-github","text":"All of the OSG provided tutorials are available as repositories on Github . These tutorials are tested regularly and should work as is, but if you experience any issues please contact us.","title":"OSPool workflow tutorials on Github"},{"location":"htc_workloads/submitting_workloads/tutorial-command/#available-tutorials","text":"The following tutorials are available and are compatible with OSG-provided Access Points: Currently available tutorials: R ...................... Estimate Pi using the R programming language R-addlibSNA ............ Shows how to add R external libraries for the R jobs ScalingUp-Python ....... Scaling up compute resources - Python example to optimize a function on grid points blast-split ............ How to run BLAST on the OSPool by splitting a large input file fastqc ................. How to run FastQC on the OSPool dagman-wordfreq ........ DAGMan based wordfreq example error101 ............... Use condor_q -better-analyze to analyze stuck jobs matlab-HelloWorld ...... Creating standalone MATLAB application - Hello World osg-locations .......... Tutorial based on OSPool location exercise from the User School pegasus ................ An introduction to the Pegasus job workflow manager quickstart ............. How to run your first OSPool job scaling ................ Learn to steer jobs to particular resources scaling-up-resources ... A simple multi-job demonstration software ............... Software access tutorial tensorflow-matmul ...... Tensorflow math operations as a singularity container job on the OSPool - matrix multiplication","title":"Available tutorials"},{"location":"htc_workloads/submitting_workloads/tutorial-command/#install-and-setup-a-tutorial","text":"On an OSPool Access Point, type the following to download a tutorial's materials: $ git clone https://github.com/OSGConnect/ This command will clone the tutorial repository to your current working directory. cd to the repository directory and follow the steps described in the readme.md file. Alternatively, you can view the readme.md file at the tutorial's corresponding GitHub page.","title":"Install and setup a tutorial"},{"location":"htc_workloads/submitting_workloads/tutorial-error101/","text":"Troubleshooting Job Errors \u00b6 In this lesson, we'll learn how to troubleshoot jobs that never start or fail in unexpected ways. Troubleshooting techniques \u00b6 Diagnostics with condor_q \u00b6 The condor_q command shows the status of the jobs and it can be used to diagnose why jobs are not running. Using the -better-analyze flag with condor_q can show you detailed information about why a job isn't starting on a specific pool. Since OSG Connect sends jobs to many places, we also need to specify a pool name with the -pool flag. 
Unless you know a specific pool you would like to query, checking the flock.opensciencegrid.org pool is usually a good place to start. $ condor_q -better-analyze JOB-ID -pool POOL-NAME Let's do an example. First we'll need to login as usual, and then load the tutorial error101 . $ ssh username@login.osgconnect.net $ tutorial error101 $ cd tutorial-error101 $ condor_submit error101_job.submit We'll check the job status the normal way: condor_q username For some reason, our job is still idle. Why? Try using condor_q -better-analyze to find out. Remember that you will also need to specify a pool name. In this case we'll use flock.opensciencegrid.org : $ condor_q -better-analyze JOB-ID -pool flock.opensciencegrid.org # Produces a long ouput. # The following lines are part of the output regarding the job requirements. The Requirements expression for your job reduces to these conditions: Slots Step Matched Condition ----- -------- --------- [0] 10674 TARGET.Arch == \"X86_64\" [1] 10674 TARGET.OpSys == \"LINUX\" [3] 10674 TARGET.Disk >= RequestDisk [5] 0 TARGET.Memory >= RequestMemory [8] 10674 TARGET.HasFileTransfer By looking through the match conditions, we see that many nodes match our requests for the Linux operating system and the x86_64 architecture, but none of them match our requirement for 51200 MB of memory. Let's look at our submit script and see if we can find the source of this error: $ cat error101_job.submit Universe = vanilla Executable = error101.sh # to sleep an hour Arguments = 3600 request_memory = 2 TB Error = job.err Output = job.out Log = job.log Queue 1 See the request_memory line? We are asking for 2 Terabytes of memory, when we meant to only ask for 2 Gigabytes of memory. Our job is not matching any available job slots because none of the slots offer 2 TB of memory. Let's fix that by changing that line to read request_memory = 2 GB . $ nano error101_job.submit Let's cancel our idle job with the condor_rm command and then resubmit our edited job: $ condor_rm JOB-ID $ condor_submit error101_job.submit Alternatively, you can edit the resource requirements of the idle job in queue: condor_qedit JOB_ID RequestMemory 2048 Held jobs and condor_release \u00b6 Occasionally, a job can fail in various ways and go into \"Held\" state. Held state means that the job has encountered some error, and cannot run. This doesn't necessarily mean that your job has failed, but, for whatever reason, Condor cannot fulfill your request(s). In this particular case, a user had this in his or her Condor submit file: transfer_output_files = outputfile However, when the job executed, it went into Held state: $ condor_q -analyze 372993.0 -- Submitter: login01.osgconnect.net : <192.170.227.195:56174> : login01.osgconnect.net --- 372993.000: Request is held. Hold reason: Error from glidein_9371@compute-6-28.tier2: STARTER at 10.3.11.39 failed to send file(s) to <192.170.227.195:40485>: error reading from /wntmp/condor/compute-6-28/execute/dir_9368/glide_J6I1HT/execute/dir_16393/outputfile: (errno 2) No such file or directory; SHADOW failed to receive file(s) from <192.84.86.100:50805> Let's break down this error message piece by piece: Hold reason: Error from glidein_9371@compute-6-28.tier2: STARTER at 10.3.11.39 failed to send file(s) to <192.170.227.195:40485> This part is quite cryptic, but it simply means that the worker node where your job executed (glidein_9371@compute-6-28.tier2 or 10.3.11.39) tried to transfer a file to the OSG Connect login node (192.170.227.195) but did not succeed. 
The next part explains why: error reading from /wntmp/condor/compute-6-28/execute/dir_9368/glide_J6I1HT/execute/dir_16393/outputfile: (errno 2) No such file or directory This bit has the full path of the file that Condor tried to transfer back to login.osgconnect.net . The reason why the file transfer failed is because outputfile was never created on the worker node. Remember that at the beginning we said that the user specifically requested transfer_outputfiles = outputfile ! Condor could not complete this request, and so the job went into Held state instead of finishing normally. It's quite possible that the error was simply transient, and if we retry, the job will succeed. We can re-queue a job that is in Held state by using condor_release : condor_release JOB-ID","title":"Troubleshooting Job Errors"},{"location":"htc_workloads/submitting_workloads/tutorial-error101/#troubleshooting-job-errors","text":"In this lesson, we'll learn how to troubleshoot jobs that never start or fail in unexpected ways.","title":"Troubleshooting Job Errors"},{"location":"htc_workloads/submitting_workloads/tutorial-error101/#troubleshooting-techniques","text":"","title":"Troubleshooting techniques"},{"location":"htc_workloads/submitting_workloads/tutorial-error101/#diagnostics-with-condor_q","text":"The condor_q command shows the status of the jobs and it can be used to diagnose why jobs are not running. Using the -better-analyze flag with condor_q can show you detailed information about why a job isn't starting on a specific pool. Since OSG Connect sends jobs to many places, we also need to specify a pool name with the -pool flag. Unless you know a specific pool you would like to query, checking the flock.opensciencegrid.org pool is usually a good place to start. $ condor_q -better-analyze JOB-ID -pool POOL-NAME Let's do an example. First we'll need to login as usual, and then load the tutorial error101 . $ ssh username@login.osgconnect.net $ tutorial error101 $ cd tutorial-error101 $ condor_submit error101_job.submit We'll check the job status the normal way: condor_q username For some reason, our job is still idle. Why? Try using condor_q -better-analyze to find out. Remember that you will also need to specify a pool name. In this case we'll use flock.opensciencegrid.org : $ condor_q -better-analyze JOB-ID -pool flock.opensciencegrid.org # Produces a long ouput. # The following lines are part of the output regarding the job requirements. The Requirements expression for your job reduces to these conditions: Slots Step Matched Condition ----- -------- --------- [0] 10674 TARGET.Arch == \"X86_64\" [1] 10674 TARGET.OpSys == \"LINUX\" [3] 10674 TARGET.Disk >= RequestDisk [5] 0 TARGET.Memory >= RequestMemory [8] 10674 TARGET.HasFileTransfer By looking through the match conditions, we see that many nodes match our requests for the Linux operating system and the x86_64 architecture, but none of them match our requirement for 51200 MB of memory. Let's look at our submit script and see if we can find the source of this error: $ cat error101_job.submit Universe = vanilla Executable = error101.sh # to sleep an hour Arguments = 3600 request_memory = 2 TB Error = job.err Output = job.out Log = job.log Queue 1 See the request_memory line? We are asking for 2 Terabytes of memory, when we meant to only ask for 2 Gigabytes of memory. Our job is not matching any available job slots because none of the slots offer 2 TB of memory. Let's fix that by changing that line to read request_memory = 2 GB . 
$ nano error101_job.submit Let's cancel our idle job with the condor_rm command and then resubmit our edited job: $ condor_rm JOB-ID $ condor_submit error101_job.submit Alternatively, you can edit the resource requirements of the idle job in queue: condor_qedit JOB_ID RequestMemory 2048","title":"Diagnostics with condor_q"},{"location":"htc_workloads/submitting_workloads/tutorial-error101/#held-jobs-and-condor_release","text":"Occasionally, a job can fail in various ways and go into \"Held\" state. Held state means that the job has encountered some error, and cannot run. This doesn't necessarily mean that your job has failed, but, for whatever reason, Condor cannot fulfill your request(s). In this particular case, a user had this in his or her Condor submit file: transfer_output_files = outputfile However, when the job executed, it went into Held state: $ condor_q -analyze 372993.0 -- Submitter: login01.osgconnect.net : <192.170.227.195:56174> : login01.osgconnect.net --- 372993.000: Request is held. Hold reason: Error from glidein_9371@compute-6-28.tier2: STARTER at 10.3.11.39 failed to send file(s) to <192.170.227.195:40485>: error reading from /wntmp/condor/compute-6-28/execute/dir_9368/glide_J6I1HT/execute/dir_16393/outputfile: (errno 2) No such file or directory; SHADOW failed to receive file(s) from <192.84.86.100:50805> Let's break down this error message piece by piece: Hold reason: Error from glidein_9371@compute-6-28.tier2: STARTER at 10.3.11.39 failed to send file(s) to <192.170.227.195:40485> This part is quite cryptic, but it simply means that the worker node where your job executed (glidein_9371@compute-6-28.tier2 or 10.3.11.39) tried to transfer a file to the OSG Connect login node (192.170.227.195) but did not succeed. The next part explains why: error reading from /wntmp/condor/compute-6-28/execute/dir_9368/glide_J6I1HT/execute/dir_16393/outputfile: (errno 2) No such file or directory This bit has the full path of the file that Condor tried to transfer back to login.osgconnect.net . The reason why the file transfer failed is because outputfile was never created on the worker node. Remember that at the beginning we said that the user specifically requested transfer_outputfiles = outputfile ! Condor could not complete this request, and so the job went into Held state instead of finishing normally. It's quite possible that the error was simply transient, and if we retry, the job will succeed. We can re-queue a job that is in Held state by using condor_release : condor_release JOB-ID","title":"Held jobs and condor_release"},{"location":"htc_workloads/submitting_workloads/tutorial-organizing/","text":"Organizing and Submitting HTC Workloads \u00b6 Imagine you have a collection of books, and you want to analyze how word usage varies from book to book or author to author. This tutorial starts with the same set up as our Wordcount Tutorial for Submitting Multiple Jobs , but focuses on how to organize that example more effectively on the Access Point, with an eye to scaling up to a larger HTC workload in the future. Our Workload \u00b6 We can analyze one book by running the wordcount.py script, with the name of the book we want to analyze: $ ./wordcount.py Alice_in_Wonderland.txt Try running the command to see what the output is for the script. Once you have done that delete the output file created ( rm counts.Alice_in_Wonderland.txt ). We want to run this script on all the books we have copies of. What is the input set for this HTC workload? What is the output set? 
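As a quick sketch (the counts.Alice_in_Wonderland.txt name follows the output pattern this tutorial uses), a local trial run and cleanup might look like:

```
$ ./wordcount.py Alice_in_Wonderland.txt
$ head counts.Alice_in_Wonderland.txt    # inspect the result
$ rm counts.Alice_in_Wonderland.txt      # clean up before submitting jobs
```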
Make an Organization Plan \u00b6 Based on what you know about the script, inputs, and outputs, how would you organize this HTC workload in directories (folders) on the access point? There will also be system and HTCondor files produced when we submit a job -- how would you organize the log, standard error and standard output files? Try making those changes before moving on to the next section of the tutorial. Organize Files \u00b6 There are many different ways to organize files; a simple example that works for most workloads is having a directory for your input files and a directory for your output files. We can set up this structure on the command line by running: $ mkdir input $ mv *.txt input/ $ mkdir output/ We can view our current directory and its subdirectories by using the recursive flag with the ls command: $ ls -R README.md books.submit input output wordcount.py ./input: Alice_in_Wonderland.txt Huckleberry_Finn.txt Ulysses.txt Dracula.txt Pride_and_Prejudice.txt ./output: We are also going to create directories for the HTCondor log files and the standard error and standard output files (in one directory): $ mkdir logs $ mkdir errout Submit One Job \u00b6 Now we want to submit a test job that uses this organizing scheme, using just one item in our input set -- in this example, we'll use the Alice_in_Wonderland.txt file from our input/ directory. The lines that need to be filled in are shown below and can be edited using the nano text editor: $ nano books.submit executable = wordcount.py arguments = Alice_in_Wonderland.txt transfer_input_files = input/Alice_in_Wonderland.txt transfer_output_files = counts.Alice_in_Wonderland.txt transfer_output_remaps = \"counts.Alice_in_Wonderland.txt=output/counts.Alice_in_Wonderland.txt\" Note that to tell HTCondor the location of the input file, we need to include the input directory. We're also using a submit file option called transfer_output_remaps that will essentially move the output file to our output/ directory by renaming or remapping it. We also want to edit the submit file lines that tell the log, error and output files where to go: $ nano books.submit output = logs/job.$(ClusterID).$(ProcID).out error = errout/job.$(ClusterID).$(ProcID).err log = errout/job.$(ClusterID).$(ProcID).log Once you've made the above changes to the books.submit file, you can submit it, and monitor its progress: $ condor_submit books.submit $ condor_watch_q (Type CTRL - C to stop the condor_watch_q command.) Submit Multiple Jobs \u00b6 We are now sufficiently organized to submit our whole workload. First, we need to create a file with our input set -- in this case, it will be a list of the book files we want to analyze. We can do this by using the shell's listing command ls and redirecting the output to a file: $ cd input $ ls > booklist.txt $ cat booklist.txt $ mv booklist.txt .. $ cd .. 
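For reference, booklist.txt should end up with one book filename per line, along these lines (the exact order may differ; if booklist.txt lists itself, delete that line before submitting):

```
Alice_in_Wonderland.txt
Dracula.txt
Huckleberry_Finn.txt
Pride_and_Prejudice.txt
Ulysses.txt
```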
Then, we modify our submit file to reference this input list and replace the static values from our test job ( Alice_in_Wonderland.txt ) with a variable -- we've chosen $(book) below: $ nano books.submit executable = wordcount.py arguments = $(book) transfer_input_files = input/$(book) transfer_output_files = counts.$(book) transfer_output_remaps = \"counts.$(book)=output/counts.$(book)\" # other options queue book from booklist.txt Once this is done, you can submit the jobs as usual: $ condor_submit books.submit","title":"Organizing and Submitting HTC Workloads"},{"location":"htc_workloads/submitting_workloads/tutorial-organizing/#organizing-and-submitting-htc-workloads","text":"Imagine you have a collection of books, and you want to analyze how word usage varies from book to book or author to author. This tutorial starts with the same set up as our Wordcount Tutorial for Submitting Multiple Jobs , but focuses on how to organize that example more effectively on the Access Point, with an eye to scaling up to a larger HTC workload in the future.","title":"Organizing and Submitting HTC Workloads"},{"location":"htc_workloads/submitting_workloads/tutorial-organizing/#our-workload","text":"We can analyze one book by running the wordcount.py script, with the name of the book we want to analyze: $ ./wordcount.py Alice_in_Wonderland.txt Try running the command to see what the output is for the script. Once you have done that delete the output file created ( rm counts.Alice_in_Wonderland.txt ). We want to run this script on all the books we have copies of. What is the input set for this HTC workload? What is the output set?","title":"Our Workload"},{"location":"htc_workloads/submitting_workloads/tutorial-organizing/#make-an-organization-plan","text":"Based on what you know about the script, inputs, and outputs, how would you organize this HTC workload in directories (folders) on the access point? There will also be system and HTCondor files produced when we submit a job -- how would you organize the log, standard error and standard output files? Try making those changes before moving on to the next section of the tutorial.","title":"Make an Organization Plan"},{"location":"htc_workloads/submitting_workloads/tutorial-organizing/#organize-files","text":"There are many different ways to organize files; a simple example that works for most workloads is having a directory for your input files and a directory for your output files. We can set up this structure on the command line by running: $ mkdir input $ mv *.txt input/ $ mkdir output/ We can view our current directory and its subdirectories by using the recursive flag with the ls command: $ ls -R README.md books.submit input output wordcount.py ./input: Alice_in_Wonderland.txt Huckleberry_Finn.txt Ulysses.txt Dracula.txt Pride_and_Prejudice.txt ./output: We are also going to create directories for the HTCondor log files and the standard error and standard output files (in one directory): $ mkdir logs $ mkdir errout","title":"Organize Files"},{"location":"htc_workloads/submitting_workloads/tutorial-organizing/#submit-one-job","text":"Now we want to submit a test job that uses this organizing scheme, using just one item in our input set -- in this example, we'll use the Alice_in_Wonderland.txt file from our input/ directory. 
The lines that need to be filled in are shown below and can be edited using the nano text editor: $ nano books.submit executable = wordcount.py arguments = Alice_in_Wonderland.txt transfer_input_files = input/Alice_in_Wonderland.txt transfer_output_files = counts.Alice_in_Wonderland.txt transfer_output_remaps = \"counts.Alice_in_Wonderland.txt=output/counts.Alice_in_Wonderland.txt\" Note that to tell HTCondor the location of the input file, we need to include the input directory. We're also using a submit file option called transfer_output_remaps that will essentially move the output file to our output/ directory by renaming or remapping it. We also want to edit the submit file lines that tell the log, error and output files where to go: $ nano books.submit output = logs/job.$(ClusterID).$(ProcID).out error = errout/job.$(ClusterID).$(ProcID).err log = errout/job.$(ClusterID).$(ProcID).log Once you've made the above changes to the books.submit file, you can submit it, and monitor its progress: $ condor_submit books.submit $ condor_watch_q (Type CTRL - C to stop the condor_watch_q command.)","title":"Submit One Job"},{"location":"htc_workloads/submitting_workloads/tutorial-organizing/#submit-multiple-jobs","text":"We are now sufficiently organized to submit our whole workload. First, we need to create a file with our input set -- in this case, it will be a list of the book files we want to analyze. We can do this by using the shell's listing command ls and redirecting the output to a file: $ cd input $ ls > booklist.txt $ cat booklist.txt $ mv booklist.txt .. $ cd .. Then, we modify our submit file to reference this input list and replace the static values from our test job ( Alice_in_Wonderland.txt ) with a variable -- we've chosen $(book) below: $ nano books.submit executable = wordcount.py arguments = $(book) transfer_input_files = input/$(book) transfer_output_files = counts.$(book) transfer_output_remaps = \"counts.$(book)=output/counts.$(book)\" # other options queue book from booklist.txt Once this is done, you can submit the jobs as usual: $ condor_submit books.submit","title":"Submit Multiple Jobs"},{"location":"htc_workloads/submitting_workloads/tutorial-osg-locations/","text":"Finding OSG Locations \u00b6 In this section, we will learn how to quickly submit multiple jobs simultaneously using HTCondor and we will visualize where these jobs run so we can get an idea of where and jobs are distributed on the Open Science Pool. Gathering network information from the OSG \u00b6 Now to create a submit file that will run in the OSG! Use the tutorial command to download the job submission files: tutorial osg-locations . Change into the tutorial-osg-locations directory with cd tutorial-osg-locations . 
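In terminal form, the download step looks like the following sketch (the git clone URL is an assumption based on the naming pattern of other OSGConnect tutorial repositories):

```
$ tutorial osg-locations   # or: git clone https://github.com/OSGConnect/tutorial-osg-locations
$ cd tutorial-osg-locations
```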
Hostname fetching code \u00b6 The following Python script finds the ClassAd of the machine it's running on and finds a network identity that can be used to perform lookups: #!/bin/env python import re import os import socket machine_ad_file_name = os.getenv('_CONDOR_MACHINE_AD') try: machine_ad_file = open(machine_ad_file_name, 'r') machine_ad = machine_ad_file.read() machine_ad_file.close() except TypeError: print socket.getfqdn() exit(1) try: print re.search(r'GLIDEIN_Gatekeeper = \"(.*):\\d*/jobmanager-\\w*\"', machine_ad, re.MULTILINE).group(1) except AttributeError: try: print re.search(r'GLIDEIN_Gatekeeper = \"(\\S+) \\S+:9619\"', machine_ad, re.MULTILINE).group(1) except AttributeError: exit(1) This script ( wn-geoip.py ) is contained in the zipped archive ( wn-geoip.tar.gz ) that is transferred to the job and unpacked by the job wrapper script location-wrapper.sh . You will be using location-wrapper.sh as your executable and wn-geoip.tar.gz as an input file. The submit file for this job, scalingup.submit , is set up to specify these files and submit 100 jobs simultaneously. It also uses the job's process value to create unique output, error and log files for each of the jobs. $ cat scalingup.submit # The following requirements ensure we land on compute nodes # which have all the dependencies (modules, so we can # module load python2.7) and avoid some machines where # GeoIP does not work (such as Kubernetes containers) requirements = OSG_OS_STRING == \"RHEL 7\" && HAS_MODULES && GLIDEIN_Gatekeeper =!= UNDEFINED # We need the job to run our executable script, with the # input.txt filename as an argument, and to transfer the # relevant input and output files: executable = location-wrapper.sh transfer_input_files = wn-geoip.tar.gz # We can specify unique filenames for each job by using # the job's 'process' value. error = job.$(Process).error output = job.$(Process).output log = job.$(Process).log # The below are good base requirements for first testing jobs on OSG, # if you don't have a good idea of memory and disk usage. request_cpus = 1 request_memory = 1 GB request_disk = 1 GB # Queue 100 jobs with the above specifications. queue 100 Submit this job using the condor_submit command: $ condor_submit scalingup.submit Wait for the results. Remember, you can use watch condor_q to monitor the status of your jobs. Collating your results \u00b6 Now that you have your results, it's time to summarize them. Rather than inspecting each output file individually, you can use the cat command to print the results from all of your output files at once. If all of your output files have the format job.#.output (e.g., job.10.output ), your command will look something like this: $ cat job.*.output The * is a wildcard so the above cat command runs on all files that start with job. and end in .output . Additionally, you can use cat in combination with the sort and uniq commands to print only the unique results: $ cat job.*.output | sort | uniq Mapping your results \u00b6 To visualize the locations of the machines that your jobs ran on, you will be using http://www.mapcustomizer.com/. Copy and paste the collated results into the text box that pops up when clicking on the 'Bulk Entry' button on the right-hand side.
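To also count how many of your jobs reported each unique result (for example, how many landed at each site), the standard uniq -c option can be added to the pipeline shown above; a small sketch:

```
$ cat job.*.output | sort | uniq -c | sort -rn
```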
Where did your jobs run?","title":"Finding OSG Locations"},{"location":"htc_workloads/submitting_workloads/tutorial-osg-locations/#finding-osg-locations","text":"In this section, we will learn how to quickly submit multiple jobs simultaneously using HTCondor and we will visualize where these jobs run so we can get an idea of where and jobs are distributed on the Open Science Pool.","title":"Finding OSG Locations"},{"location":"htc_workloads/submitting_workloads/tutorial-osg-locations/#gathering-network-information-from-the-osg","text":"Now to create a submit file that will run in the OSG! Use the tutorial command to download the job submission files: tutorial osg-locations . Change into the tutorial-osg-locations directory with cd tutorial-osg-locations .","title":"Gathering network information from the OSG"},{"location":"htc_workloads/submitting_workloads/tutorial-osg-locations/#hostname-fetching-code","text":"The following Python script finds the ClassAd of the machine it's running on and finds a network identity that can be used to perform lookups: #!/bin/env python import re import os import socket machine_ad_file_name = os.getenv('_CONDOR_MACHINE_AD') try: machine_ad_file = open(machine_ad_file_name, 'r') machine_ad = machine_ad_file.read() machine_ad_file.close() except TypeError: print socket.getfqdn() exit(1) try: print re.search(r'GLIDEIN_Gatekeeper = \"(.*):\\d*/jobmanager-\\w*\"', machine_ad, re.MULTILINE).group(1) except AttributeError: try: print re.search(r'GLIDEIN_Gatekeeper = \"(\\S+) \\S+:9619\"', machine_ad, re.MULTILINE).group(1) except AttributeError: exit(1) This script ( wn-geoip.py ) is contained in the zipped archive ( wn-geoip.tar.gz ) that is transferred to the job and unpacked by the job wrapper script location-wrapper.sh . You will be using location-wrapper.sh as your executable and wn-geoip.tar.gz as an input file. The submit file for this job, scalingup.submit , is setup to specify these files and submit 100 jobs simultaneously. It also uses the job's process value to create unique output, error and log files for each of the job. $ cat scalingup.submit # The following requirments ensure we land on compute nodes # which have all the dependencies (modules, so we can # module load python2.7) and avoid some machines where # GeoIP does not work (such as Kubernetes containers) requirements = OSG_OS_STRING == \"RHEL 7\" && HAS_MODULES && GLIDEIN_Gatekeeper =!= UNDEFINED # We need the job to run our executable script, with the # input.txt filename as an argument, and to transfer the # relevant input and output files: executable = location-wrapper.sh transfer_input_files = wn-geoip.tar.gz # We can specify unique filenames for each job by using # the job's 'process' value. error = job.$(Process).error output = job.$(Process).output log = job.$(Process).log # The below are good base requirements for first testing jobs on OSG, # if you don't have a good idea of memory and disk usage. request_cpus = 1 request_memory = 1 GB request_disk = 1 GB # Queue 100 jobs with the above specifications. queue 100 Submit this job using the condor_submit command: $ condor_submit scalingup.submit Wait for the results. Remember, you can use watch condor_q to monitor the status of your jobs.","title":"Hostname fetching code"},{"location":"htc_workloads/submitting_workloads/tutorial-osg-locations/#collating-your-results","text":"Now that you have your results, it's time to summarize them. 
Rather than inspecting each output file individually, you can use the cat command to print the results from all of your output files at once. If all of your output files have the format job.#.output (e.g., job.10.output ), your command will look something like this: $ cat job.*.output The * is a wildcard so the above cat command runs on all files that start with job- and end in .output . Additionally, you can use cat in combination with the sort and uniq commands to print only the unique results: $ cat job.*.output | sort | uniq","title":"Collating your results"},{"location":"htc_workloads/submitting_workloads/tutorial-osg-locations/#mapping-your-results","text":"To visualize the locations of the machines that your jobs ran on, you will be using http://www.mapcustomizer.com/. Copy and paste the collated results into the text box that pops up when clicking on the 'Bulk Entry' button on the right-hand side. Where did your jobs run?","title":"Mapping your results"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/","text":"Quickstart - Submit Example HTCondor Jobs \u00b6 Login to OSG Access Point \u00b6 To begin, login to your OSG Access Point. Pretyped Setup \u00b6 To save some typing, you can download the tutorial materials into your home directory on the access point. This is highly recommended to ensure that you don't encounter transcription errors during the tutorials $ git clone https://github.com/OSGConnect/tutorial-quickstart Now, let's start the quickstart tutorial: $ cd tutorial-quickstart Manual Setup \u00b6 Alternatively, if you want the full manual experience, create a new directory for the tutorial work: $ mkdir tutorial-quickstart $ cd tutorial-quickstart Example Jobs \u00b6 Job 1: A single discovery job \u00b6 Inside the tutorial directory that you created or installed previously, let's create a test script to execute as your job. For pretyped setup, this is the short.sh file: #!/bin/bash # short.sh: a short discovery job printf \"Start time: \"; /bin/date printf \"Job is running on node: \"; /bin/hostname printf \"Job running as user: \"; /usr/bin/id printf \"Job is running in directory: \"; /bin/pwd echo echo \"Working hard...\" sleep 20 echo \"Science complete!\" Now, make the script executable. $ chmod +x short.sh Run the script locally \u00b6 When setting up a new job submission, it's important to test your job outside of HTCondor before submitting into the Open Science Pool. $ ./short.sh Start time: Wed Aug 08 09:21:35 CDT 2023 Job is running on node: ap50.ux.osg-htc.org Job running as user: uid=54161(alice), gid=5782(osg) groups=5782(osg),5513(osg.login-nodes),7158(osg.OSG-Staff) Job is running in directory: /home/alice/tutorial-quickstart Working hard... Science complete! Create an HTCondor submit file \u00b6 So far, so good! Let's create a simple (if verbose) HTCondor submit file. This can be found in tutorial01.submit . # Our executable is the main program or script that we've created # to do the 'work' of a single job. executable = short.sh # We need to name the files that HTCondor should create to save the # terminal output (stdout) and error (stderr) created by our job. # Similarly, we need to name the log file where HTCondor will save # information about job execution steps. log = short.log error = short.error output = short.output # This is the default category for jobs +JobDurationCategory = \"Medium\" # The below are good base requirements for first testing jobs on OSG, # if you don't have a good idea of memory and disk usage. 
request_cpus = 1 request_memory = 1 GB request_disk = 1 GB # The last line of a submit file indicates how many jobs of the above # description should be queued. We'll start with one job. queue 1 Submit the job \u00b6 Submit the job using condor_submit : $ condor_submit tutorial01.submit Submitting job(s). 1 job(s) submitted to cluster 144121. Check the job status \u00b6 The condor_q command tells the status of currently running jobs. $ condor_q -- Schedd: ap50.ux.osg-htc.org : < 192.170.227.22:9618?... @ 08/10/23 14:19:08 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS alice ID: 1441271 08/10 14:18 _ 1 _ 1 1441271.0 Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended Total for alice: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended Total for all users: 3001 jobs; 0 completed, 0 removed, 2189 idle, 754 running, 58 held, 0 suspended You can also get the status of a specific job cluster: $ condor_q 1441271 -- Schedd: ap50.ux.osg-htc.org : < 192.170.227.22:9618?... @ 08/10/23 14:19:08 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS alice ID: 1441271 08/10 14:18 _ 1 _ 1 1441271.0 Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended Total for alice: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended Total for all users: 3001 jobs; 0 completed, 0 removed, 2189 idle, 754 running, 58 held, 0 suspended Note the DONE , RUN , and IDLE columns. Your job will be listed in the IDLE column if it hasn't started yet. If it's currently scheduled and running, it will appear in the RUN column. As it finishes up, it will then show in the DONE column. Once the job completes completely, it will not appear in condor_q . Let's wait for your job to finish \u2013 that is, for condor_q not to show the job in its output. A useful tool for this is condor_watch_q \u2013 it efficiently monitors the status of your jobs by monitoring their corresponding log files. Let's submit the job again, and use condor_watch_q to follow the progress of your job (the status will update at two-second intervals): $ condor_submit tutorial01.submit Submitting job(s). 1 job(s) submitted to cluster 1441272 $ condor_watch_q ... Note : To close condor_watch_q, hold down Ctrl and press C. Check the job output \u00b6 Once your job has finished, you can look at the files that HTCondor has returned to the working directory. The names of these files were specified in our submit file. If everything was successful, it should have returned: a log file from HTCondor for the job cluster: short.log This file can tell you where a job ran, how long it ran, and what resources it used. If a job shows up as \"held\" in condor_q , this file will have a message that gives a reason why. an output file for each job's output: short.output This file can have useful messages that describe how the job progressed. an error file for each job's errors: short.error If the job encountered any errors, they will likely be in this file. In this case, we will read the output file, which should contain the information printed by our script. It should look something like this: $ cat short.output Start time: Mon Aug 10 20:18:56 UTC 2023 Job is running on node: osg-84086-0-cmswn2030.fnal.gov Job running as user: uid=12740(osg) gid=9652(osg) groups=9652(osg) Job is running in directory: /srv Working hard... Science complete! Job 2: Using Inputs and Arguments in a Job \u00b6 Sometimes it's useful to pass arguments to your executable from your submit file. 
For example, you might want to use the same job script for more than one run, varying only the parameters. You can do that by adding Arguments to your submission file. First, let's edit our existing short.sh script to accept arguments. To avoid losing our original script, we make a copy of the file under the name short_transfer.sh (this copy already exists if you downloaded the entire tutorial). $ cp short.sh short_transfer.sh Now, edit the file to include the added lines below or use cat to view the existing short_transfer.sh file: #!/bin/bash # short_transfer.sh: a short discovery job printf \"Start time: \"; /bin/date printf \"Job is running on node: \"; /bin/hostname printf \"Job running as user: \"; /usr/bin/id printf \"Job is running in directory: \"; /bin/pwd printf \"The command line argument is: \"; echo $@ printf \"Job number is: \"; echo $2 printf \"Contents of $1 is \"; cat $1 cat $1 > output$2.txt echo echo \"Working hard...\" ls -l $PWD sleep 20 echo \"Science complete!\" We need to make our new script executable just as we did before: $ chmod +x short_transfer.sh Notice that with our changes, the new script will now print out the contents of whatever file we pass as the first argument, referenced inside the script as $1 . It will also copy the contents of that file into another file called output$2.txt ( output.txt when no second argument is given). Make a simple text file called input.txt that we can pass to our script: \"Hello World\" Once again, before submitting our job we should test it locally to ensure it runs as we expect: $ ./short_transfer.sh input.txt Start time: Tue Dec 11 10:19:12 CST 2018 Job is running on node: ap50.ux.osg-htc.org Job running as user: uid=54161(alice), gid=5782(osg) groups=5782(osg),5513(osg.login-nodes),7158(osg.OSG-Staff) Job is running in directory: /home/alice/tutorial-quickstart The command line argument is: input.txt Job number is: Contents of input.txt is \"Hello World\" Working hard total 28 drwxrwxr-x 2 alice users 34 Aug 15 09:37 Images -rw-rw-r-- 1 alice users 13 Aug 15 09:37 input.txt drwxrwxr-x 2 alice users 114 Aug 11 09:50 log -rw-r--r-- 1 alice users 13 Aug 11 10:19 output.txt -rwxrwxr-x 1 alice users 291 Aug 15 09:37 short.sh -rwxrwxr-x 1 alice users 390 Aug 11 10:18 short_transfer.sh -rw-rw-r-- 1 alice users 806 Aug 15 09:37 tutorial01.submit -rw-rw-r-- 1 alice users 547 Aug 11 09:49 tutorial02.submit -rw-rw-r-- 1 alice users 1321 Aug 15 09:37 tutorial03.submit Science complete! Now, let's edit our submit file to properly handle these new arguments and output files and save this as tutorial02.submit # We need the job to run our executable script, with the # input.txt filename as an argument, and to transfer the # relevant input file.
$ condor_submit tutorial02.submit Submitting job(s). 1 job(s) submitted to cluster 1444781. Run the commands from the previous section to check on the job in the queue, and view the outputs when the job completes. Job 3: Submitting Multiple Jobs at Once \u00b6 What do we need to do to submit several jobs simultaneously? In the first example, Condor returned three files: out, error, and log. If we want to submit several jobs, we need to track these three files for each job. An easy way to do this is to add the $(Cluster) and $(Process) macros to the HTCondor submit file. Since this can make our working directory really messy with a large number of jobs, let's tell HTCondor to put the files in a directory called log . We will also include the $(Process) value as a second argument to our script, which will cause it to give our output files unique names. If you want to try it out, you can do so like this: $ ./short_transfer.sh input.txt 12 Incorporating all these ideas, here's what the third submit file looks like, called tutorial03.submit : # For this example, we'll specify unique filenames for each job by using # the job's 'Process' value. executable = short_transfer.sh arguments = input.txt $(Process) transfer_input_files = input.txt log = log/job.$(Cluster).log error = log/job.$(Cluster).$(Process).error output = log/job.$(Cluster).$(Process).output +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1 GB request_disk = 1 GB # Let's queue ten jobs with the above specifications queue 10 Before submitting, we also need to make sure the log directory exists. $ mkdir -p log You'll see something like the following upon submission: $ condor_submit tutorial03.submit Submitting job(s).......... 10 job(s) submitted to cluster 1444786. Look at the output files in the log directory and notice how each job received its own separate output file: $ ls log job.1444786.0.error job.1444786.3.error job.1444786.6.error job.1444786.9.error job.1444786.0.output job.1444786.3.output job.1444786.6.output job.1444786.9.output job.1444786.1.error job.1444786.4.error job.1444786.7.error job.1444786.log job.1444786.1.output job.1444786.4.output job.1444786.7.output job.1444786.2.error job.1444786.5.error job.1444786.8.error job.1444786.2.output job.1444786.5.output job.1444786.8.output Removing Jobs \u00b6 On occasion, jobs will need to be removed for a variety of reasons (incorrect parameters, errors in submission, etc.). In these instances, the condor_rm command can be used to remove an entire job submission or just particular jobs in a submission. The condor_rm command accepts a cluster id, a job id, or a username and will remove an entire cluster of jobs, a single job, or all the jobs belonging to a given user, respectively. For example, if a job submission generates 100 jobs and is assigned a cluster id of 103, then condor_rm 103.0 will remove the first job in the cluster. Likewise, condor_rm 103 will remove all the jobs in the job submission and condor_rm [username] will remove all jobs belonging to the user. The condor_rm documentation has more details on using condor_rm including ways to remove jobs based on other constraints.
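For example, assuming the cluster id 103 from above and the placeholder username alice (substitute your own username): $ condor_rm 103.0 $ condor_rm 103 $ condor_rm alice The first command removes only job 103.0, the second removes every job in cluster 103, and the third removes all of alice's jobs.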
Getting Your Work Running \u00b6 Now that you have some practice with running HTCondor jobs, consider reviewing our Getting Started Roadmap to see what next steps will get your own computational work running on the OSPool.","title":"Quickstart-Submit Example HTCondor Jobs"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#quickstart-submit-example-htcondor-jobs","text":"","title":"Quickstart - Submit Example HTCondor Jobs"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#login-to-osg-access-point","text":"To begin, login to your OSG Access Point.","title":"Login to OSG Access Point"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#pretyped-setup","text":"To save some typing, you can download the tutorial materials into your home directory on the access point. This is highly recommended to ensure that you don't encounter transcription errors during the tutorials $ git clone https://github.com/OSGConnect/tutorial-quickstart Now, let's start the quickstart tutorial: $ cd tutorial-quickstart","title":"Pretyped Setup"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#manual-setup","text":"Alternatively, if you want the full manual experience, create a new directory for the tutorial work: $ mkdir tutorial-quickstart $ cd tutorial-quickstart","title":"Manual Setup"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#example-jobs","text":"","title":"Example Jobs"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#job-1-a-single-discovery-job","text":"Inside the tutorial directory that you created or installed previously, let's create a test script to execute as your job. For pretyped setup, this is the short.sh file: #!/bin/bash # short.sh: a short discovery job printf \"Start time: \"; /bin/date printf \"Job is running on node: \"; /bin/hostname printf \"Job running as user: \"; /usr/bin/id printf \"Job is running in directory: \"; /bin/pwd echo echo \"Working hard...\" sleep 20 echo \"Science complete!\" Now, make the script executable. $ chmod +x short.sh","title":"Job 1: A single discovery job"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#run-the-script-locally","text":"When setting up a new job submission, it's important to test your job outside of HTCondor before submitting into the Open Science Pool. $ ./short.sh Start time: Wed Aug 08 09:21:35 CDT 2023 Job is running on node: ap50.ux.osg-htc.org Job running as user: uid=54161(alice), gid=5782(osg) groups=5782(osg),5513(osg.login-nodes),7158(osg.OSG-Staff) Job is running in directory: /home/alice/tutorial-quickstart Working hard... Science complete!","title":"Run the script locally"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#create-an-htcondor-submit-file","text":"So far, so good! Let's create a simple (if verbose) HTCondor submit file. This can be found in tutorial01.submit . # Our executable is the main program or script that we've created # to do the 'work' of a single job. executable = short.sh # We need to name the files that HTCondor should create to save the # terminal output (stdout) and error (stderr) created by our job. # Similarly, we need to name the log file where HTCondor will save # information about job execution steps. 
log = short.log error = short.error output = short.output # This is the default category for jobs +JobDurationCategory = \"Medium\" # The below are good base requirements for first testing jobs on OSG, # if you don't have a good idea of memory and disk usage. request_cpus = 1 request_memory = 1 GB request_disk = 1 GB # The last line of a submit file indicates how many jobs of the above # description should be queued. We'll start with one job. queue 1","title":"Create an HTCondor submit file"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#submit-the-job","text":"Submit the job using condor_submit : $ condor_submit tutorial01.submit Submitting job(s). 1 job(s) submitted to cluster 144121.","title":"Submit the job"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#check-the-job-status","text":"The condor_q command tells the status of currently running jobs. $ condor_q -- Schedd: ap50.ux.osg-htc.org : < 192.170.227.22:9618?... @ 08/10/23 14:19:08 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS alice ID: 1441271 08/10 14:18 _ 1 _ 1 1441271.0 Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended Total for alice: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended Total for all users: 3001 jobs; 0 completed, 0 removed, 2189 idle, 754 running, 58 held, 0 suspended You can also get the status of a specific job cluster: $ condor_q 1441271 -- Schedd: ap50.ux.osg-htc.org : < 192.170.227.22:9618?... @ 08/10/23 14:19:08 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS alice ID: 1441271 08/10 14:18 _ 1 _ 1 1441271.0 Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended Total for alice: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended Total for all users: 3001 jobs; 0 completed, 0 removed, 2189 idle, 754 running, 58 held, 0 suspended Note the DONE , RUN , and IDLE columns. Your job will be listed in the IDLE column if it hasn't started yet. If it's currently scheduled and running, it will appear in the RUN column. As it finishes up, it will then show in the DONE column. Once the job completes completely, it will not appear in condor_q . Let's wait for your job to finish \u2013 that is, for condor_q not to show the job in its output. A useful tool for this is condor_watch_q \u2013 it efficiently monitors the status of your jobs by monitoring their corresponding log files. Let's submit the job again, and use condor_watch_q to follow the progress of your job (the status will update at two-second intervals): $ condor_submit tutorial01.submit Submitting job(s). 1 job(s) submitted to cluster 1441272 $ condor_watch_q ... Note : To close condor_watch_q, hold down Ctrl and press C.","title":"Check the job status"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#check-the-job-output","text":"Once your job has finished, you can look at the files that HTCondor has returned to the working directory. The names of these files were specified in our submit file. If everything was successful, it should have returned: a log file from HTCondor for the job cluster: short.log This file can tell you where a job ran, how long it ran, and what resources it used. If a job shows up as \"held\" in condor_q , this file will have a message that gives a reason why. an output file for each job's output: short.output This file can have useful messages that describe how the job progressed. 
an error file for each job's errors: short.error If the job encountered any errors, they will likely be in this file. In this case, we will read the output file, which should contain the information printed by our script. It should look something like this: $ cat short.output Start time: Mon Aug 10 20:18:56 UTC 2023 Job is running on node: osg-84086-0-cmswn2030.fnal.gov Job running as user: uid=12740(osg) gid=9652(osg) groups=9652(osg) Job is running in directory: /srv Working hard... Science complete!","title":"Check the job output"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#job-2-using-inputs-and-arguments-in-a-job","text":"Sometimes it's useful to pass arguments to your executable from your submit file. For example, you might want to use the same job script for more than one run, varying only the parameters. You can do that by adding Arguments to your submission file. First, let's edit our existing short.sh script to accept arguments. To avoid losing our original script, we make a copy of the file under the name short_transfer.sh if you haven't already downloaded this entire tutorial . $ cp short.sh short_transfer.sh Now, edit the file to include the added lines below or use cat to view the existing short_transfer.sh file: #!/bin/bash # short_transfer.sh: a short discovery job printf \"Start time: \"; /bin/date printf \"Job is running on node: \"; /bin/hostname printf \"Job running as user: \"; /usr/bin/id printf \"Job is running in directory: \"; /bin/pwd printf \"The command line argument is: \"; echo $@ printf \"Job number is: \"; echo $2 printf \"Contents of $1 is \"; cat $1 cat $1 > output$2.txt echo echo \"Working hard...\" ls -l $PWD sleep 20 echo \"Science complete!\" We need to make our new script executable just as we did before: $ chmod +x short_transfer.sh Notice that with our changes, the new script will now print out the contents of whatever file we specify in our arguments, specified by the $1 . It will also copy the contents of that file into another file called output.txt . Make a simple text file called input.txt that we can pass to our script: \"Hello World\" Once again, before submitting our job we should test it locally to ensure it runs as we expect: $ ./short_transfer.sh input.txt Start time: Tue Dec 11 10:19:12 CST 2018 Job is running on node: ap50.ux.osg-htc.org Job running as user: uid=54161(alice), gid=5782(osg) groups=5782(osg),5513(osg.login-nodes),7158(osg.OSG-Staff) Job is running in directory: /home/alice/tutorial-quickstart The command line argument is: input.txt Job number is: Contents of input.txt is \"Hello World\" Working hard total 28 drwxrwxr-x 2 alice users 34 Aug 15 09:37 Images -rw-rw-r-- 1 alice users 13 Aug 15 09:37 input.txt drwxrwxr-x 2 alice users 114 Aug 11 09:50 log -rw-r--r-- 1 alice users 13 Aug 11 10:19 output.txt -rwxrwxr-x 1 alice users 291 Aug 15 09:37 short.sh -rwxrwxr-x 1 alice users 390 Aug 11 10:18 short_transfer.sh -rw-rw-r-- 1 alice users 806 Aug 15 09:37 tutorial01.submit -rw-rw-r-- 1 alice users 547 Aug 11 09:49 tutorial02.submit -rw-rw-r-- 1 alice users 1321 Aug 15 09:37 tutorial03.submit Science complete! Now, let's edit our submit file to properly handle these new arguments and output files and save this as tutorial02.submit # We need the job to run our executable script, with the # input.txt filename as an argument, and to transfer the # relevant input file. 
executable = short_transfer.sh arguments = input.txt transfer_input_files = input.txt # output files will be transferred back automatically log = job.log error = job.error output = job.output +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1 GB request_disk = 1 GB # Queue one job with the above specifications. queue 1 Notice the added arguments = input.txt information. The arguments option specifies what arguments should be passed to the executable. The transfer_input_files options needs to be included as well. When jobs are executed on the Open Science Pool via HTCondor, they are sent only with files that are specified. Any new files generated by the job in the working directory will be returned to the Access Point. Submit the new submit file using condor_submit . Be sure to check your output files once the job completes. $ condor_submit tutorial02.submit Submitting job(s). 1 job(s) submitted to cluster 1444781. Run the commands from the previous section to check on the job in the queue, and view the outputs when the job completes.","title":"Job 2: Using Inputs and Arguments in a Job"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#job-3-submitting-multiple-jobs-at-once","text":"What do we need to do to submit several jobs simultaneously? In the first example, Condor returned three files: out, error, and log. If we want to submit several jobs, we need to track these three files for each job. An easy way to do this is to add the $(Cluster) and $(Process) macros to the HTCondor submit file. Since this can make our working directory really messy with a large number of jobs, let's tell HTCondor to put the files in a directory called log . We will also include the $(Process) value as a second argument to our script, which will cause it to give our output files unique names. If you want to try it out, you can do so like this: $ ./short_transfer.sh input.txt 12 Incorporating all these ideas, here's what the third submit file looks like, called tutorial03.submit : # For this example, we'll specify unique filenames for each job by using # the job's 'Process' value. executable = short_transfer.sh arguments = input.txt $(Process) transfer_input_files = input.txt log = log/job.$(Cluster).log error = log/job.$(Cluster).$(Process).error output = log/job.$(Cluster).$(Process).output +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1 GB request_disk = 1 GB # Let's queue ten jobs with the above specifications queue 10 Before submitting, we also need to make sure the log directory exists. $ mkdir -p log You'll see something like the following upon submission: $ condor_submit tutorial03.submit Submitting job(s).......... 10 job(s) submitted to cluster 1444786. 
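As before, you can wait for all ten jobs to finish with condor_watch_q (hold down Ctrl and press C to exit once they are done): $ condor_watch_q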
Look at the output files in the log directory and notice how each job received its own separate output file: $ ls log job.1444786.0.error job.1444786.3.error job.1444786.6.error job.1444786.9.error job.1444786.0.output job.1444786.3.output job.1444786.6.output job.1444786.9.output job.1444786.1.error job.1444786.4.error job.1444786.7.error job.1444786.log job.1444786.1.output job.1444786.4.output job.1444786.7.output job.1444786.2.error job.1444786.5.error job.1444786.8.error job.1444786.2.output job.1444786.5.output job.1444786.8.output","title":"Job 3: Submitting Multiple Jobs at Once"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#removing-jobs","text":"On occasion, jobs will need to be removed for a variety of reasons (incorrect parameters, errors in submission, etc.). In these instances, the condor_rm command can be used to remove an entire job submission or just particular jobs in a submission. The condor_rm command accepts a cluster id, a job id, or a username and will remove an entire cluster of jobs, a single job, or all the jobs belonging to a given user, respectively. For example, if a job submission generates 100 jobs and is assigned a cluster id of 103, then condor_rm 103.0 will remove the first job in the cluster. Likewise, condor_rm 103 will remove all the jobs in the job submission and condor_rm [username] will remove all jobs belonging to the user. The condor_rm documentation has more details on using condor_rm including ways to remove jobs based on other constraints.","title":"Removing Jobs"},{"location":"htc_workloads/submitting_workloads/tutorial-quickstart/#getting-your-work-running","text":"Now that you have some practice with running HTCondor jobs, consider reviewing our Getting Started Roadmap to see what next steps will get your own computational work running on the OSPool.","title":"Getting Your Work Running"},{"location":"htc_workloads/using_software/available-containers-list/","text":"Existing OSPool-Supported Containers \u00b6 This is a list of commonly used containers in the Open Science Pool. These can be used directly in your jobs or as base images if you want to define your own. Please see the pages on Apptainer containers and Docker containers for detailed instructions on how to use containers.
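As a minimal sketch of how one of the images below might be referenced from a job (the image chosen here is only an illustration, and this assumes HTCondor's container_image submit command is available on your Access Point; see the Apptainer and Docker container guides for the authoritative syntax), a submit file can point at one of the listed CVMFS locations: container_image = /cvmfs/singularity.opensciencegrid.org/htc/ubuntu:22.04 The rest of the submit file (executable, resource requests, queue statement) is written just as in the Quickstart tutorial.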
Base \u00b6 Debian 12 (htc/debian:12) Debian 12 base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__debian__12.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/debian:12 Project Website Container Definition EL 7 (htc/centos:7) Enterprise Linux (CentOS) 7 base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__centos__7.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/centos:7 Project Website Container Definition Rocky 8 (htc/rocky:8) Rocky Linux 8 base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__8.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/rocky:8 Project Website Container Definition Rocky 8 / CUDA 11.0.3 (htc/rocky:8-cuda-11.0.3) Rocky Linux 8 / CUDA 11.0.3 image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__8-cuda-11.0.3.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/rocky:8-cuda-11.0.3 Project Website Container Definition Rocky 9 (htc/rocky:9) Rocky Linux 9 base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__9.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/rocky:9 Project Website Container Definition Rocky 9 / CUDA 12.6.0 (htc/rocky:9-cuda-12.6.0) Rocky Linux 9 / CUDA 12.6.0 image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__9-cuda-12.6.0.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/rocky:9-cuda-12.6.0 Project Website Container Definition Ubuntu 20.04 (htc/ubuntu:20.04) Ubuntu 20.04 (Focal) base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__ubuntu__20.04.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/ubuntu:20.04 Project Website Container Definition Ubuntu 22.04 (htc/ubuntu:22.04) Ubuntu 22.04 (Jammy Jellyfish) base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__ubuntu__22.04.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/ubuntu:22.04 Project Website Container Definition Ubuntu 24.04 (htc/ubuntu:24.04) Ubuntu 24.04 (Noble Numbat) base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__ubuntu__24.04.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/ubuntu:24.04 Project Website Container Definition AI \u00b6 Tensorflow 2.15 (htc/tensorflow:2.15) Tensorflow image from the Tensorflow project, with OSG additions OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__tensorflow__2.15.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/tensorflow:2.15 Project Website Container Definition scikit-learn:1.3.2 (htc/scikit-learn:1.3) scikit-learn, configured for execution on OSG OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__scikit-learn__1.3.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/scikit-learn:1.3 Project Website Container Definition Languages \u00b6 Julia (opensciencegrid/osgvo-julia) Ubuntu based image with Julia OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__1.0.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__1.5.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__1.7.3.sif
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:1.0.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:1.5.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:1.7.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:latest Project Website Container Definition Julia (m8zeng/julia-packages) Ubuntu based image with Julia OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/m8zeng__julia-packages__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/m8zeng/julia-packages:latest Project Website Container Definition Matlab Runtime (opensciencegrid/osgvo-matlab-runtime) This is the Matlab runtime component you can use to execute compiled Matlab codes OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2018b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2019a.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2019b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2020a.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2020b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2021b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2022b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2023a.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2018b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2019a /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2019b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020a /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2021b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2022b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2023a Project Website Container Definition Matlab Runtime (htc/matlab-runtime:R2023a) This is the Matlab runtime component you can use to execute compiled Matlab codes OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__matlab-runtime__R2023a.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/matlab-runtime:R2023a Project Website Container Definition R (opensciencegrid/osgvo-r) Example for building R images OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-r__3.5.0.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-r__4.0.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-r__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:4.0.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:latest Project Website Container Definition R (clkwisconsin/spacetimer) Example for building R images OSDF 
Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/clkwisconsin__spacetimer__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/clkwisconsin/spacetimer:latest Project Website Container Definition Project \u00b6 XENONnT (opensciencegrid/osgvo-xenon) Base software environment for XENONnT, including Python 3.6 and data management tools OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.11.06.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.11.25.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.12.21.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.12.23.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.01.04.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.01.06.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.01.11.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.04.18.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.05.04.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.06.25.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.07.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.08.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.08.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.12.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.12.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.12.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.01.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.01.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.01.4.sif 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.04.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.04.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.04.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.05.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.05.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.07.27.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.09.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.11.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__add_latex.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__gpu.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__latex_test3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__py38.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__stable.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__straxen_0-13-1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__straxen_v100.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__switch_deployhq_user.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__upgrade-boost.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.11.06 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.11.25 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.12.21 
/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.12.23 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.01.04 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.01.06 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.01.11 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.04.18 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.05.04 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.06.25 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.07.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.08.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.08.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.5 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.5 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.6 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.12.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.12.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.12.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.01.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.01.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.01.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.5 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.04.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.04.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.04.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.05.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.05.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.5 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.6 
/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.07.27 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.09.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.11.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:add_latex /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:gpu /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:latex_test3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:py38 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:stable /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:straxen_0-13-1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:straxen_v100 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:switch_deployhq_user /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:upgrade-boost Project Website Container Definition XENONnT (xenonnt/base-environment) Base software environment for XENONnT, including Python 3.6 and data management tools OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.11.06.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.11.25.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.12.21.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.12.23.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.12.24.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.01.04.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.01.06.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.01.11.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.04.18.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.05.04.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.06.25.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.07.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.08.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.08.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.4.sif 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.12.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.12.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.12.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.01.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.01.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.01.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.04.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.04.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.04.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.05.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.05.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.07.27.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.09.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.11.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__add_latex.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__gpu.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__latex_test3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__py38.sif 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__stable.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__straxen_v100.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__switch_deployhq_user.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__testing.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__upgrade-boost.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.11.06 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.11.25 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.12.21 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.12.23 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.12.24 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.01.04 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.01.06 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.01.11 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.04.18 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.05.04 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.06.25 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.07.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.08.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.08.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.5 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.5 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.6 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.12.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.12.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.12.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.01.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.01.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.01.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.5 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.04.1 
/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.04.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.04.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.05.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.05.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.5 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.6 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.07.27 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.09.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.11.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:add_latex /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:gpu /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:latex_test3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:py38 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:stable /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:straxen_v100 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:switch_deployhq_user /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:testing /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:upgrade-boost Project Website Container Definition XENONnT (xenonnt/osg_dev) Base software environment for XENONnT, including Python 3.6 and data management tools OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__osg_dev__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/xenonnt/osg_dev:latest Project Website Container Definition Tools \u00b6 DeepLabCut 3.0.0rc3 (htc/deeplabcut:3.0.0rc4) A software package for animal pose estimation OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__deeplabcut__3.0.0rc4.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/deeplabcut:3.0.0rc4 Project Website Container Definition FreeSurfer (opensciencegrid/osgvo-freesurfer) A software package for the analysis and visualization of structural and functional neuroimaging data from cross-sectional or longitudinal studies OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__6.0.0.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__6.0.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__7.0.0.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__7.1.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:6.0.0 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:6.0.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:7.0.0 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:7.1.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:latest Project Website Container Definition GROMACS 
(opensciencegrid/osgvo-gromacs) A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs__2018.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs__2020.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs:2018.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs:2020.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs:latest Project Website Container Definition GROMACS GPU (opensciencegrid/osgvo-gromacs-gpu) A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a GPU enabled version. OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs-gpu__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs-gpu:latest Project Website Container Definition Gromacs 2023.4 (htc/gromacs:2023.4) Gromacs 2023.4 for use on OSG OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__gromacs__2023.4.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/gromacs:2023.4 Project Website Container Definition Gromacs 2024.2 (htc/gromacs:2024.2) Gromacs 2024.2 for use on OSG OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__gromacs__2024.2.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/gromacs:2024.2 Project Website Container Definition Minimal (htc/minimal:0) Minimal image - used for testing OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__minimal__0.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/minimal:0 Project Website Container Definition PyTorch 2.3.1 (htc/pytorch:2.3.1-cuda11.8) A rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, NLP and more. OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__pytorch__2.3.1-cuda11.8.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/pytorch:2.3.1-cuda11.8 Project Website Container Definition Quantum Espresso (opensciencegrid/osgvo-quantum-espresso) A suite for first-principles electronic-structure calculations and materials modeling OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-quantum-espresso__6.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-quantum-espresso__6.8.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-quantum-espresso:6.6 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-quantum-espresso:6.8 Project Website Container Definition RASPA2 (opensciencegrid/osgvo-raspa2) General purpose classical simulation package. It can be used for the simulation of molecules in gases, fluids, zeolites, aluminosilicates, metal-organic frameworks, carbon nanotubes and external fields. 
OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-raspa2__2.0.41.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-raspa2:2.0.41 Project Website Container Definition TensorFlow (opensciencegrid/tensorflow) TensorFlow image (CPU only) OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow__2.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:latest Project Website Container Definition TensorFlow (rynge/tensorflow-cowsay) TensorFlow image (CPU only) OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/rynge__tensorflow-cowsay__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/rynge/tensorflow-cowsay:latest Project Website Container Definition TensorFlow (jiahe58/tensorflow) TensorFlow image (CPU only) OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/jiahe58__tensorflow__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/jiahe58/tensorflow:latest Project Website Container Definition TensorFlow GPU (opensciencegrid/tensorflow-gpu) TensorFlow image with GPU support OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow-gpu__2.2-cuda-10.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow-gpu__2.3-cuda-10.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow-gpu__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:2.2-cuda-10.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:2.3-cuda-10.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:latest Project Website Container Definition TensorFlow GPU (efajardo/astroflow) TensorFlow image with GPU support OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/efajardo__astroflow__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/efajardo/astroflow:latest Project Website Container Definition TensorFlow GPU (ssrujanaa/catsanddogs) TensorFlow image with GPU support OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/ssrujanaa__catsanddogs__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/ssrujanaa/catsanddogs:latest Project Website Container Definition TensorFlow GPU (weiphy/skopt) TensorFlow image with GPU support OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/weiphy__skopt__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/weiphy/skopt:latest Project Website Container Definition","title":"Containers - Predefined List"},{"location":"htc_workloads/using_software/available-containers-list/#existing-ospool-supported-containers","text":"This is a list of commonly used containers in the Open Science Pool. These can be used directly in your jobs or as base images if you want to define your own.
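For example, a job can be pointed at one of the CVMFS paths from this list directly in its HTCondor submit file. The snippet below is only a minimal sketch: the Ubuntu 22.04 image is used purely as an illustration, my_script.sh is a placeholder for your own executable, and the container guides referenced next give the authoritative syntax.
container_image = /cvmfs/singularity.opensciencegrid.org/htc/ubuntu:22.04
executable = my_script.sh
queue 1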
Please see the pages on Apptainer containers and Docker containers for detailed instructions on how to use containers.","title":"Existing OSPool-Supported Containers"},{"location":"htc_workloads/using_software/available-containers-list/#base","text":"Debian 12 (htc/debian:12) Debian 12 base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__debian__12.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/debian:12 Project Website Container Definition EL 7 (htc/centos:7) Enterprise Linux (CentOS) 7 base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__centos__7.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/centos:7 Project Website Container Definition Rocky 8 (htc/rocky:8) Rocky Linux 8 base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__8.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/rocky:8 Project Website Container Definition Rocky 8 / CUDA 11.0.3 (htc/rocky:8-cuda-11.0.3) Rocky Linux 8 / CUDA 11.0.3 image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__8-cuda-11.0.3.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/rocky:8-cuda-11.0.3 Project Website Container Definition Rocky 9 (htc/rocky:9) Rocky Linux 9 base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__9.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/rocky:9 Project Website Container Definition Rocky 9 / CUDA 12.6.0 (htc/rocky:9-cuda-12.6.0) Rocky Linux 9 / CUDA 12.6.0 image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__rocky__9-cuda-12.6.0.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/rocky:9-cuda-12.6.0 Project Website Container Definition Ubuntu 20.04 (htc/ubuntu:20.04) Ubuntu 20.04 (Focal) base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__ubuntu__20.04.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/ubuntu:20.04 Project Website Container Definition Ubuntu 22.04 (htc/ubuntu:22.04) Ubuntu 22.04 (Jammy Jellyfish) base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__ubuntu__22.04.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/ubuntu:22.04 Project Website Container Definition Ubuntu 24.04 (htc/ubuntu:24.04) Ubuntu 24.04 (Noble Numbat) base image OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__ubuntu__24.04.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/ubuntu:24.04 Project Website Container Definition","title":"Base"},{"location":"htc_workloads/using_software/available-containers-list/#ai","text":"Tensorflow 2.15 (htc/tensorflow:2.15) Tensorflow image from the Tensorflow project, with OSG additions OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__tensorflow__2.15.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/tensorflow:2.15 Project Website Container Definition scikit-learn:1.3.2 (htc/scikit-learn:1.3) scikit-learn, configured for execution on OSG OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__scikit-learn__1.3.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/scikit-learn:1.3 Project Website Container Definition","title":"AI"},{"location":"htc_workloads/using_software/available-containers-list/#languages","text":"Julia
(opensciencegrid/osgvo-julia) Ubuntu based image with Julia OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__1.0.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__1.5.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__1.7.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-julia__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:1.0.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:1.5.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:1.7.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-julia:latest Project Website Container Definition Julia (m8zeng/julia-packages) Ubuntu based image with Julia OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/m8zeng__julia-packages__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/m8zeng/julia-packages:latest Project Website Container Definition Matlab Runtime (opensciencegrid/osgvo-matlab-runtime) This is the Matlab runtime component you can use to execute compiled Matlab codes OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2018b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2019a.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2019b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2020a.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2020b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2021b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2022b.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-matlab-runtime__R2023a.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2018b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2019a /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2019b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020a /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2021b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2022b /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2023a Project Website Container Definition Matlab Runtime (htc/matlab-runtime:R2023a) This is the Matlab runtime component you can use to execute compiled Matlab codes OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__matlab-runtime__R2023a.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/matlab-runtime:R2023a Project Website Container Definition R (opensciencegrid/osgvo-r) Example for building R images OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-r__3.5.0.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-r__4.0.2.sif 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-r__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:4.0.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:latest Project Website Container Definition R (clkwisconsin/spacetimer) Example for building R images OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/clkwisconsin__spacetimer__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/clkwisconsin/spacetimer:latest Project Website Container Definition","title":"Languages"},{"location":"htc_workloads/using_software/available-containers-list/#project","text":"XENONnT (opensciencegrid/osgvo-xenon) Base software environment for XENONnT, including Python 3.6 and data management tools OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.11.06.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.11.25.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.12.21.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2020.12.23.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.01.04.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.01.06.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.01.11.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.04.18.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.05.04.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.06.25.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.07.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.08.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.08.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.10.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.11.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.12.1.sif 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.12.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2021.12.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.01.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.01.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.01.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.02.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.03.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.04.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.04.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.04.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.05.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.05.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.06.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.07.27.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.09.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__2022.11.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__add_latex.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__gpu.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__latex_test3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__py38.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__stable.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__straxen_0-13-1.sif 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__straxen_v100.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__switch_deployhq_user.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-xenon__upgrade-boost.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.11.06 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.11.25 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.12.21 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2020.12.23 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.01.04 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.01.06 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.01.11 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.04.18 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.05.04 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.06.25 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.07.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.08.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.08.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.10.5 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.5 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.11.6 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.12.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.12.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2021.12.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.01.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.01.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.01.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.02.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.03.5 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.04.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.04.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.04.3 
/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.05.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.05.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.5 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.06.6 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.07.27 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.09.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:2022.11.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:add_latex /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:gpu /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:latex_test3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:py38 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:stable /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:straxen_0-13-1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:straxen_v100 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:switch_deployhq_user /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-xenon:upgrade-boost Project Website Container Definition XENONnT (xenonnt/base-environment) Base software environment for XENONnT, including Python 3.6 and data management tools OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.11.06.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.11.25.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.12.21.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.12.23.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2020.12.24.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.01.04.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.01.06.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.01.11.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.04.18.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.05.04.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.06.25.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.07.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.08.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.08.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.3.sif 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.10.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.11.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.12.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.12.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2021.12.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.01.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.01.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.01.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.02.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.03.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.04.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.04.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.04.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.05.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.05.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.5.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.06.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.07.27.sif 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.09.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__2022.11.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__add_latex.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__gpu.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__latex_test3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__py38.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__stable.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__straxen_v100.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__switch_deployhq_user.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__testing.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__base-environment__upgrade-boost.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.11.06 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.11.25 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.12.21 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.12.23 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2020.12.24 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.01.04 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.01.06 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.01.11 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.04.18 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.05.04 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.06.25 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.07.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.08.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.08.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.10.5 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.5 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.11.6 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.12.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.12.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2021.12.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.01.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.01.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.01.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.1 
/cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.02.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.03.5 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.04.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.04.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.04.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.05.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.05.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.2 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.4 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.5 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.06.6 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.07.27 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.09.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:2022.11.1 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:add_latex /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:gpu /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:latex_test3 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:py38 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:stable /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:straxen_v100 /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:switch_deployhq_user /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:testing /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:upgrade-boost Project Website Container Definition XENONnT (xenonnt/osg_dev) Base software environment for XENONnT, including Python 3.6 and data management tools OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/xenonnt__osg_dev__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/xenonnt/osg_dev:latest Project Website Container Definition","title":"Project"},{"location":"htc_workloads/using_software/available-containers-list/#tools","text":"DeepLabCut 3.0.0rc3 (htc/deeplabcut:3.0.0rc4) A software package for animal pose estimation OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__deeplabcut__3.0.0rc4.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/deeplabcut:3.0.0rc4 Project Website Container Definition FreeSurfer (opensciencegrid/osgvo-freesurfer) A software package for the analysis and visualization of structural and functional neuroimaging data from cross-sectional or longitudinal studies OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__6.0.0.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__6.0.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__7.0.0.sif 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__7.1.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-freesurfer__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:6.0.0 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:6.0.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:7.0.0 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:7.1.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:latest Project Website Container Definition GROMACS (opensciencegrid/osgvo-gromacs) A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs__2018.4.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs__2020.2.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs:2018.4 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs:2020.2 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs:latest Project Website Container Definition GROMACS GPU (opensciencegrid/osgvo-gromacs-gpu) A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a GPU enabled version. OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-gromacs-gpu__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-gromacs-gpu:latest Project Website Container Definition Gromacs 2023.4 (htc/gromacs:2023.4) Gromacs 2023.4 for use on OSG OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__gromacs__2023.4.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/gromacs:2023.4 Project Website Container Definition Gromacs 2024.2 (htc/gromacs:2024.2) Gromacs 2024.2 for use on OSG OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__gromacs__2024.2.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/gromacs:2024.2 Project Website Container Definition Minimal (htc/minimal:0) Minimal image - used for testing OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__minimal__0.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/minimal:0 Project Website Container Definition PyTorch 2.3.1 (htc/pytorch:2.3.1-cuda11.8) A rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, NLP and more. 
OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/htc__pytorch__2.3.1-cuda11.8.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/htc/pytorch:2.3.1-cuda11.8 Project Website Container Definition Quantum Espresso (opensciencegrid/osgvo-quantum-espresso) A suite for first-principles electronic-structure calculations and materials modeling OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-quantum-espresso__6.6.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-quantum-espresso__6.8.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-quantum-espresso:6.6 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-quantum-espresso:6.8 Project Website Container Definition RASPA2 (opensciencegrid/osgvo-raspa2) General purpose classical simulation package. It can be used for the simulation of molecules in gases, fluids, zeolites, aluminosilicates, metal-organic frameworks, carbon nanotubes and external fields. OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__osgvo-raspa2__2.0.41.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-raspa2:2.0.41 Project Website Container Definition TensorFlow (opensciencegrid/tensorflow) TensorFlow image (CPU only) OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow__2.3.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:latest Project Website Container Definition TensorFlow (rynge/tensorflow-cowsay) TensorFlow image (CPU only) OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/rynge__tensorflow-cowsay__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/rynge/tensorflow-cowsay:latest Project Website Container Definition TensorFlow (jiahe58/tensorflow) TensorFlow image (CPU only) OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/jiahe58__tensorflow__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/jiahe58/tensorflow:latest Project Website Container Definition TensorFlow GPU (opensciencegrid/tensorflow-gpu) TensorFlow image with GPU support OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow-gpu__2.2-cuda-10.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow-gpu__2.3-cuda-10.1.sif osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/opensciencegrid__tensorflow-gpu__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:2.2-cuda-10.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:2.3-cuda-10.1 /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:latest Project Website Container Definition TensorFlow GPU (efajardo/astroflow) TensorFlow image with GPU support OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/efajardo__astroflow__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/efajardo/astroflow:latest Project Website Container Definition TensorFlow GPU (ssrujanaa/catsanddogs) TensorFlow image with GPU support OSDF Locations: 
osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/ssrujanaa__catsanddogs__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/ssrujanaa/catsanddogs:latest Project Website Container Definition TensorFlow GPU (weiphy/skopt) TensorFlow image with GPU support OSDF Locations: osdf:///ospool/uc-shared/public/OSG-Staff/images/repo/x86_64/weiphy__skopt__latest.sif CVMFS Locations: /cvmfs/singularity.opensciencegrid.org/weiphy/skopt:latest Project Website Container Definition","title":"Tools"},{"location":"htc_workloads/using_software/compiling-applications/","text":"Compiling Software \u00b6 Introduction \u00b6 Due to the distributed nature of the Open Science Pool, you will always need to ensure that your jobs have access to the software that will be executed. You have two options for using code on the OSG \u2013 transferring the code files by themselves, or putting the code files into a container. Sometimes code is already compiled and offered as a direct executable for the UNIX or Linux system. Those types of software can be directly used on the OSPool. If your software is dependent on different library functions and does not have a make or install command, consider using containers. To learn more about containers please follow the instructions on our container guide. If your code is written in C or C++ and has build instructions that use make, this guide will help you. Moreover, this guide provides general information for compiling and using your software in the OSPool. A detailed example of a specific software compilation process is additionally available at Example Compilation Guide . What is compiling? The process of compiling converts human-readable code into binary, machine-readable code that will execute the steps of the program. Get software source code \u00b6 The first step to compiling your software is to locate and download the source code, being sure to select the version that you want. Source code will often be made available as a compressed tar archive which will need to be extracted before compilation. You should also carefully review the installation instructions provided by the software developers. The installation instructions should include important information regarding various options for configuring and performing the compilation. Also carefully note any system dependencies (hardware, other software, and libraries) that are required for your software. Select the appropriate compiler and compilation options \u00b6 A compiler is a program that is used to perform source code compilation. The GNU Compiler Collection (GCC) is a common, open source collection of compilers with support for C, C++, Fortran, and other languages, and includes important libraries for supporting your compilation and sometimes software execution. Your software compilation may require certain versions of a compiler, which should be noted in the installation instructions or system dependencies documentation. Currently the Access Points have GCC 8.5.0 as the default version, but newer versions of GCC may also be available - to learn more please contact support@osg-htc.org . Static versus dynamic linking during compilation \u00b6 Binary code often depends on additional information (i.e. instructions) from other software, known as libraries, for proper execution.
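You can see which shared libraries an existing binary expects with the standard ldd tool, available on essentially any Linux system; my_program below is just a placeholder name:
$ ldd ./my_program    # prints the shared libraries the binary will try to load at run time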
The default behavior when compiling is for the final binary to be \"dynamically linked\" to libraries that it depends on, such that when the binary is executed, it will look for these library files on the system that it is running on. Thus a copy of the appropriate library files will need to be available to your software wherever it runs. OSPool users can transfer a copy of the necessary libraries along with their jobs to manage such dependencies if they are not provided by the execute node that your jobs run on. However, the option exists to \"statically link\" the library dependencies of your software. By statically linking libraries during compilation, the library code will be directly packaged with your software binary, meaning the libraries will always be available to your software, allowing it to run on more execute nodes. To statically link libraries during compilation, use the -static flag when running gcc , use --enable-static when running a configure script, or set your LD_FLAGS environment variable to --enable-static (e.g. export LD_FLAGS=\"--enable-static\" ). Get access to libraries needed for your software \u00b6 As described above, your software may require additional software, known as libraries, for compilation and execution. For greatest portability of your software, we recommend installing the libraries needed for your software and transferring a copy of the libraries along with your subsequent jobs. When using libraries that you have installed yourself, you will likely need to add these libraries to your LIBRARY_PATH environment variable before compiling your software. There may also be additional environment variables that will need to be defined or modified for software compilation; this information should be provided in the installation instructions of your software. For any libraries added to LIBRARY_PATH before software compilation, you'll also need to add these same libraries to your LD_LIBRARY_PATH as a step in your job's executable bash script before executing your software. Perform your compilation \u00b6 Software compilation is easiest to perform interactively, and OSPool users are welcome to compile software directly on their assigned Access Point. This will ensure that your application is built in an environment that is similar to the majority of the compute nodes on OSG. Because OSG Access Points currently use the Alma/CentOS Linux 8 operating system (which is similar to the more general Red Hat Enterprise Linux, or RHEL, distribution), your software will generally only be compatible for execution on the same or similar operating systems. You can use the requirements statement of your HTCondor submit file to direct your jobs to execute nodes with specific operating systems, for instance: requirements = (OSGVO_OS_STRING == \"RHEL 9\") Software installation typically includes three steps: 1.) configuration, 2.) compilation, and 3.) \"installation\", which places the compiled code in a specific location. In most cases, these steps will be achieved with the following commands: ./configure make make install Most software is written to install to a default location; however, your OSG Access Point account is not authorized to write to these default system locations.
Instead, you will want to create a folder for your software installation in your home directory and use an option in the configuration step that will install the software to this folder: ./configure --prefix=/home/username/path where username should be replaced with your OSG username and path replaced with the path to the directory you created for your software installation. Watch out for hardware feature detection \u00b6 Some software builds might try to optimize the software for the particular host you are building on. In general this is a good idea (optimized code will perform better), but be aware that not all execution endpoints on OSG are the same. If your software picks up hardware features such as AVX/AVX2, you might have to ensure the jobs are running on hardware with those features. For example, if your software requires AVX2: requirements = (OSGVO_OS_STRING == \"RHEL 9\") && (HAS_AVX2 == True) Please see Control Where Your Jobs Run / Job Requirements Use Your Software \u00b6 When submitting jobs, you will need to transfer a copy of your compiled software, and any dynamically-linked dependencies that you also installed. Our Introduction to Data Management on OSG guide is a good starting point for more information on selecting the appropriate methods for transferring your software. Depending on your job workflow, it may be possible to directly specify your executable binary as the executable in your HTCondor submit file. When using your software in subsequent job submissions, be sure to add additional commands to the executable bash script to define environment variables, like for instance LD_LIBRARY_PATH , that may be needed to properly execute your software. Get Additional Assistance \u00b6 If you have questions or need assistance, please contact support@osg-htc.org .","title":"Compiling Software"},{"location":"htc_workloads/using_software/compiling-applications/#compiling-software","text":"","title":"Compiling Software"},{"location":"htc_workloads/using_software/compiling-applications/#introduction","text":"Due to the distributed nature of the Open Science Pool, you will always need to ensure that your jobs have access to the software that will be executed. You have two options for using code on the OSG \u2013 transferring the code files by themselves, or putting the code files into a container. Sometimes code is already compiled and offered as a direct executable for the UNIX or Linux system. Those types of software can be directly used on the OSPool. If your software is dependent on different library functions and does not have a make or install command, consider using containers. To learn more about containers please follow the instructions on our container guide. If your code is written in C or C++ and has build instructions that use make, this guide will help you. Moreover, this guide provides general information for compiling and using your software in the OSPool. A detailed example of a specific software compilation process is additionally available at Example Compilation Guide . What is compiling? The process of compiling converts human-readable code into binary, machine-readable code that will execute the steps of the program.","title":"Introduction"},{"location":"htc_workloads/using_software/compiling-applications/#get-software-source-code","text":"The first step to compiling your software is to locate and download the source code, being sure to select the version that you want.
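For example, downloading and unpacking a source release typically looks like the following; the URL, file name, and version here are placeholders rather than a real package:
$ wget https://example.org/downloads/mysoftware-1.2.3.tar.gz    # download the source archive
$ tar -xzf mysoftware-1.2.3.tar.gz                              # extract the compressed tar archive
$ cd mysoftware-1.2.3                                           # move into the extracted source directory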
Source code will often be made available as a compressed tar archive which will need to be extracted before compilation. You should also carefully review the installation instructions provided by the software developers. The installation instructions should include important information regarding various options for configuring and performing the compilation. Also carefully note any system dependencies (hardware, other software, and libraries) that are required for your software.","title":"Get software source code"},{"location":"htc_workloads/using_software/compiling-applications/#select-the-appropriate-compiler-and-compilation-options","text":"A compiler is a program that is used to perform source code compilation. The GNU Compiler Collection (GCC) is a common, open source collection of compilers with support for C, C++, Fortran, and other languages, and includes important libraries for supporting your compilation and sometimes software execution. Your software compilation may require certain versions of a compiler, which should be noted in the installation instructions or system dependencies documentation. Currently the Access Points have GCC 8.5.0 as the default version, but newer versions of GCC may also be available - to learn more please contact support@osg-htc.org .","title":"Select the appropriate compiler and compilation options"},{"location":"htc_workloads/using_software/compiling-applications/#static-versus-dynamic-linking-during-compilation","text":"Binary code often depends on additional information (i.e. instructions) from other software, known as libraries, for proper execution. The default behavior when compiling is for the final binary to be \"dynamically linked\" to libraries that it depends on, such that when the binary is executed, it will look for these library files on the system that it is running on. Thus a copy of the appropriate library files will need to be available to your software wherever it runs. OSPool users can transfer a copy of the necessary libraries along with their jobs to manage such dependencies if they are not provided by the execute node that your jobs run on. However, the option exists to \"statically link\" the library dependencies of your software. By statically linking libraries during compilation, the library code will be directly packaged with your software binary, meaning the libraries will always be available to your software, allowing it to run on more execute nodes. To statically link libraries during compilation, use the -static flag when running gcc , use --enable-static when running a configure script, or set your LD_FLAGS environment variable to --enable-static (e.g. export LD_FLAGS=\"--enable-static\" ).","title":"Static versus dynamic linking during compilation"},{"location":"htc_workloads/using_software/compiling-applications/#get-access-to-libraries-needed-for-your-software","text":"As described above, your software may require additional software, known as libraries, for compilation and execution. For greatest portability of your software, we recommend installing the libraries needed for your software and transferring a copy of the libraries along with your subsequent jobs. When using libraries that you have installed yourself, you will likely need to add these libraries to your LIBRARY_PATH environment variable before compiling your software.
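For instance, if you installed a dependency into a directory of your own, a minimal sketch of what you might run before compiling is shown below; the path is hypothetical, and the second line is only needed if the compiler also has to find the library's header files:
$ export LIBRARY_PATH=$HOME/my-libs/lib:$LIBRARY_PATH    # let the linker find your installed library at build time
$ export CPATH=$HOME/my-libs/include:$CPATH              # optional: let the compiler find the library's headers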
There may also be additional environment variables that will need to be defined or modified for software compilation; this information should be provided in the installation instructions of your software. For any libraries added to LIBRARY_PATH before software compilation, you'll also need to add these same libraries to your LD_LIBRARY_PATH as a step in your job's executable bash script before executing your software.","title":"Get access to libraries needed for your software"},{"location":"htc_workloads/using_software/compiling-applications/#perform-your-compilation","text":"Software compilation is easiest to perform interactively, and OSPool users are welcome to compile software directly on their assigned Access Point. This will ensure that your application is built in an environment that is similar to the majority of the compute nodes on OSG. Because OSG Access Points currently use the Alma/CentOS Linux 8 operating system (which is similar to the more general Red Hat Enterprise Linux, or RHEL, distribution), your software will generally only be compatible for execution on the same or similar operating systems. You can use the requirements statement of your HTCondor submit file to direct your jobs to execute nodes with specific operating systems, for instance: requirements = (OSGVO_OS_STRING == \"RHEL 9\") Software installation typically includes three steps: 1.) configuration, 2.) compilation, and 3.) \"installation\", which places the compiled code in a specific location. In most cases, these steps will be achieved with the following commands: ./configure make make install Most software is written to install to a default location; however, your OSG Access Point account is not authorized to write to these default system locations. Instead, you will want to create a folder for your software installation in your home directory and use an option in the configuration step that will install the software to this folder: ./configure --prefix=/home/username/path where username should be replaced with your OSG username and path replaced with the path to the directory you created for your software installation.","title":"Perform your compilation"},{"location":"htc_workloads/using_software/compiling-applications/#watch-out-for-hardware-feature-detection","text":"Some software builds might try to optimize the software for the particular host you are building on. In general this is a good idea (optimized code will perform better), but be aware that not all execution endpoints on OSG are the same. If your software picks up hardware features such as AVX/AVX2, you might have to ensure the jobs are running on hardware with those features. For example, if your software requires AVX2: requirements = (OSGVO_OS_STRING == \"RHEL 9\") && (HAS_AVX2 == True) Please see Control Where Your Jobs Run / Job Requirements","title":"Watch out for hardware feature detection"},{"location":"htc_workloads/using_software/compiling-applications/#use-your-software","text":"When submitting jobs, you will need to transfer a copy of your compiled software, and any dynamically-linked dependencies that you also installed. Our Introduction to Data Management on OSG guide is a good starting point for more information on selecting the appropriate methods for transferring your software. Depending on your job workflow, it may be possible to directly specify your executable binary as the executable in your HTCondor submit file.
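As a rough sketch, a submit file for a compiled binary might include the lines below; all file and directory names are hypothetical, and my-libs stands for a directory of libraries you installed yourself and are shipping with the job:
executable = run_my_program.sh
transfer_input_files = my_program, my-libs
where run_my_program.sh is a small wrapper script along the lines of:
#!/bin/bash
export LD_LIBRARY_PATH=$PWD/my-libs/lib:$LD_LIBRARY_PATH    # make the transferred libraries visible at run time
./my_program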
When using your software in subsequent job submissions, be sure to add additional commands to the executable bash script to define environment variables, such as LD_LIBRARY_PATH , that may be needed to properly execute your software.","title":"Use Your Software"},{"location":"htc_workloads/using_software/compiling-applications/#get-additional-assistance","text":"If you have questions or need assistance, please contact support@osg-htc.org .","title":"Get Additional Assistance"},{"location":"htc_workloads/using_software/containers-docker/","text":"Containers - Docker \u00b6 The OSPool uses Apptainer/Singularity to execute containers. It is recommended that if you are building your own custom container, you use the Apptainer/Singularity image definition format . However, Docker images can also be used on the OSPool and a Docker image is sometimes the more appropriate choice. For example: There is an existing image on Docker Hub You found a Dockerfile which meets your requirements You have Docker installed on your own machine and want to develop the code/image locally before using it on the OSPool This guide contains examples of how to build your own Docker image, how to convert a Docker image to Apptainer/Singularity, and how to import a Docker image from Docker Hub. Building Your Own Docker Image \u00b6 If you already have an existing Docker container image, skip to Preparing Docker Containers for HTCondor Jobs . Otherwise, continue reading. Identify Components \u00b6 What software do you want to install? Make sure that you have either the source code or a command that can be used to install it through Linux (like apt-get or yum ). You'll also need to choose a \"base\" container, on which to add your particular software or tools. Building \u00b6 There are two main methods for generating your own container image. Editing the Dockerfile Editing the default image using local Docker We recommend the first option, as it is more reproducible, but the second option can be useful for troubleshooting or especially tricky installs. Dockerfile \u00b6 Create a folder on your computer and inside it, create a blank text file called Dockerfile . The first line of this file should include the keyword FROM and then the name of a Docker image (from Docker Hub) you want to use as your starting point. If using the OSG's Ubuntu 22.04 image that would look like this: FROM hub.opensciencegrid.org/htc/ubuntu:22.04 Then, for each command you want to run to add libraries or software, use the keyword RUN and then the command. Sometimes it makes sense to string commands together using the && operator and line breaks \\ , like so: RUN apt-get update -y && \\ apt-get install -y build-essential or RUN wget https://cran.r-project.org/src/base/R-3/R-3.6.0.tar.gz && \\ tar -xzf R-3.6.0.tar.gz && \\ cd R-3.6.0 && \\ ./configure && \\ make && \\ make install Typically it's good to group together commands installing the same kind of thing (system libraries, or software packages, or an installation process) under one RUN command, and then have multiple RUN commands, one for each of the different types of software or package you're installing. (For all the possible Dockerfile keywords, see the Docker Documentation ) Once your Dockerfile is ready, you can \"build\" the container image by running this command: $ docker build -t namespace/repository_name . Note that the naming convention for Docker images is your Docker Hub username and then a name you choose for that particular container image.
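Before returning to naming, here is a minimal Dockerfile that puts the pieces above together; the packages installed are only illustrative and should be replaced with whatever your own software needs:

FROM hub.opensciencegrid.org/htc/ubuntu:22.04
# system packages commonly needed to build scientific software
RUN apt-get update -y && \
    apt-get install -y build-essential wget
# add further RUN lines here for your own installation steps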
So if my Docker Hub username is alice and I created an image with the NCBI BLAST tool, I might use this name: $ docker build -t alice/ncbi-blast . Editing an Image Interactively \u00b6 You can also build an image interactively, without a Dockerfile. First, get the desired starting image from Docker Hub. Again, we will look at the OSG Ubuntu 22.04 image. $ docker pull hub.opensciencegrid.org/htc/ubuntu:22.04 We will run the image in a Docker interactive session, giving the session a name of our choosing (here my-session ): $ docker run -it --name my-session hub.opensciencegrid.org/htc/ubuntu:22.04 /bin/bash Giving the session a name is important because it will make it easier to reattach the session later and commit the changes later on. Now you will be greeted by a new command line prompt that will look something like this [root@740b9db736a1 /]# You can now install the software that you need through the default package manager, in this case apt-get . [root@740b9db736a1 /]# apt-get install build-essential Once you have installed all the software, you simply exit [root@740b9db736a1 /]# exit Now you can commit the changes to the image and give it a name: docker commit my-session namespace/repository_name You can also use the session's hash as found in the command prompt ( 740b9db736a1 in the above example) in place of the docker session name. Preparing Docker Containers for HTCondor Jobs \u00b6 Once you have a Docker container image, whether created by you or found on Docker Hub, you should convert it to the \"sif\" image format for the best experience on the OSPool. Convert Docker containers on Docker Hub or online \u00b6 If the Docker container you want to use is online, on a site like Docker Hub, you can log in to your Access Point and run a single command to convert it to a .sif image: $ apptainer build my-container.sif docker://owner/repository:tag Where the path at the end of the command is customized to be the container image you want to use. Convert Docker containers on your computer \u00b6 If you have built a Docker image on your own host, you can save it as a tar file and then convert it to an Apptainer/Singularity SIF image. First find the image id: $ docker image list REPOSITORY IMAGE ID awesome/science f1e7972c55bc Using the image id, save the image to a tar file: $ docker save f1e7972c55bc -o my-container.tar Transfer my-container.tar to the OSPool access point, and use Apptainer to convert it to a SIF image: $ apptainer build my-container.sif docker-archive://my-container.tar Using Containers in HTCondor Jobs \u00b6 After converting the Docker image to a sif format, you can use the image in your job as described in the Apptainer/Singularity Guide . Special Cases \u00b6 ENTRYPOINT and ENV \u00b6 Two options that can be used in the Dockerfile to set the environment or default command are ENTRYPOINT and ENV . Unfortunately, both of these aspects of the Docker container are deleted when it is converted to a Singularity image in the Open Science Pool. Apptainer/Singularity Environment \u00b6 One approach for setting up the environment for an image which will be converted to Apptainer/Singularity is to put a file under /.singularity.d/env/ . These files will be sourced when the container gets instantiated. For example, if you have a Conda environment, add this to the end of your Dockerfile: # set up environment for when using the container, this is for when # we invoke the container with Apptainer/Singularity RUN mkdir -p /.singularity.d/env && \\ echo \".
/opt/conda/etc/profile.d/conda.sh\" >>/.singularity.d/env/91-environment.sh && \\ echo \"conda activate\" >>/.singularity.d/env/91-environment.sh","title":"Containers - Docker"},{"location":"htc_workloads/using_software/containers-docker/#containers-docker","text":"The OSPool is using Apptainer/Singularity to execute containers. It is recommended that if you are building your own custom container, you use the Apptainer/Singularity image defintion format . However, Docker images can also be used on the OSPool and a Docker image is sometimes the more appropriate choice. For example: There is an existing image on Docker Hub You found a Dockerfile which meets your requirements You have Docker installed on your own machine and want to develop the code/image locally before using it on the OSPool This guide contains examples on how to build your own Docker image, how to convert a Docker image to Apptainer/Singularity, and how to import a Docker image from the Docker Hub.","title":"Containers - Docker"},{"location":"htc_workloads/using_software/containers-docker/#building-your-own-docker-image","text":"If you already have an existing Docker container image, skip to Preparing Docker Containers for HTCondor Jobs . Otherwise, continue reading.","title":"Building Your Own Docker Image"},{"location":"htc_workloads/using_software/containers-docker/#identify-components","text":"What software do you want to install? Make sure that you have either the source code or a command that can be used to install it through Linux (like apt-get or yum ). You'll also need to choose a \"base\" container, on which to add your particular software or tools.","title":"Identify Components"},{"location":"htc_workloads/using_software/containers-docker/#building","text":"There are two main methods for generating your own container image. Editing the Dockerfile Editing the default image using local Docker We recommend the first option, as it is more reproducible, but the second option can be useful for troubleshooting or especially tricky installs.","title":"Building"},{"location":"htc_workloads/using_software/containers-docker/#dockerfile","text":"Create a folder on your computer and inside it, create a blank text file called Dockerfile . The first line of this file should include the keyword FROM and then the name of a Docker image (from Docker Hub) you want to use as your starting point. If using the OSG's Ubuntu 22.04 image that would look like this: FROM hub.opensciencegrid.org/htc/ubuntu:22.04 Then, for each command you want to run to add libraries or software, use the keyword RUN and then the command. Sometimes it makes sense to string commands together using the && operator and line breaks \\ , like so: RUN apt-get update -y && \\ apt-get install -y build-essentials or RUN wget https://cran.r-project.org/src/base/R-3/R-3.6.0.tar.gz && \\ tar -xzf R-3.6.0.tar.gz && \\ cd R-3.6.0 && \\ ./configure && \\ make && \\ make install Typically it's good to group together commands installing the same kind of thing (system libraries, or software packages, or an installation process) under one RUN command, and then have multiple RUN commands, one for each of the different type of software or package you're installing. (For all the possible Dockerfile keywords, see the Docker Documentation ) Once your Dockerfile is ready, you can \"build\" the container image by running this command: $ docker build -t namespace/repository_name . 
Note that the naming convention for Docker images is your Docker Hub username and then a name you choose for that particular container image. So if my Docker Hub username is alice and I created an image with the NCBI BLAST tool, I might use this name: $ docker build -t alice/ncbi-blast .","title":"Dockerfile"},{"location":"htc_workloads/using_software/containers-docker/#editing-an-image-interactively","text":"You can also build an image interactively, without a Dockerfile. First, get the desired starting image from Docker Hub. Again, we will look at the OSG Ubuntu 22.04 image. $ docker pull hub.opensciencegrid.org/htc/ubuntu:22.04 We will run the image in a Docker interactive session, giving the session a name of our choosing (here my-session ): $ docker run -it --name my-session hub.opensciencegrid.org/htc/ubuntu:22.04 /bin/bash Giving the session a name is important because it will make it easier to reattach the session later and commit the changes later on. Now you will be greeted by a new command line prompt that will look something like this [root@740b9db736a1 /]# You can now install the software that you need through the default package manager, in this case apt-get . [root@740b9db736a1 /]# apt-get install build-essential Once you have installed all the software, you simply exit [root@740b9db736a1 /]# exit Now you can commit the changes to the image and give it a name: docker commit my-session namespace/repository_name You can also use the session's hash as found in the command prompt ( 740b9db736a1 in the above example) in place of the docker session name.","title":"Editing an Image Interactively"},{"location":"htc_workloads/using_software/containers-docker/#preparing-docker-containers-for-htcondor-jobs","text":"Once you have a Docker container image, whether created by you or found on Docker Hub, you should convert it to the \"sif\" image format for the best experience on the OSPool.","title":"Preparing Docker Containers for HTCondor Jobs"},{"location":"htc_workloads/using_software/containers-docker/#convert-docker-containers-on-docker-hub-or-online","text":"If the Docker container you want to use is online, on a site like Docker Hub, you can log in to your Access Point and run a single command to convert it to a .sif image: $ apptainer build my-container.sif docker://owner/repository:tag Where the path at the end of the command is customized to be the container image you want to use.","title":"Convert Docker containers on Docker Hub or online"},{"location":"htc_workloads/using_software/containers-docker/#convert-docker-containers-on-your-computer","text":"If you have built a Docker image on your own host, you can save it as a tar file and then convert it to an Apptainer/Singularity SIF image.
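In outline, the sequence detailed in the next steps looks like this; the image name, file names, and access point hostname are placeholders to adapt:

$ docker build -t alice/my-tool .                                      # on your own computer
$ docker save alice/my-tool -o my-container.tar
$ scp my-container.tar username@your-access-point:                     # copy the tar file to your access point
$ ssh username@your-access-point
$ apptainer build my-container.sif docker-archive://my-container.tar   # run on the access point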
First find the image id: $ docker image list REPOSITORY IMAGE ID awesome/science f1e7972c55bc Using the image id, save the image to a tar file: $ docker save f1e7972c55bc -o my-container.tar Transfer my-container.tar to the OSPool access point, and use Apptainer to convert it to a SIF image: $ apptainer build my-container.sif docker-archive://my-container.tar","title":"Convert Docker containers on your computer"},{"location":"htc_workloads/using_software/containers-docker/#using-containers-in-htcondor-jobs","text":"After converting the Docker image to a sif format, you can use the image in your job as described in the Apptainer/Singularity Guide .","title":"Using Containers in HTCondor Jobs"},{"location":"htc_workloads/using_software/containers-docker/#special-cases","text":"","title":"Special Cases"},{"location":"htc_workloads/using_software/containers-docker/#entrypoint-and-env","text":"Two options that can be used in the Dockerfile to set the environment or default command are ENTRYPOINT and ENV . Unfortunately, both of these aspects of the Docker container are deleted when it is converted to a Singularity image in the Open Science Pool.","title":"ENTRYPOINT and ENV"},{"location":"htc_workloads/using_software/containers-docker/#apptainersingularity-environment","text":"One approach for setting up the environment for an image which will be converted to Apptainer/Singularity, is to put a file under /.singularity.d/env/ . These files will be sourced when the container get instantiated. For example, if you have Conda environment, add this to the end of your Dockerfile: # set up environment for when using the container, this is for when # we invoke the container with Apptainer/Singularity RUN mkdir -p /.singularity.d/env && \\ echo \". /opt/conda/etc/profile.d/conda.sh\" >>/.singularity.d/env/91-environment.sh && \\ echo \"conda activate\" >>/.singularity.d/env/91-environment.sh","title":"Apptainer/Singularity Environment"},{"location":"htc_workloads/using_software/containers-singularity/","text":"Containers - Apptainer/Singularity \u00b6 This guide is meant to accompany the instructions for using containers in the Open Science Pool. You can use your own custom container to run jobs in the Open Science Pool. This guide describes how to create your own Apptainer/Singularity container \"image\" (the blueprint for the container). Do You Need to Build a Container? \u00b6 If there is an existing Docker container or Apptainer/Singularity container with the software you need, you can proceed with using these options to submit a job. * See OSPool-provided containers here * Using an existing Docker container * Using an existing Apptainer/Singularity container If you can't find a good option among existing containers, you may need to build your own. See this section of the guide for more information. OSG-Provided Apptainer/Singularity Images \u00b6 The OSG Team maintains a set of images that are already in the OSG Apptainer/Singularity repository. A list of ready-to-use containers can be found on this page . If the software you need isn't already supported in a listed container, you can create your own container or use any container image in Docker Hub. How to explore these containers is shown below . Building Your Own Apptainer/Singularity Container \u00b6 Identify Components \u00b6 What software do you want to install? Make sure that you have either the source code or a command that can be used to install it through Linux (like apt-get or yum ). 
You'll also need to choose a \"base\" container, on which to add your particular software or tools. We recommend using one of the OSG's published containers as your starting point. See the available containers on Docker Hub here: OSG Docker Containers The best candidates for you will be containers that have \"osgvo\" in the name. Apptainer/Singularity Build \u00b6 If you are building an image for the first time, the temporary cache directory of the apptainer image needs to be defined. The following commands define the cache location of the apptainer image to be built. Please run the commands in the terminal of your access point. $mkdir $HOME/tmp $export TMPDIR=$HOME/tmp $export APPTAINER_TMPDIR=$HOME/tmp $export APPTAINER_CACHEDIR=$HOME/tmp To build a custom a Apptainer/Singularity image, create a folder on your access point. Inside it, create a blank text file called image.def . The first lines of this file should include where to get the base image from. If using the OSG's Ubuntu 20.04 image that would look like this: Bootstrap: docker From: hub.opensciencegrid.org/htc/ubuntu:22.04 Then there is a section called %post where you put the additional commands to make the image just like you need it. For example: %post # system packages apt-get update -y apt-get install -y \\ build-essential \\ cmake \\ g++ # install miniconda wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda rm Miniconda3-latest-Linux-x86_64.sh # install conda components - add the packages you need here . /opt/conda/etc/profile.d/conda.sh conda create -y -n \"myenv\" python=3.9 conda activate myenv conda update --all conda install -y -n \"myenv\" -c conda-forge pytorch Another good section to include is %environment . This is executed before your job and lets the container configure the environment. Example: %environment # set up environment for when using the container . /opt/conda/etc/profile.d/conda.sh conda activate myenv See the Apptainer documentation for a full reference on how to specify build specs. Note that the %runscript section is ignored when the container is executed on OSG. The final image.def looks like: Bootstrap: docker From: hub.opensciencegrid.org/htc/ubuntu:22.04 %post # system packages apt-get update -y apt-get install -y \\ build-essential \\ wget # install miniconda wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda rm Miniconda3-latest-Linux-x86_64.sh # install conda components - add the packages you need here . /opt/conda/etc/profile.d/conda.sh conda create -y -n \"myenv\" python=3.9 conda activate myenv conda update --all conda install -y -n \"myenv\" -c conda-forge pytorch %environment # set up environment for when using the container . /opt/conda/etc/profile.d/conda.sh conda activate myenv Once your build spec is ready, you can \"build\" the container image by running this command: $ apptainer build my-container.sif image.def Once the image is built, test it on an OSG-managed access point, and use it in your HTCondor jobs. Exploring Apptainer/Singularity Images on the Access Points \u00b6 Just like it is important to test your codes and jobs at a small scale, you should make sure that your Apptainer/Singularity container is working correctly before using it in jobs. One way to test your container image on our system is to test it on an OSG-managed access point. To do so, first log in to your assigned access point. 
Start an interactive session with the Apptainer/Singularity \"shell\" mode. The recommended command line, similar to how containers are started for jobs, is: apptainer shell my-container.sif If you want to test an existing container produced by OSG Staff, use the full path provided in this guide . This example will give you an interactive shell. You can explore the container and test your code with your own inputs from your /home directory, which is automatically mounted (but note - $HOME will not be available to your jobs later). Once you are down exploring, exit the container by running exit or with CTRL+D . Using Singularity or Apptainer Images in an HTCondor Job \u00b6 Once you have a \".sif\" container image file with all your needed software, you can use this file as part of an HTCondor job. Upload the Container Image to the OSDF \u00b6 The image will be resused for each job, and thus the preferred transfer method is OSDF . Store the .sif file under your personal data area on your access point (see table here ). Use the Container in an HTCondor Job \u00b6 Once the image is placed in your OSDF space, you can use an OSDF url directly in the +SingularityImage attribute. Note that you can not use shell variable expansion in the submit file - be sure to replace the username with your actual OSPool username. Example: +SingularityImage = \"osdf:///ospool/apXX/data/USERNAME/my-custom-image-v1.sif\" queue Be aware that OSDF aggressively caches the image based on file naming. If you need to do quick changes, please use versioning of the .sif file so that the caches see a \"new\" name. In this example, replacing my-custom-image-v1.sif with new content will probably mean that some nodes get the old version and some nodes the new version. Prevent this by creating a new file named with v2. Common Issues \u00b6 FATAL: kernel too old If you get a *FATAL: kernel too old* error, it means that the glibc version in the image is too new for the kernel on the host. You can work around this problem by specifying the minimum host kernel. For example, if you want to run the Ubuntu 18.04 image, specfy a minimum host kernel of 3.10.0, formatted as 31000 (major * 10000 + minor * 100 + patch): Requirements = HAS_SINGULARITY == True && OSG_HOST_KERNEL_VERSION >= 31000","title":"Containers - Apptainer/Singularity"},{"location":"htc_workloads/using_software/containers-singularity/#containers-apptainersingularity","text":"This guide is meant to accompany the instructions for using containers in the Open Science Pool. You can use your own custom container to run jobs in the Open Science Pool. This guide describes how to create your own Apptainer/Singularity container \"image\" (the blueprint for the container).","title":"Containers - Apptainer/Singularity"},{"location":"htc_workloads/using_software/containers-singularity/#do-you-need-to-build-a-container","text":"If there is an existing Docker container or Apptainer/Singularity container with the software you need, you can proceed with using these options to submit a job. * See OSPool-provided containers here * Using an existing Docker container * Using an existing Apptainer/Singularity container If you can't find a good option among existing containers, you may need to build your own. 
See this section of the guide for more information.","title":"Do You Need to Build a Container?"},{"location":"htc_workloads/using_software/containers-singularity/#osg-provided-apptainersingularity-images","text":"The OSG Team maintains a set of images that are already in the OSG Apptainer/Singularity repository. A list of ready-to-use containers can be found on this page . If the software you need isn't already supported in a listed container, you can create your own container or use any container image in Docker Hub. How to explore these containers is shown below .","title":"OSG-Provided Apptainer/Singularity Images"},{"location":"htc_workloads/using_software/containers-singularity/#building-your-own-apptainersingularity-container","text":"","title":"Building Your Own Apptainer/Singularity Container"},{"location":"htc_workloads/using_software/containers-singularity/#identify-components","text":"What software do you want to install? Make sure that you have either the source code or a command that can be used to install it through Linux (like apt-get or yum ). You'll also need to choose a \"base\" container, on which to add your particular software or tools. We recommend using one of the OSG's published containers as your starting point. See the available containers on Docker Hub here: OSG Docker Containers The best candidates for you will be containers that have \"osgvo\" in the name.","title":"Identify Components"},{"location":"htc_workloads/using_software/containers-singularity/#apptainersingularity-build","text":"If you are building an image for the first time, the temporary cache directory of the apptainer image needs to be defined. The following commands define the cache location of the apptainer image to be built. Please run the commands in the terminal of your access point. $mkdir $HOME/tmp $export TMPDIR=$HOME/tmp $export APPTAINER_TMPDIR=$HOME/tmp $export APPTAINER_CACHEDIR=$HOME/tmp To build a custom a Apptainer/Singularity image, create a folder on your access point. Inside it, create a blank text file called image.def . The first lines of this file should include where to get the base image from. If using the OSG's Ubuntu 20.04 image that would look like this: Bootstrap: docker From: hub.opensciencegrid.org/htc/ubuntu:22.04 Then there is a section called %post where you put the additional commands to make the image just like you need it. For example: %post # system packages apt-get update -y apt-get install -y \\ build-essential \\ cmake \\ g++ # install miniconda wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda rm Miniconda3-latest-Linux-x86_64.sh # install conda components - add the packages you need here . /opt/conda/etc/profile.d/conda.sh conda create -y -n \"myenv\" python=3.9 conda activate myenv conda update --all conda install -y -n \"myenv\" -c conda-forge pytorch Another good section to include is %environment . This is executed before your job and lets the container configure the environment. Example: %environment # set up environment for when using the container . /opt/conda/etc/profile.d/conda.sh conda activate myenv See the Apptainer documentation for a full reference on how to specify build specs. Note that the %runscript section is ignored when the container is executed on OSG. 
The final image.def looks like: Bootstrap: docker From: hub.opensciencegrid.org/htc/ubuntu:22.04 %post # system packages apt-get update -y apt-get install -y \\ build-essential \\ wget # install miniconda wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda rm Miniconda3-latest-Linux-x86_64.sh # install conda components - add the packages you need here . /opt/conda/etc/profile.d/conda.sh conda create -y -n \"myenv\" python=3.9 conda activate myenv conda update --all conda install -y -n \"myenv\" -c conda-forge pytorch %environment # set up environment for when using the container . /opt/conda/etc/profile.d/conda.sh conda activate myenv Once your build spec is ready, you can \"build\" the container image by running this command: $ apptainer build my-container.sif image.def Once the image is built, test it on an OSG-managed access point, and use it in your HTCondor jobs.","title":"Apptainer/Singularity Build"},{"location":"htc_workloads/using_software/containers-singularity/#exploring-apptainersingularity-images-on-the-access-points","text":"Just like it is important to test your codes and jobs at a small scale, you should make sure that your Apptainer/Singularity container is working correctly before using it in jobs. One way to test your container image on our system is to test it on an OSG-managed access point. To do so, first log in to your assigned access point. Start an interactive session with the Apptainer/Singularity \"shell\" mode. The recommended command line, similar to how containers are started for jobs, is: apptainer shell my-container.sif If you want to test an existing container produced by OSG Staff, use the full path provided in this guide . This example will give you an interactive shell. You can explore the container and test your code with your own inputs from your /home directory, which is automatically mounted (but note - $HOME will not be available to your jobs later). Once you are down exploring, exit the container by running exit or with CTRL+D .","title":"Exploring Apptainer/Singularity Images on the Access Points"},{"location":"htc_workloads/using_software/containers-singularity/#using-singularity-or-apptainer-images-in-an-htcondor-job","text":"Once you have a \".sif\" container image file with all your needed software, you can use this file as part of an HTCondor job.","title":"Using Singularity or Apptainer Images in an HTCondor Job"},{"location":"htc_workloads/using_software/containers-singularity/#upload-the-container-image-to-the-osdf","text":"The image will be resused for each job, and thus the preferred transfer method is OSDF . Store the .sif file under your personal data area on your access point (see table here ).","title":"Upload the Container Image to the OSDF"},{"location":"htc_workloads/using_software/containers-singularity/#use-the-container-in-an-htcondor-job","text":"Once the image is placed in your OSDF space, you can use an OSDF url directly in the +SingularityImage attribute. Note that you can not use shell variable expansion in the submit file - be sure to replace the username with your actual OSPool username. Example: +SingularityImage = \"osdf:///ospool/apXX/data/USERNAME/my-custom-image-v1.sif\" queue Be aware that OSDF aggressively caches the image based on file naming. If you need to do quick changes, please use versioning of the .sif file so that the caches see a \"new\" name. 
In this example, replacing my-custom-image-v1.sif with new content will probably mean that some nodes get the old version and some nodes the new version. Prevent this by creating a new file named with v2.","title":"Use the Container in an HTCondor Job"},{"location":"htc_workloads/using_software/containers-singularity/#common-issues","text":"FATAL: kernel too old If you get a *FATAL: kernel too old* error, it means that the glibc version in the image is too new for the kernel on the host. You can work around this problem by specifying the minimum host kernel. For example, if you want to run the Ubuntu 18.04 image, specfy a minimum host kernel of 3.10.0, formatted as 31000 (major * 10000 + minor * 100 + patch): Requirements = HAS_SINGULARITY == True && OSG_HOST_KERNEL_VERSION >= 31000","title":"Common Issues"},{"location":"htc_workloads/using_software/example-compilation/","text":"Example of Compiling Software For Use on the OSPool \u00b6 Introduction \u00b6 This guide provides a detailed example of compiling software for use from an OSG Access Point. For this example, we will be compiling Samtools which is a very common bioinformatics software for working with aligned sequencing data. We hope that this specific example helps illustrate the general compilation steps that can be applied to many other software compilations. For a general introduction to software compilation, please see our Compiling Software guide . Two Examples \u00b6 This guide provides two examples of compiling Samtools, one without CRAM file support and one with CRAM file support . Why two examples? Currently, to install Samtools with CRAM support requires additional dependencies (aka libraries) that will also need to be installed and most Samtools users are only working with BAM files which does not require CRAM support. Do I need CRAM support for my work? CRAM is an alternative compressed sequence alignment file format to BAM. Learn more at https://www.sanger.ac.uk/tool/cram/ . Compile Samtools Without CRAM Support \u00b6 Step 1. Acquire Samtools source code \u00b6 Samtools source code is available at http://www.htslib.org/download/ . The development code is also available via GitHub at https://github.com/samtools/samtools . On the download page is some important information to make note of: \"[Samtools] uses HTSlib internally [and] these source packages contain their own copies of htslib\" What this means is 1.) HTSlib is a dependency of Samtools and 2.) the HTSlib source code is included with the Samtools source code. Either download the Samtools source code to your computer and upload to your login node, or right-click on the Samtools source code link and copy the link location. Login in to your OSG Access Point and use wget to download the source code directly and extract the tarball: [user@apXX ~]$ wget https://github.com/samtools/samtools/releases/download/1.10/samtools-1.10.tar.bz2 [user@apXX ~]$ tar -xjf samtools-1.10.tar.bz2 The above two commands will create a directory named samtools-1.10 which contains all the code and instructions needed for compiling Samtools and HTSlib. Take a moment to look at the content available in this new directory. Step 2. Read through installation instructions \u00b6 What steps need to be performed for our compilation? What system dependencies exist for our software? Answers to these questions, and other important information, should be available in the installation instructions for your software which will be available online and/or included in the source code. 
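A quick, minimal way to locate and read the documentation that ships with a source tree (shown here for the directory created in Step 1):

[user@apXX ~]$ cd samtools-1.10
[user@apXX samtools-1.10]$ ls               # look for README, INSTALL, and a configure script
[user@apXX samtools-1.10]$ less INSTALL     # press q to quit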
The HTSlib website where the Samtools source code is hosted provides basic installation instructions and refers users to INSTALL (which is a plain text file that can be found in samtools-1.10/ ) for more information. You will also see a README file in the source code directory which will provide important information. README files will always be included with your source code and we recommend reviewing before compiling software. There is also a README and INSTALL file available for HTSlib in the source code directory samtools-1.10/htslib-1.10/ . cd to samtools-1.10 and read through README and INSTALL . As described in INSTALL , the Samtools installation will follow the common configure , make , make install process: Basic Installation ================== To build and install Samtools, 'cd' to the samtools-1.x directory containing the package's source and type the following commands: ./configure make make install The './configure' command checks your build environment and allows various optional functionality to be enabled (see Configuration below). Also described in INSTALL are a number of required and optional system dependencies for installing Samtools and HTSlib (which is itself a dependency of Samtools): System Requirements =================== Samtools and HTSlib depend on the following libraries: Samtools: zlib curses or GNU ncurses (optional, for the 'tview' command) HTSlib: zlib libbz2 liblzma libcurl (optional but strongly recommended, for network access) libcrypto (optional, for Amazon S3 support; not needed on MacOS) ... The bzip2 and liblzma dependencies can be removed if full CRAM support is not needed - see HTSlib's INSTALL file for details. Some dependencies are needed to support certain features from Samtools (such as tview and CRAM compression). You will not need tview as this is intended for interactive work which is not currently supported from the OSG Access Points. For this specific compilation example, we will disable both tview and CRAM support - see below for our compilation example that will provide CRAM file support. Following the suggestion in the Samtools INSTALL file, we can view the HTSlib INSTALL file at samtools-1.10/htslib-1.10/INSTALL . Here we will find the necessary information for disabling bzip2 and liblzma dependencies: --disable-bz2 Bzip2 is an optional compression codec format for CRAM, included in HTSlib by default. It can be disabled with --disable-bz2, but be aware that not all CRAM files may be possible to decode. --disable-lzma LZMA is an optional compression codec for CRAM, included in HTSlib by default. It can be disabled with --disable-lzma, but be aware that not all CRAM files may be possible to decode. These are two flags that will need to be used when performing our installation. To determine what libraries are available on our OSG Access Point, we can look at /usr/lib and /usr/lib64 for the various Samtools library dependencies, for example: [user@apXX ~]$ ls /usr/lib* | grep libcurl [user@apXX ~]$ ls /usr/lib* | grep htslib Although we will find matches for libcurl , we will not find any htslib files meaning that HTSlib is not currently installed on the login node, nor is it currently available as a module. This means that HTSlib will also need to be compiled. Luckly, the Samtools developers have conveniently included the HTSlib source code with the Samtools source code and have made it possible to compile both Samtools and HTSlib at the same time. 
From the Samtools INSTALL file, is the following: By default, configure looks for an HTSlib source tree within or alongside the samtools source directory; if there are several likely candidates, you will have to choose one via this option. This mean that we don't have to do anything extra to get HTSlib installed because the Samtools installation will do it by default. When performing your compilation, if your compiler is unable to locate the necessary libraries, or if newer versions of libraries are needed, it will result in an error - this makes for an alternative method of determining whether your system has the appropriate libraries for your software and more often than not, installation by trial and error is a common approach. However, taking a little bit of time before hand and looking for library files can save you time and frustration during software compilation. Step 3. Perform Samtools compilation \u00b6 We now have all of the information needed to start our compilation of Samtools without CRAM support. First, we will create a new directory in our home directory that will store the Samtools compiled software. The example here will use a directory, called my-software , for organizing all compiled software in the home directory: [user@apXX ~]$ mkdir $HOME/my-software [user@apXX ~]$ mkdir $HOME/my-software/samtools-1.10 As a best practice, always include the version name of your software in the directory name. Next we'll change to the Samtools source code directory that was created in Step 1 . You should see the INSTALL and README files as well as a file called configure . The first command we will run is ./configure - this step will execute the configure script and allows us to modify various details about our Samtools installation. We will be executing configure with several flags: [user@apXX samtools-1.10]$ ./configure --prefix=$HOME/my-software/samtools-1.10 --disable-bz2 --disable-lzma --without-curses Here we used --prefix to specify where we would like the final Samtools software to be installed, --disable-bz2 and --disable-lzma to disable lzma and bzip2 dependencies for CRAM, and --without-curses to disable tview support. Next run the final two commands: [user@apXX samtools-1.10]$ make [user@apXX samtools-1.10]$ make install Once make install has finished running, the compilation is complete. We can also confirm this by looking at the content of ~/my-software/samtools-1.10/ where we had Samtools installed: [user@apXX samtools-1.10]$ cd ~ [user@apXX ~]$ ls -F my-software/samtools-1.10/ bin/ share/ There will be two directories present in my-software/samtools-1.10 , one named bin and another named share . The Samtools executable will be located in bin and we can give it a quick test to make sure it runs as expected: [user@apXX ~]$ ./my-software/samtools-1.10/bin/samtools view which will return the Samtools view usage statement. Step 4. Make our software portable \u00b6 Our subsequent job submissions on the OSPool will need a copy of our software. For convenience, we recommend converting your software directory to a tar archive. First move to my-software/ , then create the tar archive: [user@apXX ~]$ cd my-software/ [user@apXX my-software]$ tar -czf samtools-1.10.tar.gz samtools-1.10/ [user@apXX my-software]$ ls samtools-1.10* samtools-1.10/ samtools-1.10.tar.gz [user@apXX my-software]$ du -h samtools-1.10.tar.gz 2.0M samtools-1.10.tar.gz The last command in the above example returns the size of our tar archive. 
This is important for determining the appropriate method that we should use for transferring this file along with our subsequent jobs. To learn more, please see Overview: Data Staging and Transfer to Jobs . To clean up and clear out space in your home directory, we recommend deleting the Samtools source code directory. Step 5. Use Samtools in our jobs \u00b6 Now that Samtools has been compiled, we can submit jobs that use this software. Below is an example submit file for a job that will use Samtools with a BAM file named my-sample.bam which is <100MB in size: #samtools.sub log = samtools.$(Cluster).log error = samtools.$(Cluster)_$(Process).err output = samtools.$(Cluster)_$(Process).out executable = samtools.sh transfer_input_files = /home/username/my-software/samtools-1.10.tar.gz, my-sample.bam should_transfer_files = YES when_to_transfer_output = ON_EXIT +JobDurationCategory = \"Medium\" requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_memory = 1.3GB request_disk = 1.5GB request_cpus = 1 queue 1 The above submit file will transfer a complete copy of the Samtools tar archive created in Step 4 and also includes an important requirements attribute which tells HTCondor to run our job on execute nodes running the Red Hat Enterprise Linux version 9 operating system. The resource requests for your jobs may differ from what is shown in the above example. Always run tests to determine the appropriate requests for your jobs. Some additional steps are then needed in the executable bash script used by this job to \"untar\" the Samtools archive and add this software to the PATH environment variable: #!/bin/bash # samtools.sh # untar software tar -xzf samtools-1.10.tar.gz # modify environment variables export PATH=$_CONDOR_SCRATCH_DIR/samtools-1.10/bin:$PATH # run samtools commands ... Compile Samtools With CRAM Support \u00b6 This example includes steps to install and use a library and to use a module, which are both currently needed for compiling Samtools with CRAM support. The steps in this example assume that you have performed Step 1 and Step 2 in the above example for compiling Samtools without CRAM support. Step 2. Read through installation instructions, continued \u00b6 From both the Samtools and HTSlib INSTALL files, we know that both bzip2 and liblzma are required for CRAM support. We can check our system for these libraries: [user@apXX ~]$ ls /usr/lib* | grep libz [user@apXX ~]$ ls /usr/lib* | grep libbz2 which will reveal that both sets of libraries are available on the login node. However, if we were to attempt Samtools installation with CRAM support right now we would find that this results in an error when performing the configure step. If the libraries are present, why do we get this error? This error is due to differences between types of library files. For example, running ls /usr/lib* | grep libbz2 will return two matches, libbz2.so.1 and libbz2.so.1.0.6 . But running ls /usr/lib* | grep liblz will return four matches including three .so files and one .a file. Our Samtools compilation specifically requires the .a type of library file for both libbz2 and liblzma and the absence of this type of library file in /usr/lib64 is why compilation will fail without additional steps. Step 3. Compile liblzma \u00b6 Compiling Samtools with CRAM support requires that we first compile liblzma . Following the same approach as we did for Samtools, first we acquire a copy of the latest liblzma source code, then review the installation instructions.
From our online search we will find that liblzma is available from the XZ Utils library package. [user@apXX ~]$ wget https://tukaani.org/xz/xz-5.2.5.tar.gz [user@apXX ~]$ tar -xzf xz-5.2.5.tar.gz Then review the installation instructions and check for dependencies. Everything that is needed for the default installation of XZ Utils is currently available on the login node. [user@apXX ~]$ cd xz-5.2.5/ [user@apXX xz-5.2.5]$ less INSTALL Perform the XZ Utils compilation: [user@apXX xz-5.2.5]$ mkdir $HOME/my-software/xz-5.2.5 [user@apXX xz-5.2.5]$ ./configure --prefix=$HOME/my-software/xz-5.2.5 [user@apXX xz-5.2.5]$ make [user@apXX xz-5.2.5]$ make install [user@apXX xz-5.2.5]$ ls -F $HOME/my-software/xz-5.2.5 /bin /include /lib /share Success! Lastly we need to set some environment variables so that Samtools knows where to find this library: [user@apXX xz-5.2.5]$ export PATH=$HOME/my-software/xz-5.2.5/bin:$PATH [user@apXX xz-5.2.5]$ export LIBRARY_PATH=$HOME/my-software/xz-5.2.5/lib:$LIBRARY_PATH [user@apXX xz-5.2.5]$ export LD_LIBRARY_PATH=$LIBRARY_PATH Step 4. Load bzip2 module \u00b6 After installing XZ Utils and setting our environment variables, next we will load the bzip2 module: [user@apXX xz-5.2.5]$ module load bzip2/1.0.6 Loading this module will further modify some of your environment variables so that Samtools is able to locate the bzip2 library files. Step 5. Compile Samtools \u00b6 After compiling XZ Utils (which provides liblzma ) and loading the bzip2 1.0.6 module, we are now ready to compile Samtools with CRAM support. First, we will create a new directory in our home directory that will store the Samtools compiled software. The example here will use a common directory, called my-software , for organizing all compiled software in the home directory: [user@apXX ~]$ mkdir $HOME/my-software [user@apXX ~]$ mkdir $HOME/my-software/samtools-1.10 As a best practice, always include the version name of your software in the directory name. Next, we will change our directory to the Samtools source code directory that was created in Step 1 . You should see the INSTALL and README files as well as a file called configure . The first command we will run is ./configure - this file is a script that allows us to modify various details about our Samtools installation and we will be executing configure with a flag that disables tview : [user@apXX samtools-1.10]$ ./configure --prefix=$HOME/my-software/samtools-1.10 --without-curses Here we used --prefix to specify where we would like the final Samtools software to be installed and --without-curses to disable tview support. Next run the final two commands: [user@apXX samtools-1.10]$ make [user@apXX samtools-1.10]$ make install Once make install has finished running, the compilation is complete. We can also confirm this by looking at the content of ~/my-software/samtools-1.10/ where we had Samtools installed: [user@apXX samtools-1.10]$ cd ~ [user@apXX ~]$ ls -F my-software/samtools-1.10/ bin/ share/ There will be two directories present in my-software/samtools-1.10 , one named bin and another named share . The Samtools executable will be located in bin and we can give it a quick test to make sure it runs as expected: [user@apXX ~]$ ./my-software/samtools-1.10/bin/samtools view which will return the Samtools view usage statement. Step 6. Make our software portable \u00b6 Our subsequent job submissions on the OSPool will need a copy of our software. For convenience, we recommend converting your software directory to a tar archive.
First move to my-software/ , then create the tar archive: [user@apXX ~]$ cd my-software/ [user@apXX my-software]$ tar -czf samtools-1.10.tar.gz samtools-1.10/ [user@apXX my-software]$ ls samtools-1.10* samtools-1.10/ samtools-1.10.tar.gz [user@apXX my-software]$ du -h samtools-1.10.tar.gz 2.0M samtools-1.10.tar.gz The last command in the above example returns the size of our tar archive. This is important for determining the appropriate method that we should use for transferring this file along with our subsequent jobs. To learn more, please see Introduction to Data Management on OSG . Follow these same steps for creating a tar archive of the xz-5.2.5 library as well. To clean up and clear out space in your home directory, we recommend deleting the Samtools source code directory. Step 7. Use Samtools in our jobs \u00b6 Now that Samtools has been compiled, we can submit jobs that use this software. For Samtools with CRAM we will also need to bring along a copy of XZ Utils (which includes the liblzma library) and ensure that our jobs have access to the bzip2 1.0.6 module. Below is an example submit file for a job that will use Samtools with a FASTA file named genome.fa and a CRAM file named my-sample.cram which is <100MB in size: #samtools-cram.sub log = samtools-cram.$(Cluster).log error = samtools-cram.$(Cluster)_$(Process).err output = samtools-cram.$(Cluster)_$(Process).out executable = samtools-cram.sh transfer_input_files = /home/username/my-software/samtools-1.10.tar.gz, /home/username/my-software/xz-5.2.5.tar.gz, genome.fa, my-sample.cram should_transfer_files = YES when_to_transfer_output = ON_EXIT +JobDurationCategory = \"Medium\" requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_memory = 1.3GB request_disk = 1.5GB request_cpus = 1 queue 1 The above submit file will transfer a complete copy of the Samtools tar archive created in Step 6 as well as a copy of the XZ Utils installation from Step 3 . This submit file also includes an important requirements attribute which tells HTCondor to run our job on execute nodes running the Red Hat Enterprise Linux version 9 operating system. The resource requests for your jobs may differ from what is shown in the above example. Always run tests to determine the appropriate requests for your jobs. Some additional steps are then needed in the executable bash script used by this job to \"untar\" the Samtools and XZ Utils tar archives, modify the PATH and LD_LIBRARY_PATH environment variables of our job, and load the bzip2 module: #!/bin/bash # samtools-cram.sh # untar software and libraries tar -xzf samtools-1.10.tar.gz tar -xzf xz-5.2.5.tar.gz # modify environment variables export LD_LIBRARY_PATH=$_CONDOR_SCRATCH_DIR/xz-5.2.5/lib:$LD_LIBRARY_PATH export PATH=$_CONDOR_SCRATCH_DIR/samtools-1.10/bin:$_CONDOR_SCRATCH_DIR/xz-5.2.5/bin:$PATH # load bzip2 module module load bzip2/1.0.6 # run samtools commands ...","title":"Example Software Compilation"},{"location":"htc_workloads/using_software/example-compilation/#example-of-compiling-software-for-use-on-the-ospool","text":"","title":"Example of Compiling Software For Use on the OSPool"},{"location":"htc_workloads/using_software/example-compilation/#introduction","text":"This guide provides a detailed example of compiling software for use from an OSG Access Point. For this example, we will be compiling Samtools which is a very common bioinformatics software for working with aligned sequencing data. We hope that this specific example helps illustrate the general compilation steps that can be applied to many other software compilations.
For a general introduction to software compilation, please see our Compiling Software guide .","title":"Introduction"},{"location":"htc_workloads/using_software/example-compilation/#two-examples","text":"This guide provides two examples of compiling Samtools, one without CRAM file support and one with CRAM file support . Why two examples? Currently, to install Samtools with CRAM support requires additional dependencies (aka libraries) that will also need to be installed and most Samtools users are only working with BAM files which does not require CRAM support. Do I need CRAM support for my work? CRAM is an alternative compressed sequence alignment file format to BAM. Learn more at https://www.sanger.ac.uk/tool/cram/ .","title":"Two Examples"},{"location":"htc_workloads/using_software/example-compilation/#compile-samtools-without-cram-support","text":"","title":"Compile Samtools Without CRAM Support"},{"location":"htc_workloads/using_software/example-compilation/#step-1-acquire-samtools-source-code","text":"Samtools source code is available at http://www.htslib.org/download/ . The development code is also available via GitHub at https://github.com/samtools/samtools . On the download page is some important information to make note of: \"[Samtools] uses HTSlib internally [and] these source packages contain their own copies of htslib\" What this means is 1.) HTSlib is a dependency of Samtools and 2.) the HTSlib source code is included with the Samtools source code. Either download the Samtools source code to your computer and upload to your login node, or right-click on the Samtools source code link and copy the link location. Login in to your OSG Access Point and use wget to download the source code directly and extract the tarball: [user@apXX ~]$ wget https://github.com/samtools/samtools/releases/download/1.10/samtools-1.10.tar.bz2 [user@apXX ~]$ tar -xjf samtools-1.10.tar.bz2 The above two commands will create a directory named samtools-1.10 which contains all the code and instructions needed for compiling Samtools and HTSlib. Take a moment to look at the content available in this new directory.","title":"Step 1. Acquire Samtools source code"},{"location":"htc_workloads/using_software/example-compilation/#step-2-read-through-installation-instructions","text":"What steps need to be performed for our compilation? What system dependencies exist for our software? Answers to these questions, and other important information, should be available in the installation instructions for your software which will be available online and/or included in the source code. The HTSlib website where the Samtools source code is hosted provides basic installation instructions and refers users to INSTALL (which is a plain text file that can be found in samtools-1.10/ ) for more information. You will also see a README file in the source code directory which will provide important information. README files will always be included with your source code and we recommend reviewing before compiling software. There is also a README and INSTALL file available for HTSlib in the source code directory samtools-1.10/htslib-1.10/ . cd to samtools-1.10 and read through README and INSTALL . 
As described in INSTALL , the Samtools installation will follow the common configure , make , make install process: Basic Installation ================== To build and install Samtools, 'cd' to the samtools-1.x directory containing the package's source and type the following commands: ./configure make make install The './configure' command checks your build environment and allows various optional functionality to be enabled (see Configuration below). Also described in INSTALL are a number of required and optional system dependencies for installing Samtools and HTSlib (which is itself a dependency of Samtools): System Requirements =================== Samtools and HTSlib depend on the following libraries: Samtools: zlib curses or GNU ncurses (optional, for the 'tview' command) HTSlib: zlib libbz2 liblzma libcurl (optional but strongly recommended, for network access) libcrypto (optional, for Amazon S3 support; not needed on MacOS) ... The bzip2 and liblzma dependencies can be removed if full CRAM support is not needed - see HTSlib's INSTALL file for details. Some dependencies are needed to support certain features from Samtools (such as tview and CRAM compression). You will not need tview as this is intended for interactive work which is not currently supported from the OSG Access Points. For this specific compilation example, we will disable both tview and CRAM support - see below for our compilation example that will provide CRAM file support. Following the suggestion in the Samtools INSTALL file, we can view the HTSlib INSTALL file at samtools-1.10/htslib-1.10/INSTALL . Here we will find the necessary information for disabling bzip2 and liblzma dependencies: --disable-bz2 Bzip2 is an optional compression codec format for CRAM, included in HTSlib by default. It can be disabled with --disable-bz2, but be aware that not all CRAM files may be possible to decode. --disable-lzma LZMA is an optional compression codec for CRAM, included in HTSlib by default. It can be disabled with --disable-lzma, but be aware that not all CRAM files may be possible to decode. These are two flags that will need to be used when performing our installation. To determine what libraries are available on our OSG Access Point, we can look at /usr/lib and /usr/lib64 for the various Samtools library dependencies, for example: [user@apXX ~]$ ls /usr/lib* | grep libcurl [user@apXX ~]$ ls /usr/lib* | grep htslib Although we will find matches for libcurl , we will not find any htslib files meaning that HTSlib is not currently installed on the login node, nor is it currently available as a module. This means that HTSlib will also need to be compiled. Luckly, the Samtools developers have conveniently included the HTSlib source code with the Samtools source code and have made it possible to compile both Samtools and HTSlib at the same time. From the Samtools INSTALL file, is the following: By default, configure looks for an HTSlib source tree within or alongside the samtools source directory; if there are several likely candidates, you will have to choose one via this option. This mean that we don't have to do anything extra to get HTSlib installed because the Samtools installation will do it by default. 
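As a quick, hedged illustration of checking whether the linker can actually use a library before starting a long build (the library name here is just an example):

[user@apXX ~]$ ls /usr/lib64 | grep libbz2                    # look for both .so (shared) and .a (static) files
[user@apXX ~]$ echo 'int main(){return 0;}' > test.c
[user@apXX ~]$ gcc test.c -lbz2 -o /dev/null && echo "linker can find libbz2"

If the last command errors with cannot find -lbz2, the library (or the development files your build needs) will have to be compiled or transferred yourself, as the CRAM example in this guide demonstrates.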
When performing your compilation, if your compiler is unable to locate the necessary libraries, or if newer versions of libraries are needed, it will result in an error. This makes the compilation itself an alternative method of determining whether your system has the appropriate libraries for your software, and installation by trial and error is a common approach. However, taking a little bit of time beforehand to look for library files can save you time and frustration during software compilation.","title":"Step 2. Read through installation instructions"},{"location":"htc_workloads/using_software/example-compilation/#step-3-perform-samtools-compilation","text":"We now have all of the information needed to start our compilation of Samtools without CRAM support. First, we will create a new directory in our home directory that will store the compiled Samtools software. The example here will use a directory, called my-software , for organizing all compiled software in the home directory: [user@apXX ~]$ mkdir $HOME/my-software [user@apXX ~]$ mkdir $HOME/my-software/samtools-1.10 As a best practice, always include the version of your software in the directory name. Next we'll change to the Samtools source code directory that was created in Step 1 . You should see the INSTALL and README files as well as a file called configure . The first command we will run is ./configure - this step will execute the configure script, which allows us to modify various details about our Samtools installation. We will be executing configure with several flags: [user@apXX samtools-1.10]$ ./configure --prefix=$HOME/my-software/samtools-1.10 --disable-bz2 --disable-lzma --without-curses Here we used --prefix to specify where we would like the final Samtools software to be installed, --disable-bz2 and --disable-lzma to disable the bzip2 and lzma dependencies for CRAM, and --without-curses to disable tview support. Next run the final two commands: [user@apXX samtools-1.10]$ make [user@apXX samtools-1.10]$ make install Once make install has finished running, the compilation is complete. We can also confirm this by looking at the content of ~/my-software/samtools-1.10/ where we had Samtools installed: [user@apXX samtools-1.10]$ cd ~ [user@apXX ~]$ ls -F my-software/samtools-1.10/ bin/ share/ There will be two directories present in my-software/samtools-1.10 , one named bin and another named share . The Samtools executable will be located in bin and we can give it a quick test to make sure it runs as expected: [user@apXX ~]$ ./my-software/samtools-1.10/bin/samtools view which will return the Samtools view usage statement.","title":"Step 3. Perform Samtools compilation"},{"location":"htc_workloads/using_software/example-compilation/#step-4-make-our-software-portable","text":"Our subsequent job submissions on the OSPool will need a copy of our software. For convenience, we recommend converting your software directory to a tar archive. First move to my-software/ , then create the tar archive: [user@apXX ~]$ cd my-software/ [user@apXX my-software]$ tar -czf samtools-1.10.tar.gz samtools-1.10/ [user@apXX my-software]$ ls samtools-1.10* samtools-1.10/ samtools-1.10.tar.gz [user@apXX my-software]$ du -h samtools-1.10.tar.gz 2.0M samtools-1.10.tar.gz The last command in the above example returns the size of our tar archive. This is important for determining the appropriate method to use for transferring this file along with our subsequent jobs. 
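Before relying on the archive in jobs, it can be worth a quick check that it contains what we expect; a minimal sketch, assuming the directory layout used in this example (the first command lists the archived files, the second confirms the samtools executable is among them): [user@apXX my-software]$ tar -tzf samtools-1.10.tar.gz | head [user@apXX my-software]$ tar -tzf samtools-1.10.tar.gz | grep bin/samtools 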
To learn more, please see Overview: Data Staging and Transfer to Jobs . To clean up and clear out space in your home directory, we recommend deleting the Samtools source code directory.","title":"Step 4. Make our software portable"},{"location":"htc_workloads/using_software/example-compilation/#step-5-use-samtools-in-our-jobs","text":"Now that Samtools has been compiled we can submit jobs that use this software. Below is an example submit file for a job that will use Samtools with a BAM file named my-sample.bam which is <100MB in size: #samtools.sub log = samtools.$(Cluster).log error = samtools.$(Cluster)_$(Process).err output = samtools.$(Cluster)_$(Process).out executable = samtools.sh transfer_input_files = /home/username/my-software/samtools-1.10.tar.gz, my-sample.bam should_transfer_files = YES when_to_transfer_output = ON_EXIT +JobDurationCategory = \"Medium\" requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_memory = 1.3GB request_disk = 1.5GB request_cpus = 1 queue 1 The above submit file will transfer a complete copy of the Samtools tar archive created in Step 4 and also includes an important requirements attribute which tells HTCondor to run our job on execute nodes running the Red Hat Enterprise Linux (RHEL) 9 operating system. The resource requests for your jobs may differ from what is shown in the above example. Always run tests to determine the appropriate requests for your jobs. Some additional steps are then needed in the executable bash script used by this job to \"untar\" the Samtools archive and add this software to the PATH environment variable: #!/bin/bash # samtools.sh # untar software tar -xzf samtools-1.10.tar.gz # modify environment variables export PATH=$_CONDOR_SCRATCH_DIR/samtools-1.10/bin:$PATH # run samtools commands ...","title":"Step 5. Use Samtools in our jobs"},{"location":"htc_workloads/using_software/example-compilation/#compile-samtools-with-cram-support","text":"This example includes steps to install and use a library and to use a module, which are both currently needed for compiling Samtools with CRAM support. The steps in this example assume that you have performed Step 1 and Step 2 in the above example for compiling Samtools without CRAM support.","title":"Compile Samtools With CRAM Support"},{"location":"htc_workloads/using_software/example-compilation/#step-2-read-through-installation-instructions-continued","text":"From both the Samtools and HTSlib INSTALL files, we know that both bzip2 and liblzma are required for CRAM support. We can check our system for these libraries: [user@apXX ~]$ ls /usr/lib* | grep libz [user@apXX ~]$ ls /usr/lib* | grep libbz2 which will reveal that both sets of libraries are available on the login node. However, if we were to attempt the Samtools installation with CRAM support right now, we would find that it results in an error when performing the configure step. If the libraries are present, why do we get this error? This error is due to differences between types of library files. For example, running ls /usr/lib* | grep libbz2 will return two matches, libbz2.so.1 and libbz2.so.1.0.6 . But running ls /usr/lib* | grep liblz will return four matches including three .so files and one .a file. Our Samtools compilation specifically requires the .a type of library file for both libbz2 and liblzma , and the absence of this type of library file in /usr/lib64 is why compilation will fail without additional steps.","title":"Step 2. 
Read through installation instructions, continued"},{"location":"htc_workloads/using_software/example-compilation/#step-3-compile-liblzma","text":"Compiling Samtools with CRAM support requires that we first compile liblzma . Following the same approach as we did for Samtools, first we acquire a copy of the latest liblzma source code, then review the installation instructions. From our online search we will find that liblzma is available from the XZ Utils library package. [user@apXX ~]$ wget https://tukaani.org/xz/xz-5.2.5.tar.gz [user@apXX ~]$ tar -xzf xz-5.2.5.tar.gz Then review the installation instructions and check for dependencies. Everything that is needed for the default installation of XZ Utils is currently available on the login node. [user@apXX ~]$ cd xz-5.2.5/ [user@apXX xz-5.2.5]$ less INSTALL Perform the XZ Utils compilation: [user@apXX xz-5.2.5]$ mkdir $HOME/my-software/xz-5.2.5 [user@apXX xz-5.2.5]$ ./configure --prefix=$HOME/my-software/xz-5.2.5 [user@apXX xz-5.2.5]$ make [user@apXX xz-5.2.5]$ make install [user@apXX xz-5.2.5]$ ls -F $HOME/my-software/xz-5.2.5 /bin /include /lib /share Success! Lastly, we need to set some environment variables so that Samtools knows where to find this library: [user@apXX xz-5.2.5]$ export PATH=$HOME/my-software/xz-5.2.5/bin:$PATH [user@apXX xz-5.2.5]$ export LIBRARY_PATH=$HOME/my-software/xz-5.2.5/lib:$LIBRARY_PATH [user@apXX xz-5.2.5]$ export LD_LIBRARY_PATH=$LIBRARY_PATH","title":"Step 3. Compile liblzma"},{"location":"htc_workloads/using_software/example-compilation/#step-4-load-bzip2-module","text":"After installing XZ Utils and setting our environment variables, next we will load the bzip2 module: [user@apXX xz-5.2.5]$ module load bzip2/1.0.6 Loading this module will further modify some of your environment variables so that Samtools is able to locate the bzip2 library files.","title":"Step 4. Load bzip2 module"},{"location":"htc_workloads/using_software/example-compilation/#step-5-compile-samtools","text":"After compiling XZ Utils (which provides liblzma ) and loading the bzip2 1.0.6 module, we are now ready to compile Samtools with CRAM support. First, we will create a new directory in our home directory that will store the compiled Samtools software. The example here will use a common directory, called my-software , for organizing all compiled software in the home directory: [user@apXX ~]$ mkdir $HOME/my-software [user@apXX ~]$ mkdir $HOME/my-software/samtools-1.10 As a best practice, always include the version of your software in the directory name. Next, we will change our directory to the Samtools source code directory that was created in Step 1 . You should see the INSTALL and README files as well as a file called configure . The first command we will run is ./configure - this file is a script that allows us to modify various details about our Samtools installation, and we will be executing configure with a flag that disables tview : [user@apXX samtools-1.10]$ ./configure --prefix=$HOME/my-software/samtools-1.10 --without-curses Here we used --prefix to specify where we would like the final Samtools software to be installed and --without-curses to disable tview support. Next run the final two commands: [user@apXX samtools-1.10]$ make [user@apXX samtools-1.10]$ make install Once make install has finished running, the compilation is complete. 
We can also confirm this by looking at the content of ~/my-software/samtools-1.10/ where we had Samtools installed: [user@apXX samtools-1.10]$ cd ~ [user@apXX ~]$ ls -F my-software/samtools-1.10/ bin/ share/ There will be two directories present in my-software/samtools-1.10 , one named bin and another named share . The Samtools executable will be located in bin and we can give it a quick test to make sure it runs as expected: [user@apXX ~]$ ./my-software/samtools-1.10/bin/samtools view which will return the Samtools view usage statement.","title":"Step 5. Compile Samtools"},{"location":"htc_workloads/using_software/example-compilation/#step-6-make-our-software-portable","text":"Our subsequent job submissions on the OSPool will need a copy of our software. For convenience, we recommend converting your software directory to a tar archive. First move to my-software/ , then create the tar archive: [user@apXX ~]$ cd my-software/ [user@apXX my-software]$ tar -czf samtools-1.10.tar.gz samtools-1.10/ [user@apXX my-software]$ ls samtools-1.10* samtools-1.10/ samtools-1.10.tar.gz [user@apXX my-software]$ du -h samtools-1.10.tar.gz 2.0M samtools-1.10.tar.gz The last command in the above example returns the size of our tar archive. This is important for determining the appropriate method to use for transferring this file along with our subsequent jobs. To learn more, please see Introduction to Data Management on OSG . Follow these same steps to create a tar archive of the xz-5.2.5 library as well. To clean up and clear out space in your home directory, we recommend deleting the Samtools source code directory.","title":"Step 6. Make our software portable"},{"location":"htc_workloads/using_software/example-compilation/#step-7-use-samtools-in-our-jobs","text":"Now that Samtools has been compiled we can submit jobs that use this software. For Samtools with CRAM we will also need to bring along a copy of XZ Utils (which includes the liblzma library) and ensure that our jobs have access to the bzip2 1.0.6 module. Below is an example submit file for a job that will use Samtools with a FASTA file named genome.fa and a CRAM file named my-sample.cram which is <100MB in size: #samtools-cram.sub log = samtools-cram.$(Cluster).log error = samtools-cram.$(Cluster)_$(Process).err output = samtools-cram.$(Cluster)_$(Process).out executable = samtools-cram.sh transfer_input_files = /home/username/my-software/samtools-1.10.tar.gz, /home/username/my-software/xz-5.2.5.tar.gz, genome.fa, my-sample.cram should_transfer_files = YES when_to_transfer_output = ON_EXIT +JobDurationCategory = \"Medium\" requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_memory = 1.3GB request_disk = 1.5GB request_cpus = 1 queue 1 The above submit file will transfer a complete copy of the Samtools tar archive created in Step 6 as well as a copy of the XZ Utils installation from Step 3 . This submit file also includes an important requirements attribute which tells HTCondor to run our job on execute nodes running the Red Hat Enterprise Linux (RHEL) 9 operating system. The resource requests for your jobs may differ from what is shown in the above example. Always run tests to determine the appropriate requests for your jobs. 
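The actual samtools commands will depend on your analysis. As a purely illustrative sketch of the kind of command the job's wrapper script (shown next) might end with, the following would convert the CRAM file back to BAM using the transferred reference (file names match this example; adjust the options for your own workflow): samtools view -T genome.fa -b -o my-sample.bam my-sample.cram 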
Some additional steps are then needed in the executable bash script used by this job to \"untar\" the Samtools and XZ Utils tar archives, modify the PATH and LD_LIBRARY_PATH environment variables of our job, and load the bzip2 module: #!/bin/bash # samtools-cram.sh # untar software and libraries tar -xzf samtools-1.10.tar.gz tar -xzf xz-5.2.5.tar.gz # modify environment variables export LD_LIBRARY_PATH=$_CONDOR_SCRATCH_DIR/xz-5.2.5/lib:$LD_LIBRARY_PATH export PATH=$_CONDOR_SCRATCH_DIR/samtools-1.10/bin:$_CONDOR_SCRATCH_DIR/xz-5.2.5/bin:$PATH # load bzip2 module module load bzip2/1.0.6 # run samtools commands ...","title":"Step 7. Use Samtools in our jobs"},{"location":"htc_workloads/using_software/software-overview/","text":"Using Software on the Open Science Pool \u00b6 Overview of Software Options \u00b6 There are several options available for managing the software needs of your work within the Open Science Pool (OSPool). For most cases, it will be advantageous for you to install the software needed for your jobs. This not only gives you the greatest control over your computing environment, but will also make your jobs more distributable, allowing you to run jobs at more locations. * The OSPool can support most popular, open source software that fits the distributed high throughput computing model. * We do not have or support most commercial software due to licensing issues. Here we review options, and provide links to additional information, for using software installed by users, software available as precompiled binaries, or software available via containers. More details and instructions on installing software from source code, precompiled binaries/prebuilt executables, and on creating and using containers can be found on the OSPool documentation website , under the \"Software\" section. Use Precompiled Binaries and Prebuilt Executables \u00b6 Some software may be available as a precompiled binary or prebuilt executable, which provides a quick and easy way to run a program without the need for installation from source code. Binaries and executables are software files that are ready to run as is; however, binaries should always be tested beforehand. There are several important considerations for using precompiled binaries on the OSPool: 1) only binary files compiled against a Linux operating system are suitable for use on the OSPool, 2) some software packages have system and hardware dependencies that must be met in order to run properly, and 3) the available binaries may not have been compiled with the features or configuration needed for your work. Install Software from Source Code \u00b6 When installing software from source code on an OSPool Access Point, your software will be specifically compiled against the Red Hat Enterprise Linux (RHEL) 9 operating system used on these nodes. In most cases, subsequent jobs that use this software will also need to run on a RHEL 9 OS, which can be specified by the requirements attribute of your HTCondor submit files as described in the guide linked above. Use Docker and Apptainer Containers \u00b6 Container systems provide users with customizable and reproducible computing and software environments. The Open Science Pool is compatible with both Apptainer and Docker containers - the latter will be converted to an Apptainer image and added to the OSG container image repository. 
For more information about Docker, please see: Docker Home Page and Apptainer/Singularity, please see: Apptainer Home Page Apptainer/ Singularity has become the preferred containerization method in scientific computing. This talk is an example of how containers are used in scientific computing. Users can choose from a set of pre-defined containers already available within OSG , or can use published or custom made containers. For jobs submitted to the OSPool, it does not matter whether you provide a Docker or Apptainer/Singularity image. Either is compatible with our system and can be used with little to no modification. Determining factors on when to use Apptainer/Singularity images over Docker images include if an image already exists and if you have experience building images in one for format and not the other. When using a container for your jobs, the container image is automatically started up when HTCondor matches your job to a slot. The executable provided in the submit script will be run within the context of the container image, having access to software and libraries that were installed to the image, as if they were already on the server where the job is running. Job executables do not need to run any commands to start the container. Request Help with Installing Software \u00b6 If you believe none of the options described above are applicable for your software, send an email to support@osg-htc.org that describes: 1. the software name, version, and/or website with download and install instructions 2. what science each job does, using the software 3. what you've tried so far (if anything), and what indications of issues you've experienced We will do our best to help you create a portable installation. Additional Resources \u00b6 Watch this video from the 2021 OSG Virtual School for more information about using software on OSG:","title":"Overview: Software on the Open Science Pool"},{"location":"htc_workloads/using_software/software-overview/#using-software-on-the-open-science-pool","text":"","title":"Using Software on the Open Science Pool"},{"location":"htc_workloads/using_software/software-overview/#overview-of-software-options","text":"There are several options available for managing the software needs of your work within the Open Science Pool (OSPool). For most cases, it will be advantageous for you to install the software needed for your jobs. This not only gives you the greatest control over your computing environment, but will also make your jobs more distributable, allowing you to run jobs at more locations. * The OSPool can support most popular, open source software that fit the distributed high throughput computing model. * We do not have or support most commercial software due to licensing issues. Here we review options, and provide links to additonal information, for using software installed by users, software available as precompiled binaries or via containers. More details and instructions on installing software from source code, precompiled binaries/prebuilt executables, and on creating and using containers can be found on the OSPool documentation website , under the \"Software\" section.","title":"Overview of Software Options"},{"location":"htc_workloads/using_software/software-overview/#use-precompiled-binaries-and-prebuilt-executables","text":"Some software may be available as a precompiled binary or prebuilt executable which provides a quick and easy way to run a program without the need for installation from source code. 
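Two quick, generic checks can help confirm that a downloaded binary was built for Linux and that its shared-library dependencies can be found on the Access Point; a minimal sketch, where mybinary is a hypothetical placeholder for the downloaded file: [user@apXX ~]$ file ./mybinary [user@apXX ~]$ ldd ./mybinary 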
Binaries and executables are software files that are ready to run as is; however, binaries should always be tested beforehand. There are several important considerations for using precompiled binaries on the OSPool: 1) only binary files compiled against a Linux operating system are suitable for use on the OSPool, 2) some software packages have system and hardware dependencies that must be met in order to run properly, and 3) the available binaries may not have been compiled with the features or configuration needed for your work.","title":"Use Precompiled Binaries and Prebuilt Executables"},{"location":"htc_workloads/using_software/software-overview/#install-software-from-source-code","text":"When installing software from source code on an OSPool Access Point, your software will be specifically compiled against the Red Hat Enterprise Linux (RHEL) 9 operating system used on these nodes. In most cases, subsequent jobs that use this software will also need to run on a RHEL 9 OS, which can be specified by the requirements attribute of your HTCondor submit files as described in the guide linked above.","title":"Install Software from Source Code"},{"location":"htc_workloads/using_software/software-overview/#use-docker-and-apptainer-containers","text":"Container systems provide users with customizable and reproducible computing and software environments. The Open Science Pool is compatible with both Apptainer and Docker containers - the latter will be converted to an Apptainer image and added to the OSG container image repository. For more information about Docker, please see: Docker Home Page and for Apptainer/Singularity, please see: Apptainer Home Page Apptainer/Singularity has become the preferred containerization method in scientific computing. This talk is an example of how containers are used in scientific computing. Users can choose from a set of pre-defined containers already available within OSG , or can use published or custom-made containers. For jobs submitted to the OSPool, it does not matter whether you provide a Docker or Apptainer/Singularity image. Either is compatible with our system and can be used with little to no modification. Determining factors on when to use Apptainer/Singularity images over Docker images include whether an image already exists and whether you have experience building images in one format and not the other. When using a container for your jobs, the container image is automatically started up when HTCondor matches your job to a slot. The executable provided in the submit script will be run within the context of the container image, having access to software and libraries that were installed to the image, as if they were already on the server where the job is running. Job executables do not need to run any commands to start the container.","title":"Use Docker and Apptainer Containers"},{"location":"htc_workloads/using_software/software-overview/#request-help-with-installing-software","text":"If you believe none of the options described above are applicable for your software, send an email to support@osg-htc.org that describes: 1. the software name, version, and/or website with download and install instructions 2. what science each job does, using the software 3. 
what you've tried so far (if anything), and what indications of issues you've experienced We will do our best to help you create a portable installation.","title":"Request Help with Installing Software"},{"location":"htc_workloads/using_software/software-overview/#additional-resources","text":"Watch this video from the 2021 OSG Virtual School for more information about using software on OSG:","title":"Additional Resources"},{"location":"htc_workloads/using_software/software-request/","text":"Request Help with Your Software \u00b6 A large number of software packages can be used by compiling a portable installation or using a container (many community software packages are already available in authoritative containers). If you believe none of these options ( described here ) are applicable for your software, please get in touch with a simple email to support@osg-htc.org that describes: 1. the software name, version, and/or website with download and install instructions 2. what science each job does, using the software 3. what you've tried so far (if anything), and what indications of issues you've experienced As long as this code is: available to the public in source form (e.g. open source) licensed to all users, and does not require a license key would not be better supported by another approach (which are usually preferable) we should be able to help you create a portable installation with the 'right' solution.","title":"Software request"},{"location":"htc_workloads/using_software/software-request/#request-help-with-your-software","text":"A large number of software packages can be used by compiling a portable installation or using a container (many community software packages are already available in authoritative containers). If you believe none of these options ( described here ) are applicable for your software, please get in touch with a simple email to support@osg-htc.org that describes: 1. the software name, version, and/or website with download and install instructions 2. what science each job does, using the software 3. what you've tried so far (if anything), and what indications of issues you've experienced As long as this code is: available to the public in source form (e.g. open source) licensed to all users, and does not require a license key would not be better supported by another approach (which are usually preferable) we should be able to help you create a portable installation with the 'right' solution.","title":"Request Help with Your Software"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/","text":"Overview: Submit Jobs to the OSPool using HTCondor \u00b6 Purpose \u00b6 This guide discusses the mechanics of creating and submitting jobs to the OSPool using HTCondor. OSPool Workflow Overview \u00b6 The process of running computational workflows on OSG resources follows this outline: Terminology: The access point is where you log in and stage your data, executables/scripts, and software to use in jobs. HTCondor is the job scheduling software that will run your jobs out on the OSPool execution points. All jobs must be submitted to HTCondor to run out on the OSPool. The Open Science Pool (OSPool) is the set of resources your job runs on. It is composed of execution points, as well as other technologies, that provide the CPUs, memory, and disk space that will run the computations of your jobs. Run Jobs on the OSPool using HTCondor \u00b6 We are going to run the traditional 'hello world' program with an OSPool twist. 
In order to demonstrate the distributed resource nature of the OSPool HTC system, we will produce a 'Hello OSPool' message 3 times, where each message is produced within its own 'job'. Since you will not run execution commands yourself (HTCondor will do it for you), you need to tell HTCondor how to run the jobs for you in the form of a submit file, which describes the set of jobs. Note: You must be logged into an OSPool Access Point for the following example to work. 1. Prepare an executable \u00b6 First, create the executable script you would like HTCondor to run. For our example, copy the text below and paste it into a file called hello-ospool.sh (we recommend using a command line text editor) in your home directory. #!/bin/bash # # hello-ospool.sh # My very first OSPool job # # print a 'hello' message to the job's terminal output: echo \"Hello OSPool from Job $1 running on `whoami`@`hostname`\" # # keep this job running for a few minutes so you'll see it in the queue: sleep 180 This script would be run locally on our terminal by typing hello-ospool.sh . However, to run it on the OSPool, we will use our HTCondor submit file to run the hello-ospool.sh executable and to automatically pass different arguments to our script. 2. Prepare a submit file \u00b6 Create your HTCondor submit file, which you will use to tell HTCondor what job to run and how to run it. Copy the text below, and paste it into a file called hello-ospool.sub . This is the file you will submit to HTCondor to describe your jobs (known as the submit file). # hello-ospool.sub # My very first HTCondor submit file # Specify your executable (single binary or a script that runs several # commands) and arguments to be passed to jobs. # $(Process) will be an integer number for each job, starting with \"0\" # and increasing for the relevant number of jobs. executable = hello-ospool.sh arguments = $(Process) # Specify the name of the log, standard error, and standard output (or \"screen output\") files. Wherever you see $(Cluster), HTCondor will insert the # queue number assigned to this set of jobs at the time of submission. log = hello-ospool_$(Cluster)_$(Process).log error = hello-ospool_$(Cluster)_$(Process).err output = hello-ospool_$(Cluster)_$(Process).out # These lines *would* be used if there were any other files # needed for the executable to use. # transfer_input_files = file1,/absolute/pathto/file2,etc # Specify Job duration category as \"Medium\" (expected runtime <10 hr) or \"Long\" (expected runtime <20 hr). +JobDurationCategory = \"Medium\" # Tell HTCondor requirements (e.g., operating system) your job needs, # what amount of compute resources each job will need on the computer where it runs. requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_cpus = 1 request_memory = 1GB request_disk = 5GB # Tell HTCondor to run 3 instances of our job: queue 3 By using the \"$1\" variable in our hello-ospool.sh executable, we are telling the script to use the value of the first argument passed to it. Therefore, when HTCondor runs this executable, it will pass the $(Process) value from the submit file for each job, and hello-ospool.sh will substitute that value for \"$1\". More information on special variables like \"$1\", \"$2\", and \"$@\" can be found here . 
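As a quick aside, this is how positional variables behave in bash, using a throwaway script (this print-arg.sh demo is hypothetical and not part of the example above): [user@apXX ~]$ cat print-arg.sh #!/bin/bash echo First argument: $1 [user@apXX ~]$ bash print-arg.sh 7 First argument: 7 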
Additionally, the JobDurationCategory must be listed anywhere prior to the final \u2018queue\u2019 statement of the submit file, as below: +JobDurationCategory = \u201cMedium\u201d JobDurationCategory Expected Job Duration Maximum Allowed Duration Medium (default) <10 hrs 20 hrs Long <20 hrs 40 hrs If the user does not indicate a JobDurationCategory in the submit file, the relevant job(s) will be labeled as Medium by default. Batches with jobs that individually execute for longer than 20 hours are not a good fit for the OSPool . We encourage users with long jobs to implement self-checkpoint when possible. Why Job Duration Categories? To maximize the value of the capacity contributed by the different organizations to the OSPool, users are requested to identify a duration categories for their jobs. These categories should be selected based upon test jobs (run on the OSPool) and allow for more effective scheduling of the capacity contributed to the pool. Every job submitted from an OSG-managed access point must be labeled with a Job Duration Category upon submission. By knowing the expected duration, the OSG is working to be able to direct longer-running jobs to resources that are faster and are interrupted less, while shorter jobs can run across more of the OSPool for better overall throughput. Jobs with single executions longer than 20 hours in tests on the OSPool should not be submitted , without self-checkpointing . 3. Submit the job \u00b6 Now, submit your job to HTCondor\u2019s queue by using the command condor_submit and providing the name of the submit file you created above: [alice@ap40]$ condor_submit hello-ospool.sub The condor_submit command actually submits your jobs to HTCondor. If all goes well, you will see output from the condor_submit command that appears as: Submitting job(s)... 3 job(s) submitted to cluster 36062145. 4. Check the job status \u00b6 To check on the status of your jobs in the queue, run the following command: [alice@ap40]$ condor_q The output of `condor_q` should look like this: -- Schedd: ap40.uw.osg-htc.org : <128.104.101.92:9618?... @ 04/14/23 15:35:17 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS Alice ID: 3606214 4/14 12:31 2 1 _ 3 36062145.0-2 3 jobs; 2 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended By default, condor_q shows jobs grouped into batches by batch name (if provided), or executable name. To show all of your jobs on individual lines, add the -nobatch option. To see a live update of the status of your jobs, use the command condor_watch_q . (To exit the live view, use the keyboard shortcut Ctrl + C .) 5. Examine the results \u00b6 When your jobs complete after a few minutes, they'll leave the queue. 
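Once jobs finish, they no longer appear in condor_q ; to look back at recently completed jobs, the condor_history command can be used instead (a minimal sketch using the cluster number from this example; the exact columns displayed may vary with your HTCondor version): [alice@ap40]$ condor_history 36062145 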
If you do a listing of your /home directory with the command ls -l , you should see something like: [alice@submit]$ ls -l total 28 -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_0.err -rw-r--r-- 1 alice alice 60 Apr 14 15:37 hello-ospool_36062145_0.out -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_0.log -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_1.err -rw-r--r-- 1 alice alice 60 Apr 14 15:37 hello-ospool_36062145_1.out -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_1.log -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_2.err -rw-r--r-- 1 alice alice 60 Apr 14 15:37 hello-ospool_36062145_2.out -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_2.log -rw-rw-r-- 1 alice alice 241 Apr 14 15:33 hello-ospool.sh -rw-rw-r-- 1 alice alice 1387 Apr 14 15:33 hello-ospool.sub Useful information is provided in the user log, standard error, and standard output files. HTCondor creates a transaction log of everything that happens to your jobs. Looking at the log file is very useful for debugging problems that may arise. Additionally, at the completion of a job, the .log file will print a table describing the amount of compute resources requested in the submit file compared to the amount the job actually used. An excerpt from hello-ospool_36062145_0.log produced due to the submission of the 3 jobs will look like this: \u2026 005 (36062145.000.000) 2023-04-14 12:36:09 Job terminated. 
The log file also indicates how much memory and disk each job used, so that you can first test a few jobs before submitting many more with more accurate request values. When you request too little, your jobs will be terminated by HTCondor and set to \"hold\" status to flag that job as requiring your attention. To learn more about why a job as gone on hold, use condor_q -hold . When you request too much, your jobs may not match to as many available \"slots\" as they could otherwise, and your overall throughput will suffer. You Have the Basics, Now Run Your OWN Jobs \u00b6 Check out the HTCondor Job Submission Intro video , which introduces various ways to specify differences between jobs (e.g. parameters, different input filenames, etc.), ways to organize your data, etc. and our full set of OSPool User Guides to begin submitting your own jobs.","title":"Overview: Submit Jobs to the OSPool using HTCondor"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#overview-submit-jobs-to-the-ospool-using-htcondor","text":"","title":"Overview: Submit Jobs to the OSPool using HTCondor"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#purpose","text":"This guide discusses the mechanics of creating and submitting jobs to the OSPool using HTCondor.","title":"Purpose"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#ospool-workflow-overview","text":"The process of running computational workflows on OSG resources follows the following outline: Terminology: Access point is where you login and stage your data, executables/scripts, and software to use in jobs. HTCondor is a job scheduling software that will run your jobs out on the OSPool execution points. All jobs must be submitted to HTCondor to run out on the OSPool. The Open Science Pool (OSPool) is the set of resources your job runs on. It is composed of execution points, as well as other technologies, that compose the cpus, memory, and disk space that will run the computations of your jobs.","title":"OSPool Workflow Overview"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#run-jobs-on-the-ospool-using-htcondor","text":"We are going to run the traditional 'hello world' program with a OSPool twist. In order to demonstrate the distributed resource nature of OSPool HTC System, we will produce a 'Hello CHTC' message 3 times, where each message is produced within is its own 'job'. Since you will not run execution commands yourself (HTCondor will do it for you), you need to tell HTCondor how to run the jobs for you in the form of a submit file, which describes the set of jobs. Note: You must be logged into an OSPool Access Point for the following example to work.","title":"Run Jobs on the OSPool using HTCondor"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#1-prepare-an-executable","text":"First, create the executable script you would like HTCondor to run. For our example, copy the text below and paste it into a file called hello-ospool.sh (we recommend using a command line text editor) in your home directory. #!/bin/bash # # hello-ospool.sh # My very first OSPool job # # print a 'hello' message to the job's terminal output: echo \"Hello OSPool from Job $1 running on `whoami`@`hostname`\" # # keep this job running for a few minutes so you'll see it in the queue: sleep 180 This script would be run locally on our terminal by typing hello-ospool.sh . 
However, to run it on the OSPool, we will use our HTCondor submit file to run the hello-ospool.sh executable and to automatically pass different arguments to our script.","title":"1. Prepare an executable"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#2-prepare-a-submit-file","text":"Create your HTCondor submit file, which you will use to tell HTCondor what job to run and how to run it. Copy the text below, and paste it into a file called hello-ospool.sub . This is the file you will submit to HTCondor to describe your jobs (known as the submit file). # hello-ospool.sub # My very first HTCondor submit file # Specify your executable (single binary or a script that runs several # commands) and arguments to be passed to jobs. # $(Process) will be an integer number for each job, starting with \"0\" # and increasing for the relevant number of jobs. executable = hello-ospool.sh arguments = $(Process) # Specify the name of the log, standard error, and standard output (or \"screen output\") files. Wherever you see $(Cluster), HTCondor will insert the # queue number assigned to this set of jobs at the time of submission. log = hello-ospool_$(Cluster)_$(Process).log error = hello-ospool_$(Cluster)_$(Process).err output = hello-ospool_$(Cluster)_$(Process).out # These lines *would* be used if there were any other files # needed for the executable to use. # transfer_input_files = file1,/absolute/pathto/file2,etc # Specify Job duration category as \"Medium\" (expected runtime <10 hr) or \"Long\" (expected runtime <20 hr). +JobDurationCategory = \"Medium\" # Tell HTCondor requirements (e.g., operating system) your job needs, # what amount of compute resources each job will need on the computer where it runs. requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_cpus = 1 request_memory = 1GB request_disk = 5GB # Tell HTCondor to run 3 instances of our job: queue 3 By using the \"$1\" variable in our hello-ospool.sh executable, we are telling the script to use the value of the first argument passed to it. Therefore, when HTCondor runs this executable, it will pass the $(Process) value from the submit file for each job, and hello-ospool.sh will substitute that value for \"$1\". More information on special variables like \"$1\", \"$2\", and \"$@\" can be found here . Additionally, the JobDurationCategory must be listed anywhere prior to the final \u2018queue\u2019 statement of the submit file, as below: +JobDurationCategory = \u201cMedium\u201d JobDurationCategory Expected Job Duration Maximum Allowed Duration Medium (default) <10 hrs 20 hrs Long <20 hrs 40 hrs If the user does not indicate a JobDurationCategory in the submit file, the relevant job(s) will be labeled as Medium by default. Batches with jobs that individually execute for longer than 20 hours are not a good fit for the OSPool . We encourage users with long jobs to implement self-checkpointing when possible. Why Job Duration Categories? To maximize the value of the capacity contributed by the different organizations to the OSPool, users are requested to identify a duration category for their jobs. These categories should be selected based upon test jobs (run on the OSPool) and allow for more effective scheduling of the capacity contributed to the pool. Every job submitted from an OSG-managed access point must be labeled with a Job Duration Category upon submission. 
By knowing the expected duration, the OSG is working to be able to direct longer-running jobs to resources that are faster and are interrupted less, while shorter jobs can run across more of the OSPool for better overall throughput. Jobs with single executions longer than 20 hours in tests on the OSPool should not be submitted , without self-checkpointing .","title":"2. Prepare a submit file"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#3-submit-the-job","text":"Now, submit your job to HTCondor\u2019s queue by using the command condor_submit and providing the name of the submit file you created above: [alice@ap40]$ condor_submit hello-ospool.sub The condor_submit command actually submits your jobs to HTCondor. If all goes well, you will see output from the condor_submit command that appears as: Submitting job(s)... 3 job(s) submitted to cluster 36062145.","title":"3. Submit the job"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#4-check-the-job-status","text":"To check on the status of your jobs in the queue, run the following command: [alice@ap40]$ condor_q The output of `condor_q` should look like this: -- Schedd: ap40.uw.osg-htc.org : <128.104.101.92:9618?... @ 04/14/23 15:35:17 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS Alice ID: 3606214 4/14 12:31 2 1 _ 3 36062145.0-2 3 jobs; 2 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended By default, condor_q shows jobs grouped into batches by batch name (if provided), or executable name. To show all of your jobs on individual lines, add the -nobatch option. To see a live update of the status of your jobs, use the command condor_watch_q . (To exit the live view, use the keyboard shortcut Ctrl + C .)","title":"4. Check the job status"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#5-examine-the-results","text":"When your jobs complete after a few minutes, they'll leave the queue. If you do a listing of your /home directory with the command ls -l , you should see something like: [alice@submit]$ ls -l total 28 -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_0.err -rw-r--r-- 1 alice alice 60 Apr 14 15:37 hello-ospool_36062145_0.out -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_0.log -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_1.err -rw-r--r-- 1 alice alice 60 Apr 14 15:37 hello-ospool_36062145_1.out -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_1.log -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_2.err -rw-r--r-- 1 alice alice 60 Apr 14 15:37 hello-ospool_36062145_2.out -rw-r--r-- 1 alice alice 0 Apr 14 15:37 hello-ospool_36062145_2.log -rw-rw-r-- 1 alice alice 241 Apr 14 15:33 hello-ospool.sh -rw-rw-r-- 1 alice alice 1387 Apr 14 15:33 hello-ospool.sub Useful information is provided in the user log, standard error, and standard output files. HTCondor creates a transaction log of everything that happens to your jobs. Looking at the log file is very useful for debugging problems that may arise. Additionally, at the completion of a job, the .log file will print a table describing the amount of compute resources requested in the submit file compared to the amount the job actually used. An excerpt from hello-ospool_36062145_0.log produced due the submission of the 3 jobs will looks like this: \u2026 005 (36062145.000.000) 2023-04-14 12:36:09 Job terminated. 
(1) Normal termination (return value 0) Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage 72 - Run Bytes Sent By Job 265 - Run Bytes Received By Job 72 - Total Bytes Sent By Job 265 - Total Bytes Received By Job Partitionable Resources : Usage Request Allocated Cpus : 0 1 1 Disk (KB) : 118 1024 1810509281 Memory (MB) : 54 1024 1024 Job terminated of its own accord at 2023-04-14T17:36:09Z with exit-code 0. And, if you look at one of the output files, you should see something like this: Hello OSPool from Job 0 running on alice@e389.chtc.wisc.edu. Congratulations. You've run your first jobs in the OSPool!","title":"5. Examine the results"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#important-workflow-elements","text":"A. Removing Jobs To remove a specific job, use condor_rm . Example: [alice@ap40]$ condor_rm 845638.0 B. Importance of Testing & Resource Optimization Examine Job Success Within the log file, you can see information about the completion of each job, including a system error code (as seen in \"return value 0\"). You can use this code, as well as information in your \".err\" file and other output files, to determine what issues your job(s) may have had, if any. Improve Efficiency Researchers with input and output files greater than 1GB, should store them in their /protected directory instead of /home to improve file transfer efficiency. See our data transfer guides to learn more. Get the Right Resource Requests Be sure to always add or modify the following lines in your submit files, as appropriate, and after running a few tests. Submit file entry Resources your jobs will run on request_cpus = cpus Matches each job to a computer \"slot\" with at least this many CPU cores. request_disk = kilobytes Matches each job to a slot with at least this much disk space, in units of KB. request_memory = megabytes Matches each job to a slot with at least this much memory (RAM), in units of MB. Determining Memory and Disk Requirements. The log file also indicates how much memory and disk each job used, so that you can first test a few jobs before submitting many more with more accurate request values. When you request too little, your jobs will be terminated by HTCondor and set to \"hold\" status to flag that job as requiring your attention. To learn more about why a job as gone on hold, use condor_q -hold . When you request too much, your jobs may not match to as many available \"slots\" as they could otherwise, and your overall throughput will suffer.","title":"Important Workflow Elements"},{"location":"htc_workloads/workload_planning/htcondor_job_submission/#you-have-the-basics-now-run-your-own-jobs","text":"Check out the HTCondor Job Submission Intro video , which introduces various ways to specify differences between jobs (e.g. parameters, different input filenames, etc.), ways to organize your data, etc. and our full set of OSPool User Guides to begin submitting your own jobs.","title":"You Have the Basics, Now Run Your OWN Jobs"},{"location":"htc_workloads/workload_planning/jobdurationcategory/","text":"Indicate the Duration Category of Your Jobs \u00b6 Why Job Duration Categories? \u00b6 To maximize the value of the capacity contributed by the different organizations to the Open Science Pool (OSPool), users are requested to identify one of three duration categories for their jobs. 
These categories should be selected based upon test jobs (run on the OSPool) and allow for more effective scheduling of the capacity contributed to the pool, honoring the community\u2019s shared responsibility for efficient use of the contributed resources. As a reminder, jobs with single executions longer than 20 hours in tests on the OSPool should not be submitted , without self-checkpointing (see further below). Every job submitted from an OSG-managed access point must be labeled with a Job Duration Category upon submission. By knowing the expected duration, the OSG is working to be able to direct longer-running jobs to resources that are faster and are interrupted less, while shorter jobs can run across more of the OSPool for better overall throughput. Specify a Job Duration Category \u00b6 The JobDurationCategory must be listed anywhere prior to the final \u2018queue\u2019 statement of the submit file, as below: +JobDurationCategory = \u201cLong\u201d JobDurationCategory Expected Job Duration Maximum Allowed Duration Medium (default) <10 hrs 20 hrs Long <20 hrs 40 hrs If the user does not indicate a JobDurationCategory in the submit file, the relevant job(s) will be labeled as Medium by default. Batches with jobs that individually execute for longer than 20 hours are not a good fit for the OSPool . If your jobs are self-checkpointing, see \u201cSelf-Checkpointing Jobs\u201d, further below. Test Jobs for Expected Duration \u00b6 As part of the preparation for running a full-scale job batch , users should test a subset (first ~10, then 100 or 1000) of their jobs with the Medium or Long categories, and then review actual job execution durations in the job log files. If the user expects potentially significant variation in job durations within a single batch, a longer JobDurationCategory may be warranted relative to the duration of test jobs. Or, if variations in job duration may be predictable, the user may choose to submit different subsets of jobs with different Job Duration Categories. OSG Facilitators have a lot of experience with approaches for achieving shorter jobs (e.g. breaking up work into shorter, more numerous jobs; self-checkpointing; automated sequential job submissions; etc.) Get in touch, and we'll help you work through a solution!! support@osg-htc.org Maximum Allowed Duration \u00b6 Jobs in each category will be placed on hold in the queue if they run longer than their Maximum Allowed Duration (starting Tuesday, Nov 16, 2021). In that case, the user may remove and resubmit the jobs, identifying a longer category. Jobs that test as longer than 20 hours are not a good fit for the OSPool resources, and should not be submitted prior to contacting support@osg-htc.org to discuss options . The Maximum Allowed Durations are longer than the Expected Job Durations in order to accommodate CPU speed variations across OSPool computing resources, as well as other contributions to job duration that may not be apparent in smaller test batches. Similarly, Long jobs held after running longer than 40 hours represent significant wasted capacity and should never be released or resubmitted by the user without first taking steps to modify and test the jobs to run shorter. Self-Checkpointing Jobs \u00b6 Jobs that self-checkpoint at least every 10 hours are an excellent way for users to run jobs that would otherwise be longer in total execution time than the durations listed above. Jobs that complete a checkpoint at least as often as allowed for their JobDurationCategory will not be held. 
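On the executable side, the core idea of self-checkpointing is simply to write progress to a file periodically and to resume from that file when the script starts; below is a minimal bash sketch with hypothetical file names (the HTCondor submit-side settings for transferring and resuming from checkpoint files are covered in the self-checkpointing documentation and are not shown here):
#!/bin/bash
# checkpoint-sketch.sh (hypothetical) - resume from a saved state file if one exists
if [ -f progress.txt ]; then
    i=$(cat progress.txt)
else
    i=0
fi
while [ $i -lt 1000 ]; do
    # ... one unit of real work for iteration $i would go here ...
    i=$((i + 1))
    # record progress periodically so an interrupted run can pick up where it left off
    if [ $((i % 100)) -eq 0 ]; then
        echo $i > progress.txt
    fi
done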
We are excited to help you think through and implement self-checkpointing. Get in touch via support@osg-htc.org if you have questions. :)","title":"Jobdurationcategory"},{"location":"htc_workloads/workload_planning/jobdurationcategory/#indicate-the-duration-category-of-your-jobs","text":"","title":"Indicate the Duration Category of Your Jobs"},{"location":"htc_workloads/workload_planning/jobdurationcategory/#why-job-duration-categories","text":"To maximize the value of the capacity contributed by the different organizations to the Open Science Pool (OSPool), users are requested to identify one of three duration categories for their jobs. These categories should be selected based upon test jobs (run on the OSPool) and allow for more effective scheduling of the capacity contributed to the pool, honoring the community\u2019s shared responsibility for efficient use of the contributed resources. As a reminder, jobs with single executions longer than 20 hours in tests on the OSPool should not be submitted , without self-checkpointing (see further below). Every job submitted from an OSG-managed access point must be labeled with a Job Duration Category upon submission. By knowing the expected duration, the OSG is working to be able to direct longer-running jobs to resources that are faster and are interrupted less, while shorter jobs can run across more of the OSPool for better overall throughput.","title":"Why Job Duration Categories?"},{"location":"htc_workloads/workload_planning/jobdurationcategory/#specify-a-job-duration-category","text":"The JobDurationCategory must be listed anywhere prior to the final \u2018queue\u2019 statement of the submit file, as below: +JobDurationCategory = \u201cLong\u201d JobDurationCategory Expected Job Duration Maximum Allowed Duration Medium (default) <10 hrs 20 hrs Long <20 hrs 40 hrs If the user does not indicate a JobDurationCategory in the submit file, the relevant job(s) will be labeled as Medium by default. Batches with jobs that individually execute for longer than 20 hours are not a good fit for the OSPool . If your jobs are self-checkpointing, see \u201cSelf-Checkpointing Jobs\u201d, further below.","title":"Specify a Job Duration Category"},{"location":"htc_workloads/workload_planning/jobdurationcategory/#test-jobs-for-expected-duration","text":"As part of the preparation for running a full-scale job batch , users should test a subset (first ~10, then 100 or 1000) of their jobs with the Medium or Long categories, and then review actual job execution durations in the job log files. If the user expects potentially significant variation in job durations within a single batch, a longer JobDurationCategory may be warranted relative to the duration of test jobs. Or, if variations in job duration may be predictable, the user may choose to submit different subsets of jobs with different Job Duration Categories. OSG Facilitators have a lot of experience with approaches for achieving shorter jobs (e.g. breaking up work into shorter, more numerous jobs; self-checkpointing; automated sequential job submissions; etc.) Get in touch, and we'll help you work through a solution!! support@osg-htc.org","title":"Test Jobs for Expected Duration"},{"location":"htc_workloads/workload_planning/jobdurationcategory/#maximum-allowed-duration","text":"Jobs in each category will be placed on hold in the queue if they run longer than their Maximum Allowed Duration (starting Tuesday, Nov 16, 2021). 
In that case, the user may remove and resubmit the jobs, identifying a longer category. Jobs that test as longer than 20 hours are not a good fit for the OSPool resources, and should not be submitted prior to contacting support@osg-htc.org to discuss options . The Maximum Allowed Durations are longer than the Expected Job Durations in order to accommodate CPU speed variations across OSPool computing resources, as well as other contributions to job duration that may not be apparent in smaller test batches. Similarly, Long jobs held after running longer than 40 hours represent significant wasted capacity and should never be released or resubmitted by the user without first taking steps to modify and test the jobs to run shorter.","title":"Maximum Allowed Duration"},{"location":"htc_workloads/workload_planning/jobdurationcategory/#self-checkpointing-jobs","text":"Jobs that self-checkpoint at least every 10 hours are an excellent way for users to run jobs that would otherwise be longer in total execution time than the durations listed above. Jobs that complete a checkpoint at least as often as allowed for their JobDurationCategory will not be held. We are excited to help you think through and implement self-checkpointing. Get in touch via support@osg-htc.org if you have questions. :)","title":"Self-Checkpointing Jobs"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/","text":"Determining the Amount of Resources to Request in a Submit File \u00b6 Learning Objectives \u00b6 This guide discuses the following: Best practices for testing jobs and scaling up your analysis. How to determine the amount of resources (CPU, memory, disk space) to request in a submit file. Overview \u00b6 Much of HTCondor's power comes from the ability to run a large number of jobs simultaneously. To optimize your work with a high-throughput computing (HTC) approach, you will need to test and optimize the resource requests of those jobs to only request the amount of memory, disk, and cpus truly needed. This is an important practice that will maximize your throughput by optimizing the number of potential 'slots' in the OSPool that your jobs can match to, reducing the overall turnaround time for completing a whole batch. This guide will describe best practices and general tips for testing your job resource requests before scaling up to submit your full set of jobs. Additional information is also available from the following \"Introduction to High Throughput Computing with HTCondor\" 2020 OSG Virtual Pilot School lecture video: Always Start With Test Jobs \u00b6 Submitting test jobs is an important first step for optimizing the resource requests of your jobs. We always recommend submitting a few (3-10) test jobs first before scaling up. If you plan to submit thousands of jobs, you may even want to run an intermediate test of 100-1,000 jobs to catch any failures or holds that mean your jobs have additional requirements they need to specify. Some general tips for test jobs: Select smaller data sets or subsets of data for your first test jobs. Using smaller data will keep the resource needs of your jobs low which will help get test jobs to start and complete sooner, when you're just making sure that your submit file and other logistical aspects of jobs submission are as you want them. If possible, submit test jobs that will reproduce results you've gotten using another system. 
This approach can be used as a good \"sanity check\" as you'll be able to compare the results of the test to those previously obtained. After initial tests complete successfully, scale up to larger or full-size data sets; if your jobs span a range of input file sizes, submit tests using the smallest and largest inputs to examine the range of resources that these jobs may need. Give your test jobs and associated HTCondor log , error , output , and submit files meaningful names so you know which results refer to which tests. Requesting CPUs, Memory, and Disk Space in the HTCondor Submit File \u00b6 In the HTCondor submit file, you must explicitly request the number of CPUs (i.e. cores), and the amount of disk and memory that the job needs to complete successfully, and identify a JobDurationCategory . When you submit a job for the first time, you may not know just how much to request and that's OK. Below are some suggestions for making resource requests for initial test jobs. For requesting CPU cores start by requesting a single cpu. With single-cpu jobs, you will see your jobs start sooner. Ultimately you will be able to achieve greater throughput with single cpus jobs compared to jobs that request and use multiple cpus. Keep in mind, requesting more CPU cores for a job does not mean that your jobs will use more cpus. Rather, you want to make sure that your CPU request matches the number of cores (i.e. 'threads' or 'processes') that you expect your software to use. (Most softwares only use 1 CPU core, by default.) There is limited support for multicore work in OSG. To learn more, see our guide on Multicore Jobs Depending on how long you expect your test jobs to take on a single core, you may need to identify a non-default JobDurationCategory , or consider implementing self-checkpointing. To inform initial disk requests always look at the size of your input files. At a minimum, you need to request enough disk to support all of the input files, executable, and the output you expect, but don't forget that the standard 'error' and 'output' files you specify will capture 'terminal' output that may add up, too. If many of your input and output files are compressed (i.e. zipped or tarballs) you will need to factor that into your estimates for disk usage as these files will take up additional space once uncompressed in the job. For your initial tests it is OK to request more disk than your job may need so that the test completes successfully. The key is to adjust disk requests for subsequent jobs based on the results of these test jobs. Estimating memory requests can sometimes be tricky. If you've performed the same or similar work on another computer, consider using the amount of memory (i.e. RAM) from that computer as a starting point. For instance, most laptop computers these days will have 8 or 16 GB of memory, which is okay to start with if you know a single job will succeed on your laptop. For your initial tests it is OK to request more memory than your job may need so that the test completes successfully. The key is to adjust memory requests for subsequent jobs based on the results of these test jobs. If you find that memory usage will vary greatly across a batch of jobs, we can assist you with creating dynamic memory requests in your submit files. Optimize Job Resource Requests For Subsequent Jobs \u00b6 As always, reviewing the HTCondor log file from past jobs is a great way to learn about the resource needs of your jobs. 
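The values you adjust after reviewing those logs are the request_ lines of the submit file; a minimal sketch with placeholder values:

# resource request lines in a submit file (placeholder values -- adjust
# based on the Usage column reported in your test jobs' .log files)
request_cpus   = 1
request_memory = 2GB
request_disk   = 4GB

Start with generous values for test jobs, then tighten them to slightly above the observed usage.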
Optimizing the resources requested for each job may help your job run faster and achieve more throughput. HTCondor will report the memory, disk, and CPU usage of your jobs at the end of the HTCondor .log file. The amount of each resource requested in the submit file is listed under the \"Request\" column, and the amount of each resource actually used to complete the job is provided in the \"Usage\" column. For example: Partitionable Resources : Usage Request Allocated Cpus : 1 1 Disk (KB) : 12 1000000 26703078 Memory (MB) : 0 1000 1000 One quick option to query your log files is to use the Unix tool grep . For example: [user@login]$ grep \"Disk (KB)\" my-job.log The above will return all lines in my-job.log that report the disk usage, request, and allocation of all jobs reported in that log file. Alternatively, condor_history can be used to query details from recently completed job submissions. HTCondor's history is continuously updated with information from new jobs, so condor_history is best run shortly after the jobs of interest enter/leave the queue. Submit Multiple Jobs Using A Single Submit File \u00b6 Once you have a single test job that completes successfully, the next step is to submit a small batch of test jobs (e.g. 5 or 10 jobs) using a single submit file . Use this small-scale multi-job submission test to ensure that all jobs complete successfully, produce the desired output, and do not conflict with each other when submitted together. Once you are confident that the jobs will complete as desired, then scale up to submitting the entire set of jobs. Monitoring Job Status and Obtaining Run Information \u00b6 Gathering information about how, what, and where a job ran can be important for both troubleshooting and optimizing a workflow. The following commands are a great way to learn more about your jobs: Command Description condor_q Shows the queue information for your jobs. Includes information such as batch name and total jobs. condor_q -l Prints all information related to a job including attributes and run information about a job in the queue. Output includes JobDurationCategory , ServerTime , SubmitFile , etc. Also works with condor_history . condor_q <JobID> -af <Attribute> Prints information about an attribute or list of attributes for a single job using the autoformat -af flag. The list of possible attributes can be found using condor_q -l . Also works with condor_history . condor_q -constraint '<Attribute> == \"<value>\"' The -constraint flag allows users to find all jobs with a certain value for a given parameter. This flag supports searching by more than one parameter and different operators (e.g. =!= ). Also works with condor_history . condor_q -better-analyze <JobID> -pool <pool name> Shows a list of the number of slots matching a job's requirements. For more information, see Troubleshooting Job Errors . Additional condor_q flags involved in optimizing and troubleshooting jobs include: Flag Description -nobatch Combined with condor_q , this flag will list jobs individually and not by batch. -hold Show only jobs in the \"on hold\" state and the reason for that. An action from the user is usually needed to resolve the problem. -run Show your running jobs and related info, like how much time they have been running, where they are running, etc. -dag Organize condor_q output by DAG. More information about the commands and flags above can be found in the HTCondor manual . 
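For instance, a few concrete invocations of the commands summarized above (the job ID and attribute names are illustrative):

# your jobs, listed individually rather than grouped by batch
condor_q -nobatch

# only held jobs, with the hold reason
condor_q -hold

# selected attributes for one job (replace 12345.0 with a real job ID)
condor_q 12345.0 -af JobDurationCategory RequestMemory

# the same query against recently completed jobs
condor_history 12345.0 -af JobDurationCategory RequestMemory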
Avoid Exceeding Disk Quotas in /home and /protected \u00b6 To prevent errors or workflow interruption, be sure to estimate the input and output needed for all of your concurrently running jobs. By default, after your job terminates HTCondor will transfer back any new or modified files from the top-level directory where the job ran, back to your /home directory. Efficiently manage output by including steps to remove intermediate and/or unnecessary files as part of your job. Workflow Management \u00b6 To help manage complicated workflows, consider a workflow manager such as HTCondor's built-in DAGman or the HTCondor-compatible Pegasus workflow tool.","title":"Determining the Amount of Resources to Request in a Submit File "},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#determining-the-amount-of-resources-to-request-in-a-submit-file","text":"","title":"Determining the Amount of Resources to Request in a Submit File"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#learning-objectives","text":"This guide discuses the following: Best practices for testing jobs and scaling up your analysis. How to determine the amount of resources (CPU, memory, disk space) to request in a submit file.","title":"Learning Objectives"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#overview","text":"Much of HTCondor's power comes from the ability to run a large number of jobs simultaneously. To optimize your work with a high-throughput computing (HTC) approach, you will need to test and optimize the resource requests of those jobs to only request the amount of memory, disk, and cpus truly needed. This is an important practice that will maximize your throughput by optimizing the number of potential 'slots' in the OSPool that your jobs can match to, reducing the overall turnaround time for completing a whole batch. This guide will describe best practices and general tips for testing your job resource requests before scaling up to submit your full set of jobs. Additional information is also available from the following \"Introduction to High Throughput Computing with HTCondor\" 2020 OSG Virtual Pilot School lecture video:","title":"Overview"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#always-start-with-test-jobs","text":"Submitting test jobs is an important first step for optimizing the resource requests of your jobs. We always recommend submitting a few (3-10) test jobs first before scaling up. If you plan to submit thousands of jobs, you may even want to run an intermediate test of 100-1,000 jobs to catch any failures or holds that mean your jobs have additional requirements they need to specify. Some general tips for test jobs: Select smaller data sets or subsets of data for your first test jobs. Using smaller data will keep the resource needs of your jobs low which will help get test jobs to start and complete sooner, when you're just making sure that your submit file and other logistical aspects of jobs submission are as you want them. If possible, submit test jobs that will reproduce results you've gotten using another system. This approach can be used as a good \"sanity check\" as you'll be able to compare the results of the test to those previously obtained. After initial tests complete successfully, scale up to larger or full-size data sets; if your jobs span a range of input file sizes, submit tests using the smallest and largest inputs to examine the range of resources that these jobs may need. 
Give your test jobs and associated HTCondor log , error , output , and submit files meaningful names so you know which results refer to which tests.","title":"Always Start With Test Jobs"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#requesting-cpus-memory-and-disk-space-in-the-htcondor-submit-file","text":"In the HTCondor submit file, you must explicitly request the number of CPUs (i.e. cores), and the amount of disk and memory that the job needs to complete successfully, and identify a JobDurationCategory . When you submit a job for the first time, you may not know just how much to request and that's OK. Below are some suggestions for making resource requests for initial test jobs. For requesting CPU cores start by requesting a single cpu. With single-cpu jobs, you will see your jobs start sooner. Ultimately you will be able to achieve greater throughput with single cpus jobs compared to jobs that request and use multiple cpus. Keep in mind, requesting more CPU cores for a job does not mean that your jobs will use more cpus. Rather, you want to make sure that your CPU request matches the number of cores (i.e. 'threads' or 'processes') that you expect your software to use. (Most softwares only use 1 CPU core, by default.) There is limited support for multicore work in OSG. To learn more, see our guide on Multicore Jobs Depending on how long you expect your test jobs to take on a single core, you may need to identify a non-default JobDurationCategory , or consider implementing self-checkpointing. To inform initial disk requests always look at the size of your input files. At a minimum, you need to request enough disk to support all of the input files, executable, and the output you expect, but don't forget that the standard 'error' and 'output' files you specify will capture 'terminal' output that may add up, too. If many of your input and output files are compressed (i.e. zipped or tarballs) you will need to factor that into your estimates for disk usage as these files will take up additional space once uncompressed in the job. For your initial tests it is OK to request more disk than your job may need so that the test completes successfully. The key is to adjust disk requests for subsequent jobs based on the results of these test jobs. Estimating memory requests can sometimes be tricky. If you've performed the same or similar work on another computer, consider using the amount of memory (i.e. RAM) from that computer as a starting point. For instance, most laptop computers these days will have 8 or 16 GB of memory, which is okay to start with if you know a single job will succeed on your laptop. For your initial tests it is OK to request more memory than your job may need so that the test completes successfully. The key is to adjust memory requests for subsequent jobs based on the results of these test jobs. If you find that memory usage will vary greatly across a batch of jobs, we can assist you with creating dynamic memory requests in your submit files.","title":"Requesting CPUs, Memory, and Disk Space in the HTCondor Submit File"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#optimize-job-resource-requests-for-subsequent-jobs","text":"As always, reviewing the HTCondor log file from past jobs is a great way to learn about the resource needs of your jobs. Optimizing the resources requested for each job may help your job run faster and achieve more throughput. 
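Besides reading the .log files described below, condor_history can print requested versus used resources directly; one possible query (the attribute list is just a reasonable starting point, and <username> is a placeholder):

# compare requests to actual usage for your five most recent jobs
condor_history <username> -limit 5 -af RequestCpus RequestMemory MemoryUsage RequestDisk DiskUsage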
HTCondor will report the memory, disk, and cpu usage of your jobs at the end of the HTCondor .log file. The amount of each resource requested in the submit file is listed under the \"Request\" column and information about the amount of each resource actually utilized to complete the job is provided in the \"Usage\" column. For example: Partitionable Resources : Usage Request Allocated Cpus : 1 1 Disk (KB) : 12 1000000 26703078 Memory (MB) : 0 1000 1000 One quick option to query your log files is to use the Unix tool grep . For example: [user@login]$ grep \"Disk (KB)\" my-job.log The above will return all lines in my-job.log that report the disk usage, request, and allocation of all jobs reported in that log file. Alternatively, condor_history can be used to query details from recently completed job submissions. HTCondor's history is continuously updating with information from new jobs, so condor_history is best performed shortly after the jobs of interest enter/leave the queue.","title":"Optimize Job Resource Requests For Subsequent Jobs"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#submit-multiple-jobs-using-a-single-submit-file","text":"Once you have a single test job that completes successfully, the next step is to submit a small batch of test jobs (e.g. 5 or 10 jobs) using a single submit file . Use this small-scale multi-job submission test to ensure that all jobs complete successfully, produce the desired output, and do not conflict with each other when submitted together. Once you are confident that the jobs will complete as desired, then scale up to submitting the entire set of jobs.","title":"Submit Multiple Jobs Using A Single Submit File"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#monitoring-job-status-and-obtaining-run-information","text":"Gathering information about how, what, and where a job ran can be important for both troubleshooting and optimizing a workflow. The following commands are a great way to learn more about your jobs: Command Description condor_q Shows the queue information for your jobs. Includes information such as batch name and total jobs. condor_q -l Prints all information related to a job including attributes and run information about a job in the queue. Output includes JobDurationCategory , ServerTime , SubmitFile , etc. Also works with condor_history . condor_q -af Prints information about an attribute or list of attributes for a single job using the autoformat -af flag. The list of possible attributes can be found using condor_q -l . Also works with condor_history . condor_q -constraint ' == \"\"' The -constraint flag allows users to find all jobs with a certain value for a given parameter. This flag supports searching by more than one parameter and different operators (e.g. =!= ). Also works with condor_history . condor_q -better-analyze -pool Shows a list of the number of slots matching a job's requirements. For more information, see Troubleshooting Job Errors . Additional condor_q flags involved in optimizing and troubleshooting jobs include: Flag Description -nobatch Combined with condor_q , this flag will list jobs individually and not by batch. -hold Show only jobs in the \"on hold\" state and the reason for that. An action from the user is expected to solve the problem. -run Show your running jobs and related info, like how much time they have been running, where they are running, etc. -dag Organize condor_q output by DAG. 
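As one concrete illustration of the -constraint flag from the table above (the attribute and value shown are examples only):

# all of your queued jobs labeled with the Long duration category
condor_q -constraint 'JobDurationCategory == \"Long\"'

# all held jobs, printing the job ID and hold reason
condor_q -constraint 'JobStatus == 5' -af ClusterId ProcId HoldReason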
More information about the commands and flags above can be found in the HTCondor manual .","title":"Monitoring Job Status and Obtaining Run Information"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#avoid-exceeding-disk-quotas-in-home-and-protected","text":"To prevent errors or workflow interruption, be sure to estimate the input and output needed for all of your concurrently running jobs. By default, after your job terminates HTCondor will transfer back any new or modified files from the top-level directory where the job ran, back to your /home directory. Efficiently manage output by including steps to remove intermediate and/or unnecessary files as part of your job.","title":"Avoid Exceeding Disk Quotas in /home and /protected"},{"location":"htc_workloads/workload_planning/preparing-to-scale-up/#workflow-management","text":"To help manage complicated workflows, consider a workflow manager such as HTCondor's built-in DAGman or the HTCondor-compatible Pegasus workflow tool.","title":"Workflow Management"},{"location":"htc_workloads/workload_planning/roadmap/","text":"Roadmap to HTC Workload Submission \u00b6 Overview \u00b6 This guide lays out the steps needed to go from logging in to an OSG Access Point to running a full scale high throughput computing (HTC) workload on OSG's Open Science Pool (OSPool) . The steps listed here apply to any new workload submission, whether you are a long-time OSG user or just getting started with your first workload, with helpful links to our documentation pages. This guide assumes that you have applied for an OSG Access Point account and have been approved after meeting with an OSG Research Computing Facilitator. If you don't yet have an account, you can apply for one here or contact us with any questions you have. Learning how to get started on the OSG does not need to end with this document or our guides! Learn about our training opportunities and personal facilitation support in the Getting Help section below. 1. Introduction to the OSPool and OSG Resources \u00b6 The OSG's Open Science Pool is best-suited for computing work that can be run as many, independent tasks, in an approach called \"high throughput computing.\" For more information on what kind of work is a good fit for the OSG, see Is the Open Science Pool for You? . Learn more about the services provided by the OSG in this video: 2. Log on to an OSG Access Point \u00b6 If you have not done so, apply for an account here . A Research Computing Facilitator will contact you within one business day to arrange a meeting to discuss your computational goals and to activate your account. Note that there are multiple classes of access points provided. When your account was activated, you should have been told which access point your account belongs to: Log In to \"uw.osg-htc.org\" Access Points (e.g., ap40.uw.osg-htc.org) If your account is on the uw.osg-htc.org Access Points (e.g., accounts on ap40.uw.osg-htc.org), follow instructions in this guide for logging in: Log In to uw.osg-htc.org Access Points Log In to \"OSG Connect\" Access Points (e.g., ap20.uc.osg-htc.org) If your account is on the OSG Connect Access points (e.g., accounts on ap20.uc.osg-htc.org, ap21.uc.osg-htc.org), follow instructions in this guide for logging in: Log In to OSG Connect Access Points 3. Learn to Submit HTCondor Jobs \u00b6 Computational work is run on the OSPool by submitting it as \u201cjobs\u201d to the HTCondor scheduler. 
Jobs submitted to HTCondor are then scheduled and run on different resources that are part of the Open Science Pool. Before submitting your own computational work, it is important to understand how HTCondor job submission works. The following guides show how to submit basic HTCondor jobs. Overview: Submit Jobs to the OSPool using HTCondor 4. Test a First Job \u00b6 After learning about the basics of HTCondor job submission, you will need to generate your own HTCondor job -- including the software needed by the job and the appropriate mechanism to handle the data. We recommend doing this using a single test job. Prepare your software \u00b6 Software is an integral part of your HTC workflow. Whether you\u2019ve written it yourself, inherited it from your research group, or use common open-source packages, any required executables and libraries will need to be made available to your jobs if they are to run on the OSPool. Read through this overview of Using Software to help you determine the best way to provide your software. We also have the following guides/tutorials for each major software portability approach: To install your own software , begin with the guide on Compiling Software and then complete the Example Software Compilation tutorial . To use precompiled binaries , try the example presented in the AutoDock Vina tutorial and/or the Julia tutorial . To use Apptainer/Singularity/Docker containers for your jobs, see the Create an Apptainer/Singularity Container Image Finally, here are some additional guides specific to some of the most common scripting languages and software tools used on OSG**: Python R Machine Learning BLAST **This is not a complete list. Feel free to search for your software in our Knowledge base . Manage your data \u00b6 The data for your jobs will need to be transferred to each job that runs in the OSPool, and HTCondor has built-in features for getting data to jobs. Our Data Management guide discussed the relevant approaches, when to use them, and where to stage data for each. Assign the Appropriate Job Duration Category \u00b6 Jobs running in the OSPool may be interrupted at any time, and will be re-run by HTCondor, unless a single execution of a job exceeds the allowed duration. Jobs expected to take longer than 10 hours will need to identify themselves as 'Long' according to our Job Duration policies . Remember that jobs expected to take longer than 20 hours are not a good fit for the OSPool (see Is the Open Science Pool for You? ) without implementing self-checkpointing (further below). 5. Scale Up \u00b6 After you have a sample job running successfully, you\u2019ll want to scale up in one or two steps (first run several jobs, before running ALL of them). HTCondor has many useful features that make it easy to submit multiple jobs with the same submit file. Easily submit multiple jobs Scaling up after success with test jobs discusses how to test your jobs for duration, memory and disk usage, and the total amount of space you might need on the 6. Special Use Cases \u00b6 If you think any of the below applies to you, please get in touch and our facilitation team will be happy to discuss your individual case. 
Run sequential workflows of jobs: Workflows with HTCondor's DAGMan Implement self-checkpointing for long jobs: HTCondor Checkpointing Guide Build your own Apptainer container: Create an Apptainer/Singularity Container Image Submit more than 10,000 jobs at once: FAQ, search for 'max_idle' Larger or speciality resource requests: GPUs: GPU Jobs Multiple CPUs: Multicore Jobs Large Memory: Large Memory Jobs Getting Help \u00b6 The OSG Facilitation team is here to help with questions and issues that come up as you work through these roadmap steps. We are available via email, office hours, appointments, and offer regular training opportunities. See our Get Help page and OSG Training page for all the different ways you can reach us. Our purpose is to assist you with achieving your computational goals, so we want to hear from you!","title":"Roadmap to HTC Workload Submission"},{"location":"htc_workloads/workload_planning/roadmap/#roadmap-to-htc-workload-submission","text":"","title":"Roadmap to HTC Workload Submission"},{"location":"htc_workloads/workload_planning/roadmap/#overview","text":"This guide lays out the steps needed to go from logging in to an OSG Access Point to running a full scale high throughput computing (HTC) workload on OSG's Open Science Pool (OSPool) . The steps listed here apply to any new workload submission, whether you are a long-time OSG user or just getting started with your first workload, with helpful links to our documentation pages. This guide assumes that you have applied for an OSG Access Point account and have been approved after meeting with an OSG Research Computing Facilitator. If you don't yet have an account, you can apply for one here or contact us with any questions you have. Learning how to get started on the OSG does not need to end with this document or our guides! Learn about our training opportunities and personal facilitation support in the Getting Help section below.","title":"Overview"},{"location":"htc_workloads/workload_planning/roadmap/#1-introduction-to-the-ospool-and-osg-resources","text":"The OSG's Open Science Pool is best-suited for computing work that can be run as many, independent tasks, in an approach called \"high throughput computing.\" For more information on what kind of work is a good fit for the OSG, see Is the Open Science Pool for You? . Learn more about the services provided by the OSG in this video:","title":"1. Introduction to the OSPool and OSG Resources"},{"location":"htc_workloads/workload_planning/roadmap/#2-log-on-to-an-osg-access-point","text":"If you have not done so, apply for an account here . A Research Computing Facilitator will contact you within one business day to arrange a meeting to discuss your computational goals and to activate your account. Note that there are multiple classes of access points provided. When your account was activated, you should have been told which access point your account belongs to: Log In to \"uw.osg-htc.org\" Access Points (e.g., ap40.uw.osg-htc.org) If your account is on the uw.osg-htc.org Access Points (e.g., accounts on ap40.uw.osg-htc.org), follow instructions in this guide for logging in: Log In to uw.osg-htc.org Access Points Log In to \"OSG Connect\" Access Points (e.g., ap20.uc.osg-htc.org) If your account is on the OSG Connect Access points (e.g., accounts on ap20.uc.osg-htc.org, ap21.uc.osg-htc.org), follow instructions in this guide for logging in: Log In to OSG Connect Access Points","title":"2. 
Log on to an OSG Access Point"},{"location":"htc_workloads/workload_planning/roadmap/#3-learn-to-submit-htcondor-jobs","text":"Computational work is run on the OSPool by submitting it as \u201cjobs\u201d to the HTCondor scheduler. Jobs submitted to HTCondor are then scheduled and run on different resources that are part of the Open Science Pool. Before submitting your own computational work, it is important to understand how HTCondor job submission works. The following guides show how to submit basic HTCondor jobs. Overview: Submit Jobs to the OSPool using HTCondor","title":"3. Learn to Submit HTCondor Jobs"},{"location":"htc_workloads/workload_planning/roadmap/#4-test-a-first-job","text":"After learning about the basics of HTCondor job submission, you will need to generate your own HTCondor job -- including the software needed by the job and the appropriate mechanism to handle the data. We recommend doing this using a single test job.","title":"4. Test a First Job"},{"location":"htc_workloads/workload_planning/roadmap/#prepare-your-software","text":"Software is an integral part of your HTC workflow. Whether you\u2019ve written it yourself, inherited it from your research group, or use common open-source packages, any required executables and libraries will need to be made available to your jobs if they are to run on the OSPool. Read through this overview of Using Software to help you determine the best way to provide your software. We also have the following guides/tutorials for each major software portability approach: To install your own software , begin with the guide on Compiling Software and then complete the Example Software Compilation tutorial . To use precompiled binaries , try the example presented in the AutoDock Vina tutorial and/or the Julia tutorial . To use Apptainer/Singularity/Docker containers for your jobs, see the Create an Apptainer/Singularity Container Image Finally, here are some additional guides specific to some of the most common scripting languages and software tools used on OSG**: Python R Machine Learning BLAST **This is not a complete list. Feel free to search for your software in our Knowledge base .","title":"Prepare your software"},{"location":"htc_workloads/workload_planning/roadmap/#manage-your-data","text":"The data for your jobs will need to be transferred to each job that runs in the OSPool, and HTCondor has built-in features for getting data to jobs. Our Data Management guide discussed the relevant approaches, when to use them, and where to stage data for each.","title":"Manage your data"},{"location":"htc_workloads/workload_planning/roadmap/#assign-the-appropriate-job-duration-category","text":"Jobs running in the OSPool may be interrupted at any time, and will be re-run by HTCondor, unless a single execution of a job exceeds the allowed duration. Jobs expected to take longer than 10 hours will need to identify themselves as 'Long' according to our Job Duration policies . Remember that jobs expected to take longer than 20 hours are not a good fit for the OSPool (see Is the Open Science Pool for You? ) without implementing self-checkpointing (further below).","title":"Assign the Appropriate Job Duration Category"},{"location":"htc_workloads/workload_planning/roadmap/#5-scale-up","text":"After you have a sample job running successfully, you\u2019ll want to scale up in one or two steps (first run several jobs, before running ALL of them). HTCondor has many useful features that make it easy to submit multiple jobs with the same submit file. 
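For example, one common pattern (file and script names here are hypothetical) queues one job per line of a list of inputs:

# multi-job submit file sketch: one job per input file
executable = my_analysis.sh
arguments = $(inputfile)
transfer_input_files = $(inputfile)

log = job_$(Cluster)_$(Process).log
error = job_$(Cluster)_$(Process).err
output = job_$(Cluster)_$(Process).out

request_cpus = 1
request_memory = 2GB
request_disk = 2GB

queue inputfile from input_files.txt

Each line of input_files.txt becomes one job, with $(inputfile) substituted throughout the submit file.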
Easily submit multiple jobs Scaling up after success with test jobs discusses how to test your jobs for duration, memory and disk usage, and the total amount of space you might need on the","title":"5. Scale Up"},{"location":"htc_workloads/workload_planning/roadmap/#6-special-use-cases","text":"If you think any of the below applies to you, please get in touch and our facilitation team will be happy to discuss your individual case. Run sequential workflows of jobs: Workflows with HTCondor's DAGMan Implement self-checkpointing for long jobs: HTCondor Checkpointing Guide Build your own Apptainer container: Create an Apptainer/Singularity Container Image Submit more than 10,000 jobs at once: FAQ, search for 'max_idle' Larger or specialty resource requests: GPUs: GPU Jobs Multiple CPUs: Multicore Jobs Large Memory: Large Memory Jobs","title":"6. Special Use Cases"},{"location":"htc_workloads/workload_planning/roadmap/#getting-help","text":"The OSG Facilitation team is here to help with questions and issues that come up as you work through these roadmap steps. We are available via email, office hours, appointments, and offer regular training opportunities. See our Get Help page and OSG Training page for all the different ways you can reach us. Our purpose is to assist you with achieving your computational goals, so we want to hear from you!","title":"Getting Help"},{"location":"overview/account_setup/ap20-ap21-migration/","text":"Migrating to New Access Points From login04 , login05 \u00b6 The login04 / login05.osgconnect.net access points were replaced with new, improved access points during July and August of 2023. If you did not migrate your account or data during this time, you can likely still access the new access points, following these steps. Please contact the facilitation team with any questions. Migration Steps \u00b6 Step 1: Determine Your Assigned Access Point \u00b6 Your new access point assignment will be based on your former access point: If your current assignment is login04.osgconnect.net , your new access point will be ap20.uc.osg-htc.org If your current assignment is login05.osgconnect.net , your new access point will be ap21.uc.osg-htc.org You can also see this information on your profile page on osgconnect.net Step 2: Set Up Multi Factor Authentication \u00b6 An important change is that the new access points will require multi factor authentication. As part of the migration process, you will connect your account to a time-based one-time password (TOTP) client. When connecting to an access point via ssh, you will be asked to provide the generated 6 digit verification code when logging in. Please see detailed instructions here . Step 3 (If Needed): Modify Workflows to Use New Data Paths \u00b6 OSDF locations have changed. We recommend that most data from the old /public/ or /protected/ folders transition to the new access point-specific user-only areas ( /ospool/ap20/data/ or /ospool/ap21/data based on which access point you are assigned to). This will offer the best performance. You will also need to update your submit files and scripts to use these new data locations. Consult the updated Data Overview and OSDF guides for more information, and contact the Facilitation team with any questions. Get Help \u00b6 We understand transitions may raise questions or difficulties. 
Should you require any assistance, please feel free to reach out to us via email, or join one of our office hours sessions .","title":"Migrating to ap20/ap21.uc.osg-htc.org"},{"location":"overview/account_setup/ap20-ap21-migration/#migrating-to-new-access-points-from-login04-login05","text":"The login04 / login05.osgconnect.net access points were replaced with new improved access points during July and August of 2023. If you did not migrate your account or data during this time, you can likely still access the new access points, following these steps. Please contact the facilitation team with any questions.","title":"Migrating to New Access Points From login04, login05"},{"location":"overview/account_setup/ap20-ap21-migration/#migration-steps","text":"","title":"Migration Steps"},{"location":"overview/account_setup/ap20-ap21-migration/#step-1-determine-your-assigned-access-point","text":"Your new access point assignment will be based on your former access point: If your current assigment is login04.osgconnect.net , your new access point will be ap20.uc.osg-htc.org If your current assigment is login05.osgconnect.net , your new access point will be ap21.uc.osg-htc.org You can also see this information on your profile page on osgconnect.net","title":"Step 1: Determine Your Assigned Access Point"},{"location":"overview/account_setup/ap20-ap21-migration/#step-2-set-up-multi-factor-authentication","text":"An important change is that the new access points will require multi factor authentication. As part of the migration process, you will connect to your account to a time-based one-time password (TOTP) client. When connecting to an access point via ssh, you will be asked to provide the generated 6 digit verification code when logging in. Please see detailed instructions here .","title":"Step 2: Set Up Multi Factor Authentication"},{"location":"overview/account_setup/ap20-ap21-migration/#step-3-if-needed-modify-workflows-to-use-new-data-paths","text":"OSDF locations have changed. We recommend that most data from the old /public/ or /protected/ folders transition to the new access point- specific user-only areas ( /ospool/ap20/data/ or /ospool/ap21/data based on which access point you are assigned to). This will offer the the best performance. You will also need to upload submit files and scripts to use these new data locations. Consult the updated Data Overview and OSDF guides for more information, and contact the Facilitation team with any questions.","title":"Step 3 (If Needed): Modify Workflows to Use New Data Paths"},{"location":"overview/account_setup/ap20-ap21-migration/#get-help","text":"We understand transitions may raise questions or difficulties. Should you require any assistance, please feel free to reach out to us via email, or join one of our office hours sessions .","title":"Get Help"},{"location":"overview/account_setup/ap7-access/","text":"The latest version of this guide is at this link","title":"Ap7 access"},{"location":"overview/account_setup/comanage-access/","text":"Log In to uw.osg-htc.org Access Points \u00b6 This guide is for users who were notified by a member of the OSG team that they will be using the uw.osg-htc.org Access Points. 
To join and use the uw.osg-htc.org Access Points ( ap40.uw.osg-htc.org ), you will go through the following steps: Apply for a uw.osg-htc.org Access Point account Have your account approved by an OSG Team member Log in to ap40.uw.osg-htc.org Request Access to uw.osg-htc.org Access Points \u00b6 To request access to ap40.uw.osg-htc.org , submit an application using the following steps: To request an OSPool account, visit this account registration page . You will be redirected to the CILogon sign in page. Select your institution and use your institutional credentials to login. You will use these credentials later to login so it is important to remember the institution you use at this step. If you have issues signing in using your institutional credentials, contact us at support@osg-htc.org . Once you sign in, you will be redirected to the User Enrollment page. Click \"Begin\" and enter your name and email address in the following page. In many cases, this information will be automatically populated. If desired, it is possible to manually edit any information automatically filled in. Once you have entered your information, click \"SUBMIT\". After submitting your application, you will receive an email from registry@cilogon.org to verify your email address. Click the link listed in the email to be redirected to a page confirm your invitation details. Click the \"ACCEPT\" button to complete this step. Account Approval by a Research Computing Facilitator \u00b6 If a meeting has not already been scheduled with a Research Computing Facilitator, one of the facilitation team will contact you about arranging a short consultation. Following the meeting, the Facilitator will approve your account and add your profile to any relevant OSG \u2018project\u2019 names. Once your account is ready, the Facilitator will email you with your account details including the 'username' you will use to log in to the ap40.uw.osg-htc.org access point. Log in \u00b6 Once your account has been added to the ap40.uw.osg-htc.org access point, you will be able to log in using a terminal or SSH program. Logging in requires authenticating your credientials using one of two options: web authentication or SSH key pair authentication . Additional information on this process will be provided during and/or following your discussion with a Research Computing Facilitator. Option 1: Log in via Web Authentication \u00b6 Logging in via web authentication requires no preparatory steps beyond having access to an internet browser. To authenticate using this approach: Open a terminal and type ssh username@ap40.uw.osg-htc.org , being sure to replace username with your uw.osg-htc.org username. Upon hitting enter, the following text should appear with a unique, but similar, URL: Authenticate at ----------------- https://cilogon.org/device/?user_code=FF4-ZX6-9LK ----------------- Type 'Enter' when you authenticate. Copy the https:// link, paste it into a web browser, and hit enter. You will be redirected to a new page where you will be prompted to login using your institutional credentials. Once you have done so, a new page will appear with the following text: \"You have successfully approved the user code. Please return to your device for further instructions.\" Return to your terminal, and type 'Enter' to complete the login process. Option 2: Log in via SSH Key Pair Authentication \u00b6 It is also possible to authenticate using an SSH key pair, if you prefer. 
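If you still need to create a key pair before choosing this option, here is a minimal sketch using OpenSSH on your own computer (the key type shown is one common choice, not a requirement):

# run on your own computer, not on the access point
ssh-keygen -t rsa -b 4096
# accept the default file location and set a passphrase when prompted;
# the public key to upload in the steps below is the resulting ~/.ssh/id_rsa.pub file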
Logging in using SSH keys does not require access to an internet browser to log in into the OSG Access Point, ap40.uw.osg-htc.org . The process below describes how to upload a public key to the registration website. It assumes that a private/public key pair has already been generated. If you need to generate a key pair, see this OSG guide . Return to the Registration Page and login using your institutional credentials if prompted. Click your name at the top right. In the dropdown box, click \"My Profile (OSG)\" button. On the right hand side of your profile, click \"Authenticators\" link. On the authenticators page, click the \"Manage\" button. On the new SSH Keys page, click \"Add SSH Key\" and browse your computer to upload your public SSH key. You can now log in to ap40.uw.osg-htc.org from the terminal, using ssh username@ap40.uw.osg-htc.org . When you log in, instead of being prompted with a web link, you should either authenticate automatically or be asked for your ssh key passphrase to complete logging in. Get Help \u00b6 For questions regarding logging in or creating an account, contact us at support@osg-htc.org .","title":"Log In to uw.osg-htc.org Access Points"},{"location":"overview/account_setup/comanage-access/#log-in-to-uwosg-htcorg-access-points","text":"This guide is for users who were notified by a member of the OSG team that they will be using the uw.osg-htc.org Access Points. To join and use the uw.osg-htc.org Access Points ( ap40.uw.osg-htc.org ), you will go through the following steps: Apply for a uw.osg-htc.org Access Point account Have your account approved by an OSG Team member Log in to ap40.uw.osg-htc.org","title":"Log In to uw.osg-htc.org Access Points"},{"location":"overview/account_setup/comanage-access/#request-access-to-uwosg-htcorg-access-points","text":"To request access to ap40.uw.osg-htc.org , submit an application using the following steps: To request an OSPool account, visit this account registration page . You will be redirected to the CILogon sign in page. Select your institution and use your institutional credentials to login. You will use these credentials later to login so it is important to remember the institution you use at this step. If you have issues signing in using your institutional credentials, contact us at support@osg-htc.org . Once you sign in, you will be redirected to the User Enrollment page. Click \"Begin\" and enter your name and email address in the following page. In many cases, this information will be automatically populated. If desired, it is possible to manually edit any information automatically filled in. Once you have entered your information, click \"SUBMIT\". After submitting your application, you will receive an email from registry@cilogon.org to verify your email address. Click the link listed in the email to be redirected to a page confirm your invitation details. Click the \"ACCEPT\" button to complete this step.","title":"Request Access to uw.osg-htc.org Access Points"},{"location":"overview/account_setup/comanage-access/#account-approval-by-a-research-computing-facilitator","text":"If a meeting has not already been scheduled with a Research Computing Facilitator, one of the facilitation team will contact you about arranging a short consultation. Following the meeting, the Facilitator will approve your account and add your profile to any relevant OSG \u2018project\u2019 names. 
Once your account is ready, the Facilitator will email you with your account details including the 'username' you will use to log in to the ap40.uw.osg-htc.org access point.","title":"Account Approval by a Research Computing Facilitator"},{"location":"overview/account_setup/comanage-access/#log-in","text":"Once your account has been added to the ap40.uw.osg-htc.org access point, you will be able to log in using a terminal or SSH program. Logging in requires authenticating your credientials using one of two options: web authentication or SSH key pair authentication . Additional information on this process will be provided during and/or following your discussion with a Research Computing Facilitator.","title":"Log in"},{"location":"overview/account_setup/comanage-access/#option-1-log-in-via-web-authentication","text":"Logging in via web authentication requires no preparatory steps beyond having access to an internet browser. To authenticate using this approach: Open a terminal and type ssh username@ap40.uw.osg-htc.org , being sure to replace username with your uw.osg-htc.org username. Upon hitting enter, the following text should appear with a unique, but similar, URL: Authenticate at ----------------- https://cilogon.org/device/?user_code=FF4-ZX6-9LK ----------------- Type 'Enter' when you authenticate. Copy the https:// link, paste it into a web browser, and hit enter. You will be redirected to a new page where you will be prompted to login using your institutional credentials. Once you have done so, a new page will appear with the following text: \"You have successfully approved the user code. Please return to your device for further instructions.\" Return to your terminal, and type 'Enter' to complete the login process.","title":"Option 1: Log in via Web Authentication"},{"location":"overview/account_setup/comanage-access/#option-2-log-in-via-ssh-key-pair-authentication","text":"It is also possible to authenticate using an SSH key pair, if you prefer. Logging in using SSH keys does not require access to an internet browser to log in into the OSG Access Point, ap40.uw.osg-htc.org . The process below describes how to upload a public key to the registration website. It assumes that a private/public key pair has already been generated. If you need to generate a key pair, see this OSG guide . Return to the Registration Page and login using your institutional credentials if prompted. Click your name at the top right. In the dropdown box, click \"My Profile (OSG)\" button. On the right hand side of your profile, click \"Authenticators\" link. On the authenticators page, click the \"Manage\" button. On the new SSH Keys page, click \"Add SSH Key\" and browse your computer to upload your public SSH key. You can now log in to ap40.uw.osg-htc.org from the terminal, using ssh username@ap40.uw.osg-htc.org . When you log in, instead of being prompted with a web link, you should either authenticate automatically or be asked for your ssh key passphrase to complete logging in.","title":"Option 2: Log in via SSH Key Pair Authentication"},{"location":"overview/account_setup/comanage-access/#get-help","text":"For questions regarding logging in or creating an account, contact us at support@osg-htc.org .","title":"Get Help"},{"location":"overview/account_setup/connect-access/","text":"Log In to \"OSG Connect\" Access Points \u00b6 This guide is for users who were notified by a member of the OSG team that they will be using the \"OSG Connect\" Access Points. 
Do not go through the steps of this guide until advised to by a Research Computing Facilitator To join and use the \"OSG Connect\" Access Points ( ap20.uc.osg-htc.org , ap21.uc.osg-htc.org ), you will go through the following steps: Apply for an OSG Connect Access Point account Have your account approved by an OSG Team member Generate an ssh key and add it to your web profile Log in to the appropriate Access Point Apply for an OSG Connect Access Point account \u00b6 If prompted by a Research Computing Facilitator, you can apply for OSG Connect Access Points here: OSG Connect Account Request Account Approval by a Research Computing Facilitator \u00b6 If a meeting has not already been scheduled with a Research Computing Facilitator, one of the facilitation team will contact you about arranging a short consultation. Following the meeting, the Facilitator will approve your account and add your profile to any relevant OSG \u2018project\u2019 names. Once your account is ready, the Facilitator will email you with your account details. Add a public SSH key to your web profile \u00b6 Log in to OSG Connect Access Points is via SSH key. To generate an SSH key pair, see this guide and then proceed with the following steps. To add your public key to the OSG Connect log in node: Go to www.osgconnect.net and sign in with the institutional identity you used when requesting an OSG Connect account. Click \"Profile\" in the top right corner. Click the \"Edit Profile\" button located after the user information in the left hand box. Copy/paste the public key which is found in the .pub file into the \"SSH Public Key\" text box. The expected key is a single line, with three fields looking something like ssh-rsa ASSFFSAF... user@host . If you used the first set of key-generating instructions it is the content of ~/.ssh/id_rsa.pub and for the second (using PuTTYgen), it is the content from step 7 above. Click \"Update Profile\" The key is now added to your profile in the OSG Connect website. This will automatically be added to the login nodes within a couple hours. Can I Use Multiple Keys? \u00b6 Yes! If you want to log into OSG Connect from multiple computers, you can do so by generating a keypair on each computer you want to use, and then adding the public key to your OSG Connect profile. Add multi factor authentication to your web profile \u00b6 Multi factor authentication means that you will use 2 different methods to authenticate when you log in. The first factor is the ssh key you added above. The second factor is a 6 digit code from one of your devices. OSGConnect uses the TOTP (Time-based One-time Password) standard - any TOTP client should work. Some common clients include: FreeOTP Google Authenticator DUO TOTP clients are most commonly used from smartphones. If you do not have a smartphone or are otherwise struggling to access or use a TOTP client, please contact the facilitation team: support@osg-htc.org Once you have a TOTP client, configure it to be used with OSG Connect: Go to https://osgconnect.net and sign in with the institutional identity you used when requesting an OSG Connect account. Click \"Profile\" in the top right corner. Click the \"Edit Profile\" button located after the user information in the left hand box. Check the \"Set up Multi-Factor Authentication\" at the bottom and hit Apply. 
In the Multi-Factor Authentication box, follow the instructions (scan the QR code with your TOTP client). Important: after setting up multi-factor authentication using your TOTP client, you will need to wait 15 minutes before logging in. Logging In \u00b6 After following the steps above to upload your key and set up multi factor authentication, once about fifteen minutes have passed, you should be able to log in to OSG Connect. Determine which login node to use \u00b6 Before you can connect, you will need to know which login node your account is assigned to. You can find this information on your profile from the OSG Connect website. Go to www.osgconnect.net and sign in with the institutional credentials you used to request an account. Click \"Profile\" in the top right corner. The assigned login nodes are listed in the left side box. Make note of the address of your assigned login node as you will use this to connect to OSG Connect. For Mac, Linux, or newer versions of Windows \u00b6 Open a terminal and type: ssh <username>@<login node address> , replacing the placeholders with your username and the login node you noted above. It will ask for the passphrase for your ssh key (if you set one), then for a \"Verification code\", which you can get from the TOTP client you used to set up two factor authentication above. After entering the six digit code, you should be logged in. Note that when you are typing your passphrase and verification code, your typing will NOT appear on the terminal, but the information is being entered! For older versions of Windows \u00b6 On older versions of Windows, you can use the PuTTY program to log in. Open the PuTTY program. If necessary, you can download PuTTY from the PuTTY download page . Type the address of your assigned login node as the hostname (see \"Determine which login node to use\" above). In the left hand menu, click the \"+\" next to \"SSH\" to expand the menu. Click \"Auth\" in the \"SSH\" menu. Click \"Browse\" and specify the private key file you saved in step 5 above. Return to \"Session\". a. Name your session b. Save session for future use Click \"Open\" to launch the shell. Provide your ssh-key passphrase (created at Step 4 in PuTTYgen) when prompted to do so. When prompted for a \"Verification Code\", go to the TOTP client you used to set up two-factor authentication, above, and enter the six digit code from the client into your PuTTY terminal prompt. The following video demonstrates the key generation and login process using PuTTY","title":"Log In to OSG Connect Access Points"},{"location":"overview/account_setup/connect-access/#log-in-to-osg-connect-access-points","text":"This guide is for users who were notified by a member of the OSG team that they will be using the \"OSG Connect\" Access Points. 
Do not go through the steps of this guide until advised to by a Research Computing Facilitator To join and use the \"OSG Connect\" Access Points ( ap20.uc.osg-htc.org , ap21.uc.osg-htc.org ), you will go through the following steps: Apply for an OSG Connect Access Point account Have your account approved by an OSG Team member Generate an ssh key and add it to your web profile Log in to the appropriate Access Point","title":"Log In to \"OSG Connect\" Access Points"},{"location":"overview/account_setup/connect-access/#apply-for-an-osg-connect-access-point-account","text":"If prompted by a Research Computing Facilitator, you can apply for OSG Connect Access Points here: OSG Connect Account Request","title":"Apply for an OSG Connect Access Point account"},{"location":"overview/account_setup/connect-access/#account-approval-by-a-research-computing-facilitator","text":"If a meeting has not already been scheduled with a Research Computing Facilitator, one of the facilitation team will contact you about arranging a short consultation. Following the meeting, the Facilitator will approve your account and add your profile to any relevant OSG \u2018project\u2019 names. Once your account is ready, the Facilitator will email you with your account details.","title":"Account Approval by a Research Computing Facilitator"},{"location":"overview/account_setup/connect-access/#add-a-public-ssh-key-to-your-web-profile","text":"Log in to OSG Connect Access Points is via SSH key. To generate an SSH key pair, see this guide and then proceed with the following steps. To add your public key to the OSG Connect log in node: Go to www.osgconnect.net and sign in with the institutional identity you used when requesting an OSG Connect account. Click \"Profile\" in the top right corner. Click the \"Edit Profile\" button located after the user information in the left hand box. Copy/paste the public key which is found in the .pub file into the \"SSH Public Key\" text box. The expected key is a single line, with three fields looking something like ssh-rsa ASSFFSAF... user@host . If you used the first set of key-generating instructions it is the content of ~/.ssh/id_rsa.pub and for the second (using PuTTYgen), it is the content from step 7 above. Click \"Update Profile\" The key is now added to your profile in the OSG Connect website. This will automatically be added to the login nodes within a couple hours.","title":"Add a public SSH key to your web profile"},{"location":"overview/account_setup/connect-access/#can-i-use-multiple-keys","text":"Yes! If you want to log into OSG Connect from multiple computers, you can do so by generating a keypair on each computer you want to use, and then adding the public key to your OSG Connect profile.","title":"Can I Use Multiple Keys?"},{"location":"overview/account_setup/connect-access/#add-multi-factor-authentication-to-your-web-profile","text":"Multi factor authentication means that you will use 2 different methods to authenticate when you log in. The first factor is the ssh key you added above. The second factor is a 6 digit code from one of your devices. OSGConnect uses the TOTP (Time-based One-time Password) standard - any TOTP client should work. Some common clients include: FreeOTP Google Authenticator DUO TOTP clients are most commonly used from smartphones. 
If you do not have a smartphone or are otherwise struggling to access or use a TOTP client, please contact the facilitation team: support@osg-htc.org Once you have a TOTP client, configure it to be used with OSG Connect: Go to https://osgconnect.net and sign in with the institutional identity you used when requesting an OSG Connect account. Click \"Profile\" in the top right corner. Click the \"Edit Profile\" button located after the user information in the left hand box. Check the \"Set up Multi-Factor Authentication\" at the bottom and hit Apply. In the Multi-Factor Authentication box, follow the instructions (scan the QR code with your TOTP client) Important: after setting up multi-factor authentication using your TOTP client, you will need to wait 15 minutes before logging in.","title":"Add multi factor authentication to your web profile"},{"location":"overview/account_setup/connect-access/#logging-in","text":"After following the steps above to upload your key and set up multi factor authentication, once about fifteen minutes have passed, you should be able to log in to OSG Connect.","title":"Logging In"},{"location":"overview/account_setup/connect-access/#determine-which-login-node-to-use","text":"Before you can connect, you will need to know which login node your account is assigned to. You can find this information on your profile from the OSG Connect website. Go to www.osgconnect.net and sign in with your institution credentials that you used to request an account. Click \"Profile\" in the top right corner. The assigned login nodes are listed in the left side box. Make note of the address of your assigned login node as you will use this to connect to OSG Connect.","title":"Determine which login node to use"},{"location":"overview/account_setup/connect-access/#for-mac-linux-or-newer-versions-of-windows","text":"Open a terminal and type in: ssh @ It will ask for the passphrase for your ssh key (if you set one), then for a \"Verification code\" which you should get by going to the TOTP client you used to set up two factor authentication above. After entering the six digit code, you should be logged in. Note that when you are typing your passphrase and verification code, your typing will NOT appear on the terminal, but the information is being entered!","title":"For Mac, Linux, or newer versions of Windows"},{"location":"overview/account_setup/connect-access/#for-older-versions-of-windows","text":"On older versions of Windows, you can use the Putty program to log in. Open the PutTTY program. If necessary, you can download PuTTY from the website here PuTTY download page . Type the address of your assigned login node as the hostname (see \"Determine which login node to use\" above). In the left hand menu, click the \"+\" next to \"SSH\" to expand the menu. Click \"Auth\" in the \"SSH\" menu. Click \"Browse\" and specify the private key file you saved in step 5 above. Return to \"Session\". a. Name your session b. Save session for future use Click \"Open\" to launch shell. Provide your ssh-key passphrase (created at Step 4 in PuTTYgen) when prompted to do so. When prompted for a \"Verification Code\", go to the TOTP client you used to set up two-factor authentication, above, and enter the six digit code from the client into your PuTTY terminal prompt. 
The following video demonstrates the key generation and login process from the Putty","title":"For older versions of Windows"},{"location":"overview/account_setup/generate-add-sshkey/","text":"Generate SSH Keys For Login \u00b6 Overview \u00b6 One way to connect to an OSG-managed Access Point is an SSH key. This guide details how to create an SSH key. Once created, it needs to be added to your web profile in order to enable log in to an Access Point. Generate SSH Keys \u00b6 We will discuss how to generate a SSH key pair for two cases: \"Unix\" systems (Linux, Mac) and certain, latest versions of Windows Older Windows systems Please note: The key pair consist of a private key and a public key. You will upload the public key to the OSG Connect website or COmanage, but you also need to keep a copy of the private key to log in! You should keep the private key on machines that you have direct access to, i.e. your local computer (your laptop or desktop). Unix-based operating system (Linux/Mac) or latest Windows 10 versions \u00b6 We will create a key in the .ssh directory of your computer. Open a terminal on your local computer and run the following commands: mkdir ~/.ssh chmod 700 ~/.ssh ssh-keygen -t rsa For the newer OS versions the .ssh directory is already created and the first command is redundant. The last command will produce a prompt similar to Generating public/private rsa key pair. Enter file in which to save the key (/home//.ssh/id_rsa): Unless you want to change the location of the key, continue by pressing enter. Now you will be asked for a passphrase. Enter a passphrase that you will be able to remember and which is secure: Enter passphrase (empty for no passphrase): Enter same passphrase again: When everything has successfully completed, the output should resemble the following: Your identification has been saved in /home//.ssh/id_rsa. Your public key has been saved in /home//.ssh/id_rsa.pub. The key fingerprint is: ... The part you want to upload is the content of the .pub file (~/.ssh/id_rsa.pub) The following video demonstrates the key generation process from the terminal Windows, using Putty to log in \u00b6 If you can connect using the ssh command within the Command Prompt (Windows 10 build version 1803 and later), please follow the Mac/Linux directions above. If not, continue with the directions below. Open the PuTTYgen program. You can download PuttyGen here: PuttyGen Download Page , scroll down until you see the puttygen.exe file. For Type of key to generate, select RSA or SSH-2 RSA. Click the \"Generate\" button. Move your mouse in the area below the progress bar. When the progress bar is full, PuTTYgen generates your key pair. Type a passphrase in the \"Key passphrase\" field. Type the same passphrase in the \"Confirm passphrase\" field. You can use a key without a passphrase, but this is not recommended. Click the \"Save private key\" button to save the private key. You must save the private key. You will need it to connect to your machine. Right-click in the text field labeled \"Public key for pasting into OpenSSH authorized_keys file\" and choose Select All. Right-click again in the same text field and choose Copy. Next Steps \u00b6 After generating the key, you will need to upload it to a web profile to use it for log in. 
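As a compact illustration of the Unix key-generation steps above, the commands below generate a key pair with the default file location and then print the public key so it can be pasted into your web profile; the passphrase prompts are interactive, and the example assumes the default ~/.ssh/id_rsa path described in this guide.

```
$ ssh-keygen -t rsa          # accept the default location and set a passphrase when prompted
$ cat ~/.ssh/id_rsa.pub      # this single line (ssh-rsa AAAA... user@host) is what you paste into your profile
```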
If you have an account on an uw.osg-htc.org Access Point (account created through https://registry.cilogon.org/registry/) follow the instructions here: Log In to uw.osg-htc.org Access Points If you have an account on \"OSG Connect\" Access Points (account created through https://www.osgconnect.net/), follow the instructions here: Log In to OSG Connect Access Points Getting Help \u00b6 For assistance or questions, please email the OSG Research Facilitation team at support@osg-htc.org or visit the help desk and community forums .","title":"Generate SSH Keys"},{"location":"overview/account_setup/generate-add-sshkey/#generate-ssh-keys-for-login","text":"","title":"Generate SSH Keys For Login"},{"location":"overview/account_setup/generate-add-sshkey/#overview","text":"One way to connect to an OSG-managed Access Point is an SSH key. This guide details how to create an SSH key. Once created, it needs to be added to your web profile in order to enable log in to an Access Point.","title":"Overview"},{"location":"overview/account_setup/generate-add-sshkey/#generate-ssh-keys","text":"We will discuss how to generate a SSH key pair for two cases: \"Unix\" systems (Linux, Mac) and certain, latest versions of Windows Older Windows systems Please note: The key pair consist of a private key and a public key. You will upload the public key to the OSG Connect website or COmanage, but you also need to keep a copy of the private key to log in! You should keep the private key on machines that you have direct access to, i.e. your local computer (your laptop or desktop).","title":"Generate SSH Keys"},{"location":"overview/account_setup/generate-add-sshkey/#unix-based-operating-system-linuxmac-or-latest-windows-10-versions","text":"We will create a key in the .ssh directory of your computer. Open a terminal on your local computer and run the following commands: mkdir ~/.ssh chmod 700 ~/.ssh ssh-keygen -t rsa For the newer OS versions the .ssh directory is already created and the first command is redundant. The last command will produce a prompt similar to Generating public/private rsa key pair. Enter file in which to save the key (/home//.ssh/id_rsa): Unless you want to change the location of the key, continue by pressing enter. Now you will be asked for a passphrase. Enter a passphrase that you will be able to remember and which is secure: Enter passphrase (empty for no passphrase): Enter same passphrase again: When everything has successfully completed, the output should resemble the following: Your identification has been saved in /home//.ssh/id_rsa. Your public key has been saved in /home//.ssh/id_rsa.pub. The key fingerprint is: ... The part you want to upload is the content of the .pub file (~/.ssh/id_rsa.pub) The following video demonstrates the key generation process from the terminal","title":"Unix-based operating system (Linux/Mac) or latest Windows 10 versions"},{"location":"overview/account_setup/generate-add-sshkey/#windows-using-putty-to-log-in","text":"If you can connect using the ssh command within the Command Prompt (Windows 10 build version 1803 and later), please follow the Mac/Linux directions above. If not, continue with the directions below. Open the PuTTYgen program. You can download PuttyGen here: PuttyGen Download Page , scroll down until you see the puttygen.exe file. For Type of key to generate, select RSA or SSH-2 RSA. Click the \"Generate\" button. Move your mouse in the area below the progress bar. When the progress bar is full, PuTTYgen generates your key pair. 
Type a passphrase in the \"Key passphrase\" field. Type the same passphrase in the \"Confirm passphrase\" field. You can use a key without a passphrase, but this is not recommended. Click the \"Save private key\" button to save the private key. You must save the private key. You will need it to connect to your machine. Right-click in the text field labeled \"Public key for pasting into OpenSSH authorized_keys file\" and choose Select All. Right-click again in the same text field and choose Copy.","title":"Windows, using Putty to log in"},{"location":"overview/account_setup/generate-add-sshkey/#next-steps","text":"After generating the key, you will need to upload it to a web profile to use it for log in. If you have an account on an uw.osg-htc.org Access Point (account created through https://registry.cilogon.org/registry/) follow the instructions here: Log In to uw.osg-htc.org Access Points If you have an account on \"OSG Connect\" Access Points (account created through https://www.osgconnect.net/), follow the instructions here: Log In to OSG Connect Access Points","title":"Next Steps"},{"location":"overview/account_setup/generate-add-sshkey/#getting-help","text":"For assistance or questions, please email the OSG Research Facilitation team at support@osg-htc.org or visit the help desk and community forums .","title":"Getting Help"},{"location":"overview/account_setup/is-it-for-you/","text":"Computation on the Open Science Pool \u00b6 The OSG is a nationally-funded consortium of computing resources at more than one hundred institutional partners that, together, offer a strategic advantage for computing work that can be run as numerous short tasks that can execute independent of one another. For researchers who are not part of an organization with their own pool in the OSG, we offer the Open Science Pool (OSPool) , with dozens of campuses contributing excess computing capacity in support of open science. The OSPool is available for US-affiliated academic, government, and non-profit research projects and groups for their High Throughput Computing (HTC) workflows. Learn more about the services provided by the OSG that can support your HTC workload: For problems that can be run as numerous independent jobs (a high-throughput approach) and have requirements represented in the first two columns of the table below, the significant capacity of the OSPool can transform the types of questions that researchers are able to tackle. Importantly, many compute tasks that may appear to not be a good fit can be modified in simple ways to take advantage, and we'd love to discuss options with you! Ideal jobs! Still advantageous Maybe not, but get in touch! Expected Throughput: 1000s concurrent jobs 100s concurrent jobs let's discuss! Per-Job Requirements CPU cores 1 < 8 > 8 (or MPI) GPUs 0 1 > 1 Walltime < 10 hrs* < 20 hrs* > 20 hrs (not a good fit) RAM < few GB < 40 GB > 40 GB Input < 500 MB < 10 GB > 10 GB** Output < 1GB < 10 GB > 10 GB** Software pre-compiled binaries, containers Most other than ---> Licensed software, non-Linux * or checkpointable ** per job; you can work with a multi-TB dataset on the OSPool if it can be split into pieces! Some examples of work that have been a good fit for the OSPool and benefited from using its resources include: image analysis (including MRI, GIS, etc.) 
text-based analysis, including DNA read mapping and other bioinformatics hyper/parameter sweeps Monte Carlo methods and other model optimization Resources to Quickly Learn More \u00b6 Introduction to OSG the Distributed High Throughput Computing framework from the annual OSG User School : Full OSG User Documentation including our Roadmap to HTC Workload Submission OSG User Training materials . Any researcher affiliated with an academic, non-profit, or government US-based research project is welcome to attend our trainings. Learn more and chat with a Research Computing Facilitator by signing up for OSPool account","title":"Computation on the Open Science Pool"},{"location":"overview/account_setup/is-it-for-you/#computation-on-the-open-science-pool","text":"The OSG is a nationally-funded consortium of computing resources at more than one hundred institutional partners that, together, offer a strategic advantage for computing work that can be run as numerous short tasks that can execute independent of one another. For researchers who are not part of an organization with their own pool in the OSG, we offer the Open Science Pool (OSPool) , with dozens of campuses contributing excess computing capacity in support of open science. The OSPool is available for US-affiliated academic, government, and non-profit research projects and groups for their High Throughput Computing (HTC) workflows. Learn more about the services provided by the OSG that can support your HTC workload: For problems that can be run as numerous independent jobs (a high-throughput approach) and have requirements represented in the first two columns of the table below, the significant capacity of the OSPool can transform the types of questions that researchers are able to tackle. Importantly, many compute tasks that may appear to not be a good fit can be modified in simple ways to take advantage, and we'd love to discuss options with you! Ideal jobs! Still advantageous Maybe not, but get in touch! Expected Throughput: 1000s concurrent jobs 100s concurrent jobs let's discuss! Per-Job Requirements CPU cores 1 < 8 > 8 (or MPI) GPUs 0 1 > 1 Walltime < 10 hrs* < 20 hrs* > 20 hrs (not a good fit) RAM < few GB < 40 GB > 40 GB Input < 500 MB < 10 GB > 10 GB** Output < 1GB < 10 GB > 10 GB** Software pre-compiled binaries, containers Most other than ---> Licensed software, non-Linux * or checkpointable ** per job; you can work with a multi-TB dataset on the OSPool if it can be split into pieces! Some examples of work that have been a good fit for the OSPool and benefited from using its resources include: image analysis (including MRI, GIS, etc.) text-based analysis, including DNA read mapping and other bioinformatics hyper/parameter sweeps Monte Carlo methods and other model optimization","title":"Computation on the Open Science Pool"},{"location":"overview/account_setup/is-it-for-you/#resources-to-quickly-learn-more","text":"Introduction to OSG the Distributed High Throughput Computing framework from the annual OSG User School : Full OSG User Documentation including our Roadmap to HTC Workload Submission OSG User Training materials . Any researcher affiliated with an academic, non-profit, or government US-based research project is welcome to attend our trainings. 
Learn more and chat with a Research Computing Facilitator by signing up for an OSPool account","title":"Resources to Quickly Learn More"},{"location":"overview/account_setup/registration-and-login/","text":"Start Here: Overview of Requesting OSPool Access \u00b6 The major steps to get started on the OSPool are: apply for access to the OSPool meet with a facilitation team member for a short consultation and orientation. register for a specific OSPool Access Point log in to your designated Access Point Each of these is detailed in the guide below. Once you've gone through these steps, you should be able to begin running work! Apply for OSPool Access \u00b6 To start, fill out the interest form on this OSG Portal site: OSPool Consultation Request This will send the Research Facilitation team an email. We will be in touch to set up an orientation meeting, and confirm if you are joining an existing project on the OSPool or starting a new one. Orientation Meeting \u00b6 The orientation meeting generally takes about 20-30 minutes and is a chance to talk about your work, how it will fit on the OSPool, and some practical next steps for getting started. Register for an Access Point \u00b6 Before or during the orientation meeting, you will be prompted to register for an account on a specific OSPool Access Point. The current default is the uw.osg-htc.org Access Points. You will be directed to follow instructions on this page to register for an account. Log In \u00b6 Once you've gone through the steps above, you should have an account on an OSPool Access Point! Follow the instructions below to learn how to log in to your OSPool Access Point. Accounts for all new users are created on uw.osg-htc.org Access Points unless otherwise specified. Log In to \"uw.osg-htc.org\" Access Points (e.g., ap40.uw.osg-htc.org) If your account is on the uw.osg-htc.org Access Points (e.g., accounts on ap40.uw.osg-htc.org), follow instructions in this guide for logging in: Log In to uw.osg-htc.org Access Points Log In to \"OSG Connect\" Access Points (e.g., ap20.uc.osg-htc.org) If your account is on the OSG Connect Access Points (e.g., accounts on ap20.uc.osg-htc.org, ap21.uc.osg-htc.org), follow instructions in this guide for logging in: Log In to OSG Connect Access Points","title":"Start Here: Overview of Requesting OSPool Access"},{"location":"overview/account_setup/registration-and-login/#start-here-overview-of-requesting-ospool-access","text":"The major steps to get started on the OSPool are: apply for access to the OSPool meet with a facilitation team member for a short consultation and orientation. register for a specific OSPool Access Point log in to your designated Access Point Each of these is detailed in the guide below. Once you've gone through these steps, you should be able to begin running work!","title":"Start Here: Overview of Requesting OSPool Access"},{"location":"overview/account_setup/registration-and-login/#apply-for-ospool-access","text":"To start, fill out the interest form on this OSG Portal site: OSPool Consultation Request This will send the Research Facilitation team an email.
We will be in touch to set up an orientation meeting, and confirm if you are joining an existing project on the OSPool or starting a new one.","title":"Apply for OSPool Access"},{"location":"overview/account_setup/registration-and-login/#orientation-meeting","text":"The orientation meeting generally takes about 20-30 minutes and is a chance to talk about your work, how it will fit on the OSPool, and some practical next steps for getting started.","title":"Orientation Meeting"},{"location":"overview/account_setup/registration-and-login/#register-for-an-access-point","text":"Before or during the orientation meeting, you will be prompted to register for an account on a specific OSPool Access Point. The current default are uw.osg-htc.org Access Points. You will be directed to follow instructions on this page to register for an account.","title":"Register for an Access Point"},{"location":"overview/account_setup/registration-and-login/#log-in","text":"Once you've gone through the steps above, you should have an account on on OSPool Access Point! Follow the instructions below to learn how to log in to you OSPool Access Point. Accounts for all new users are created on uw.osg-htc.org Access Points unless otherwise specified. Log In to \"uw.osg-htc.org\" Access Points (e.g., ap40.uw.osg-htc.org) If your account is on the uw.osg-htc.org Access Points (e.g., accounts on ap40.uw.osg-htc.org), follow instructions in this guide for logging in: Log In to uw.osg-htc.org Access Points Log In to \"OSG Connect\" Access Points (e.g., ap20.uc.osg-htc.org) If your account is on the OSG Connect Access points (e.g., accounts on ap20.uc.osg-htc.org, ap21.uc.osg-htc.org), follow instructions in this guide for logging in: Log In to OSG Connect Access Points","title":"Log In"},{"location":"overview/account_setup/starting-project/","text":"Set and View Project Usage \u00b6 Background \u00b6 The OSG team assigns individual user accounts to \"projects\". These projects are a way to track usage hours and capture information about the types of research using the OSPool. A project typically corresponds to a research group headed by a single PI, but can sometimes represent a long-term multi-institutional project or some other grouping. You must be a member of a project before you can use an OSPool Access Point to submit jobs. The next section of this guide describes the process for joining a project. Default Behavior (one project) \u00b6 By default, you are added to a project when your OSG account is created. This project will be automatically added to your job submissions for tracking usage. Choose a Project (multiple projects) \u00b6 If you are affiliated with multiple groups using the OSPool and are a member of multiple projects, you will want to set the project name in your submit file. Run the following command to see a list of projects you belong to: grep $USER /etc/condor/UserToProjectMap.txt You can manually set the project for a set of jobs by putting this option in the submit file: +ProjectName=\"ProjectName\" View Metrics For Your Project \u00b6 The project's resource usage appears in the OSG accounting system, GRACC , specifically, in this OSPool Usage Dashboard At the top of that dashboard, there is a set of filters that you can use to examine the number of hours used by your project, or your institution. 
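To make the +ProjectName option mentioned above concrete, a single extra line in the submit file pins a batch of jobs to a specific project for accounting; the project name shown here is hypothetical, and the value should match one of the projects listed by the grep command above.

```
# excerpt from an HTCondor submit file; "MyLab_OSPool" is a hypothetical project name
+ProjectName = "MyLab_OSPool"
```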
You can adjust the time range displayed on the top right corner.","title":"Set and View Project Usage"},{"location":"overview/account_setup/starting-project/#set-and-view-project-usage","text":"","title":"Set and View Project Usage"},{"location":"overview/account_setup/starting-project/#background","text":"The OSG team assigns individual user accounts to \"projects\". These projects are a way to track usage hours and capture information about the types of research using the OSPool. A project typically corresponds to a research group headed by a single PI, but can sometimes represent a long-term multi-institutional project or some other grouping. You must be a member of a project before you can use an OSPool Access Point to submit jobs. The next section of this guide describes the process for joining a project.","title":"Background"},{"location":"overview/account_setup/starting-project/#default-behavior-one-project","text":"By default, you are added to a project when your OSG account is created. This project will be automatically added to your job submissions for tracking usage.","title":"Default Behavior (one project)"},{"location":"overview/account_setup/starting-project/#choose-a-project-multiple-projects","text":"If you are affiliated with multiple groups using the OSPool and are a member of multiple projects, you will want to set the project name in your submit file. Run the following command to see a list of projects you belong to: grep $USER /etc/condor/UserToProjectMap.txt You can manually set the project for a set of jobs by putting this option in the submit file: +ProjectName=\"ProjectName\"","title":"Choose a Project (multiple projects)"},{"location":"overview/account_setup/starting-project/#view-metrics-for-your-project","text":"The project's resource usage appears in the OSG accounting system, GRACC , specifically, in this OSPool Usage Dashboard At the top of that dashboard, there is a set of filters that you can use to examine the number of hours used by your project, or your institution. You can adjust the time range displayed on the top right corner.","title":"View Metrics For Your Project"},{"location":"overview/references/acknowledgeOSG/","text":"Acknowledge the OSG \u00b6 This page has been moved to the OSG Website .","title":"Acknowledge the OSG"},{"location":"overview/references/acknowledgeOSG/#acknowledge-the-osg","text":"This page has been moved to the OSG Website .","title":"Acknowledge the OSG"},{"location":"overview/references/contact-information/","text":"Contact OSG for non-Support Inquiries \u00b6 For media contact, leadership, or general questions about OSG, please see our main website or send an email to webmaster@osg-htc.org. For OSG policies and executive information, email Frank Wuerthwein (OSG Executive Director). For help managing an OSG Mailing list membership, lease refer to our managing mailing list membership document . To get started using OSG resources, for support or operational issues, or to request OSPool account information, email support@osg-htc.org . For any assistance or technical questions regarding jobs or data, please see our page on how to Get Help and/or contact the OSG Research Facilitation team at support@osg-htc.org","title":"Contact OSG for non-Support Inquiries"},{"location":"overview/references/contact-information/#contact-osg-for-non-support-inquiries","text":"For media contact, leadership, or general questions about OSG, please see our main website or send an email to webmaster@osg-htc.org. 
For OSG policies and executive information, email Frank Wuerthwein (OSG Executive Director). For help managing an OSG Mailing list membership, please refer to our managing mailing list membership document . To get started using OSG resources, for support or operational issues, or to request OSPool account information, email support@osg-htc.org . For any assistance or technical questions regarding jobs or data, please see our page on how to Get Help and/or contact the OSG Research Facilitation team at support@osg-htc.org","title":"Contact OSG for non-Support Inquiries"},{"location":"overview/references/frequently-asked-questions/","text":"Frequently Asked Questions \u00b6 Getting Started \u00b6 Who is eligible to request an OSG account? Any researcher affiliated with a U.S. institution (college, university, national laboratory or research foundation) is eligible to use OSG resources for their work. Researchers outside of the U.S. with affiliations to U.S. groups may be eligible for membership if they are sponsored by a collaborator within the U.S. How do I request an OSG account? Please visit our website for the most up-to-date information on requesting an account. Once your account request has been received, a Research Computing Facilitator will contact you within one business day to arrange a meeting to learn about your computational goals and to create your account. How do I change the project my jobs are affiliated with? The OSG team assigns individual user accounts to \"projects\" upon account creation. These projects are a way to track usage hours and capture information about the types of research running on OSG resources. A project typically corresponds to a research group headed by a single PI, but can sometimes represent a long-term multi-institutional project or some other grouping. If you only belong to a single project, that project will be charged automatically when you submit jobs. Run the following command to see a list of projects you belong to: $ grep $USER /etc/condor/UserToProjectMap.txt If you need to run jobs under a different project you are a member of, you can manually set the project for those jobs by putting this option in the submit file: +ProjectName=\"ProjectName\" Can I use my ACCESS allocation? There are two ways OSG interfaces with ACCESS: You can get an allocation for the OSPool. This will allow you to run OSPool jobs and have the usage charged to your ACCESS credits, and can be useful if you already have an allocation. If you only need to use OSG resources, we recommend you come directly to our system. You can manage your workloads on the OSPool access points, and run those jobs on other ACCESS resources. This is a capability still in development. Workshops and Training \u00b6 Do you offer training sessions and workshops? We offer virtual trainings twice a month, as well as an annual, week-long summer school for OSG users. We also participate in additional external conferences and events throughout the year. Information about upcoming and past events, including workshop dates and locations, is available on our website. Who may attend OSG workshops? Workshops are available to any researcher affiliated with a U.S. academic, non-profit, or government institution. How do I cite or acknowledge OSG? Whenever you make use of OSG resources, services or tools, we request you acknowledge OSG in your presentations and publications using the information provided on the Acknowledging the OSG Consortium page. Software \u00b6 What software packages are available?
In general, we support most software that fits the distributed high throughput computing model (e.g., open source). Users are encouraged to download and install their own software on our Access Points. Additionally, users may install their software into a Docker container which can run on OSG as an Apptainer image or use one of our existing containers. See the Software guides on the OSPool documentation website for more information. Are there any restrictions on installing commercial software? We can only *directly* support software that is freely distributable. At present, we do not have or support most commercial software due to licensing issues. (One exception is running MATLAB standalone executables which have been compiled with the MATLAB Compiler Runtime). Software that is licensed to individual users (and not to be shared between users) can be staged within the user's `/home` or `/protected` directories, but should not be staged in OSG's `/public` data staging locations. See OSPool policies for more information. Please get in touch with any questions about licensed software. Can I request a system-wide installation of open source software useful for my research? We recommend users use Docker or Apptainer containers if jobs require system-wide installations of software. Visit the OSPool Documentation website to learn more about creating your own container. Running Jobs \u00b6 What type of computation is a good match or NOT a good match for the OSG's Open Science Pool? The OSG provides computing resources through the Open Science Pool for high throughput computing workloads. You can get the most out of OSG resources by breaking up a single large computational task into many smaller tasks for the fastest overall turnaround. This approach can be invaluable in accelerating your computational work and thus your research. Please see our Computation on the Open Science Pool page for more details on how to determine if your work matches up well with OSG's high throughput computing model. What job scheduler is being used on the Open Science Pool? We use task scheduling software called HTCondor to schedule and run jobs. How do I submit a computing job? Jobs are submitted via the HTCondor scheduler. Please see our Roadmap to HTC Workload Submission guide for more details on submitting and managing jobs. How many jobs can I have in the queue? The number of jobs that are submitted to the queue by any one user cannot exceed 10,000 without adding a special statement to the submit file. If you have more jobs than that, we ask that you include the following statement in your submit file: max_idle = 2000 This is the maximum number of jobs that you will have in the \"Idle\" or \"Held\" state for the submitted batch of jobs at any given time. Using a value of 2000 will ensure that your jobs continue to apply a constant pressure on the queue, but will not fill up the queue unnecessarily (which helps the scheduler to perform optimally). How do I view usage metrics for my project? The project's resource usage appears in the OSG accounting system, GRid ACcounting Collector (GRACC) . Additional dashboards are available to help filter information of interest. At the top of that dashboard, there is a set of filters that you can use to examine the number of hours used by your project, or your institution. Why specify +JobDurationCategory in the HTCondor submit file?
To maximize the value of the capacity contributed by the different organizations to the OSPool, users are requested to identify a duration category for their jobs. These categories should be selected based upon test jobs (run on the OSPool) and allow for more effective scheduling of the capacity contributed to the pool. Every job submitted from an OSG-managed access point must be labeled with a Job Duration Category upon submission. By knowing the expected job duration, OSG will be able to direct longer-running jobs to resources that are faster and are interrupted less, while shorter jobs can run across more of the OSPool for better overall throughput. Jobs with single executions longer than 20 hours in tests on the OSPool should not be submitted without self-checkpointing. Details on how to specify +JobDurationCategory can be found in our Overview: Submit Jobs to the OSPool using HTCondor and Roadmap to HTC Workload Submission guides. Data Storage and Transfer \u00b6 What is the best way to process a large volume of data? There may be more than one solution that is available to researchers to process large amounts of data. Contact a Facilitator for a free, individual consultation to learn about your options. How do I transfer my data to and from OSG Access Points? You can transfer data using `scp`, `rsync`, or other common Unix tools. See Using scp To Transfer Files for more details. Is there any support for private data? Data stored in `/protected` and in `/home` is not publicly accessible. Sensitive data, such as HIPAA data, is not allowed to be uploaded or analyzed using OSG resources. Is data backed up on OSG resources? Our data storage locations are not backed up nor are they intended for long-term storage. If the data is not being used for active computing work, it should not be stored on OSG systems. Can I get a quota increase? Yes. Contact support@osg-htc.org if you think you'll need a quota increase for `/home`, `/public`, or `/protected` to accommodate a set of concurrently-running jobs. We can support very large amounts of data; the default quotas are just a starting point. Will I get notified about hitting quota limits? The best place to see your quota status is in the login message. Workflow Management \u00b6 How do I run and manage complex workflows? For workflows that have multiple steps and/or multiple files, we advise using a workflow management system. A workflow management system allows you to define different computational steps in your workflow and indicate how inputs and outputs should be transferred between these steps. Once you define a workflow, the workflow management system will then run your workflow, automatically retrying failed jobs and transferring files between different steps. What workflow management systems are recommended on OSG? We support DAGMan and Pegasus for workflow management.","title":"Frequently Asked Questions"},{"location":"overview/references/frequently-asked-questions/#frequently-asked-questions","text":"","title":"Frequently Asked Questions"},{"location":"overview/references/frequently-asked-questions/#getting-started","text":"Who is eligible to request an OSG account? Any researcher affiliated with a U.S. institution (college, university, national laboratory or research foundation) is eligible to use OSG resources for their work. Researchers outside of the U.S. with affiliations to U.S. groups may be eligible for membership if they are sponsored by a collaborator within the U.S. How do I request an OSG account?
Please visit our website for the most up-to-date information on requesting an account. Once your account request has been received, a Research Computing Facilitator will contact you within one business day to arrange a meeting to learn about your computational goals and to create your account. How do I change the project my jobs are affiliated with? The OSG team assigns individual user accounts to \"projects\" upon account creation. These projects are a way to track usage hours and capture information about the types of research running on OSG resources. A project typically corresponds to a research group headed by a single PI, but can sometimes represent a long-term multi-institutional project or some other grouping. If you only belong to a single project, that project will be charged automatically when you submit jobs. Run the following command to see a list of projects you belong to: $ grep $USER /etc/condor/UserToProjectMap.txt If need to run jobs under a different project you are a member of, you can manually set the project for those jobs by putting this option in the submit file: +ProjectName=\"ProjectName\" Can I use my ACCESS allocation? There are two ways OSG interfaces with ACCESS: You can get an allocation for the OSPool. This will allow you to run OSPool jobs and have the usage charged to your ACCESS credits, and can be useful if you already have an allocation. If you only need to use OSG resources, we recommend you come directly to our system. You can manage your workloads on the OSPool access points, and run those jobs on other ACCESS resources. This is a capability still in development.","title":"Getting Started"},{"location":"overview/references/frequently-asked-questions/#workshops-and-training","text":"Do you offer training sessions and workshops? We offer virtual trainings twice-a-month, as well as an annual, week-long summer school for OSG users. We also participate in additional external conferences and events throughout the year. Information about upcoming and past events, including workshop dates and locations, is available on our website. Who may attend OSG workshops? Workshops are available to any researcher affiliated with a U.S. academic, non-profit, or government institution. How to cite or acknowledge OSG? Whenever you make use of OSG resources, services or tools, we request you acknowledge OSG in your presentations and publications using the informtion provided on the Acknowledging the OSG Consortium page.","title":"Workshops and Training"},{"location":"overview/references/frequently-asked-questions/#software","text":"What software packages are available? In general, we support most software that fits the distributed high throughput computing model (e.g., open source). Users are encouraged to download and install their own software on our Access Points. Additionally, users may install their software into a Docker container which can run on OSG as an Apptainer image or use one of our existing containers. See the Software guides on the OSPool documentation website for more information. Are there any restrictions on installing commercial softwares? We can only *directly* support software that is freely distributable. At present, we do not have or support most commercial software due to licensing issues. (One exception is running MATLAB standalone executables which have been compiled with the MATLAB Compiler Runtime). 
Software that is licensed to individual users (and not to be shared between users) can be staged within the user's `/home` or `/protected` directories, but should not be staged in OSG's `/public` data staging locations. See OSPool policies for more information. Please get in touch with any questions about licensed software. Can I request for system wide installation of the open source software useful for my research? We recommend users use Docker or Apptainer containers if jobs require system wide installations of software. Visit the OSPool Documentation website to learn more about creating your own container.","title":"Software"},{"location":"overview/references/frequently-asked-questions/#running-jobs","text":"What type of computation is a good match or NOT a good match for the OSG's Open Science Pool? The OSG provides computing resources through the Open Science Pool for high throughput computing workloads. You can get the most of out OSG resources by breaking up a single large computational task into many smaller tasks for the fastest overall turnaround. This approach can be invaluable in accelerating your computational work and thus your research. Please see our Computation on the Open Science Pool page for more details on how to determine if your work matches up well with OSG's high throughput computing model. What job scheduler is being used on the Open Science Pool? We use a task scheduling software called HTCondor to schedule and run jobs. How do I submit a computing job? Jobs are submitted via HTCondor scheduler. Please see our Roadmap to HTC Workload Submission guide for more details on submitting and managing jobs. How many jobs can I have in the queue? The number of jobs that are submitted to the queue by any one user cannot not exceed 10,000 without adding a special statement to the submit file. If you have more jobs than that, we ask that you include the following statement in your submit file: max_idle = 2000 This is the maximum number of jobs that you will have in the \"Idle\" or \"Held\" state for the submitted batch of jobs at any given time. Using a value of 2000 will ensure that your jobs continue to apply a constant pressure on the queue, but will not fill up the queue unnecessarily (which helps the scheduler to perform optimally). How do I view usage metrics for my project? The project's resource usage appears in the OSG accounting system, GRid ACcounting Collector (GRACC) . Additional dashboards are available to help filter information of interest. At the top of that dashboard, there is a set of filters that you can use to examine the number of hours used by your project, or your institution. Why specify +JobDurationCategory in the HTCondor submit file? To maximize the value of the capacity contributed by the different organizations to the OSPool, users are requested to identify a duration categories for their jobs. These categories should be selected based upon test jobs (run on the OSPool) and allow for more effective scheduling of the capacity contributed to the pool. Every job submitted from an OSG-managed access point must be labeled with a Job Duration Category upon submission. By knowing the expected job duration, OSG will be able to direct longer-running jobs to resources that are faster and are interrupted less, while shorter jobs can run across more of the OSPool for better overall throughput. Jobs with single executions longer than 20 hours in tests on the OSPool should not be submitted, without self-checkpointing. 
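As a sketch of how the Job Duration Category is expressed, the submit file carries one extra attribute line; "Medium" is just an example value here, and the categories themselves are described in the guides referenced below.

```
# excerpt from an HTCondor submit file; pick the category that matches your tested job duration
+JobDurationCategory = "Medium"
```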
Details on how to specify +JobDurationCategory can be found in our Overview: Submit Jobs to the OSPool using HTCondor and Roadmap to HTC Workload Submission guides.","title":"Running Jobs"},{"location":"overview/references/frequently-asked-questions/#data-storage-and-transfer","text":"What is the best way to process large volume of data? There may be more than one solution that is available to researchers to process large amounts of data. Contact a Facilitator at for a free, individual consulation to learn about your options. How do I transfer my data to and from OSG Access Points? You can transfer data using `scp`, `rsync`, or other common Unix tools. See Using scp To Transfer Files for more details. Is there any support for private data? Data stored in `/protected` and in `/home` is not publically accessible. Sensitive data, such as HIPAA data, is not allowed to be uploaded or analyzed using OSG resources. Is data backed up on OSG resources? Our data storage locations are not backed up nor are they intended for long-term storage. If the data is not being used for active computing work, it should not be stored on OSG systems. Can I get a quota increase? Yes. Contact support@osg-htc.org if you think you'll need a quota increase for `/home`, `/public`, or `/protected` to accommodate a set of concurrently-running jobs. We can suppport very large amounts of data, the default quotas are just a starting point. Will I get notified about hitting quota limits? The best place to see your quota status is in the login message.","title":"Data Storage and Transfer"},{"location":"overview/references/frequently-asked-questions/#workflow-management","text":"How do I run and manage complex workflows? For workflows that have multiple steps and/or multiple files, we advise using a workflow management system. A workflow management system allows you to define different computational steps in your workflow and indicate how inputs and outputs should be transferred between these steps. Once you define a workflow, the workflow management system will then run your workflow, automatically retrying failed jobs and transferrring files between different steps. What workflow management systems are recommended on OSG? We support DAGMan and Pegasus for workflow management.","title":"Workflow Management"},{"location":"overview/references/gracc/","text":"OSG Accounting (GRACC) \u00b6 GRACC is the Open Science Pool's accounting system. If you need graphs or high level statistics on your OSG usage, please go to: https://gracc.opensciencegrid.org/ GRACC contains an overwhelming amount of data. As an OSPool user, you are most likely interested in seeing your own usage over time. This can be found under the Open Science Pool - All Usage dashboard here Under the Project drop-down, find your project. You can select multiple ones. In the upper right corner, you can select a different time period. You can then select a different Bin size time range. For example, if you want data for the last year grouped monthly, select \"Last 1 year\" for the Period , and \"1M\" for the Bin size . Here is an example of what the information provided will look like:","title":"OSG Accounting (GRACC)"},{"location":"overview/references/gracc/#osg-accounting-gracc","text":"GRACC is the Open Science Pool's accounting system. If you need graphs or high level statistics on your OSG usage, please go to: https://gracc.opensciencegrid.org/ GRACC contains an overwhelming amount of data. 
As an OSPool user, you are most likely interested in seeing your own usage over time. This can be found under the Open Science Pool - All Usage dashboard here Under the Project drop-down, find your project. You can select multiple ones. In the upper right corner, you can select a different time period. You can then select a different Bin size time range. For example, if you want data for the last year grouped monthly, select \"Last 1 year\" for the Period , and \"1M\" for the Bin size . Here is an example of what the information provided will look like:","title":"OSG Accounting (GRACC)"},{"location":"overview/references/policy/","text":"Policies for Using OSG Services and the OSPool \u00b6 Access to OSG services and the Open Science Pool (OSPool) is contingent on compliance with the below and with any requests from OSG staff to change practices that cause issues for OSG systems and/or users. Please contact us if you have any questions! We can often help with exceptions to default policies and/or identify available alternative approaches to help you with a perceived barrier. As the below do not cover every possible scenario of potentially disruptive practices, OSG staff reserve the right to take any necessary corrective actions to ensure performance and resource availability for all users from OSG-managed Access Points. This may include the hold or removal of jobs, deletion of user data, deactivation of accounts, etc. In some cases, these actions may need to be taken without notifying the user. By using the OSG resources, users are expected to follow the Open Science Pool acceptable use policy , which includes appropriate scope of use and common user security practices. OSG resources are only available to individuals affiliated with a US-based academic, government, or non-profit organization, or with a research project led by an affiliated sponsor. Users can have up to 10,000 jobs queued, without taking additional steps , and should submit multiple jobs via a single submit file, according to our online guides. Please write to us if you\u2019d like to easily submit more! Do not run computationally-intensive or persistent processes on the Access Points (login nodes). Exceptions include single-threaded software compilation and data management tasks (transfer to/from the Access Point, directory creation, file moving/renaming, untar-ing, etc.). The execution of multi-threaded tasks for job setup or post-processing or software testing will almost certainly cause performance issues and may result in loss of access. Software testing should be executed from within submitted jobs, where job scheduling also provides a more accurate test environment to the user without compromising performance of the Access Points. OSG staff reserve the right to kill any tasks running on the login nodes, in order to ensure performance for all users. Similarly, please contact us to discuss appropriate features and options, rather than running scripts (including cron ) to automate job submission , throttling, resubmission, or ordered execution (e.g. workflows), even if these are executed remotely to coordinate work on OSG-managed Access Points. These almost always end up causing significant issues and/or wasted computing capacity, and we're happy to help you to implement automation tools the integrate with HTCondor. 
Data Policies : OSG-managed filesystems are not backed up and should be treated as temporary (\u201cscratch\u201d-like) space for active work, only , following OSG policies for data storage and per-job transfers . Some OSG-managed storage spaces are truly \u2018open\u2019 with data available to be downloaded publicly. Of note: Users should keep copies of essential data and software in non-OSG locations, as OSG staff reserve the right to remove data at any time in order to ensure and/or restore system availability, and without prior notice to users. Proprietary data, HIPAA, and data with any other privacy concerns should not be stored on any OSG-managed filesystems or computed on using OSG-managed resources. Similarly, users should follow all licensing requirements when storing and executing software via OSG-managed Access Points. Users should keep their /home directory privileges restricted to their user or group, and should not add \u2018global\u2019 permissions, which will allow other users to potentially make your data public. User-created \u2018open\u2019 network ports are disallowed , unless explicitly permitted following an accepted justification to support@osg-htc.org. (If you\u2019re not sure whether something you want to do will open a port, just get in touch!) The following actions may be taken automatedly or by OSG staff to stop or prevent jobs from causing problems. Please contact us if you\u2019d like help understanding why your jobs were held or removed, and so we can help you avoid problems in the future. Jobs using more memory or disk than requested may be automatically held (see Scaling Up after Test Jobs for tips on requesting the \u2018right\u2019 amount of job resources in your submit file). Jobs running longer than their JobDurationCategory allows for will be held (see Indicate the Job Duration Category of Your Jobs ). Jobs that have executed more than 30 times without completing may be automatically held (likely because they\u2019re too long for OSG). Jobs that have been held more than 14 days may be automatically removed. Jobs queued for more than three months may be automatically removed. Jobs otherwise causing known problems may be held or removed, without prior notification to the user. Held jobs may also be edited to prevent automated release/retry NOTE: in order to respect user email clients, job holds and removals do not come with specific notification to the user, unless configured by the user at the time of submission using HTCondor\u2019s \u2018notification\u2019 feature.","title":"Policies for Using OSG Services and the OSPool "},{"location":"overview/references/policy/#policies-for-using-osg-services-and-the-ospool","text":"Access to OSG services and the Open Science Pool (OSPool) is contingent on compliance with the below and with any requests from OSG staff to change practices that cause issues for OSG systems and/or users. Please contact us if you have any questions! We can often help with exceptions to default policies and/or identify available alternative approaches to help you with a perceived barrier. As the below do not cover every possible scenario of potentially disruptive practices, OSG staff reserve the right to take any necessary corrective actions to ensure performance and resource availability for all users from OSG-managed Access Points. This may include the hold or removal of jobs, deletion of user data, deactivation of accounts, etc. In some cases, these actions may need to be taken without notifying the user. 
By using the OSG resources, users are expected to follow the Open Science Pool acceptable use policy , which includes appropriate scope of use and common user security practices. OSG resources are only available to individuals affiliated with a US-based academic, government, or non-profit organization, or with a research project led by an affiliated sponsor. Users can have up to 10,000 jobs queued, without taking additional steps , and should submit multiple jobs via a single submit file, according to our online guides. Please write to us if you\u2019d like to easily submit more! Do not run computationally-intensive or persistent processes on the Access Points (login nodes). Exceptions include single-threaded software compilation and data management tasks (transfer to/from the Access Point, directory creation, file moving/renaming, untar-ing, etc.). The execution of multi-threaded tasks for job setup or post-processing or software testing will almost certainly cause performance issues and may result in loss of access. Software testing should be executed from within submitted jobs, where job scheduling also provides a more accurate test environment to the user without compromising performance of the Access Points. OSG staff reserve the right to kill any tasks running on the login nodes, in order to ensure performance for all users. Similarly, please contact us to discuss appropriate features and options, rather than running scripts (including cron ) to automate job submission , throttling, resubmission, or ordered execution (e.g. workflows), even if these are executed remotely to coordinate work on OSG-managed Access Points. These almost always end up causing significant issues and/or wasted computing capacity, and we're happy to help you to implement automation tools the integrate with HTCondor. Data Policies : OSG-managed filesystems are not backed up and should be treated as temporary (\u201cscratch\u201d-like) space for active work, only , following OSG policies for data storage and per-job transfers . Some OSG-managed storage spaces are truly \u2018open\u2019 with data available to be downloaded publicly. Of note: Users should keep copies of essential data and software in non-OSG locations, as OSG staff reserve the right to remove data at any time in order to ensure and/or restore system availability, and without prior notice to users. Proprietary data, HIPAA, and data with any other privacy concerns should not be stored on any OSG-managed filesystems or computed on using OSG-managed resources. Similarly, users should follow all licensing requirements when storing and executing software via OSG-managed Access Points. Users should keep their /home directory privileges restricted to their user or group, and should not add \u2018global\u2019 permissions, which will allow other users to potentially make your data public. User-created \u2018open\u2019 network ports are disallowed , unless explicitly permitted following an accepted justification to support@osg-htc.org. (If you\u2019re not sure whether something you want to do will open a port, just get in touch!) The following actions may be taken automatedly or by OSG staff to stop or prevent jobs from causing problems. Please contact us if you\u2019d like help understanding why your jobs were held or removed, and so we can help you avoid problems in the future. 
Jobs using more memory or disk than requested may be automatically held (see Scaling Up after Test Jobs for tips on requesting the \u2018right\u2019 amount of job resources in your submit file). Jobs running longer than their JobDurationCategory allows for will be held (see Indicate the Job Duration Category of Your Jobs ). Jobs that have executed more than 30 times without completing may be automatically held (likely because they\u2019re too long for OSG). Jobs that have been held more than 14 days may be automatically removed. Jobs queued for more than three months may be automatically removed. Jobs otherwise causing known problems may be held or removed, without prior notification to the user. Held jobs may also be edited to prevent automated release/retry NOTE: in order to respect user email clients, job holds and removals do not come with specific notification to the user, unless configured by the user at the time of submission using HTCondor\u2019s \u2018notification\u2019 feature.","title":"Policies for Using OSG Services and the OSPool"},{"location":"software_examples/ai/scikit-learn/","text":"scikit-learn \u00b6 scikit-learn is a machine learning toolkit for Python. Below you will find an example on how to use an OSG-provided software container that contains scikit-learn. However, it is good to keep in mind that you have two options when it comes to integrating your own code: If the code is simple, send it with the job (this is what the example uses) For more complex codes, consider extending the provided containers and integrate the code into the new custom container Containers are detailed in our general documentation: Containers - Apptainer/Singularity Scikit-learn Python Code \u00b6 An example scikit-learn machine learning executable is: #!/usr/bin/env python3 # example adopted from https://scikit-learn.org/stable/tutorial/basic/tutorial.html from sklearn import datasets from sklearn import svm iris = datasets.load_iris() digits = datasets.load_digits() # learning clf = svm.SVC(gamma=0.001, C=100.) clf.fit(digits.data[:-1], digits.target[:-1]) # predicting print(clf.predict(digits.data[-1:])) Submit File \u00b6 universe = container container_image = /cvmfs/singularity.opensciencegrid.org/htc/scikit-learn:1.3 log = job_$(Cluster)_$(Process).log error = job_$(Cluster)_$(Process).err output = job_$(Cluster)_$(Process).out executable = run-scikit-learn.py #arguments = # specify both general requirements and gpu requirements if there are any # requirements = True # require_gpus = +JobDurationCategory = \"Medium\" request_gpus = 0 request_cpus = 1 request_memory = 4GB request_disk = 4GB queue 1","title":"scikit-learn"},{"location":"software_examples/ai/scikit-learn/#scikit-learn","text":"scikit-learn is a machine learning toolkit for Python. Below you will find an example on how to use an OSG-provided software container that contains scikit-learn. 
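To actually run the scikit-learn example shown above, the submit-and-monitor steps mirror the TensorFlow example later in this document. A minimal sketch, assuming you saved the submit file above as scikit-learn.submit (this guide does not fix a file name):

    # Submit the job described by the submit file above
    condor_submit scikit-learn.submit
    # Check its progress in the queue
    condor_q
    # After completion, the predicted digit is printed in the job's .out file
    cat job_*.out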
However, it is good to keep in mind that you have two options when it comes to integrating your own code: If the code is simple, send it with the job (this is what the example uses) For more complex codes, consider extending the provided containers and integrating the code into the new custom container Containers are detailed in our general documentation: Containers - Apptainer/Singularity","title":"scikit-learn"},{"location":"software_examples/ai/scikit-learn/#scikit-learn-python-code","text":"An example scikit-learn machine learning executable is: #!/usr/bin/env python3 # example adopted from https://scikit-learn.org/stable/tutorial/basic/tutorial.html from sklearn import datasets from sklearn import svm iris = datasets.load_iris() digits = datasets.load_digits() # learning clf = svm.SVC(gamma=0.001, C=100.) clf.fit(digits.data[:-1], digits.target[:-1]) # predicting print(clf.predict(digits.data[-1:]))","title":"Scikit-learn Python Code"},{"location":"software_examples/ai/scikit-learn/#submit-file","text":"universe = container container_image = /cvmfs/singularity.opensciencegrid.org/htc/scikit-learn:1.3 log = job_$(Cluster)_$(Process).log error = job_$(Cluster)_$(Process).err output = job_$(Cluster)_$(Process).out executable = run-scikit-learn.py #arguments = # specify both general requirements and gpu requirements if there are any # requirements = True # require_gpus = +JobDurationCategory = \"Medium\" request_gpus = 0 request_cpus = 1 request_memory = 4GB request_disk = 4GB queue 1","title":"Submit File"},{"location":"software_examples/ai/tensorflow/","text":"The OSPool enables AI (Artificial Intelligence) workloads by providing access to GPUs and custom software stacks via containers. An example of this support is the machine learning platform TensorFlow. TensorFlow \u00b6 https://www.tensorflow.org/ describes TensorFlow as: TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well. TensorFlow can be complicated software to install as it requires many dependencies and specific environmental configurations. Software containers solve this problem by defining a full operating system image, containing not only the complex software package, but dependencies and environment configuration as well.
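If you would like to confirm what one of these OSG-provided containers ships before submitting any jobs, you can usually run it interactively on the Access Point. A minimal sketch, assuming Apptainer and the CVMFS image path used in the submit files above are available there:

    # Print the TensorFlow version packaged in the OSG-provided container
    apptainer exec /cvmfs/singularity.opensciencegrid.org/htc/tensorflow:2.15 \
        python3 -c 'import tensorflow as tf; print(tf.__version__)'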
Working with GPUs and containers are detailed in the general documentation: GPU Jobs Containers - Apptainer/Singularity TensorFlow Python Code \u00b6 An example TensorFlow executable that builds a machine learning model and evaluates it is: #!/usr/bin/env python3 # example adopted from https://www.tensorflow.org/tutorials/quickstart/beginner import tensorflow as tf print(\"TensorFlow version:\", tf.__version__) # this will show that the GPU was found tf.debugging.set_log_device_placement(True) # load a dataset mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 # build a machine learning model model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10) ]) predictions = model(x_train[:1]).numpy() # convert to probabilities tf.nn.softmax(predictions).numpy() # loss function loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) loss_fn(y_train[:1], predictions).numpy() # compile model model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy']) # train model.fit(x_train, y_train, epochs=5) # evaluate model.evaluate(x_test, y_test, verbose=2) HTCondor Submit File \u00b6 To run this TensorFlow script, create an HTCondor submit file to tell HTCondor how you would like it run on your behalf. An example HTCondor submit file for this job is below. Because TensorFlow is optimized to run with GPUs, make sure to tell HTCondor to assign your job to a GPU machine: universe = container container_image = /cvmfs/singularity.opensciencegrid.org/htc/tensorflow:2.15 log = job_$(Cluster)_$(Process).log error = job_$(Cluster)_$(Process).err output = job_$(Cluster)_$(Process).out executable = run-tf.py #arguments = +JobDurationCategory = \"Medium\" # specify both general requirements and gpu requirements if needed # requirements = True require_gpus = (Capability > 7.5) request_gpus = 1 request_cpus = 1 request_memory = 4GB request_disk = 4GB queue 1 Run TensorFlow \u00b6 Since we have prepared our executable, submit file, and are using an OSG-provided TensorFlow container, we are ready to submit this job to run on one of the OSPool GPU machines. To submit this job to run, type condor_submit TensorFlow.submit . The status of your job can be checked at any time by running condor_q .","title":"TensorFlow"},{"location":"software_examples/ai/tensorflow/#tensorflow","text":"https://www.tensorflow.org/ desribes TensorFlow as: TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well. TensorFlow can be a complicated software to install as it requires many dependencies and specific environmental configurations. 
Software ontainers solve this problem by defining a full operating system image, containing not only the complex software package, but dependencies and environment configuration as well. Working with GPUs and containers are detailed in the general documentation: GPU Jobs Containers - Apptainer/Singularity","title":"TensorFlow"},{"location":"software_examples/ai/tensorflow/#tensorflow-python-code","text":"An example TensorFlow executable that builds a machine learning model and evaluates it is: #!/usr/bin/env python3 # example adopted from https://www.tensorflow.org/tutorials/quickstart/beginner import tensorflow as tf print(\"TensorFlow version:\", tf.__version__) # this will show that the GPU was found tf.debugging.set_log_device_placement(True) # load a dataset mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 # build a machine learning model model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10) ]) predictions = model(x_train[:1]).numpy() # convert to probabilities tf.nn.softmax(predictions).numpy() # loss function loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) loss_fn(y_train[:1], predictions).numpy() # compile model model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy']) # train model.fit(x_train, y_train, epochs=5) # evaluate model.evaluate(x_test, y_test, verbose=2)","title":"TensorFlow Python Code"},{"location":"software_examples/ai/tensorflow/#htcondor-submit-file","text":"To run this TensorFlow script, create an HTCondor submit file to tell HTCondor how you would like it run on your behalf. An example HTCondor submit file for this job is below. Because TensorFlow is optimized to run with GPUs, make sure to tell HTCondor to assign your job to a GPU machine: universe = container container_image = /cvmfs/singularity.opensciencegrid.org/htc/tensorflow:2.15 log = job_$(Cluster)_$(Process).log error = job_$(Cluster)_$(Process).err output = job_$(Cluster)_$(Process).out executable = run-tf.py #arguments = +JobDurationCategory = \"Medium\" # specify both general requirements and gpu requirements if needed # requirements = True require_gpus = (Capability > 7.5) request_gpus = 1 request_cpus = 1 request_memory = 4GB request_disk = 4GB queue 1","title":"HTCondor Submit File"},{"location":"software_examples/ai/tensorflow/#run-tensorflow","text":"Since we have prepared our executable, submit file, and are using an OSG-provided TensorFlow container, we are ready to submit this job to run on one of the OSPool GPU machines. To submit this job to run, type condor_submit TensorFlow.submit . The status of your job can be checked at any time by running condor_q .","title":"Run TensorFlow"},{"location":"software_examples/ai/tutorial-pytorch/","text":"The OSPool can be used as a platform to carry out machine learning and artificial intelligence research. The following tutorial uses the common machine learning framework, PyTorch. Using PyTorch on OSPool \u00b6 The preferred method of using a software on the the OSPool is to use a container. The guide shows two ways of running PyTorch on the OSPool. Firstly, downloading our desired version of PyTorch images from dockerhub. Secondly, how to use an already created singularity container of PyTorch to submit a HTCondor job on OSPool. 
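Before pulling or building your own image, it is often worth checking whether an OSG-provided container already covers your needs; the images referenced in these guides live under a CVMFS path. A minimal sketch, assuming CVMFS is mounted on your Access Point:

    # List the OSG-provided container images referenced throughout these guides
    ls /cvmfs/singularity.opensciencegrid.org/htc/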
Pulling an Image from Docker \u00b6 Please note that the docker build will not work on the access point . Apptainer is installed on the access point and users can use Apptainer to either build an image from the definition file or use apptainer pull to create a .sif file from Docker images. At the time the guide is written, the latest version of PyTorch is 2.1.1. Before pulling the image/software from Docker it is a good practice to set up the cache directory of Apptainer. Run the following command on the command prompt [user@ap]$ mkdir $HOME/tmp [user@ap]$ export TMPDIR=$HOME/tmp [user@ap]$ export APPTAINER_TMPDIR=$HOME/tmp [user@ap]$ export APPTAINER_CACHEDIR=$HOME/tmp Now, we pull the image and convert it to a .sif file using apptainer pull [user@ap]$ apptainer pull pytorch-2.1.1.sif docker://pytorch/pytorch:2.1.1-cuda12.1-cudnn8-runtime Transfer the image using OSDF \u00b6 The above command will create a singularity container named pytorch-2.1.1.sif in your current directory. The image will be reused for each job, and thus the preferred transfer method is OSDF . Store the pytorch-2.1.1.sif file under the \"protected\" area on your access point (see table here ), and then use the OSDF url directly in the +SingularityImage attribute. Note that you can not use shell variable expansion in the submit file - be sure to replace the username with your actual OSPool username. +SingularityImage = \"osdf:///ospool/PROTECTED//pytorch-2.1.1.sif\" queue Using an existing PyTorch container \u00b6 OSG has the pytorch-2.1.1.sif container image. To use the OSG built container just provide the address of the container- '/ospool/uc-shared/public/OSG-Staff/pytorch-2.1.1.sif' in to your submit file +SingularityImage = \"osdf:///ospool/uc-shared/public/OSG-Staff/pytorch-2.1.1.sif\" queue Running an ML job using PyTorch \u00b6 For this tutorial, we will see how to use PyTorch to run a machine learning workflow from the MNIST database. To download the materials for this tutorial, use the command git clone https://github.com/OSGConnect/tutorial-pytorch The github repository contains a tarball of the MNIST data- MNIST_data.tar.gz , a wrapper script- pytorch_cnn.sh that untars the data and runs the python script- main.py to train a neural network on this MNIST database. The content of the pytorch_cnn.sh wrapper script is given below: #!/bin/bash echo \"Hello OSPool from Job $1 running on `hostname`\" # untar the test and training data tar zxf MNIST_data.tar.gz # run the PyTorch model python main.py --save-model --epochs 20 # remove the data directory rm -r data A submit script- pytorch_cnn.sub is also there to submit the PyTorch job on the OSPool using the container that is provided by OSG. 
The contents of pytorch_cnn.sub file are: # PyTorch test of convolutional neural network # Submit file +SingularityImage = \"osdf:///ospool/uc-shared/public/OSG-Staff/pytorch-2.1.1.sif\" # set the log, error and output files log = logs/pytorch_cnn.log.txt error = logs/pytorch_cnn.err.txt output = output/pytorch_cnn.out.txt # set the executable to run executable = pytorch_cnn.sh arguments = $(Process) # Transfer the python script and the MNIST database to the compute node transfer_input_files = main.py, MNIST_data.tar.gz should_transfer_files = YES when_to_transfer_output = ON_EXIT # We require a machine with a compatible version of the CUDA driver require_gpus = (DriverVersion >= 10.1) # We must request 1 CPU in addition to 1 GPU request_cpus = 1 request_gpus = 1 # select some memory and disk space request_memory = 3GB request_disk = 5GB # Tell HTCondor to run 1 instance of our job: queue 1 Please note, if you want to use your own container please replace the +SingularityImage attribute accordingly. Create Log Directories and Submit Job \u00b6 You will need to create the logs and output directories to hold the files that will be created for each job. You can create both directories at once with the command mkdir logs output Submit the job using condor_submit pytorch_cnn.sub Output \u00b6 The output of the code will be the CNN Network that was trained. It will be returned to us as a file mnist_cnn.pt . The are also some output stats on the training and test error in the pytorch_cnn.out.txt. file Test set: Average loss: 0.0278, Accuracy: 9909/10000 (99%) Getting help \u00b6 For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the help desk and community forums .","title":"PyTorch"},{"location":"software_examples/ai/tutorial-pytorch/#using-pytorch-on-ospool","text":"The preferred method of using a software on the the OSPool is to use a container. The guide shows two ways of running PyTorch on the OSPool. Firstly, downloading our desired version of PyTorch images from dockerhub. Secondly, how to use an already created singularity container of PyTorch to submit a HTCondor job on OSPool.","title":"Using PyTorch on OSPool"},{"location":"software_examples/ai/tutorial-pytorch/#pulling-an-image-from-docker","text":"Please note that the docker build will not work on the access point . Apptainer is installed on the access point and users can use Apptainer to either build an image from the definition file or use apptainer pull to create a .sif file from Docker images. At the time the guide is written, the latest version of PyTorch is 2.1.1. Before pulling the image/software from Docker it is a good practice to set up the cache directory of Apptainer. Run the following command on the command prompt [user@ap]$ mkdir $HOME/tmp [user@ap]$ export TMPDIR=$HOME/tmp [user@ap]$ export APPTAINER_TMPDIR=$HOME/tmp [user@ap]$ export APPTAINER_CACHEDIR=$HOME/tmp Now, we pull the image and convert it to a .sif file using apptainer pull [user@ap]$ apptainer pull pytorch-2.1.1.sif docker://pytorch/pytorch:2.1.1-cuda12.1-cudnn8-runtime","title":"Pulling an Image from Docker"},{"location":"software_examples/ai/tutorial-pytorch/#transfer-the-image-using-osdf","text":"The above command will create a singularity container named pytorch-2.1.1.sif in your current directory. The image will be reused for each job, and thus the preferred transfer method is OSDF . 
Store the pytorch-2.1.1.sif file under the \"protected\" area on your access point (see table here ), and then use the OSDF url directly in the +SingularityImage attribute. Note that you can not use shell variable expansion in the submit file - be sure to replace the username with your actual OSPool username. +SingularityImage = \"osdf:///ospool/PROTECTED//pytorch-2.1.1.sif\" queue","title":"Transfer the image using OSDF"},{"location":"software_examples/ai/tutorial-pytorch/#using-an-existing-pytorch-container","text":"OSG has the pytorch-2.1.1.sif container image. To use the OSG built container just provide the address of the container- '/ospool/uc-shared/public/OSG-Staff/pytorch-2.1.1.sif' in to your submit file +SingularityImage = \"osdf:///ospool/uc-shared/public/OSG-Staff/pytorch-2.1.1.sif\" queue","title":"Using an existing PyTorch container"},{"location":"software_examples/ai/tutorial-pytorch/#running-an-ml-job-using-pytorch","text":"For this tutorial, we will see how to use PyTorch to run a machine learning workflow from the MNIST database. To download the materials for this tutorial, use the command git clone https://github.com/OSGConnect/tutorial-pytorch The github repository contains a tarball of the MNIST data- MNIST_data.tar.gz , a wrapper script- pytorch_cnn.sh that untars the data and runs the python script- main.py to train a neural network on this MNIST database. The content of the pytorch_cnn.sh wrapper script is given below: #!/bin/bash echo \"Hello OSPool from Job $1 running on `hostname`\" # untar the test and training data tar zxf MNIST_data.tar.gz # run the PyTorch model python main.py --save-model --epochs 20 # remove the data directory rm -r data A submit script- pytorch_cnn.sub is also there to submit the PyTorch job on the OSPool using the container that is provided by OSG. The contents of pytorch_cnn.sub file are: # PyTorch test of convolutional neural network # Submit file +SingularityImage = \"osdf:///ospool/uc-shared/public/OSG-Staff/pytorch-2.1.1.sif\" # set the log, error and output files log = logs/pytorch_cnn.log.txt error = logs/pytorch_cnn.err.txt output = output/pytorch_cnn.out.txt # set the executable to run executable = pytorch_cnn.sh arguments = $(Process) # Transfer the python script and the MNIST database to the compute node transfer_input_files = main.py, MNIST_data.tar.gz should_transfer_files = YES when_to_transfer_output = ON_EXIT # We require a machine with a compatible version of the CUDA driver require_gpus = (DriverVersion >= 10.1) # We must request 1 CPU in addition to 1 GPU request_cpus = 1 request_gpus = 1 # select some memory and disk space request_memory = 3GB request_disk = 5GB # Tell HTCondor to run 1 instance of our job: queue 1 Please note, if you want to use your own container please replace the +SingularityImage attribute accordingly.","title":"Running an ML job using PyTorch"},{"location":"software_examples/ai/tutorial-pytorch/#create-log-directories-and-submit-job","text":"You will need to create the logs and output directories to hold the files that will be created for each job. You can create both directories at once with the command mkdir logs output Submit the job using condor_submit pytorch_cnn.sub","title":"Create Log Directories and Submit Job"},{"location":"software_examples/ai/tutorial-pytorch/#output","text":"The output of the code will be the CNN Network that was trained. It will be returned to us as a file mnist_cnn.pt . The are also some output stats on the training and test error in the pytorch_cnn.out.txt. 
Test set: Average loss: 0.0278, Accuracy: 9909/10000 (99%)","title":"Output"},{"location":"software_examples/ai/tutorial-pytorch/#getting-help","text":"For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the help desk and community forums .","title":"Getting help"},{"location":"software_examples/bioinformatics/tutorial-blast-split/","text":"High-Throughput BLAST \u00b6 This tutorial will put together several OSG tools and ideas - handling a larger data file, splitting a large file into smaller pieces, and transferring a portable software program. Job components and plan \u00b6 To run BLAST, we need three things: 1. the BLAST program (specifically the blastx binary) 2. a reference database (this is usually a larger file) 3. the file we want to query against the database The database and the input file will each get special treatment.
Note on stash:/// : In this job, we're copying the file from a particular /public folder ( osg/BlastTutorialV1 ), but you have your own /public folder that you could use for the database. If you wanted to try this, you would want to navigate to your /public folder, download the pdbaa.tar.gz file, return to your /home folder, and change the path in the stash:/// command above. This might look like: cd /public/username wget http://stash.osgconnect.net/public/osg/BlastTutorialV1/pdbaa.tar.gz cd /home/username Finally, you may have already noticed that instead of listing the individual input file by name, we've used the following syntax: $(inputfile) . This is a variable that represents the name of an individual input file. We've done this so that we can set the variable as a different file name for each job. We can set the variable by using the queue syntax shown at the bottom of the file: queue inputfile from list.txt This command will pull file names from the list.txt file that we created earlier, and submit one job per file and set the \"inputfile\" variable to that file name. Examine the wrapper script \u00b6 The submit file had a script called run_blast.sh : #!/bin/bash # get input file from arguments inputfile=$1 # Prepare our database and unzip into new dir tar -xzvf pdbaa.tar.gz rm pdbaa.tar.gz # run blast query on input file ./blastx -db pdbaa/pdbaa -query $inputfile -out $inputfile.result It saves the name of the input file, unpacks our database, and then runs the BLAST query from the input file we transferred and used as the argument. Submit the jobs \u00b6 Our jobs should be set and ready to go. To submit them, run this command: condor_submit blast.submit And you should see that 51 jobs have been submitted: Submitting job(s)................................................ 51 job(s) submitted to cluster 90363. You can check on your jobs' progress using condor_q Bonus: a BLAST workflow \u00b6 We had to go through multiple steps to run the jobs above. There was an initial step to split the files and generate a list of them; then we submitted the jobs. These two steps can be tied together in a workflow using the HTCondor DAGMan workflow tool. First, we would create a script ( split_files.sh ) that does the file splitting steps: #!/bin/bash filesize=$1 ./gt-1.5.10-Linux_x86_64-64bit-complete/bin/gt splitfasta -targetsize $filesize mouse_rna.fa ls mouse_rna.fa.* > list.txt This script will need executable permissions: chmod +x split_files.sh Then, we create a DAG workflow file that ties the two steps together: ## DAG: blastrun.dag JOB blast blast.submit SCRIPT PRE blast split_files.sh 2 To submit this DAG, we use this command: condor_submit_dag blastrun.dag","title":"High-Throughput BLAST"},{"location":"software_examples/bioinformatics/tutorial-blast-split/#high-throughput-blast","text":"This tutorial will put together several OSG tools and ideas - handling a larger data file, splitting a large file into smaller pieces, and transferring a portable software program.","title":"High-Throughput BLAST"},{"location":"software_examples/bioinformatics/tutorial-blast-split/#job-components-and-plan","text":"To run BLAST, we need three things: 1. the BLAST program (specifically the blastx binary) 2. a reference database (this is usually a larger file) 3. the file we want to query against the database The database and the input file will each get special treatment. 
The database we are using is large enough that we will want to use OSG Connect's stashcache capability (more information about that here ). The input file is large enough that a) it is near the upper limit of what is practical to transfer, b) it would take hours to complete a single blastx analysis for it, and c) the resulting output file would be huge. Because the BLAST process is run over the input file line by line, it is scientifically valid to split up the input query file, analyze the pieces, and then put the results back together at the end! By splitting the input query file into smaller pieces, each of the queries can be run as separate jobs. On the other hand, BLAST databases should not be split, because the blast output includes a score value for each sequence that is calculated relative to the entire length of the database.","title":"Job components and plan"},{"location":"software_examples/bioinformatics/tutorial-blast-split/#get-materials-and-set-up-files","text":"Run the tutorial command: tutorial blast-split Once the tutorial has downloaded, move into the folder and run the download_files.sh script to download the remaining files: cd tutorial-blast-split ./download_files.sh This command will have downloaded and unzipped the BLAST program ( ncbi-blast-2.9.0+ ), the file we want to query ( mouse_rna.fa ) and a set of tools that will split the file into smaller pieces ( gt-1.5.10-Linux_x86_64-64bit-complete ). Next, we will use the command gt from the genome tools package to split our input query file into 2 MB chunks as indicated by the -targetsize flag. To split the file, run this command: ./gt-1.5.10-Linux_x86_64-64bit-complete/bin/gt splitfasta -targetsize 2 mouse_rna.fa Later, we'll need a list of the split files, so run this command to generate that list: ls mouse_rna.fa.* > list.txt","title":"Get materials and set up files"},{"location":"software_examples/bioinformatics/tutorial-blast-split/#examine-the-submit-file","text":"The submit file, blast.submit looks like this: executable = run_blast.sh arguments = $(inputfile) transfer_input_files = ncbi-blast-2.9.0+/bin/blastx, $(inputfile), stash:///osgconnect/public/osg/BlastTutorial/pdbaa.tar.gz output = logs/job_$(process).out error = logs/job_$(process).err log = logs/job_$(process).log requirements = OSGVO_OS_STRING == \"RHEL 7\" && Arch == \"X86_64\" request_memory = 2GB request_disk = 1GB request_cpus = 1 queue inputfile from list.txt The executable run_blast.sh is a script that runs blast and takes in a file to query as its argument. We'll look at this script in more detail in a minute. Our job will need to transfer the blastx executable and the input file being used for queries, shown in the transfer_input_files line. Because of the size of our database, we'll be using stash:/// to transfer the database to our job. Note on stash:/// : In this job, we're copying the file from a particular /public folder ( osg/BlastTutorialV1 ), but you have your own /public folder that you could use for the database. If you wanted to try this, you would want to navigate to your /public folder, download the pdbaa.tar.gz file, return to your /home folder, and change the path in the stash:/// command above. This might look like: cd /public/username wget http://stash.osgconnect.net/public/osg/BlastTutorialV1/pdbaa.tar.gz cd /home/username Finally, you may have already noticed that instead of listing the individual input file by name, we've used the following syntax: $(inputfile) . 
This is a variable that represents the name of an individual input file. We've done this so that we can set the variable as a different file name for each job. We can set the variable by using the queue syntax shown at the bottom of the file: queue inputfile from list.txt This command will pull file names from the list.txt file that we created earlier, and submit one job per file and set the \"inputfile\" variable to that file name.","title":"Examine the submit file"},{"location":"software_examples/bioinformatics/tutorial-blast-split/#examine-the-wrapper-script","text":"The submit file had a script called run_blast.sh : #!/bin/bash # get input file from arguments inputfile=$1 # Prepare our database and unzip into new dir tar -xzvf pdbaa.tar.gz rm pdbaa.tar.gz # run blast query on input file ./blastx -db pdbaa/pdbaa -query $inputfile -out $inputfile.result It saves the name of the input file, unpacks our database, and then runs the BLAST query from the input file we transferred and used as the argument.","title":"Examine the wrapper script"},{"location":"software_examples/bioinformatics/tutorial-blast-split/#submit-the-jobs","text":"Our jobs should be set and ready to go. To submit them, run this command: condor_submit blast.submit And you should see that 51 jobs have been submitted: Submitting job(s)................................................ 51 job(s) submitted to cluster 90363. You can check on your jobs' progress using condor_q","title":"Submit the jobs"},{"location":"software_examples/bioinformatics/tutorial-blast-split/#bonus-a-blast-workflow","text":"We had to go through multiple steps to run the jobs above. There was an initial step to split the files and generate a list of them; then we submitted the jobs. These two steps can be tied together in a workflow using the HTCondor DAGMan workflow tool. First, we would create a script ( split_files.sh ) that does the file splitting steps: #!/bin/bash filesize=$1 ./gt-1.5.10-Linux_x86_64-64bit-complete/bin/gt splitfasta -targetsize $filesize mouse_rna.fa ls mouse_rna.fa.* > list.txt This script will need executable permissions: chmod +x split_files.sh Then, we create a DAG workflow file that ties the two steps together: ## DAG: blastrun.dag JOB blast blast.submit SCRIPT PRE blast split_files.sh 2 To submit this DAG, we use this command: condor_submit_dag blastrun.dag","title":"Bonus: a BLAST workflow"},{"location":"software_examples/bioinformatics/tutorial-bwa/","text":"High-Throughput BWA Read Mapping \u00b6 This tutorial focuses on a subset of the Data Carpentry Genomics workshop curriculum - specifically, this page cover's how to run a BWA workflow on OSG resources. It will use the same general flow as the BWA segment of the Data Carpentry workshop with minor adjustments. The goal of this tutorial is to learn how to convert an existing BWA workflow to run on the OSPool. Get Tutorial Files \u00b6 Logged into the submit node, we will run the tutorial command, that will create a folder for our analysis, as well as some sample files. tutorial bwa Install and Prepare BWA \u00b6 First, we need to install BWA, also called Burrows-Wheeler Aligner. To do this, we will create and navigate to a new folder in our /home directory called software . We will then follow the developer's instructions (https://github.com/lh3/bwa) for using git clone to clone the software and then build the tool using make . 
cd ~/tutorial-bwa cd software git clone https://github.com/lh3/bwa.git cd bwa make Next, BWA needs to be added to our PATH variables, to test if the installation worked: export PATH=$PATH:/home/$USER/tutorial-bwa/software/bwa/ To check that BWA has been installed correctly, type bwa . You should receive output similar to the following: Program: bwa (alignment via Burrows-Wheeler transformation) Version: 0.7.17-r1198-dirty Contact: Heng Li Usage: bwa [options] Command: index index sequences in the FASTA format mem BWA-MEM algorithm fastmap identify super-maximal exact matches ... Now that we have successfully installed bwa , we will create a portable compressed tarball of this software so that it is smaller and quicker to transport when we submit our jobs to the OSPool. cd ~/tutorial-bwa/software tar -czvf bwa.tar.gz bwa Checking the size of this compressed tarball using ls -lh bwa.tar.gz reveals the file is approximately 4MB. The tarball should stay in /home. Download Data to Analyze \u00b6 Now that we have installed BWA, we need to download data to analyze. For this tutorial, we will be downloading data used in the Data Carpentry workshop. This data includes both the genome of Escherichia coli (E. coli) and paired-end RNA sequencing reads obtained from a study carried out by Blount et al. published in PNAS . Additional information about how the data was modified in preparation for this analysis can be found on the Data Carpentry's workshop website . cd ~/tutorial-bwa ./download_data.sh Investigating the size of the downloaded genome by typing: ls -lh data/ref_genome/ reveals the file is 1.4 MB. Therefore, this file should remain in /home and does not need to be moved to /public. We should also check the trimmed fastq paired-end read files: ls -lh data/trimmed_fastq_small Once everything is downloaded, make sure you're still in the tutorial-bwa directory. cd ~/tutorial-bwa Run a Single Test Job \u00b6 Now that we have all items in our analysis ready, it is time to submit a single test job to map our RNA reads to the E. coli genome. For a single test job, we will choose a single sample to analyze. In the following example, we will align both the forward and reverse reads of SRR2584863 to the E. coli genome. Using a text editor such as nano or vim , we can create an example submit file for this test job called bwa-test.sub containing the following information: universe = vanilla executable = bwa-test.sh # arguments = # need to transfer bwa.tar.gz file, the reference # genome, and the trimmed fastq files transfer_input_files = software/bwa.tar.gz, data/ref_genome/ecoli_rel606.fasta.gz, data/trimmed_fastq_small/SRR2584863_1.trim.sub.fastq, data/trimmed_fastq_small/SRR2584863_2.trim.sub.fastq should_transfer_files = YES when_to_transfer_output = ON_EXIT log = logs/bwa_test_job.log output = logs/bwa_test_job.out error = logs/bwa_test_job.error +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 2GB request_disk = 1GB requirements = (OSGVO_OS_STRING == \"RHEL 7\") queue 1 You will notice that the .log, .out, and .error files will be saved to a folder called logs . We need to create this folder using mkdir logs before we submit our job. We will call the script for this analysis bwa-test.sh and it should contain the following information: #!/bin/bash # Script name: bwa-test.sh echo \"Unpacking software\" tar -xzf bwa.tar.gz echo \"Setting PATH for bwa\" export PATH=$_CONDOR_SCRATCH_DIR/bwa/:$PATH echo \"Indexing E. 
coli genome\" bwa index ecoli_rel606.fasta.gz echo \"Starting bwa alignment for SRR2584863\" bwa mem ecoli_rel606.fasta.gz SRR2584863_1.trim.sub.fastq SRR2584863_2.trim.sub.fastq > SRR2584863.aligned.sam echo \"Done with bwa alignment for SRR2584863!\" echo \"Cleaning up files generated from genome indexing\" rm ecoli_rel606.fasta.gz.amb rm ecoli_rel606.fasta.gz.ann rm ecoli_rel606.fasta.gz.bwt rm ecoli_rel606.fasta.gz.pac rm ecoli_rel606.fasta.gz.sa We can submit this single test job to HTCondor by typing: condor_submit bwa-test.sub To check the status of the job, we can use condor_q . Upon the completion of the test job, we should investigate the output to ensure that it is what we expected and also review the .log file to help optimize future resource requests in preparation for scaling up. For example, when we investigate the bwa_test_job.log file created in this analysis, at the bottom of the file we see a resource table: Partitionable Resources : Usage Request Allocated Cpus : 1 1 Disk (KB) : 253770 1048576 27945123 Memory (MB) : 144 2048 2500 Here we see that we used less than half of both the disk space and memory we requested. In future jobs, we should request a smaller amount of each resource, such as 0.5 GB of disk space and 0.5 GB of memory. Prior to scaling up our analysis, we should run additional test jobs using these resource requests to ensure that they are sufficient to allow our job to complete successfully. Scaling Up to Analyze Multiple Samples \u00b6 In preparation for scaling up, please review our guide on how to scale up after a successful test job and how to easily submit multiple jobs with a single submit file . After reviewing how to submit multiple jobs with a single submit file, it is possible to determine that the most appropriate way to submit multiple jobs for this analysis is to use queue from . To use this option, we first need to create a file with just the sample names/IDs that we want to analyze. To do this, we want to cut all information after the \"_\" symbol to remove the forward/reverse read information and file extensions. For example, we want SRR2584863_1.trim.sub.fastq to become just SRR2584863. We will save the sample names in a file called samples.txt : cd ~/tutorial-bwa cd data/trimmed_fastq_small/ ls *.fastq | cut -f 1 -d '_' | uniq > samples.txt cd ~/tutorial-bwa Now, we can create a new submit file called bwa-alignment.sub to queue a new job for each sample. To make it simpler to start, you can copy the bwa-test.sub file ( cp bwa-test.sub bwa-alignment.sub ) and modify it. universe = vanilla executable = bwa-alignment.sh arguments = $(sample) transfer_input_files = software/bwa.tar.gz, data/ref_genome/ecoli_rel606.fasta.gz, data/trimmed_fastq_small/$(sample)_1.trim.sub.fastq, data/trimmed_fastq_small/$(sample)_2.trim.sub.fastq transfer_output_remaps = \"$(sample).aligned.sam=results/$(sample).aligned.sam\" should_transfer_files = YES when_to_transfer_output = ON_EXIT log = logs/bwa_$(sample)_job.log output = logs/bwa_$(sample)_job.out error = logs/bwa_$(sample)_job.error +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 0.5GB request_disk = 0.5GB requirements = (OSGVO_OS_STRING == \"RHEL 7\") queue sample from data/trimmed_fastq_small/samples.txt We will need to create an additional folder to store our aligned sequencing files in a folder called results : mkdir results To store the aligned sequencing files in the results folder, we can add the transfer_output_remaps feature to our submit file. 
This feature allows us to specify a name and a path to save our output files in the format of \"file1 = path/to/save/file2\", where file1 is the origional name of the document and file2 is the name that we want to save the file using. In the example above, we do not change the name of the resulting output files. This feature also helps us keep an organized working space, rather than having all of our resulting sequencing files be saved to our /home directory. Once our submit file has been updated, we can update our script to look like and call it something like bwa-alignment.sh : #!/bin/bash # Script name: bwa-alignment.sh echo \"Unpackage software\" tar -xzf bwa.tar.gz echo \"Set PATH for bwa\" export PATH=$_CONDOR_SCRATCH_DIR/bwa/:$PATH # Renaming first argument SAMPLE=$1 echo \"Index E.coli genome\" bwa index ecoli_rel606.fasta.gz echo \"Starting bwa alignment for ${SAMPLE}\" bwa mem ecoli_rel606.fasta.gz ${SAMPLE}_1.trim.sub.fastq ${SAMPLE}_2.trim.sub.fastq > ${SAMPLE}.aligned.sam echo \"Done with bwa alignment for ${SAMPLE}!\" echo \"Cleaning up workspace\" rm ecoli_rel606.fasta.gz.amb rm ecoli_rel606.fasta.gz.ann rm ecoli_rel606.fasta.gz.bwt rm ecoli_rel606.fasta.gz.pac rm ecoli_rel606.fasta.gz.sa Once ready, we can submit our job to HTCondor by using condor_submit bwa-alignment.sub . When we type condor_q , we see that three jobs have entered the queue (one for each of our three experimental samples). When our jobs are completed, we can confirm that our alignment output results files were created by typing: ls -lh results/* We can also investigate our log, error, and output files in the logs folder to ensure we obtained the resulting output of these files that we expected. For more information about running bioinformatics workflows on the OSG, we recommend our BLAST tutorial as well as our Samtools instillation guide.","title":"High-Throughput BWA Read Mapping"},{"location":"software_examples/bioinformatics/tutorial-bwa/#high-throughput-bwa-read-mapping","text":"This tutorial focuses on a subset of the Data Carpentry Genomics workshop curriculum - specifically, this page cover's how to run a BWA workflow on OSG resources. It will use the same general flow as the BWA segment of the Data Carpentry workshop with minor adjustments. The goal of this tutorial is to learn how to convert an existing BWA workflow to run on the OSPool.","title":"High-Throughput BWA Read Mapping"},{"location":"software_examples/bioinformatics/tutorial-bwa/#get-tutorial-files","text":"Logged into the submit node, we will run the tutorial command, that will create a folder for our analysis, as well as some sample files. tutorial bwa","title":"Get Tutorial Files"},{"location":"software_examples/bioinformatics/tutorial-bwa/#install-and-prepare-bwa","text":"First, we need to install BWA, also called Burrows-Wheeler Aligner. To do this, we will create and navigate to a new folder in our /home directory called software . We will then follow the developer's instructions (https://github.com/lh3/bwa) for using git clone to clone the software and then build the tool using make . cd ~/tutorial-bwa cd software git clone https://github.com/lh3/bwa.git cd bwa make Next, BWA needs to be added to our PATH variables, to test if the installation worked: export PATH=$PATH:/home/$USER/tutorial-bwa/software/bwa/ To check that BWA has been installed correctly, type bwa . 
You should receive output similar to the following: Program: bwa (alignment via Burrows-Wheeler transformation) Version: 0.7.17-r1198-dirty Contact: Heng Li Usage: bwa [options] Command: index index sequences in the FASTA format mem BWA-MEM algorithm fastmap identify super-maximal exact matches ... Now that we have successfully installed bwa , we will create a portable compressed tarball of this software so that it is smaller and quicker to transport when we submit our jobs to the OSPool. cd ~/tutorial-bwa/software tar -czvf bwa.tar.gz bwa Checking the size of this compressed tarball using ls -lh bwa.tar.gz reveals the file is approximately 4MB. The tarball should stay in /home.","title":"Install and Prepare BWA"},{"location":"software_examples/bioinformatics/tutorial-bwa/#download-data-to-analyze","text":"Now that we have installed BWA, we need to download data to analyze. For this tutorial, we will be downloading data used in the Data Carpentry workshop. This data includes both the genome of Escherichia coli (E. coli) and paired-end RNA sequencing reads obtained from a study carried out by Blount et al. published in PNAS . Additional information about how the data was modified in preparation for this analysis can be found on the Data Carpentry's workshop website . cd ~/tutorial-bwa ./download_data.sh Investigating the size of the downloaded genome by typing: ls -lh data/ref_genome/ reveals the file is 1.4 MB. Therefore, this file should remain in /home and does not need to be moved to /public. We should also check the trimmed fastq paired-end read files: ls -lh data/trimmed_fastq_small Once everything is downloaded, make sure you're still in the tutorial-bwa directory. cd ~/tutorial-bwa","title":"Download Data to Analyze"},{"location":"software_examples/bioinformatics/tutorial-bwa/#run-a-single-test-job","text":"Now that we have all items in our analysis ready, it is time to submit a single test job to map our RNA reads to the E. coli genome. For a single test job, we will choose a single sample to analyze. In the following example, we will align both the forward and reverse reads of SRR2584863 to the E. coli genome. Using a text editor such as nano or vim , we can create an example submit file for this test job called bwa-test.sub containing the following information: universe = vanilla executable = bwa-test.sh # arguments = # need to transfer bwa.tar.gz file, the reference # genome, and the trimmed fastq files transfer_input_files = software/bwa.tar.gz, data/ref_genome/ecoli_rel606.fasta.gz, data/trimmed_fastq_small/SRR2584863_1.trim.sub.fastq, data/trimmed_fastq_small/SRR2584863_2.trim.sub.fastq should_transfer_files = YES when_to_transfer_output = ON_EXIT log = logs/bwa_test_job.log output = logs/bwa_test_job.out error = logs/bwa_test_job.error +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 2GB request_disk = 1GB requirements = (OSGVO_OS_STRING == \"RHEL 7\") queue 1 You will notice that the .log, .out, and .error files will be saved to a folder called logs . We need to create this folder using mkdir logs before we submit our job. We will call the script for this analysis bwa-test.sh and it should contain the following information: #!/bin/bash # Script name: bwa-test.sh echo \"Unpacking software\" tar -xzf bwa.tar.gz echo \"Setting PATH for bwa\" export PATH=$_CONDOR_SCRATCH_DIR/bwa/:$PATH echo \"Indexing E. 
coli genome\" bwa index ecoli_rel606.fasta.gz echo \"Starting bwa alignment for SRR2584863\" bwa mem ecoli_rel606.fasta.gz SRR2584863_1.trim.sub.fastq SRR2584863_2.trim.sub.fastq > SRR2584863.aligned.sam echo \"Done with bwa alignment for SRR2584863!\" echo \"Cleaning up files generated from genome indexing\" rm ecoli_rel606.fasta.gz.amb rm ecoli_rel606.fasta.gz.ann rm ecoli_rel606.fasta.gz.bwt rm ecoli_rel606.fasta.gz.pac rm ecoli_rel606.fasta.gz.sa We can submit this single test job to HTCondor by typing: condor_submit bwa-test.sub To check the status of the job, we can use condor_q . Upon the completion of the test job, we should investigate the output to ensure that it is what we expected and also review the .log file to help optimize future resource requests in preparation for scaling up. For example, when we investigate the bwa_test_job.log file created in this analysis, at the bottom of the file we see a resource table: Partitionable Resources : Usage Request Allocated Cpus : 1 1 Disk (KB) : 253770 1048576 27945123 Memory (MB) : 144 2048 2500 Here we see that we used less than half of both the disk space and memory we requested. In future jobs, we should request a smaller amount of each resource, such as 0.5 GB of disk space and 0.5 GB of memory. Prior to scaling up our analysis, we should run additional test jobs using these resource requests to ensure that they are sufficient to allow our job to complete successfully.","title":"Run a Single Test Job"},{"location":"software_examples/bioinformatics/tutorial-bwa/#scaling-up-to-analyze-multiple-samples","text":"In preparation for scaling up, please review our guide on how to scale up after a successful test job and how to easily submit multiple jobs with a single submit file . After reviewing how to submit multiple jobs with a single submit file, it is possible to determine that the most appropriate way to submit multiple jobs for this analysis is to use queue from . To use this option, we first need to create a file with just the sample names/IDs that we want to analyze. To do this, we want to cut all information after the \"_\" symbol to remove the forward/reverse read information and file extensions. For example, we want SRR2584863_1.trim.sub.fastq to become just SRR2584863. We will save the sample names in a file called samples.txt : cd ~/tutorial-bwa cd data/trimmed_fastq_small/ ls *.fastq | cut -f 1 -d '_' | uniq > samples.txt cd ~/tutorial-bwa Now, we can create a new submit file called bwa-alignment.sub to queue a new job for each sample. To make it simpler to start, you can copy the bwa-test.sub file ( cp bwa-test.sub bwa-alignment.sub ) and modify it. 
universe = vanilla executable = bwa-alignment.sh arguments = $(sample) transfer_input_files = software/bwa.tar.gz, data/ref_genome/ecoli_rel606.fasta.gz, data/trimmed_fastq_small/$(sample)_1.trim.sub.fastq, data/trimmed_fastq_small/$(sample)_2.trim.sub.fastq transfer_output_remaps = \"$(sample).aligned.sam=results/$(sample).aligned.sam\" should_transfer_files = YES when_to_transfer_output = ON_EXIT log = logs/bwa_$(sample)_job.log output = logs/bwa_$(sample)_job.out error = logs/bwa_$(sample)_job.error +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 0.5GB request_disk = 0.5GB requirements = (OSGVO_OS_STRING == \"RHEL 7\") queue sample from data/trimmed_fastq_small/samples.txt We will need to create an additional folder called results to store our aligned sequencing files: mkdir results To store the aligned sequencing files in the results folder, we can add the transfer_output_remaps feature to our submit file. This feature allows us to specify a name and a path to save our output files in the format of \"file1 = path/to/save/file2\", where file1 is the original name of the file and file2 is the name that we want to save the file using. In the example above, we do not change the name of the resulting output files. This feature also helps us keep an organized working space, rather than having all of our resulting sequencing files be saved to our /home directory. Once our submit file has been updated, we can update our script to match and call it something like bwa-alignment.sh : #!/bin/bash # Script name: bwa-alignment.sh echo \"Unpackage software\" tar -xzf bwa.tar.gz echo \"Set PATH for bwa\" export PATH=$_CONDOR_SCRATCH_DIR/bwa/:$PATH # Renaming first argument SAMPLE=$1 echo \"Index E.coli genome\" bwa index ecoli_rel606.fasta.gz echo \"Starting bwa alignment for ${SAMPLE}\" bwa mem ecoli_rel606.fasta.gz ${SAMPLE}_1.trim.sub.fastq ${SAMPLE}_2.trim.sub.fastq > ${SAMPLE}.aligned.sam echo \"Done with bwa alignment for ${SAMPLE}!\" echo \"Cleaning up workspace\" rm ecoli_rel606.fasta.gz.amb rm ecoli_rel606.fasta.gz.ann rm ecoli_rel606.fasta.gz.bwt rm ecoli_rel606.fasta.gz.pac rm ecoli_rel606.fasta.gz.sa Once ready, we can submit our job to HTCondor by using condor_submit bwa-alignment.sub . When we type condor_q , we see that three jobs have entered the queue (one for each of our three experimental samples). When our jobs are completed, we can confirm that our alignment output results files were created by typing: ls -lh results/* We can also investigate our log, error, and output files in the logs folder to ensure the jobs produced the output we expected. For more information about running bioinformatics workflows on the OSG, we recommend our BLAST tutorial as well as our Samtools installation guide.","title":"Scaling Up to Analyze Multiple Samples"},{"location":"software_examples/bioinformatics/tutorial-fastqc/","text":"Bioinformatics Tutorial: Quality Assessment of Data with FastQC \u00b6 The first step of most bioinformatic analyses is to assess the quality of the data you have received. In this example, we are working with real DNA sequencing data from a research project studying E. coli. We will use a common software, FastQC , to assess the quality of the data.
Before we start, let us download the materials for this tutorial if we don't already have them: git clone https://github.com/OSGConnect/tutorial-fastqc Then let's navigate inside the tutorial-fastqc directory: cd ~/tutorial-fastqc We can confirm our location by printing our working directory using pwd : pwd We should see /home//tutorial-fastqc . Step 1: Download data \u00b6 First, we need to download the sequencing data that we want to analyze for our research project. For this tutorial, we will be downloading data used in the Data Carpentry workshop. This data includes both the genome of Escherichia coli (E. coli) and paired-end sequencing reads obtained from a study carried out by Blount et al. published in PNAS . Additional information about how the data was modified in preparation for this analysis can be found on the Data Carpentry workshop website . We have a script called download_data.sh that will download our bioinformatic data. Let's go ahead and run this script to download our data. ./download_data.sh Our sequencing data files, all ending in .fastq, can now be found in a folder called data/. Step 2: Prepare software \u00b6 Now that we have our data, we need to install the software we want to use to analyze it. There are different ways to install and use software, including installing from source, using pre-compiled binaries, and containers. In the biological sciences, many software packages are already available as pre-built containers. We can fetch one of these containers and have HTCondor set it up for our job, which means we do not have to install the FastQC software or its dependencies. We will use a Docker container built by the State Public Health Bioinformatics Community (staphb), and convert it to an apptainer container by creating an apptainer definition file: ls software/ cat software/fastqc.def And then running a command to build an apptainer container (which we won't run, but is listed here for future reference): $ apptainer build fastqc.sif software/fastqc.def Instead, we will download our ready-to-go apptainer .sif file: ./download_software.sh ls software/ Step 3: Prepare an Executable \u00b6 We need to create an executable to pass to our HTCondor jobs, so that HTCondor knows what to run on our behalf. Let's take a look at our executable, fastqc.sh : cat fastqc.sh Step 4: Prepare HTCondor Submit File to Run One Job \u00b6 Now we create our HTCondor submit file, which tells HTCondor what to run and how many resources to make available to our job: cat fastqc.submit Step 5: Submit One HTCondor Job and Check Results \u00b6 We are ready to submit our first job! condor_submit fastqc.submit We can check on the status of our job in HTCondor's queue using: condor_q By using transfer_output_remaps in our submit file, we told HTCondor to store our FastQC output files in the results directory. Let's take a look at our scientific results: ls results/ It's always good practice to look at our standard error, standard out, and HTCondor log files to catch unexpected output: ls logs/ Step 6: Scale Out Your Analysis \u00b6 Create A List of All Files We Want Analyzed \u00b6 To queue a job to analyze each of our sequencing data files, we will take advantage of HTCondor's queue statement. 
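In brief, a statement of the form queue sample from some_file.txt creates one job per line of the listed file and sets the $(sample) variable to that line's value. For example (using made-up names), a file containing the three lines sampleA, sampleB, and sampleC would queue three jobs, with $(sample) expanding to each name in turn. 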
First, let's create a list of files we want analyzed: ls data/ | cut -f1 -d \".\" > list_of_samples.txt Let us take a look at the contents of this file: cat list_of_samples.txt Edit the Submit File to Queue a Job to Analyze Each Biological Sample HTCondor has different queue syntaxes to help researchers automatically queue many jobs. We will use queue from to queue a job for each of our samples in list_of_samples.txt . Once we define a variable in the queue statement, we can also use it elsewhere in the submit file. Let's replace each occurrence of the sample identifier with the $(sample) variable, and then iterate through our list of samples as shown in list_of_samples.txt . cat many-fastqc.submit # HTCondor Submit File: fastqc.submit # Provide our executable and arguments executable = fastqc.sh arguments = $(sample).trim.sub.fastq # Provide the container for our software universe = container container_image = software/fastqc.sif # List files that need to be transferred to the job transfer_input_files = data/$(sample).trim.sub.fastq should_transfer_files = YES # Tell HTCondor to transfer output to our results/ directory transfer_output_files = $(sample).trim.sub_fastqc.html transfer_output_remaps = \"$(sample).trim.sub_fastqc.html = results/$(sample).trim.sub_fastqc.html\" # Track job information log = logs/fastqc.log output = logs/fastqc.out error = logs/fastqc.err # Resource Requests request_cpus = 1 request_memory = 1GB request_disk = 1GB # Tell HTCondor to queue one job per sample in our list: queue sample from list_of_samples.txt And then submit many jobs using this single submit file! condor_submit many-fastqc.submit Notice that using a single submit file , we now have multiple jobs in the queue . We can check on the status of our multiple jobs in HTCondor's queue by using: condor_q When ready, we can check our results in our results/ directory: ls results/ Step 7: Return the output to your local computer \u00b6 Once you are done with your computational analysis, you will want to move the results to your local computer or to a long term storage location. Let's practice copying our .html files to our local laptop. First, open a new terminal. Do not log into your OSPool account. Instead, navigate to where you want the files to go on your computer. We will store them in our Downloads folder. cd ~/Downloads Then use the scp (\"secure copy\") command to copy our results folder and its contents: scp -r username@hostname:/home/username/tutorial-fastqc/results ./ For many files, it will be easiest to create a compressed tarball (.tar.gz file) of your files and transfer that instead of each file individually. An example of this could be scp -r username@ap40.uw.osg-htc.org:/home/username/results ./ Now, open the .html files using your internet browser on your local computer. Congratulations on finishing the first step of a sequencing analysis pipeline! \u00b6","title":"FastQC Quality Control"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#bioinformatics-tutorial-quality-assessment-of-data-with-fastqc","text":"The first step of most bioinformatic analyses is to assess the quality of the data you have received. In this example, we are working with real DNA sequencing data from a research project studying E. coli. We will use a common software package, FastQC , to assess the quality of the data. 
Before we start, let us download the materials for this tutorial if we don't already have them: git clone https://github.com/OSGConnect/tutorial-fastqc Then let's navigate inside the tutorial-fastqc directory: cd ~/tutorial-fastqc We can confirm our location by printing our working directory using pwd : pwd We should see /home//tutorial-fastqc .","title":"Bioinformatics Tutorial: Quality Assessment of Data with FastQC"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#step-1-download-data","text":"First, we need to download the sequencing data that we want to analyze for our research project. For this tutorial, we will be downloading data used in the Data Carpentry workshop. This data includes both the genome of Escherichia coli (E. coli) and paired-end sequencing reads obtained from a study carried out by Blount et al. published in PNAS . Additional information about how the data was modified in preparation for this analysis can be found on the Data Carpentry workshop website . We have a script called download_data.sh that will download our bioinformatic data. Let's go ahead and run this script to download our data. ./download_data.sh Our sequencing data files, all ending in .fastq, can now be found in a folder called data/.","title":"Step 1: Download data"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#step-2-prepare-software","text":"Now that we have our data, we need to install the software we want to use to analyze it. There are different ways to install and use software, including installing from source, using pre-compiled binaries, and containers. In the biological sciences, many software packages are already available as pre-built containers. We can fetch one of these containers and have HTCondor set it up for our job, which means we do not have to install the FastQC software or its dependencies. We will use a Docker container built by the State Public Health Bioinformatics Community (staphb), and convert it to an apptainer container by creating an apptainer definition file: ls software/ cat software/fastqc.def And then running a command to build an apptainer container (which we won't run, but is listed here for future reference): $ apptainer build fastqc.sif software/fastqc.def Instead, we will download our ready-to-go apptainer .sif file: ./download_software.sh ls software/","title":"Step 2: Prepare software"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#step-3-prepare-an-executable","text":"We need to create an executable to pass to our HTCondor jobs, so that HTCondor knows what to run on our behalf. Let's take a look at our executable, fastqc.sh : cat fastqc.sh","title":"Step 3: Prepare an Executable"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#step-4-prepare-htcondor-submit-file-to-run-one-job","text":"Now we create our HTCondor submit file, which tells HTCondor what to run and how many resources to make available to our job: cat fastqc.submit","title":"Step 4: Prepare HTCondor Submit File to Run One Job"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#step-5-submit-one-htcondor-job-and-check-results","text":"We are ready to submit our first job! condor_submit fastqc.submit We can check on the status of our job in HTCondor's queue using: condor_q By using transfer_output_remaps in our submit file, we told HTCondor to store our FastQC output files in the results directory. 
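(The remap syntax is \"name_of_file_created_by_the_job = path/where/to/save/it\"; for instance, a hypothetical line such as transfer_output_remaps = \"report_fastqc.html = results/report_fastqc.html\" would save the job's HTML report under results/ rather than in the submit directory.) 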
Let's take a look at our scientific results: ls results/ It's always good practice to look at our standard error, standard out, and HTCondor log files to catch unexpected output: ls logs/","title":"Step 5: Submit One HTCondor Job and Check Results"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#step-6-scale-out-your-analysis","text":"","title":"Step 6: Scale Out Your Analysis"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#create-a-list-of-all-files-we-want-analyzed","text":"To queue a job to analyze each of our sequencing data files, we will take advantage of HTCondor's queue statement. First, let's create a list of files we want analyzed: ls data/ | cut -f1 -d \".\" > list_of_samples.txt Let us take a look at the contents of this file: cat list_of_samples.txt Edit the Submit File to Queue a Job to Analyze Each Biological Sample HTCondor has different queue syntaxes to help researchers automatically queue many jobs. We will use queue from to queue a job for each of our samples in list_of_samples.txt . Once we define a variable in the queue statement, we can also use it elsewhere in the submit file. Let's replace each occurrence of the sample identifier with the $(sample) variable, and then iterate through our list of samples as shown in list_of_samples.txt . cat many-fastqc.submit # HTCondor Submit File: fastqc.submit # Provide our executable and arguments executable = fastqc.sh arguments = $(sample).trim.sub.fastq # Provide the container for our software universe = container container_image = software/fastqc.sif # List files that need to be transferred to the job transfer_input_files = data/$(sample).trim.sub.fastq should_transfer_files = YES # Tell HTCondor to transfer output to our results/ directory transfer_output_files = $(sample).trim.sub_fastqc.html transfer_output_remaps = \"$(sample).trim.sub_fastqc.html = results/$(sample).trim.sub_fastqc.html\" # Track job information log = logs/fastqc.log output = logs/fastqc.out error = logs/fastqc.err # Resource Requests request_cpus = 1 request_memory = 1GB request_disk = 1GB # Tell HTCondor to queue one job per sample in our list: queue sample from list_of_samples.txt And then submit many jobs using this single submit file! condor_submit many-fastqc.submit Notice that using a single submit file , we now have multiple jobs in the queue . We can check on the status of our multiple jobs in HTCondor's queue by using: condor_q When ready, we can check our results in our results/ directory: ls results/","title":"Create A List of All Files We Want Analyzed"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#step-7-return-the-output-to-your-local-computer","text":"Once you are done with your computational analysis, you will want to move the results to your local computer or to a long term storage location. Let's practice copying our .html files to our local laptop. First, open a new terminal. Do not log into your OSPool account. Instead, navigate to where you want the files to go on your computer. We will store them in our Downloads folder. cd ~/Downloads Then use the scp (\"secure copy\") command to copy our results folder and its contents: scp -r username@hostname:/home/username/tutorial-fastqc/results ./ For many files, it will be easiest to create a compressed tarball (.tar.gz file) of your files and transfer that instead of each file individually. 
An example of this could be scp -r username@ap40.uw.osg-htc.org:/home/username/results ./ Now, open the .html files using your internet browser on your local computer.","title":"Step 7: Return the output to your local computer"},{"location":"software_examples/bioinformatics/tutorial-fastqc/#congratulations-on-finishing-the-first-step-of-a-sequencing-analysis-pipeline","text":"","title":"Congratulations on finishing the first step of a sequencing analysis pipeline!"},{"location":"software_examples/drug_discovery/tutorial-AutoDockVina/","text":"Running a Molecule Docking Job with AutoDock Vina \u00b6 AutoDock Vina is a molecular docking program useful for computer aided drug design. In this tutorial, we will learn how to run AutoDock Vina on the OSPool. Tutorial Files \u00b6 It is easiest to start with the git clone command to download the materials for this tutorial. Type: $ git clone https://github.com/OSGConnect/tutorial-AutoDockVina This will create a directory tutorial-AutoDockVina . Change into the directory and look at the available files: $ cd tutorial-AutoDockVina $ ls $ ls data/ You should see the following: data/ receptor_config.txt # Configuration file (input) receptor.pdbqt # Receptor coordinates and atomic charges (input) ligand.pdbqt # Ligand coordinates and atomic charges (input) logs/ # Empty folder for job log files vina_job.submit # Job submission file vina_run.sh # Execution script We need to download the AutoDock program separately into this directory as well. Go to the AutoDock Vina website and click on the Download link at the top of the page. This will then lead you to the GitHub Downloads page . Download the Linux x86_64 version of the program; you can do this directly to the current directory by using the wget command and the download link. If you use the -O option shown below, it will rename the program to match what is used in the rest of the guide. $ wget https://github.com/ccsb-scripps/AutoDock-Vina/releases/download/v1.2.5/vina_1.2.5_linux_x86_64 -O vina Once downloaded, we also need to give the program executable permissions. We can test that it worked by running vina with the help flag: $ chmod +x vina $ ./vina --help Files Needed to Submit the Job \u00b6 The file vina_job.submit is the job submission file and contains the description of the job in HTCondor language. Specifically, it includes an \"executable\" (the script HTCondor will use in the job to run vina), a list of the files needed to run the job (shown in \"transfer_input_files\"), and indications of where to write logging information and what resources and requirements the job needs. Change needed: If your downloaded program file has a different name, change the name in the transfer_input_files line below. executable = vina_run.sh transfer_input_files = data/, vina should_transfer_files = Yes when_to_transfer_output = ON_EXIT output = logs/job.$(Cluster).$(Process).out error = logs/job.$(Cluster).$(Process).error log = logs/job.$(Cluster).$(Process).log request_cpus = 1 request_memory = 1GB request_disk = 512MB queue 1 Next we see the execution script vina_run.sh . The execution script and its commands are executed on a worker node out in the Open Science Pool. 
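Note that because transfer_input_files = data/ ends in a slash, HTCondor transfers the contents of the data directory (receptor_config.txt, receptor.pdbqt, and ligand.pdbqt) into the top level of the job's working directory; this is why the script below can refer to receptor_config.txt and ligand.pdbqt without a data/ prefix. 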
Change needed: If your vina program file has a different name, change it in the script below: #!/bin/bash # Run vina ./vina --config receptor_config.txt \\ --ligand ligand.pdbqt --out receptor-ligand.pdbqt Submit the Docking Job \u00b6 We submit the job using condor_submit command as follows $ condor_submit vina_job.submit Now you have submitted the AutoDock Vina job on the OSPool. The present job should be finished quickly (less than 10 mins). You can check the status of the submitted job by using the condor_q command as follows: $ condor_q After job completion, you will see the output file receptor-ligand.pdbqt . Next Steps \u00b6 After running this example, you may want to scale up to testing multiple molecules or ligands. What to Consider \u00b6 Decide how many docking runs you want to try per job. If one molecule can be tested in a few seconds, you can probably run a few hundred in a job that runs in about an hour. How should you divide up the input data in this case? Do you need individual input files for each molecule, or can you use one to share? Should the molecule files all get copied to every job or just the jobs where they're needed? You can separate groups of files by putting them in separate directories or tar.gz files to help with this. Look at this guide to see different ways that you can use HTCondor to submit multiple jobs at once. If you want to use a different (or additional) docking programs, you can include them in the same job by downloading and including those software files in your job submission. Example of Multiple Runs \u00b6 Included in this directory is one approach to analyzing multiple ligands, by submitting multiple jobs. For the given files we are assuming that there are multiple directories with input files we want to run ( run01 , run02 , run03 , etc.) and each job will process all of the ligands in one of these \"run\" folders. In the script, vina_multi.sh , we had added a for loop in order to process all the ligands that were included with the job. We will also place those results into a single folder to make it easier to organize them back on the access point: #!/bin/bash # Make a directory for results mkdir results # Run vina on multiple ligands for LIGAND in *ligand.pdbqt do ./vina --config receptor_config.txt \\ --ligand ${LIGAND} --out results/receptor-${LIGAND} done Note that this for loop assumes that all of the ligands have a naming scheme that we can match using a wildcard (the * symbol). In the submit file, we have added a line called transfer_output_files to transfer back the results folder from each job. We have also replaced the single input directory data with a variable inputdir , representing one of the run directories. The value of that variable is set via the queue statement at the end of the submit file: executable = vina_multi.sh transfer_input_files = $(inputdir)/, vina transfer_output_files = results # ... other job options queue inputdir matching run* Getting Help \u00b6 For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the help desk and community forums .","title":"Running a Molecule Docking Job with AutoDock Vina"},{"location":"software_examples/drug_discovery/tutorial-AutoDockVina/#running-a-molecule-docking-job-with-autodock-vina","text":"AutoDock Vina is a molecular docking program useful for computer aided drug design. 
In this tutorial, we will learn how to run AutoDock Vina on the OSPool.","title":"Running a Molecule Docking Job with AutoDock Vina"},{"location":"software_examples/drug_discovery/tutorial-AutoDockVina/#tutorial-files","text":"It is easiest to start with the git clone command to download the materials for this tutorial. Type: $ git clone https://github.com/OSGConnect/tutorial-AutoDockVina This will create a directory tutorial-AutoDockVina . Change into the directory and look at the available files: $ cd tutorial-AutoDockVina $ ls $ ls data/ You should see the following: data/ receptor_config.txt # Configuration file (input) receptor.pdbqt # Receptor coordinates and atomic charges (input) ligand.pdbqt # Ligand coordinates and atomic charges (input) logs/ # Empty folder for job log files vina_job.submit # Job submission file vina_run.sh # Execution script We need to download the AutoDock program separately into this directory as well. Go to the AutoDock Vina website and click on the Download link at the top of the page. This will then lead you to the GitHub Downloads page . Download the Linux x86_64 version of the program; you can do this directly to the current directory by using the wget command and the download link. If you use the -O option shown below, it will rename the program to match what is used in the rest of the guide. $ wget https://github.com/ccsb-scripps/AutoDock-Vina/releases/download/v1.2.5/vina_1.2.5_linux_x86_64 -O vina Once downloaded, we also need to give the program executable permissions. We can test that it worked by running vina with the help flag: $ chmod +x vina $ ./vina --help","title":"Tutorial Files"},{"location":"software_examples/drug_discovery/tutorial-AutoDockVina/#files-need-to-submit-the-job","text":"The file vina_job.submit is the job submission file and contains the description of the job in HTCondor language. Specifically, it includes an \"executable\" (the script HTCondor will use in the job to run vina), a list of the files needed to run the job (shown in \"transfer_input_files\"), and indications of where to write logging information and what resources and requirements the job needs. Change needed: If your downloaded program file has a different name, change the name in the transfer_input_files line below. executable = vina_run.sh transfer_input_files = data/, vina should_transfer_files = Yes when_to_transfer_output = ON_EXIT output = logs/job.$(Cluster).$(Process).out error = logs/job.$(Cluster).$(Process).error log = logs/job.$(Cluster).$(Process).log request_cpus = 1 request_memory = 1GB request_disk = 512MB queue 1 Next we see the execution script vina_run.sh . The execution script and its commands are executed on a worker node out in the Open Science Pool. Change needed: If your vina program file has a different name, change it in the script below: #!/bin/bash # Run vina ./vina --config receptor_config.txt \\ --ligand ligand.pdbqt --out receptor-ligand.pdbqt","title":"Files Needed to Submit the Job"},{"location":"software_examples/drug_discovery/tutorial-AutoDockVina/#submit-the-docking-job","text":"We submit the job using the condor_submit command as follows $ condor_submit vina_job.submit Now you have submitted the AutoDock Vina job on the OSPool. The present job should be finished quickly (less than 10 mins). 
You can check the status of the submitted job by using the condor_q command as follows: $ condor_q After job completion, you will see the output file receptor-ligand.pdbqt .","title":"Submit the Docking Job"},{"location":"software_examples/drug_discovery/tutorial-AutoDockVina/#next-steps","text":"After running this example, you may want to scale up to testing multiple molecules or ligands.","title":"Next Steps"},{"location":"software_examples/drug_discovery/tutorial-AutoDockVina/#what-to-consider","text":"Decide how many docking runs you want to try per job. If one molecule can be tested in a few seconds, you can probably run a few hundred in a job that runs in about an hour. How should you divide up the input data in this case? Do you need individual input files for each molecule, or can you use one to share? Should the molecule files all get copied to every job or just the jobs where they're needed? You can separate groups of files by putting them in separate directories or tar.gz files to help with this. Look at this guide to see different ways that you can use HTCondor to submit multiple jobs at once. If you want to use a different (or additional) docking programs, you can include them in the same job by downloading and including those software files in your job submission.","title":"What to Consider"},{"location":"software_examples/drug_discovery/tutorial-AutoDockVina/#example-of-multiple-runs","text":"Included in this directory is one approach to analyzing multiple ligands, by submitting multiple jobs. For the given files we are assuming that there are multiple directories with input files we want to run ( run01 , run02 , run03 , etc.) and each job will process all of the ligands in one of these \"run\" folders. In the script, vina_multi.sh , we had added a for loop in order to process all the ligands that were included with the job. We will also place those results into a single folder to make it easier to organize them back on the access point: #!/bin/bash # Make a directory for results mkdir results # Run vina on multiple ligands for LIGAND in *ligand.pdbqt do ./vina --config receptor_config.txt \\ --ligand ${LIGAND} --out results/receptor-${LIGAND} done Note that this for loop assumes that all of the ligands have a naming scheme that we can match using a wildcard (the * symbol). In the submit file, we have added a line called transfer_output_files to transfer back the results folder from each job. We have also replaced the single input directory data with a variable inputdir , representing one of the run directories. The value of that variable is set via the queue statement at the end of the submit file: executable = vina_multi.sh transfer_input_files = $(inputdir)/, vina transfer_output_files = results # ... other job options queue inputdir matching run*","title":"Example of Multiple Runs"},{"location":"software_examples/drug_discovery/tutorial-AutoDockVina/#getting-help","text":"For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the help desk and community forums .","title":"Getting Help"},{"location":"software_examples/freesurfer/Introduction/","text":"FreeSurfer \u00b6 Overview \u00b6 FreeSurfer is a software package to analyze MRI scans of human brains. OSG used to have a hosted service, called Fsurf. This is no longer available. Instead, OSG provides a container image, and one of our collaborators provides an optional workflow using that container. 
Container image: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:latest and defined at https://github.com/opensciencegrid/osgvo-freesurfer FreeSurfer Workflow The container can be used with simple jobs as described below. Prerequisites \u00b6 To use the FreeSurfer on the Open Science Pool (OSPool), you need: Your own FreeSurfer license file (see: https://surfer.nmr.mgh.harvard.edu/fswiki/DownloadAndInstall#License ) An account on an OSPool access point. Privacy and Confidentiality of Subjects \u00b6 In order to protect the privacy of your participants\u2019 scans, we require that you submit only defaced and fully deidentified scans for processing . Single Job \u00b6 The following example job has three files: job.submit , freesurfer-wrapper.sh and license.txt job.submit contents: Requirements = HAS_SINGULARITY == True +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:latest\" executable = freesurfer-wrapper.sh transfer_input_files = license.txt, sub-THP0001_ses-THP0001UCI1_run-01_T1w.nii.gz error = job.$(Cluster).$(Process).error output = job.$(Cluster).$(Process).output log = job.$(Cluster).$(Process).log request_cpus = 1 request_memory = 1 GB request_disk = 4 GB queue 1 freesurfer-wrapper.sh contents: #!/bin/bash set -e # freesurfer environment . /opt/setup.sh # license file comes with the job export FS_LICENSE=`pwd`/license.txt export SUBJECTS_DIR=$PWD recon-all -subject THP0001 -i sub-THP0001_ses-THP0001UCI1_run-01_T1w.nii.gz -autorecon1 -cw256 # tar up the subjects directory so it gets transferred back tar czf THP0001.tar.gz THP0001 rm -rf THP0001 license.txt should have the license data obtained from the Freesurfer project.","title":"FreeSurfer"},{"location":"software_examples/freesurfer/Introduction/#freesurfer","text":"","title":"FreeSurfer"},{"location":"software_examples/freesurfer/Introduction/#overview","text":"FreeSurfer is a software package to analyze MRI scans of human brains. OSG used to have a hosted service, called Fsurf. This is no longer available. Instead, OSG provides a container image, and one of our collaborators provides an optional workflow using that container. 
Container image: /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:latest and defined at https://github.com/opensciencegrid/osgvo-freesurfer FreeSurfer Workflow The container can be used with simple jobs as described below.","title":"Overview"},{"location":"software_examples/freesurfer/Introduction/#prerequisites","text":"To use the FreeSurfer on the Open Science Pool (OSPool), you need: Your own FreeSurfer license file (see: https://surfer.nmr.mgh.harvard.edu/fswiki/DownloadAndInstall#License ) An account on an OSPool access point.","title":"Prerequisites"},{"location":"software_examples/freesurfer/Introduction/#privacy-and-confidentiality-of-subjects","text":"In order to protect the privacy of your participants\u2019 scans, we require that you submit only defaced and fully deidentified scans for processing .","title":"Privacy and Confidentiality of Subjects"},{"location":"software_examples/freesurfer/Introduction/#single-job","text":"The following example job has three files: job.submit , freesurfer-wrapper.sh and license.txt job.submit contents: Requirements = HAS_SINGULARITY == True +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:latest\" executable = freesurfer-wrapper.sh transfer_input_files = license.txt, sub-THP0001_ses-THP0001UCI1_run-01_T1w.nii.gz error = job.$(Cluster).$(Process).error output = job.$(Cluster).$(Process).output log = job.$(Cluster).$(Process).log request_cpus = 1 request_memory = 1 GB request_disk = 4 GB queue 1 freesurfer-wrapper.sh contents: #!/bin/bash set -e # freesurfer environment . /opt/setup.sh # license file comes with the job export FS_LICENSE=`pwd`/license.txt export SUBJECTS_DIR=$PWD recon-all -subject THP0001 -i sub-THP0001_ses-THP0001UCI1_run-01_T1w.nii.gz -autorecon1 -cw256 # tar up the subjects directory so it gets transferred back tar czf THP0001.tar.gz THP0001 rm -rf THP0001 license.txt should have the license data obtained from the Freesurfer project.","title":"Single Job"},{"location":"software_examples/machine_learning/tutorial-tensorflow-containers/","text":"Working with Tensorflow, GPUs, and containers \u00b6 In this tutorial, we explore GPUs and containers on OSG, using the popular Tensorflow software package. Tensorflow is a good example here as the software is too complex to bundle up and ship with your job. Containers solve this problem by defining a full OS image, containing not only the complex software package, but dependencies and environment configuration as well. https://www.tensorflow.org/ describes TensorFlow as: TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well. Defining container images \u00b6 Defining containers is fully described in the Docker and Singularity Containers section. 
Here we will just provide an overview of how you could take something like an existing Tensorflow image provided by OSG staff, and extend it by adding your own modules to it. Let's assume you like Tensorflow version 2.3. The definition of this image can be found in Github: Dockerfile . You don't really need to understand how an image was built in order to use it. As described in the containers documentation, make sure the HTCondor submit file has: Requirements = HAS_SINGULARITY == TRUE +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3\" If you want to extend an existing image, you can just inherit from the parent image available on DockerHub here . For example, if you just need some additional Python packages, your new Dockerfile could look like: FROM opensciencegrid/tensorflow:2.3 RUN python3 -m pip install some_package_name You can then docker build and docker push it so that your new image is available on DockerHub. Note that OSG does not provide any infrastructure for these steps. You will have to complete them on your own computer or using the DockerHub build infrastructure. Adding a container to the OSG CVMFS distribution mechanism \u00b6 How to add a container image to the OSG CVMFS distribution mechanism is also described in Docker and Singularity Containers , but a quick scan of the cvmfs-singularity-sync and specifically the docker_images.txt file show us that the tensorflow images are listed as: opensciencegrid/tensorflow:* opensciencegrid/tensorflow-gpu:* Those two lines means that all tags from those two DockerHub repositories should be mapped to /cvmfs/singularity.opensciencegrid.org/ . On the login node, try running: ls /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3/ This is the image in its expanded form - something we can execute with Singularity! Testing the container on the submit host \u00b6 First, download the files contained in this tutorial to the login node using the git clone command and cd into the tutorial directory that is created: git clone https://github.com/OSGConnect/tutorial-tensorflow-containers cd tutorial-tensorflow-containers Before submitting jobs to the OSG, it is always a good idea to test your code so that you understand runtime requirements. The containers can be tested on the OSGConnect submit hosts with singularity shell , which will drop you into a container and let you exlore it interactively. To explore the Tensorflow 2.3 image, run: singularity shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3/ Note how the command line prompt changes, providing you an indicator that you are inside the image. You can exit any time by running exit . Another important thing to note is that your $HOME directory is automatically mounted inside the interactive container - allowing you to access your codes and test it out. First, start with a simple python3 import test to make sure tensorflow is available: $ python3 Python 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0] on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import tensorflow 2021-01-15 17:32:33.901607: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory 2021-01-15 17:32:33.901735: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 
>>> Tensorflow will warn you that no GPUs where found. This is expected as we do not have GPUs attached to our login nodes, and it is fine as Tensorflow works fine with regular CPUs (slower of course). Exit out of Python3 with CTRL+D and then we can run a Tensorflow testcode which can be found in this tutorial: $ python3 test.py 2021-01-15 17:37:43.152892: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory 2021-01-15 17:37:43.153021: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 2021-01-15 17:37:44.899967: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory 2021-01-15 17:37:44.900063: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303) 2021-01-15 17:37:44.900130: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (login05.osgconnect.net): /proc/driver/nvidia/version does not exist 2021-01-15 17:37:44.900821: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-01-15 17:37:44.912483: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2700000000 Hz 2021-01-15 17:37:44.915548: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fa0bf0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2021-01-15 17:37:44.915645: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2021-01-15 17:37:44.921895: I tensorflow/core/common_runtime/eager/execute.cc:611] Executing op MatMul in device /job:localhost/replica:0/task:0/device:CPU:0 tf.Tensor( [[22. 28.] [49. 64.]], shape=(2, 2), dtype=float32) We will again see a bunch of warnings regarding GPUs not being available, but as we can see by the /job:localhost/replica:0/task:0/device:CPU:0 line, the code ran on one of the CPUs. When testing your own code like this, take note of how much memory, disk and runtime is required - it is needed in the next step. Once you are done with testing, use CTRL+D or run exit to exit out of the container. Note that you can not submit jobs from within the container. Running a CPU job \u00b6 If Tensorflow can run on GPUs, you might be wondering why we might want to run it on slower CPUs? One reason is that CPUs are plentiful while GPUs are still somewhat scarce. If you have a lot of shorter Tensorflow jobs, they might complete faster on available CPUs, rather than wait in the queue for the faster, less available, GPUs. The good news is that Tensorflow code should work in both enviroments automatically, so if your code runs too slow on CPUs, moving to GPUs should be easy. To submit our job, we need a submit file and a job wrapper script. The submit file is a basic OSGConnect flavored HTCondor file, specifying that we want the job to run in a container. 
cpu-job.submit contains: universe = vanilla # Job requirements - ensure we are running on a Singularity enabled # node and have enough resources to execute our code # Tensorflow also requires AVX instruction set and a newer host kernel Requirements = HAS_SINGULARITY == True && HAS_AVX2 == True && OSG_HOST_KERNEL_VERSION >= 31000 request_cpus = 1 request_gpus = 0 request_memory = 1 GB request_disk = 1 GB # Container image to run the job in +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3\" # Executable is the program your job will run It's often useful # to create a shell script to \"wrap\" your actual work. Executable = job-wrapper.sh Arguments = # Inputs/outputs - in this case we just need our python code. # If you leave out transfer_output_files, all generated files comes back transfer_input_files = test.py #transfer_output_files = # Error and output are the error and output channels from your job # that HTCondor returns from the remote host. Error = $(Cluster).$(Process).error Output = $(Cluster).$(Process).output # The LOG file is where HTCondor places information about your # job's status, success, and resource consumption. Log = $(Cluster).log # Send the job to Held state on failure. #on_exit_hold = (ExitBySignal == True) || (ExitCode != 0) # Periodically retry the jobs every 1 hour, up to a maximum of 5 retries. #periodic_release = (NumJobStarts < 5) && ((CurrentTime - EnteredCurrentStatus) > 60*60) # queue is the \"start button\" - it launches any jobs that have been # specified thus far. queue 1 And job-wrapper.sh: #!/bin/bash set -e # set TMPDIR variable export TMPDIR=$_CONDOR_SCRATCH_DIR echo echo \"I'm running on\" $(hostname -f) echo \"OSG site: $OSG_SITE_NAME\" echo python3 test.py 2>&1 The job can now be submitted with condor_submit cpu-job.submit . Once the job is done, check the files named after the job id for the outputs. Running a GPU job \u00b6 When moving the job to be run on a GPU, all we have to do is update two lines in the submit file: set request_gpus to 1 and specify a GPU enabled container image for +SingularityImage . The updated submit file can be found in gpu-job.submit with the contents: universe = vanilla # Job requirements - ensure we are running on a Singularity enabled # node and have enough resources to execute our code # Tensorflow also requires AVX instruction set and a newer host kernel Requirements = HAS_SINGULARITY == True && HAS_AVX2 == True && OSG_HOST_KERNEL_VERSION >= 31000 request_cpus = 1 request_gpus = 1 request_memory = 1 GB request_disk = 1 GB # Container image to run the job in +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:2.3\" # Executable is the program your job will run It's often useful # to create a shell script to \"wrap\" your actual work. Executable = job-wrapper.sh Arguments = # Inputs/outputs - in this case we just need our python code. # If you leave out transfer_output_files, all generated files comes back transfer_input_files = test.py #transfer_output_files = # Error and output are the error and output channels from your job # that HTCondor returns from the remote host. Error = $(Cluster).$(Process).error Output = $(Cluster).$(Process).output # The LOG file is where HTCondor places information about your # job's status, success, and resource consumption. Log = $(Cluster).log # Send the job to Held state on failure. 
#on_exit_hold = (ExitBySignal == True) || (ExitCode != 0) # Periodically retry the jobs every 1 hour, up to a maximum of 5 retries. #periodic_release = (NumJobStarts < 5) && ((CurrentTime - EnteredCurrentStatus) > 60*60) # queue is the \"start button\" - it launches any jobs that have been # specified thus far. queue 1 Submit a job with condor_submit gpu-job.submit . Once the job is complete, check the .out file for a line stating the code was run under a GPU. Something similar to: 2021-02-02 23:25:19.022467: I tensorflow/core/common_runtime/eager/execute.cc:611] Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0 The GPU:0 parts shows that a GPU was found and used for the computation.","title":"Working with Tensorflow, GPUs, and containers"},{"location":"software_examples/machine_learning/tutorial-tensorflow-containers/#working-with-tensorflow-gpus-and-containers","text":"In this tutorial, we explore GPUs and containers on OSG, using the popular Tensorflow sofware package. Tensorflow is a good example here as the software is too complex to bundle up and ship with your job. Containers solve this problem by defining a full OS image, containing not only the complex software package, but dependencies and environment configuration as well. https://www.tensorflow.org/ desribes TensorFlow as: TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.","title":"Working with Tensorflow, GPUs, and containers"},{"location":"software_examples/machine_learning/tutorial-tensorflow-containers/#defining-container-images","text":"Defining containers is fully described in the Docker and Singularity Containers section. Here we will just provide an overview of how you could take something like an existing Tensorflow image provided by OSG staff, and extend it by adding your own modules to it. Let's assume you like Tensorflow version 2.3. The definition of this image can be found in Github: Dockerfile . You don't really need to understand how an image was built in order to use it. As described in the containers documentation, make sure the HTCondor submit file has: Requirements = HAS_SINGULARITY == TRUE +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3\" If you want to extend an existing image, you can just inherit from the parent image available on DockerHub here . For example, if you just need some additional Python packages, your new Dockerfile could look like: FROM opensciencegrid/tensorflow:2.3 RUN python3 -m pip install some_package_name You can then docker build and docker push it so that your new image is available on DockerHub. Note that OSG does not provide any infrastructure for these steps. 
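For reference, the build-and-push step mentioned above might look something like the following, where the account and image names are placeholders you would replace with your own: docker build -t yourusername/my-tensorflow:2.3 . docker push yourusername/my-tensorflow:2.3 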
You will have to complete them on your own computer or using the DockerHub build infrastructure.","title":"Defining container images"},{"location":"software_examples/machine_learning/tutorial-tensorflow-containers/#adding-a-container-to-the-osg-cvmfs-distribution-mechanism","text":"How to add a container image to the OSG CVMFS distribution mechanism is also described in Docker and Singularity Containers , but a quick scan of the cvmfs-singularity-sync and specifically the docker_images.txt file show us that the tensorflow images are listed as: opensciencegrid/tensorflow:* opensciencegrid/tensorflow-gpu:* Those two lines means that all tags from those two DockerHub repositories should be mapped to /cvmfs/singularity.opensciencegrid.org/ . On the login node, try running: ls /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3/ This is the image in its expanded form - something we can execute with Singularity!","title":"Adding a container to the OSG CVMFS distribution mechanism"},{"location":"software_examples/machine_learning/tutorial-tensorflow-containers/#testing-the-container-on-the-submit-host","text":"First, download the files contained in this tutorial to the login node using the git clone command and cd into the tutorial directory that is created: git clone https://github.com/OSGConnect/tutorial-tensorflow-containers cd tutorial-tensorflow-containers Before submitting jobs to the OSG, it is always a good idea to test your code so that you understand runtime requirements. The containers can be tested on the OSGConnect submit hosts with singularity shell , which will drop you into a container and let you exlore it interactively. To explore the Tensorflow 2.3 image, run: singularity shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3/ Note how the command line prompt changes, providing you an indicator that you are inside the image. You can exit any time by running exit . Another important thing to note is that your $HOME directory is automatically mounted inside the interactive container - allowing you to access your codes and test it out. First, start with a simple python3 import test to make sure tensorflow is available: $ python3 Python 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0] on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import tensorflow 2021-01-15 17:32:33.901607: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory 2021-01-15 17:32:33.901735: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. >>> Tensorflow will warn you that no GPUs where found. This is expected as we do not have GPUs attached to our login nodes, and it is fine as Tensorflow works fine with regular CPUs (slower of course). Exit out of Python3 with CTRL+D and then we can run a Tensorflow testcode which can be found in this tutorial: $ python3 test.py 2021-01-15 17:37:43.152892: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory 2021-01-15 17:37:43.153021: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 
2021-01-15 17:37:44.899967: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory 2021-01-15 17:37:44.900063: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303) 2021-01-15 17:37:44.900130: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (login05.osgconnect.net): /proc/driver/nvidia/version does not exist 2021-01-15 17:37:44.900821: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-01-15 17:37:44.912483: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2700000000 Hz 2021-01-15 17:37:44.915548: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fa0bf0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2021-01-15 17:37:44.915645: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2021-01-15 17:37:44.921895: I tensorflow/core/common_runtime/eager/execute.cc:611] Executing op MatMul in device /job:localhost/replica:0/task:0/device:CPU:0 tf.Tensor( [[22. 28.] [49. 64.]], shape=(2, 2), dtype=float32) We will again see a bunch of warnings regarding GPUs not being available, but as we can see by the /job:localhost/replica:0/task:0/device:CPU:0 line, the code ran on one of the CPUs. When testing your own code like this, take note of how much memory, disk and runtime is required - it is needed in the next step. Once you are done with testing, use CTRL+D or run exit to exit out of the container. Note that you can not submit jobs from within the container.","title":"Testing the container on the submit host"},{"location":"software_examples/machine_learning/tutorial-tensorflow-containers/#running-a-cpu-job","text":"If Tensorflow can run on GPUs, you might be wondering why we might want to run it on slower CPUs? One reason is that CPUs are plentiful while GPUs are still somewhat scarce. If you have a lot of shorter Tensorflow jobs, they might complete faster on available CPUs, rather than wait in the queue for the faster, less available, GPUs. The good news is that Tensorflow code should work in both enviroments automatically, so if your code runs too slow on CPUs, moving to GPUs should be easy. To submit our job, we need a submit file and a job wrapper script. The submit file is a basic OSGConnect flavored HTCondor file, specifying that we want the job to run in a container. cpu-job.submit contains: universe = vanilla # Job requirements - ensure we are running on a Singularity enabled # node and have enough resources to execute our code # Tensorflow also requires AVX instruction set and a newer host kernel Requirements = HAS_SINGULARITY == True && HAS_AVX2 == True && OSG_HOST_KERNEL_VERSION >= 31000 request_cpus = 1 request_gpus = 0 request_memory = 1 GB request_disk = 1 GB # Container image to run the job in +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3\" # Executable is the program your job will run It's often useful # to create a shell script to \"wrap\" your actual work. 
Executable = job-wrapper.sh Arguments = # Inputs/outputs - in this case we just need our python code. # If you leave out transfer_output_files, all generated files comes back transfer_input_files = test.py #transfer_output_files = # Error and output are the error and output channels from your job # that HTCondor returns from the remote host. Error = $(Cluster).$(Process).error Output = $(Cluster).$(Process).output # The LOG file is where HTCondor places information about your # job's status, success, and resource consumption. Log = $(Cluster).log # Send the job to Held state on failure. #on_exit_hold = (ExitBySignal == True) || (ExitCode != 0) # Periodically retry the jobs every 1 hour, up to a maximum of 5 retries. #periodic_release = (NumJobStarts < 5) && ((CurrentTime - EnteredCurrentStatus) > 60*60) # queue is the \"start button\" - it launches any jobs that have been # specified thus far. queue 1 And job-wrapper.sh: #!/bin/bash set -e # set TMPDIR variable export TMPDIR=$_CONDOR_SCRATCH_DIR echo echo \"I'm running on\" $(hostname -f) echo \"OSG site: $OSG_SITE_NAME\" echo python3 test.py 2>&1 The job can now be submitted with condor_submit cpu-job.submit . Once the job is done, check the files named after the job id for the outputs.","title":"Running a CPU job"},{"location":"software_examples/machine_learning/tutorial-tensorflow-containers/#running-a-gpu-job","text":"When moving the job to be run on a GPU, all we have to do is update two lines in the submit file: set request_gpus to 1 and specify a GPU enabled container image for +SingularityImage . The updated submit file can be found in gpu-job.submit with the contents: universe = vanilla # Job requirements - ensure we are running on a Singularity enabled # node and have enough resources to execute our code # Tensorflow also requires AVX instruction set and a newer host kernel Requirements = HAS_SINGULARITY == True && HAS_AVX2 == True && OSG_HOST_KERNEL_VERSION >= 31000 request_cpus = 1 request_gpus = 1 request_memory = 1 GB request_disk = 1 GB # Container image to run the job in +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:2.3\" # Executable is the program your job will run It's often useful # to create a shell script to \"wrap\" your actual work. Executable = job-wrapper.sh Arguments = # Inputs/outputs - in this case we just need our python code. # If you leave out transfer_output_files, all generated files comes back transfer_input_files = test.py #transfer_output_files = # Error and output are the error and output channels from your job # that HTCondor returns from the remote host. Error = $(Cluster).$(Process).error Output = $(Cluster).$(Process).output # The LOG file is where HTCondor places information about your # job's status, success, and resource consumption. Log = $(Cluster).log # Send the job to Held state on failure. #on_exit_hold = (ExitBySignal == True) || (ExitCode != 0) # Periodically retry the jobs every 1 hour, up to a maximum of 5 retries. #periodic_release = (NumJobStarts < 5) && ((CurrentTime - EnteredCurrentStatus) > 60*60) # queue is the \"start button\" - it launches any jobs that have been # specified thus far. queue 1 Submit a job with condor_submit gpu-job.submit . Once the job is complete, check the .out file for a line stating the code was run under a GPU. 
Something similar to: 2021-02-02 23:25:19.022467: I tensorflow/core/common_runtime/eager/execute.cc:611] Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0 The GPU:0 part shows that a GPU was found and used for the computation.","title":"Running a GPU job"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/","text":"Scaling up compute resources \u00b6 Scaling up the computational resources is a big advantage for doing certain large-scale calculations on OSG. Consider the extensive sampling for a multi-dimensional Monte Carlo integration or molecular dynamics simulation with several initial conditions. These types of calculations require submitting a lot of jobs. In the previous example, we submitted the job to a single-worker machine. About a million CPU hours per day are available to OSG users on an opportunistic basis. Learning how to scale up and control large numbers of jobs will enable us to realize the full potential of distributed high throughput computing on the OSG. In this section, we will see how to scale up the calculations with a simple example. Once we understand the basic HTCondor script, it is easy to scale up. Background \u00b6 For this example, we will use computational methods to estimate pi. First, we will randomly sample points from a unit square that encloses a quarter of a unit circle. The fraction of sampled points that fall inside the circle approaches pi/4. This method converges extremely slowly, which makes it great for a CPU-intensive exercise (but bad for a real estimation!). Set up a Matlab Job \u00b6 First, we'll need to create a working directory. You can either run $ tutorial Matlab-ScalingUp or $ git clone https://github.com/OSGConnect/tutorial-Matlab-ScalingUp to copy all the necessary files. Otherwise, you can create the files by typing the following: $ mkdir tutorial-Matlab-ScalingUp $ cd tutorial-Matlab-ScalingUp Matlab Script \u00b6 Create a Matlab script by typing the following into a file called mcpi.m : % Monte Carlo method for estimating pi % Generate N random points in a unit square function [] = mcpi(N) % Convert the argument from a string when run as a compiled binary if ischar(N) N = str2double(N); end x = rand(N,1); % x coordinates y = rand(N,1); % y coordinates % Count how many points are inside a unit circle inside = 0; % counter for i = 1:N % loop over points if x(i)^2 + y(i)^2 <= 1 % check if inside circle inside = inside + 1; % increment counter end end % Estimate pi as the ratio of points inside circle to total points pi_est = 4 * inside / N; % pi estimate % Display the result fprintf('%f\\n', pi_est); end Compilation \u00b6 OSG does not have a license to use the MATLAB compiler . On a Linux server with a MATLAB license, invoke the compiler mcc . We turn off all graphical options ( -nodisplay ), disable Java ( -nojvm ), and instruct MATLAB to run this application as a single-threaded application ( -singleCompThread ): mcc -m -R -singleCompThread -R -nodisplay -R -nojvm mcpi.m The flag -m means C language translation during compilation, and the flag -R indicates runtime options. The compilation would produce the files: `mcpi, run_mcpi.sh, mccExcludedFiles.log` and `readme.txt` The file mcpi is the standalone executable. The file run_mcpi.sh is a MATLAB-generated shell script. mccExcludedFiles.log is the log file and readme.txt contains the information about the compilation process. We just need the standalone binary file mcpi . 
Running standalone binary applications on OSG \u00b6 To see which releases are available on OSG, visit our available containers page : Tutorial files \u00b6 Let us say you have created the standalone binary mcpi . Transfer the file mcpi to your Access Point. Alternatively, you may also use the readily available files by using the git clone command: $ git clone https://github.com/OSGConnect/tutorial-Matlab-ScalingUp # Copies input and script files to the directory tutorial-Matlab-ScalingUp. This will create a directory tutorial-Matlab-ScalingUp . Inside the directory, you will see the following files: mcpi # compiled executable binary of mcpi.m mcpi.m # matlab program mcpi.submit # condor job description file mcpi.sh # execution script Executing the MATLAB application binary \u00b6 The compilation and execution environments need to be the same. The file mcpi is a standalone binary of the matlab program mcpi.m which was compiled using MATLAB 2020b on a Linux platform. The Access Point and many of the worker nodes on OSG are based on the Linux platform. In addition to the platform requirement, we also need to have the same MATLAB Runtime version. Load the MATLAB Runtime for the 2020b version via the apptainer/singularity command. On the terminal prompt, type $ apptainer shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020b The above command sets up the environment to run the matlab/2020b runtime applications. Now execute the binary: $apptainer/singularity> ./mcpi 10 If you get an output of the estimated value of pi, the binary execution is successful. Now, exit from the apptainer/singularity environment by typing exit . Next, we see how to submit the job on a remote execute point using HTCondor. Job execution and submission files \u00b6 Let us take a look at the mcpi.submit file: universe = vanilla # On OSG Connect, the preferred job universe is \"vanilla\" +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020b\" executable = mcpi arguments = $(Process) Output = Log/job.$(Process).out # standard output Error = Log/job.$(Process).err # standard error Log = Log/job.$(Process).log # log information about job execution requirements = HAS_SINGULARITY == TRUE queue 100 # Submit 100 jobs Before we submit the job, make sure that the directory Log exists in the current working directory, because HTCondor looks for the Log directory to copy the standard output, error, and log files as specified in the job description file. From your work directory, type $ mkdir -p Log Absence of the Log directory may send the jobs to the held state. Job submission \u00b6 We submit the job using the condor_submit command as follows $ condor_submit mcpi.submit //Submit the condor job description file \"mcpi.submit\" Now you have submitted an ensemble of 100 MATLAB jobs. Each job prints the value of pi on the standard output. Check the status of the submitted jobs: $ condor_q username # The status of the job is printed on the screen. Here, username is your login name. Post Process \u00b6 Once the jobs are completed, you can use the information in the output files to calculate an average of all of our computed estimates of Pi. To see this, we can use the command: $ cat Log/job.*.out | awk '{ sum += $2; print $2\" \"NR} END { print \"---------------\\n Grand Average = \" sum/NR }' Key Points \u00b6 Scaling up the computational resources on OSG is crucial to taking full advantage of distributed computing. 
Changing the value of Queue allows the user to scale up the resources. Arguments allows you to pass parameters to a job script. $(Cluster) and $(Process) can be used to name log files uniquely. Getting Help \u00b6 For assistance or questions, please email the OSG User Support team at support@osg-htc.org .","title":"Scaling up MATLAB"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#scaling-up-compute-resources","text":"Scaling up the computational resources is a big advantage for doing certain large-scale calculations on OSG. Consider the extensive sampling for a multi-dimensional Monte Carlo integration or molecular dynamics simulation with several initial conditions. These types of calculations require submitting a lot of jobs. In the previous example, we submitted the job to a single-worker machine. About a million CPU hours per day are available to OSG users on an opportunistic basis. Learning how to scale up and control large numbers of jobs will enable us to realize the full potential of distributed high throughput computing on the OSG. In this section, we will see how to scale up the calculations with a simple example. Once we understand the basic HTCondor script, it is easy to scale up.","title":"Scaling up compute resources"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#background","text":"For this example, we will use computational methods to estimate pi. First, we will define a square inscribed by a unit circle from which we will randomly sample points. The ratio of the points outside the circle to the points in the circle is calculated which approaches pi/4. This method converges extremely slowly, which makes it great for a CPU-intensive exercise (but bad for a real estimation!).","title":"Background"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#set-up-a-matlab-job","text":"First, we'll need to create a working directory, you can either run $ tutorial Matlab-ScalingUp or $ git clone https://github.com/OSGConnect/tutorial-Matlab-ScalingUp to copy all the necessary files. Otherwise, you can create the files type the following: $ mkdir tutorial-Matlab-ScalingUp $ cd tutorial-Matlab-ScalingUp","title":"Set up a Matlab Job"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#matlab-script","text":"Create an Matlab script by typing the following into a file called mcpi.m : % Monte Carlo method for estimating pi % Generate N random points in a unit square function[] =mcpi(N) x = rand(N,1); % x coordinates y = rand(N,1); % y coordinates % Count how many points are inside a unit circle inside = 0; % counter for i = 1:N % loop over points if x(i)^2 + y(i)^2 <= 1 % check if inside circle inside = inside + 1; % increment counter end end % Estimate pi as the ratio of points inside circle to total points pi_est = 4 * inside / N; % pi estimate % Display the result fprintf(pi_est); end","title":"Matlab Script"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#compilation","text":"OSG does not have a license to use the MATLAB compiler . On a Linux server with a MATLAB license, invoke the compiler mcc . We turn off all graphical options ( -nodisplay ), disable Java ( -nojvm ), and instruct MATLAB to run this application as a single-threaded application ( -singleCompThread ): mcc -m -R -singleCompThread -R -nodisplay -R -nojvm mcpi.m The flag -m means C language translation during compilation, and the flag -R indicates runtime options. 
The compilation would produce the files: `mcpi, run_mcpi.sh, mccExcludedFiles.log` and `readme.txt` The file mcpi is the standalone executable. The file run_mcpi.sh is MATLAB generated shell script. mccExcludedFiles.log is the log file and readme.txt contains the information about the compilation process. We just need the standalone binary file mcpi .","title":"Compilation"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#running-standalone-binary-applications-on-osg","text":"To see which releases are available on OSG visit our available containers page :","title":"Running standalone binary applications on OSG"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#tutorial-files","text":"Let us say you have created the standalone binary mcpi . Transfer the file mcpi to your Access Point. Alternatively, you may also use the readily available files by using the git clone command: $ git clone https://github.com/OSGConnect/tutorial-Matlab-ScalingUp # Copies input and script files to the directory tutorial-Matlab-ScalingUp. This will create a directory tutorial-Matlab-ScalingUp . Inside the directory, you will see the following files mcpi # compiled executable binary of mcpi.m mcpi.m # matlab program mcpi.submit # condor job description file mcpi.sh # execution script","title":"Tutorial files"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#executing-the-matlab-application-binary","text":"The compilation and execution environment need to the same. The file mcpi is a standalone binary of the matlab program mcpi.m which was compiled using MATLAB 2020b on a Linux platform. The Access Point and many of the worker nodes on OSG are based on Linux platform. In addition to the platform requirement, we also need to have the same MATLAB Runtime version. Load the MATLAB runtime for 2020b version via apptainer/singularity command. On the terminal prompt, type $ apptainer shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020b The above command sets up the environment to run the matlab/2020b runtime applications. Now execute the binary $apptainer/singularity> ./mcpi 10 If you get the an output of the estimated value of pi, the binary execution is successful. Now, exit from the apptainer/singularity environment typing exit . Next, we see how to submit the job on a remote execute point using HTCondor.","title":"Executing the MATLAB application binary"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#job-execution-and-submission-files","text":"Let us take a look at mcpi.submit file: universe = vanilla # One OSG Connect vanilla, the preffered job universe is \"vanilla\" +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020b\" executable = mcpi arguments = $(Process) Output = Log/job.$(Process).out\u22c5 # standard output Error = Log/job.$(Process).err # standard error Log = Log/job.$(Process).log # log information about job execution requirements = HAS_SINGULARITY == TRUE queue 100 # Submit 100 jobs Before we submit the job, make sure that the directory Log exists on the current working directory. Because HTCondor looks for Log directory to copy the standard output, error and log files as specified in the job description file. 
From your work directory, type $ mkdir -p Log Absence of Log directory may send the jobs to held state.","title":"Job execution and submission files"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#job-submmision","text":"We submit the job using the condor_submit command as follows $ condor_submit mcpi.submit //Submit the condor job description file \"mcpi.submit\" Now you have submitted an ensemble of 100 MATLAB jobs. Each job prints the value of pi on the standard output. Check the status of the submitted job, $ condor_q username # The status of the job is printed on the screen. Here, username is your login name.","title":"Job submmision"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#post-process","text":"Once the jobs are completed, you can use the information in the output files to calculate an average of all of our computed estimates of Pi. To see this, we can use the command: $ cat log/mcpi*.out* | awk '{ sum += $2; print $2\" \"NR} END { print \"---------------\\n Grand Average = \" sum/NR }'","title":"Post Process\u22c5"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#key-points","text":"Scaling up the computational resources on OSG is crucial to taking full advantage of distributed computing. Changing the value of Queue allows the user to scale up the resources. Arguments allows you to pass parameters to a job script. $(Cluster) and $(Process) can be used to name log files uniquely.","title":"Key Points"},{"location":"software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/#getting-help","text":"For assistance or questions, please email the OSG User Support team at support@osg-htc.org .","title":"Getting Help"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/","text":"Basics of compiled MATLAB applications - Hello World example \u00b6 MATLAB\u00ae is a licensed high level language and modeling toolkit. The MATLAB Compiler\u2122 lets you share MATLAB programs as standalone applications. MATLAB Compiler is invoked with mcc . The compiler supports most toolboxes and user-developed interfaces. For more details, check the list of supported toolboxes and ineligible programs . All applications created with MATLAB Compiler use MATLAB Compiler Runtime\u2122 (MCR) , which enables royalty-free deployment and use. We assume you have access to a server that has MATLAB compiler because the compiler is not available on OSG Connect. MATLAB Runtime is available on OSG Connect. Although the compiled binaries are portable, they need to have a compatible, OS-specific matlab runtime to interpret the binary. We recommend the compilation of your matlab program against matlab versions that match the OSG containers , with the compilation executed on a server with Scientific Linux so that the compiled binaries are portable on OSG machines. In this tutorial, we learn the basics of compiling MATLAB programs on a licensed linux machine and running the compiled binaries using a matlab compiled runtime (MCR) in the OSG containers. MATLAB script: hello_world.m \u00b6 Lets start with a simple MATLAB script hello_world.m that prints Hello World! to standard output. function helloworld fprintf('\\n=============') fprintf('\\nHello, World!\\n') fprintf('=============\\n') end Compilation \u00b6 OSG connect does not have a license to use the MATLAB compiler . On a Linux server with a MATLAB license, invoke the compiler mcc . 
We turn off all graphical options ( -nodisplay ), disable Java ( -nojvm ), and instruct MATLAB to run this application as a single-threaded application ( -singleCompThread ): mcc -m -R -singleCompThread -R -nodisplay -R -nojvm hello_world.m The flag -m means C language translation during compilation, and the flag -R indicates runtime options. The compilation would produce the files: `hello_world, run_hello_world.sh, mccExcludedFiles.log` and `readme.txt` The file hello_world is the standalone executable. The file run_hello_world.sh is a MATLAB-generated shell script. mccExcludedFiles.log is the log file and readme.txt contains the information about the compilation process. We just need the standalone binary file hello_world . Running standalone binary applications on OSG \u00b6 To see which releases are available on OSG, visit our available containers page : Tutorial files \u00b6 Let us say you have created the standalone binary hello_world . Transfer the file hello_world to your Access Point. Alternatively, you may also use the readily available files by using the git clone command: $ git clone https://github.com/OSGConnect/tutorial-matlab-HelloWorld # Copies input and script files to the directory tutorial-matlab-HelloWorld. This will create a directory tutorial-matlab-HelloWorld . Inside the directory, you will see the following files: hello_world # compiled executable binary of hello_world.m hello_world.m # matlab program hello_world.submit # condor job description file hello_world.sh # execution script Executing the MATLAB application binary \u00b6 The compilation and execution environments need to be the same. The file hello_world is a standalone binary of the matlab program hello_world.m which was compiled using MATLAB 2018b on a Linux platform. The Access Point and many of the worker nodes on OSG are based on the Linux platform. In addition to the platform requirement, we also need to have the same MATLAB Runtime version. Load the MATLAB Runtime for the 2018b version via the apptainer/singularity command. On the terminal prompt, type $ apptainer shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2018b The above command sets up the environment to run the matlab/2018b runtime applications. Now execute the binary: $apptainer/singularity> ./hello_world (would produce the following output) ============= Hello, World! ============= If you get the above output, the binary execution is successful. Now, exit from the apptainer/singularity environment by typing exit . Next, we see how to submit the job on a remote execute point using HTCondor. Job execution and submission files \u00b6 Let us take a look at the hello_world.submit file: universe = vanilla # On OSG Connect, the preferred job universe is \"vanilla\" +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2018b\" executable = hello_world Output = Log/job.$(Process).out # standard output Error = Log/job.$(Process).err # standard error Log = Log/job.$(Process).log # log information about job execution requirements = HAS_SINGULARITY == TRUE queue 10 # Submit 10 jobs Before we submit the job, make sure that the directory Log exists in the current working directory, because HTCondor looks for the Log directory to copy the standard output, error, and log files as specified in the job description file. From your work directory, type $ mkdir -p Log Absence of the Log directory would send the jobs to the held state. 
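If jobs do end up in the held state, you can usually see why and recover without resubmitting; for example (the job ID 123.0 below is only a placeholder): $ condor_q -hold # list held jobs along with the hold reason $ condor_release 123.0 # release a job once the problem (such as a missing Log directory) is fixed 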
Job submmision \u00b6 We submit the job using the condor_submit command as follows $ condor_submit hello_world.submit //Submit the condor job description file \"hello_world.submit\" Now you have submitted an ensemble of 10 MATLAB jobs. Each job prints hello world on the standard output. Check the status of the submitted job, $ condor_q username # The status of the job is printed on the screen. Here, username is your login name. Job outputs \u00b6 The hello_world.m script sends the output to standard output. In the condor job description file, we expressed that the standard output is written on the Log/job.$(ProcessID).out . After job completion, ten output files are produced with the hello world message under the directory Log . What's next? \u00b6 Sure, it is not very exciting to print the same message on 10 output files. In the subsequent MATLAB examples, we see how to scale up MATLAB computation on HTC environment. Getting help \u00b6 For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the help desk and community forums .","title":"Basics of compiled MATLAB applications - Hello World example"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#basics-of-compiled-matlab-applications-hello-world-example","text":"MATLAB\u00ae is a licensed high level language and modeling toolkit. The MATLAB Compiler\u2122 lets you share MATLAB programs as standalone applications. MATLAB Compiler is invoked with mcc . The compiler supports most toolboxes and user-developed interfaces. For more details, check the list of supported toolboxes and ineligible programs . All applications created with MATLAB Compiler use MATLAB Compiler Runtime\u2122 (MCR) , which enables royalty-free deployment and use. We assume you have access to a server that has MATLAB compiler because the compiler is not available on OSG Connect. MATLAB Runtime is available on OSG Connect. Although the compiled binaries are portable, they need to have a compatible, OS-specific matlab runtime to interpret the binary. We recommend the compilation of your matlab program against matlab versions that match the OSG containers , with the compilation executed on a server with Scientific Linux so that the compiled binaries are portable on OSG machines. In this tutorial, we learn the basics of compiling MATLAB programs on a licensed linux machine and running the compiled binaries using a matlab compiled runtime (MCR) in the OSG containers.","title":"Basics of compiled MATLAB applications - Hello World example"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#matlab-script-hello_worldm","text":"Lets start with a simple MATLAB script hello_world.m that prints Hello World! to standard output. function helloworld fprintf('\\n=============') fprintf('\\nHello, World!\\n') fprintf('=============\\n') end","title":"MATLAB script: hello_world.m"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#compilation","text":"OSG connect does not have a license to use the MATLAB compiler . On a Linux server with a MATLAB license, invoke the compiler mcc . We turn off all graphical options ( -nodisplay ), disable Java ( -nojvm ), and instruct MATLAB to run this application as a single-threaded application ( -singleCompThread ): mcc -m -R -singleCompThread -R -nodisplay -R -nojvm hello_world.m The flag -m means C language translation during compilation, and the flag -R indicates runtime options. 
The compilation would produce the files: `hello_world, run_hello_world.sh, mccExcludedFiles.log` and `readme.txt` The file hello_world is the standalone executable. The file run_hello_world.sh is MATLAB generated shell script. mccExcludedFiles.log is the log file and readme.txt contains the information about the compilation process. We just need the standalone binary file hello_world .","title":"Compilation"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#running-standalone-binary-applications-on-osg","text":"To see which releases are available on OSG visit our available containers page :","title":"Running standalone binary applications on OSG"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#tutorial-files","text":"Let us say you have created the standalone binary hello_world . Transfer the file hello_world to your Access Point. Alternatively, you may also use the readily available files by using the git clone command: $ git clone https://github.com/OSGConnect/tutorial-matlab-HelloWorld # Copies input and script files to the directory tutorial-matlab-HelloWorld. This will create a directory tutorial-matlab-HelloWorld . Inside the directory, you will see the following files hello_world # compiled executable binary of hello_world.m hello_world.m # matlab program hello_world.submit # condor job description file hello_world.sh # execution script","title":"Tutorial files"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#executing-the-matlab-application-binary","text":"The compilation and execution environment need to the same. The file hello_world is a standalone binary of the matlab program hello_world.m which was compiled using MATLAB 2018b on a Linux platform. The Access Point and many of the worker nodes on OSG are based on Linux platform. In addition to the platform requirement, we also need to have the same MATLAB Runtime version. Load the MATLAB runtime for 2018b version via apptainer/singularity command. On the terminal prompt, type $ apptainer shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2018b The above command sets up the environment to run the matlab/2018b runtime applications. Now execute the binary $apptainer/singularity> ./hello_world (would produce the following output) ============= Hello, World! ============= If you get the above output, the binary execution is successful. Now, exit from the apptainer/singularity environment typing exit . Next, we see how to submit the job on a remote execute point using HTcondor.","title":"Executing the MATLAB application binary"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#job-execution-and-submission-files","text":"Let us take a look at hello_world.submit file: universe = vanilla # One OSG Connect vanilla, the preffered job universe is \"vanilla\" +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2018b\" executable = hello_world Output = Log/job.$(Process).out\u22c5 # standard output Error = Log/job.$(Process).err # standard error Log = Log/job.$(Process).log # log information about job execution requirements = HAS_SINGULARITY == TRUE queue 10 # Submit 10 jobs Before we submit the job, make sure that the directory Log exists on the current working directory. Because HTcondor looks for Log directory to copy the standard output, error and log files as specified in the job description file. 
From your work directory, type $ mkdir -p Log Absence of Log directory would send the jobs to held state.","title":"Job execution and submission files"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#job-submmision","text":"We submit the job using the condor_submit command as follows $ condor_submit hello_world.submit //Submit the condor job description file \"hello_world.submit\" Now you have submitted an ensemble of 10 MATLAB jobs. Each job prints hello world on the standard output. Check the status of the submitted job, $ condor_q username # The status of the job is printed on the screen. Here, username is your login name.","title":"Job submmision"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#job-outputs","text":"The hello_world.m script sends the output to standard output. In the condor job description file, we expressed that the standard output is written on the Log/job.$(ProcessID).out . After job completion, ten output files are produced with the hello world message under the directory Log .","title":"Job outputs"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#whats-next","text":"Sure, it is not very exciting to print the same message on 10 output files. In the subsequent MATLAB examples, we see how to scale up MATLAB computation on HTC environment.","title":"What's next?"},{"location":"software_examples/matlab_runtime/tutorial-matlab-HelloWorld/#getting-help","text":"For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the help desk and community forums .","title":"Getting help"},{"location":"software_examples/other_languages_tools/conda-container/","text":"Conda with Containers \u00b6 The Anaconda/Miniconda distribution of Python is a common tool for installing and managing Python-based software and other tools. There are two ways of using Conda on the OSPool: with a tarball , or via a custom Apptainer/Singularity container. Either works well, but the container solution might be better if your Conda environment contains non-Python tools. Overview \u00b6 When should you use Miniconda as an installation method in OSG? Your software has specific conda-centric installation instructions. The above is true and the software has a lot of dependencies. You mainly use Python to do your work. Notes on terminology: conda is a Python package manager and package ecosystem that exists in parallel with pip and PyPI . Miniconda is a slim Python distribution, containing the minimum amount of packages necessary for a Python installation that can use conda. Anaconda is a pre-built scientific Python distribution based on Miniconda that has many useful scientific packages pre-installed. To create the smallest, most portable Python installation possible, we recommend starting with Miniconda and installing only the packages you actually require. To use a Miniconda installation for your jobs, create an Apptainer/Singularity definition file and build it (general instructions here ). Apptainer/Singularity Definition File \u00b6 The definition file tells Apptainer/Singularity how the container should be built, and what the environment setup should take place when the container is instantiated. In the following example, the container is based on Ubuntu 22.04. A few base operating system tools are installed, then Miniconda, followed by a set of conda commands to define the Conda environment. 
The %environment section is used to ensure the Conda environment is activated before the job runs. To build your own custom image, start by modifying the conda install line to include the packages you need. Bootstrap: docker From: ubuntu:22.04 %environment # set up environment for when using the container . /opt/conda/etc/profile.d/conda.sh conda activate %post # base os apt-get update -y apt-get install -y build-essential wget # install miniconda wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda rm Miniconda3-latest-Linux-x86_64.sh # install conda components - add the packages you need here . /opt/conda/etc/profile.d/conda.sh conda activate conda install -y -c conda-forge numpy cowpy conda update --all The next step is to build the image. Run: $ apptainer build my-container.sif image.def You can explore the container locally to make sure it works as expected with the shell subcommand: $ apptainer shell my-container.sif This example will give you an interactive shell. You can explore the container and test your code with your own inputs from your /home directory, which is automatically mounted (but note - $HOME will not be available to your jobs later). Once you are done exploring, exit the container by running exit or with CTRL+D . It is important to use the correct transfer mechanism to get the image to your job. Please make sure you use OSDF and version your container in the filename. For example: $ cp my-container.sif /ospool/protected//my-container-v1.sif Submit Jobs \u00b6 An example submit file could look like: # File Name: conda_submission.sub # specify the newly built image +SingularityImage = \"osdf:///ospool/protected//my-container-v1.sif\" # Specify your executable (single binary or a script that runs several # commands) and arguments to be passed to jobs. # $(Process) will be an integer number for each job, starting with \"0\" # and increasing for the relevant number of jobs. executable = science.py arguments = $(Process) # Specify the name of the log, standard error, and standard output (or \"screen output\") files. log = science_with_conda.log error = science_with_conda.err output = science_with_conda.out # Transfer any file needed for our job to complete. transfer_input_files = # Specify Job duration category as \"Medium\" (expected runtime <10 hr) or \"Long\" (expected runtime <20 hr). +JobDurationCategory = \u201cMedium\u201d # Tell HTCondor requirements your job needs, # what amount of compute resources each job will need on the computer where it runs. requirements = request_cpus = 1 request_memory = 1GB request_disk = 5GB # Tell HTCondor to run 1 instance of our job: queue 1 Specifying Exact Dependency Versions \u00b6 An important part of improving reproducibility and consistency between runs is to ensure that you use the correct/expected versions of your dependencies. When you run a command like conda install numpy , conda tries to install the most recent version of numpy . For example, numpy version 1.22.3 was released on Mar 7, 2022. To install exactly this version of numpy, you would run conda install numpy=1.22.3 (the same works for pip if you replace = with == ). We recommend installing with an explicit version to make sure you have exactly the version of a package that you want. This is often called \u201cpinning\u201d or \u201clocking\u201d the version of the package. 
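Applied to the container definition above, pinning simply means adding the version to the conda install line, for instance (the version shown here is only illustrative): conda install -y -c conda-forge numpy=1.25.0 cowpy 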
If you want a record of what is installed in your environment, or want to reproduce your environment on another computer, conda can create a file, usually called environment.yml , that describes the exact versions of all of the packages you have installed in an environment. An example environment.yml file: channels: - conda-forge - defaults dependencies: - cowpy - numpy=1.25.0 To use the environment.yml in the build, modify the image definition to copy the file, and then replace the conda install with a conda env create . Also note that it is good style to name the environment. We call it science in this example: Bootstrap: docker From: ubuntu:22.04 %files environment.yml %environment # set up environment for when using the container . /opt/conda/etc/profile.d/conda.sh conda activate science %post # base os apt-get update -y apt-get install -y build-essential wget # install miniconda wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda rm Miniconda3-latest-Linux-x86_64.sh # install conda components - add the packages you need here . /opt/conda/etc/profile.d/conda.sh conda activate conda env create -n science -f environment.yml conda update --all If you use a source control system like git , we recommend checking your environment.yml file into source control and making sure to recreate it when you make changes to your environment. Putting your environment under source control gives you a way to track how it changes along with your own code. More information on conda environments can be found in their documentation .","title":"Conda with Containers"},{"location":"software_examples/other_languages_tools/conda-container/#conda-with-containers","text":"The Anaconda/Miniconda distribution of Python is a common tool for installing and managing Python-based software and other tools. There are two ways of using Conda on the OSPool: with a tarball , or via a custom Apptainer/Singularity container. Either works well, but the container solution might be better if your Conda environment contains non-Python tools.","title":"Conda with Containers"},{"location":"software_examples/other_languages_tools/conda-container/#overview","text":"When should you use Miniconda as an installation method in OSG? Your software has specific conda-centric installation instructions. The above is true and the software has a lot of dependencies. You mainly use Python to do your work. Notes on terminology: conda is a Python package manager and package ecosystem that exists in parallel with pip and PyPI . Miniconda is a slim Python distribution, containing the minimum amount of packages necessary for a Python installation that can use conda. Anaconda is a pre-built scientific Python distribution based on Miniconda that has many useful scientific packages pre-installed. To create the smallest, most portable Python installation possible, we recommend starting with Miniconda and installing only the packages you actually require. To use a Miniconda installation for your jobs, create an Apptainer/Singularity definition file and build it (general instructions here ).","title":"Overview"},{"location":"software_examples/other_languages_tools/conda-container/#apptainersingularity-definition-file","text":"The definition file tells Apptainer/Singularity how the container should be built, and what the environment setup should take place when the container is instantiated. In the following example, the container is based on Ubuntu 22.04. 
A few base operating system tools are installed, then Miniconda, followed by a set of conda commands to define the Conda environment. The %environment is used to ensure jobs are getting the environment activated before the job runs. To build your own custom image, start by modifing the conda install line to include the packages you need. Bootstrap: docker From: ubuntu:22.04 %environment # set up environment for when using the container . /opt/conda/etc/profile.d/conda.sh conda activate %post # base os apt-get update -y apt-get install -y build-essential wget # install miniconda wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda rm Miniconda3-latest-Linux-x86_64.sh # install conda components - add the packages you need here . /opt/conda/etc/profile.d/conda.sh conda activate conda install -y -c conda-forge numpy cowpy conda update --all The next step is to build the image. Run: $ apptainer build my-container.sif image.def You can explore the container locally to make sure it works as expected with the shell subcommand: $ apptainer shell my-container.sif This example will give you an interactive shell. You can explore the container and test your code with your own inputs from your /home directory, which is automatically mounted (but note - $HOME will not be available to your jobs later). Once you are down exploring, exit the container by running exit or with CTRL+D It is important to use the correct transfer mechanism to get the image to your job. Please make sure you use OSDF and version your container in the filename. For example: $ cp my-container.sif /ospool/protected//my-container-v1.sif","title":"Apptainer/Singularity Definition File"},{"location":"software_examples/other_languages_tools/conda-container/#submit-jobs","text":"An example submit file could look like: # File Name: conda_submission.sub # specify the newly built image +SingularityImage = \"osdf:///ospool/protected//my-container-v1.sif\" # Specify your executable (single binary or a script that runs several # commands) and arguments to be passed to jobs. # $(Process) will be a integer number for each job, starting with \"0\" # and increasing for the relevant number of jobs. executable = science.py arguments = $(Process) # Specify the name of the log, standard error, and standard output (or \"screen output\") files. log = science_with_conda.log error = science_with_conda.err output = science_with_conda.out # Transfer any file needed for our job to complete. transfer_input_files = # Specify Job duration category as \"Medium\" (expected runtime <10 hr) or \"Long\" (expected runtime <20 hr). +JobDurationCategory = \u201cMedium\u201d # Tell HTCondor requirements your job needs, # what amount of compute resources each job will need on the computer where it runs. requirements = request_cpus = 1 request_memory = 1GB request_disk = 5GB # Tell HTCondor to run 1 instance of our job: queue 1","title":"Submit Jobs"},{"location":"software_examples/other_languages_tools/conda-container/#specifying-exact-dependency-versions","text":"An important part of improving reproducibility and consistency between runs is to ensure that you use the correct/expected versions of your dependencies. When you run a command like conda install numpy conda tries to install the most recent version of numpy For example, numpy version 1.22.3 was released on Mar 7, 2022. 
To install exactly this version of numpy, you would run conda install numpy=1.22.3 (the same works for pip if you replace = with == ). We recommend installing with an explicit version to make sure you have exactly the version of a package that you want. This is often called \u201cpinning\u201d or \u201clocking\u201d the version of the package. If you want a record of what is installed in your environment, or want to reproduce your environment on another computer, conda can create a file, usually called environment.yml , that describes the exact versions of all of the packages you have installed in an environment. An example environment.yml file: channels: - conda-forge - defaults dependencies: - cowpy - numpy=1.25.0 To use the environment.yml in the build, modify the image definition to copy the file, and then replace the conda install with a conda env create . Also note that it is good style to name the environment. We call it science in this example: Bootstrap: docker From: ubuntu:22.04 %files environment.yml %environment # set up environment for when using the container . /opt/conda/etc/profile.d/conda.sh conda activate science %post # base os apt-get update -y apt-get install -y build-essential wget # install miniconda wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda rm Miniconda3-latest-Linux-x86_64.sh # install conda components - add the packages you need here . /opt/conda/etc/profile.d/conda.sh conda activate conda env create -n science -f environment.yml conda update --all If you use a source control system like git , we recommend checking your environment.yml file into source control and making sure to recreate it when you make changes to your environment. Putting your environment under source control gives you a way to track how it changes along with your own code. More information on conda environments can be found in their documentation .","title":"Specifying Exact Dependency Versions"},{"location":"software_examples/other_languages_tools/conda-tarball/","text":"Conda with Tarballs \u00b6 The Anaconda/Miniconda distribution of Python is a common tool for installing and managing Python-based software and other tools. There are two ways of using Conda on the OSPool: with a tarball as described in this guide, or by installing Conda inside a custom Apptainer/Singularity container . Either works well, but the container solution might be better if your Conda environment requires access to non-Python tools. Overview \u00b6 When should you use Miniconda as an installation method in OSG? Your software has specific conda-centric installation instructions. The above is true and the software has a lot of dependencies. You mainly use Python to do your work. Notes on terminology: conda is a Python package manager and package ecosystem that exists in parallel with pip and PyPI . Miniconda is a slim Python distribution, containing the minimum amount of packages necessary for a Python installation that can use conda. Anaconda is a pre-built scientific Python distribution based on Miniconda that has many useful scientific packages pre-installed. To create the smallest, most portable Python installation possible, we recommend starting with Miniconda and installing only the packages you actually require. To use a Miniconda installation for your jobs, create your installation environment on the access point and send a zipped version to your jobs. 
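At a glance, the workflow described in the following sections looks roughly like this, where env-name is a placeholder for whatever you decide to call your environment: $ conda create -n env-name $ conda activate env-name $ conda install pkg1 pkg2 # install the packages your jobs need $ conda deactivate $ conda install -c conda-forge conda-pack $ conda pack -n env-name # produces env-name.tar.gz to ship with your jobs 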
Install Miniconda and Package for Jobs \u00b6 In this approach, we will create an entire software installation inside Miniconda and then use a tool called conda pack to package it up for running jobs. 1. Create a Miniconda Installation \u00b6 After logging into your access point, download the latest Linux miniconda installer and run it. For example, [alice@ap00]$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh [alice@ap00]$ sh Miniconda3-latest-Linux-x86_64.sh Accept the license agreement and default options. At the end, you can choose whether or not to \u201cinitialize Miniconda3 by running conda init?\u201d - If you enter \"no\", you would then run the eval command listed by the installer to \u201cactivate\u201d Miniconda. If you choose \u201cno\u201d you\u2019ll want to save this command so that you can reactivate the Miniconda installation when needed in the future. - If you enter \"yes\", miniconda will edit your .bashrc file and PATH environment variable so that you do not need to define a path to Miniconda each time you log in. If you choose \"yes\", before proceeding, you must log off and close your terminal for these changes to go into effect. Once you close your terminal, you can reopen it, log in to your access point, and proceed with the rest of the instructions below. 2. Create a conda \"Environment\" With Your Packages \u00b6 (If you are using an environment.yml file as described later , you should instead create the environment from your environment.yml file. If you don\u2019t have an environment.yml file to work with, follow the install instructions in this section. We recommend switching to the environment.yml method of creating environments once you understand the \u201cmanual\u201d method presented here.) Make sure that you\u2019ve activated the base Miniconda environment if you haven\u2019t already. Your prompt should look like this: (base)[alice@ap00]$ To create an environment, use the conda create command and then activate the environment: (base)[alice@ap00]$ conda create -n env-name (base)[alice@ap00]$ conda activate env-name Then, run the conda install command to install the different packages and software you want to include in the installation. How this should look is often listed in the installation examples for software (e.g. Qiime2 , Pytorch ). (env-name)[alice@ap00]$ conda install pkg1 pkg2 Some Conda packages are only available via specific Conda channels which serve as repositories for hosting and managing packages. If Conda is unable to locate the requested packages using the example above, you may need to have Conda search other channels. More detail are available at https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/channels.html. Packages may also be installed via pip , but you should only do this when there is no conda package available. Once everything is installed, deactivate the environment to go back to the Miniconda \u201cbase\u201d environment. (env-name)[alice@ap00]$ conda deactivate For example, if you wanted to create an installation with pandas and matplotlib and call the environment py-data-sci , you would use this sequence of commands: (base)[alice@ap00]$ conda create -n py-data-sci (base)[alice@ap00]$ conda activate py-data-sci (py-data-sci)[alice@ap00]$ conda install pandas matplotlib (py-data-sci)[alice@ap00]$ conda deactivate (base)[alice@ap00]$ More About Miniconda \u00b6 See the official conda documentation for more information on creating and managing environments with conda . 3. 
Create Software Package \u00b6 Make sure that your job\u2019s Miniconda environment is created, but deactivated, so that you\u2019re in the \u201cbase\u201d Miniconda environment: (base)[alice@ap00]$ Then, run this command to install the conda pack tool: (base)[alice@ap00]$ conda install -c conda-forge conda-pack Enter y when it asks you to install. Finally, use conda pack to create a zipped tar.gz file of your environment (substitute the name of your conda environment where you see env-name ), set the proper permissions for this file using chmod , and check the size of the final tarball: (base)[alice@ap00]$ conda pack -n env-name (base)[alice@ap00]$ chmod 644 env-name.tar.gz (base)[alice@ap00]$ ls -sh env-name.tar.gz When this step finishes, you should see a file in your current directory named env-name.tar.gz . 4. Check Size of Conda Environment Tar Archive \u00b6 The tar archive, env-name.tar.gz , created in the previous step will be used as input for subsequent job submission. As with all job input files, you should check the size of this Conda environment file. If >1GB in size, you should move the file to either your /public or /protected folder, and transfer it to/from jobs using the osdf:/// link, as described in Overview: Data Staging and Transfer to Jobs . This is the most efficient way to transfer large files to/from jobs. 5. Create a Job Executable \u00b6 The job will need to go through a few steps to use this \u201cpacked\u201d conda environment; first, setting the PATH , then unzipping the environment, then activating it, and finally running whatever program you like. The script below is an example of what is needed (customize as indicated to match your choices above). For future reference, let's call this executable conda_science.sh . #!/bin/bash # File Name: science_with_conda.sh # have job exit if any command returns with non-zero exit status (aka failure) set -e # replace env-name on the right hand side of this line with the name of your conda environment ENVNAME=env-name # if you need the environment directory to be named something other than the environment name, change this line ENVDIR=$ENVNAME # these lines handle setting up the environment; you shouldn't have to modify them export PATH mkdir $ENVDIR tar -xzf $ENVNAME.tar.gz -C $ENVDIR . $ENVDIR/bin/activate # modify this line to run your desired Python script and any other work you need to do python3 hello.py 6. Submit Jobs \u00b6 In your HTCondor submit file, make sure to have the following: Your executable should be the the bash script you created in step 5 . Remember to transfer your Python script and the environment tar.gz file to the job. If the tar.gz file is larger than 1GB, please move the file to either your /protected or /public directories and use the osdf:/// file delivery mechanism as described above. An example submit file could look like: # File Name: conda_submission.sub # Specify your executable (single binary or a script that runs several # commands) and arguments to be passed to jobs. # $(Process) will be a integer number for each job, starting with \"0\" # and increasing for the relevant number of jobs. executable = science_with_conda.sh arguments = $(Process) # Specify the name of the log, standard error, and standard output (or \"screen output\") files. log = science_with_conda.log error = science_with_conda.err output = science_with_conda.out # Transfer any file needed for our job to complete. 
transfer_input_files = osdf:///ospool/apXX/data/alice/env-name.tar.gz, hello.py In the line above, the `XX` in `apXX` should be replaced with the numbers corresponding to your access point. # Specify Job duration category as \"Medium\" (expected runtime <10 hr) or \"Long\" (expected runtime <20 hr). +JobDurationCategory = \u201cMedium\u201d # Tell HTCondor requirements (e.g., operating system) your job needs, # what amount of compute resources each job will need on the computer where it runs. requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_cpus = 1 request_memory = 1GB request_disk = 5GB # Tell HTCondor to run 1 instance of our job: queue 1 Specifying Exact Dependency Versions \u00b6 An important part of improving reproducibility and consistency between runs is to ensure that you use the correct/expected versions of your dependencies. When you run a command like conda install numpy conda tries to install the most recent version of numpy For example, numpy version 1.22.3 was released on Mar 7, 2022. To install exactly this version of numpy, you would run conda install numpy=1.22.3 (the same works for pip if you replace = with == ). We recommend installing with an explicit version to make sure you have exactly the version of a package that you want. This is often called \u201cpinning\u201d or \u201clocking\u201d the version of the package. If you want a record of what is installed in your environment, or want to reproduce your environment on another computer, conda can create a file, usually called environment.yml , that describes the exact versions of all of the packages you have installed in an environment. This file can be re-used by a different conda command to recreate that exact environment on another computer. To create an environment.yml file from your currently-activated environment, run [alice@ap00]$ conda env export > environment.yml This environment.yml will pin the exact version of every dependency in your environment. This can sometimes be problematic if you are moving between platforms because a package version may not be available on some other platform, causing an \u201cunsatisfiable dependency\u201d or \u201cinconsistent environment\u201d error. A much less strict pinning is [alice@ap00]$ conda env export --from-history > environment.yml which only lists packages that you installed manually, and does not pin their versions unless you yourself pinned them during installation . If you need an intermediate solution, it is also possible to manually edit environment.yml files; see the conda environment documentation for more details about the format and what is possible. In general, exact environment specifications are simply not guaranteed to be transferable between platforms (e.g., between Windows and Linux). We strongly recommend using the strictest possible pinning available to you . To create an environment from an environment.yml file, run [alice@ap00]$ conda env create -f environment.yml By default, the name of the environment will be whatever the name of the source environment was; you can change the name by adding a -n \\ option to the conda env create command. If you use a source control system like git , we recommend checking your environment.yml file into source control and making sure to recreate it when you make changes to your environment. Putting your environment under source control gives you a way to track how it changes along with your own code. 
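For example, assuming your analysis code already lives in a git repository, the environment file can simply be committed alongside it: $ git add environment.yml $ git commit -m 'pin conda environment for OSPool jobs' 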
If you are developing software on your local computer for eventual use on the Open Science Pool, your workflow might look like this: Set up a conda environment for local development and install packages as desired (e.g., conda create -n science; conda activate science; conda install numpy ). Once you are ready to run on the Open Science Pool, create an environment.yml file from your local environment (e.g., conda env export > environment.yml ). Move your environment.yml file from your local computer to the submit machine and create an environment from it (e.g., conda env create -f environment.yml ), then pack it for use in your jobs, as per Create Software Package above. More information on conda environments can be found in their documentation .","title":"Conda with Tarballs"},{"location":"software_examples/other_languages_tools/conda-tarball/#conda-with-tarballs","text":"The Anaconda/Miniconda distribution of Python is a common tool for installing and managing Python-based software and other tools. There are two ways of using Conda on the OSPool: with a tarball as described in this guide, or by installing Conda inside a custom Apptainer/Singularity container . Either works well, but the container solution might be better if your Conda environment requires access to non-Python tools.","title":"Conda with Tarballs"},{"location":"software_examples/other_languages_tools/conda-tarball/#overview","text":"When should you use Miniconda as an installation method in OSG? Your software has specific conda-centric installation instructions. The above is true and the software has a lot of dependencies. You mainly use Python to do your work. Notes on terminology: conda is a Python package manager and package ecosystem that exists in parallel with pip and PyPI . Miniconda is a slim Python distribution, containing the minimum amount of packages necessary for a Python installation that can use conda. Anaconda is a pre-built scientific Python distribution based on Miniconda that has many useful scientific packages pre-installed. To create the smallest, most portable Python installation possible, we recommend starting with Miniconda and installing only the packages you actually require. To use a Miniconda installation for your jobs, create your installation environment on the access point and send a zipped version to your jobs.","title":"Overview"},{"location":"software_examples/other_languages_tools/conda-tarball/#install-miniconda-and-package-for-jobs","text":"In this approach, we will create an entire software installation inside Miniconda and then use a tool called conda pack to package it up for running jobs.","title":"Install Miniconda and Package for Jobs"},{"location":"software_examples/other_languages_tools/conda-tarball/#1-create-a-miniconda-installation","text":"After logging into your access point, download the latest Linux miniconda installer and run it. For example, [alice@ap00]$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh [alice@ap00]$ sh Miniconda3-latest-Linux-x86_64.sh Accept the license agreement and default options. At the end, you can choose whether or not to \u201cinitialize Miniconda3 by running conda init?\u201d - If you enter \"no\", you would then run the eval command listed by the installer to \u201cactivate\u201d Miniconda. If you choose \u201cno\u201d you\u2019ll want to save this command so that you can reactivate the Miniconda installation when needed in the future. 
- If you enter \"yes\", miniconda will edit your .bashrc file and PATH environment variable so that you do not need to define a path to Miniconda each time you log in. If you choose \"yes\", before proceeding, you must log off and close your terminal for these changes to go into effect. Once you close your terminal, you can reopen it, log in to your access point, and proceed with the rest of the instructions below.","title":"1. Create a Miniconda Installation"},{"location":"software_examples/other_languages_tools/conda-tarball/#2-create-a-conda-environment-with-your-packages","text":"(If you are using an environment.yml file as described later , you should instead create the environment from your environment.yml file. If you don\u2019t have an environment.yml file to work with, follow the install instructions in this section. We recommend switching to the environment.yml method of creating environments once you understand the \u201cmanual\u201d method presented here.) Make sure that you\u2019ve activated the base Miniconda environment if you haven\u2019t already. Your prompt should look like this: (base)[alice@ap00]$ To create an environment, use the conda create command and then activate the environment: (base)[alice@ap00]$ conda create -n env-name (base)[alice@ap00]$ conda activate env-name Then, run the conda install command to install the different packages and software you want to include in the installation. How this should look is often listed in the installation examples for software (e.g. Qiime2 , Pytorch ). (env-name)[alice@ap00]$ conda install pkg1 pkg2 Some Conda packages are only available via specific Conda channels which serve as repositories for hosting and managing packages. If Conda is unable to locate the requested packages using the example above, you may need to have Conda search other channels. More detail are available at https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/channels.html. Packages may also be installed via pip , but you should only do this when there is no conda package available. Once everything is installed, deactivate the environment to go back to the Miniconda \u201cbase\u201d environment. (env-name)[alice@ap00]$ conda deactivate For example, if you wanted to create an installation with pandas and matplotlib and call the environment py-data-sci , you would use this sequence of commands: (base)[alice@ap00]$ conda create -n py-data-sci (base)[alice@ap00]$ conda activate py-data-sci (py-data-sci)[alice@ap00]$ conda install pandas matplotlib (py-data-sci)[alice@ap00]$ conda deactivate (base)[alice@ap00]$","title":"2. Create a conda \"Environment\" With Your Packages"},{"location":"software_examples/other_languages_tools/conda-tarball/#more-about-miniconda","text":"See the official conda documentation for more information on creating and managing environments with conda .","title":"More About Miniconda"},{"location":"software_examples/other_languages_tools/conda-tarball/#3-create-software-package","text":"Make sure that your job\u2019s Miniconda environment is created, but deactivated, so that you\u2019re in the \u201cbase\u201d Miniconda environment: (base)[alice@ap00]$ Then, run this command to install the conda pack tool: (base)[alice@ap00]$ conda install -c conda-forge conda-pack Enter y when it asks you to install. 
Finally, use conda pack to create a zipped tar.gz file of your environment (substitute the name of your conda environment where you see env-name ), set the proper permissions for this file using chmod , and check the size of the final tarball: (base)[alice@ap00]$ conda pack -n env-name (base)[alice@ap00]$ chmod 644 env-name.tar.gz (base)[alice@ap00]$ ls -sh env-name.tar.gz When this step finishes, you should see a file in your current directory named env-name.tar.gz .","title":"3. Create Software Package"},{"location":"software_examples/other_languages_tools/conda-tarball/#4-check-size-of-conda-environment-tar-archive","text":"The tar archive, env-name.tar.gz , created in the previous step will be used as input for subsequent job submission. As with all job input files, you should check the size of this Conda environment file. If >1GB in size, you should move the file to either your /public or /protected folder, and transfer it to/from jobs using the osdf:/// link, as described in Overview: Data Staging and Transfer to Jobs . This is the most efficient way to transfer large files to/from jobs.","title":"4. Check Size of Conda Environment Tar Archive"},{"location":"software_examples/other_languages_tools/conda-tarball/#5-create-a-job-executable","text":"The job will need to go through a few steps to use this \u201cpacked\u201d conda environment; first, setting the PATH , then unzipping the environment, then activating it, and finally running whatever program you like. The script below is an example of what is needed (customize as indicated to match your choices above). For future reference, let's call this executable science_with_conda.sh . #!/bin/bash # File Name: science_with_conda.sh # have job exit if any command returns with non-zero exit status (aka failure) set -e # replace env-name on the right hand side of this line with the name of your conda environment ENVNAME=env-name # if you need the environment directory to be named something other than the environment name, change this line ENVDIR=$ENVNAME # these lines handle setting up the environment; you shouldn't have to modify them export PATH mkdir $ENVDIR tar -xzf $ENVNAME.tar.gz -C $ENVDIR . $ENVDIR/bin/activate # modify this line to run your desired Python script and any other work you need to do python3 hello.py","title":"5. Create a Job Executable"},{"location":"software_examples/other_languages_tools/conda-tarball/#6-submit-jobs","text":"In your HTCondor submit file, make sure to have the following: Your executable should be the bash script you created in step 5 . Remember to transfer your Python script and the environment tar.gz file to the job. If the tar.gz file is larger than 1GB, please move the file to either your /protected or /public directories and use the osdf:/// file delivery mechanism as described above. An example submit file could look like: # File Name: conda_submission.sub # Specify your executable (single binary or a script that runs several # commands) and arguments to be passed to jobs. # $(Process) will be an integer number for each job, starting with \"0\" # and increasing for the relevant number of jobs. executable = science_with_conda.sh arguments = $(Process) # Specify the name of the log, standard error, and standard output (or \"screen output\") files. log = science_with_conda.log error = science_with_conda.err output = science_with_conda.out # Transfer any file needed for our job to complete.
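# In the line below, the osdf:/// URL delivers the large environment tarball via the OSDF
# (see step 4 above); smaller files such as hello.py can be listed by plain filename and
# HTCondor will transfer them directly from the submit directory.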
transfer_input_files = osdf:///ospool/apXX/data/alice/env-name.tar.gz, hello.py In the line above, the `XX` in `apXX` should be replaced with the numbers corresponding to your access point. # Specify Job duration category as \"Medium\" (expected runtime <10 hr) or \"Long\" (expected runtime <20 hr). +JobDurationCategory = \"Medium\" # Tell HTCondor requirements (e.g., operating system) your job needs, # what amount of compute resources each job will need on the computer where it runs. requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_cpus = 1 request_memory = 1GB request_disk = 5GB # Tell HTCondor to run 1 instance of our job: queue 1","title":"6. Submit Jobs"},{"location":"software_examples/other_languages_tools/conda-tarball/#specifying-exact-dependency-versions","text":"An important part of improving reproducibility and consistency between runs is to ensure that you use the correct/expected versions of your dependencies. When you run a command like conda install numpy conda tries to install the most recent version of numpy . For example, numpy version 1.22.3 was released on Mar 7, 2022. To install exactly this version of numpy, you would run conda install numpy=1.22.3 (the same works for pip if you replace = with == ). We recommend installing with an explicit version to make sure you have exactly the version of a package that you want. This is often called \u201cpinning\u201d or \u201clocking\u201d the version of the package. If you want a record of what is installed in your environment, or want to reproduce your environment on another computer, conda can create a file, usually called environment.yml , that describes the exact versions of all of the packages you have installed in an environment. This file can be re-used by a different conda command to recreate that exact environment on another computer. To create an environment.yml file from your currently-activated environment, run [alice@ap00]$ conda env export > environment.yml This environment.yml will pin the exact version of every dependency in your environment. This can sometimes be problematic if you are moving between platforms because a package version may not be available on some other platform, causing an \u201cunsatisfiable dependency\u201d or \u201cinconsistent environment\u201d error. A much less strict pinning is [alice@ap00]$ conda env export --from-history > environment.yml which only lists packages that you installed manually, and does not pin their versions unless you yourself pinned them during installation . If you need an intermediate solution, it is also possible to manually edit environment.yml files; see the conda environment documentation for more details about the format and what is possible. In general, exact environment specifications are simply not guaranteed to be transferable between platforms (e.g., between Windows and Linux). We strongly recommend using the strictest possible pinning available to you . To create an environment from an environment.yml file, run [alice@ap00]$ conda env create -f environment.yml By default, the name of the environment will be whatever the name of the source environment was; you can change the name by adding a -n option with your desired environment name to the conda env create command. If you use a source control system like git , we recommend checking your environment.yml file into source control and making sure to recreate it when you make changes to your environment. Putting your environment under source control gives you a way to track how it changes along with your own code.
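As a rough illustration, an environment.yml produced with conda env export --from-history for the py-data-sci example above might look something like this (the channel and any versions shown are illustrative and depend on your own installation): $ cat environment.yml name: py-data-sci channels: - defaults dependencies: - pandas - matplotlib A full conda env export would additionally list every dependency, pinned to its exact version and build.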
If you are developing software on your local computer for eventual use on the Open Science Pool, your workflow might look like this: Set up a conda environment for local development and install packages as desired (e.g., conda create -n science; conda activate science; conda install numpy ). Once you are ready to run on the Open Science Pool, create an environment.yml file from your local environment (e.g., conda env export > environment.yml ). Move your environment.yml file from your local computer to the submit machine and create an environment from it (e.g., conda env create -f environment.yml ), then pack it for use in your jobs, as per Create Software Package above. More information on conda environments can be found in their documentation .","title":"Specifying Exact Dependency Versions"},{"location":"software_examples/other_languages_tools/java-on-osg/","text":"Using Java in Jobs \u00b6 Overview \u00b6 If your code uses Java via a .jar file, it is easy to bring along your own copy of the Java Development Kit (JDK) which allows you to run your .jar file anywhere on the Open Science Pool. Steps to Use Java in Jobs \u00b6 Get a copy of Java/JDK. You can access the the Java Development Kit (JDK) from the JDK website . First select the link to the JDK that is listed as \"Ready for Use\" and then download the Linux/x64 version of the tar.gz file using a Unix command such as wget from your /home directory. For example, $ wget https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_linux-x64_bin.tar.gz The downloaded file should end up in your home directory on the OSPool access point. Include Java in Input Files. Add the downloaded tar file to the transfer_input_files line of your submit file, along with the .jar file and any other input files the job needs: transfer_input_files = openjdk-17.0.1_linux-x64_bin.tar.gz, program.jar, other_input Setup Java inside the job. Write a script that unpacks the JDK tar file, sets the environment to find the java software, and then runs your program. This script will be your job\\'s executable. See this example for what the script should look like: #!/bin/bash # unzip the JDK tar -xzf openjdk-17.0.1_linux-x64_bin.tar.gz # Add the unzipped JDK folder to the environment export PATH=$PWD/jdk-17.0.1/bin:$PATH export JAVA_HOME=$PWD/jdk-17.0.1 # run your .jar file java -jar program.jar Note that the exact name of the unzipped JDK folder and the JDK tar.gz file will vary depending on the version you downloaded. You should unzip the JDK tar.gz file in your home directory to find out the correct directory name to add to the script.","title":"Using Java in Jobs "},{"location":"software_examples/other_languages_tools/java-on-osg/#using-java-in-jobs","text":"","title":"Using Java in Jobs"},{"location":"software_examples/other_languages_tools/java-on-osg/#overview","text":"If your code uses Java via a .jar file, it is easy to bring along your own copy of the Java Development Kit (JDK) which allows you to run your .jar file anywhere on the Open Science Pool.","title":"Overview"},{"location":"software_examples/other_languages_tools/java-on-osg/#steps-to-use-java-in-jobs","text":"Get a copy of Java/JDK. You can access the the Java Development Kit (JDK) from the JDK website . First select the link to the JDK that is listed as \"Ready for Use\" and then download the Linux/x64 version of the tar.gz file using a Unix command such as wget from your /home directory. 
For example, $ wget https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_linux-x64_bin.tar.gz The downloaded file should end up in your home directory on the OSPool access point. Include Java in Input Files. Add the downloaded tar file to the transfer_input_files line of your submit file, along with the .jar file and any other input files the job needs: transfer_input_files = openjdk-17.0.1_linux-x64_bin.tar.gz, program.jar, other_input Setup Java inside the job. Write a script that unpacks the JDK tar file, sets the environment to find the java software, and then runs your program. This script will be your job's executable. See this example for what the script should look like: #!/bin/bash # unzip the JDK tar -xzf openjdk-17.0.1_linux-x64_bin.tar.gz # Add the unzipped JDK folder to the environment export PATH=$PWD/jdk-17.0.1/bin:$PATH export JAVA_HOME=$PWD/jdk-17.0.1 # run your .jar file java -jar program.jar Note that the exact name of the unzipped JDK folder and the JDK tar.gz file will vary depending on the version you downloaded. You should unzip the JDK tar.gz file in your home directory to find out the correct directory name to add to the script.","title":"Steps to Use Java in Jobs"},{"location":"software_examples/other_languages_tools/julia-on-osg/","text":"Using Julia on the OSPool \u00b6 Overview \u00b6 This guide provides an introduction to running Julia code on the Open Science Pool. The Quickstart Instructions provide an outline of job submission. The following sections provide more details about installing Julia packages ( Install Julia Packages ) and creating a complete job submission ( Submit Julia Jobs ). This guide assumes that you have a script written in Julia and can identify the additional Julia packages needed to run the script. If you are using many Julia packages or have other software dependencies as part of your job, you may want to manage your software via a container instead of using the tar.gz file method described in this guide. The Research Computing Facilitation (RCF) team maintains a Julia container that can be used as a starting point for creating a customized container with added packages. See our Docker and Singularity/Apptainer Guide for more details. Quickstart Instructions \u00b6 Download the precompiled Julia software from https://julialang.org/downloads/ . You will need the 64-bit, tarball compiled for general use on a Linux x86 system. The file name will resemble something like julia-#.#.#-linux-x86_64.tar.gz . Tip: use wget to download directly to your /home directory on the access point, OR use transfer_input_files = url in your HTCondor submit files. If you need additional Julia packages, install them on the access point; otherwise, skip to the next step. For more details, see the section on installing Julia packages below: Installing Julia Packages Submit a job that executes a Julia script using the Julia precompiled binary with base Julia and Standard Library, via a shell script like the following as the job's executable: #!/bin/bash # extract Julia tar.gz file tar -xzf julia-#.#.#-linux-x86_64.tar.gz # add Julia binary to PATH export PATH=$_CONDOR_SCRATCH_DIR/julia-#.#.#/bin:$PATH # run Julia script julia my-script.jl For more details on the job submission, see the section below: Submit Julia Jobs Install Julia Packages \u00b6 If your work requires additional Julia packages, you will need to perform a one-time installation of these packages within a Julia project.
A copy of the project can then be saved for use in subsequent job submissions. For more details, please see Julia's documentation at Julia Pkg.jl . Download Julia and set up a \"project\" \u00b6 If you have not already downloaded a copy of Julia, download the precompiled Julia software from https://julialang.org/downloads/ . You will need the 64-bit, tarball compiled for general use on a Linux x86 system. The file name will resemble something like julia-#.#.#-linux-x86_64.tar.gz . We will need a copy of the original tar.gz file for running jobs, but to install packages, we also need an unpacked version of the software. Run the following commands to extract the Julia software and add Julia to your PATH : $ tar -xzf julia-#.#.#-linux-x86_64.tar.gz $ export PATH=$PWD/julia-#.#.#/bin:$PATH After these steps, you should be able to run Julia from the command line, e.g. $ julia --version Now create a project directory to install your packages (we've called it my-project/ below) and tell Julia its name: $ mkdir my-project $ export JULIA_DEPOT_PATH=$PWD/my-project If you already have a directory with Julia packages on the login node, you can add to it by skipping the mkdir step above and going straight to setting the JULIA_DEPOT_PATH variable. You can choose whatever name to use for this directory -- if you have different projects that you use for different jobs, you could use a more descriptive name than \"my-project\". Install Packages \u00b6 We will now use Julia to install any needed packages to the project directory we created in the previous step. Open Julia with the --project option set to the project directory: $ julia --project=my-project Once you've started up the Julia REPL (interpreter), start the Pkg REPL, used to install packages, by typing ] . Then install and test packages by using Julia's add Package syntax. _ _ _ _(_)_ | Documentation: https://docs.julialang.org (_) | (_) (_) | _ _ _| |_ __ _ | Type \"?\" for help, \"]?\" for Pkg help. | | | | | | |/ _` | | | | |_| | | | (_| | | Version 1.0.5 (2019-09-09) _/ |\\__'_|_|_|\\__'_| | Official https://julialang.org/ release |__/ | julia> ] (my-project) pkg> add Package (my-project) pkg> test Package If you have multiple packages to install they can be combined into a single command, e.g. (my-project) pkg> add Package1 Package2 Package3 . If you encounter issues getting packages to install successfully, please contact us at support@osg-htc.org Once you are done, you can exit the Pkg REPL by typing the DELETE key and then typing exit() (my-project) pkg> julia> exit() Your packages will have been installed to the my_project directory; we want to compress this folder so that it is easier to copy to jobs. $ tar -czf my-project.tar.gz my-project/ Submit Julia Jobs \u00b6 To submit a job that runs a Julia script, create a bash script and HTCondor submit file following the examples in this section. These example assume that you have downloaded a copy of Julia for Linux as a tar.gz file and if using packages, you have gone through the steps above to install them and create an additional tar.gz file of the installed packages. Create Executable Bash Script \u00b6 Your job will use a bash script as the HTCondor executable . This script will contain all the steps needed to unpack the Julia binaries and execute your Julia script ( script.jl below). 
What follows are two example bash scripts, one which can be used to execute a script with base Julia only, and one that will use packages you installed to a project directory (see Install Julia Packages ). Example Bash Script For Base Julia Only \u00b6 If your Julia script can run without additional packages (other than base Julia and the Julia Standard library) use the example script directly below. #!/bin/bash # julia-job.sh # extract Julia tar.gz file tar -xzf julia-#.#.#-linux-x86_64.tar.gz # add Julia binary to PATH export PATH=$_CONDOR_SCRATCH_DIR/julia-#.#.#/bin:$PATH # run Julia script julia script.jl Example Bash Script For Julia With Installed Packages \u00b6 #!/bin/bash # julia-job.sh # extract Julia tar.gz file and project tar.gz file tar -xzf julia-#.#.#-linux-x86_64.tar.gz tar -xzf my-project.tar.gz # add Julia binary to PATH export PATH=$_CONDOR_SCRATCH_DIR/julia-#.#.#/bin:$PATH # add Julia packages to DEPOT variable export JULIA_DEPOT_PATH=$_CONDOR_SCRATCH_DIR/my-project # run Julia script julia --project=my-project script.jl Create HTCondor Submit File \u00b6 After creating a bash script named julia-job.sh to run Julia, then create a submit file to submit the job. More details about setting up a submit file, including a submit file template, can be found in our quickstart guide: Quickstart Tutorial # File Name = julia-job.sub executable = julia-job.sh transfer_input_files = julia-#.#.#-linux-x86_64.tar.gz, script.jl should_transfer_files = Yes when_to_transfer_output = ON_EXIT output = job.$(Cluster).$(Process).out error = job.$(Cluster).$(Process).error log = job.$(Cluster).$(Process).log +JobDurationCategory = \"Medium\" requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_cpus = 1 request_memory = 2GB request_disk = 2GB queue 1 If your Julia script needs to use packages installed for a project, be sure to include my-project.tar.gz as an input file in julia-job.sub . For project tarballs that are <1 GB, you can follow the below example: transfer_input_files = julia-#.#.#-linux-x86_64.tar.gz, script.jl, my-project.tar.gz Modify the CPU/memory request lines to match what is needed by the job. Test a few jobs for disk space/memory usage in order to make sure your requests for a large batch are accurate! Disk space and memory usage can be found in the log file after the job completes.","title":"Using Julia on the OSPool "},{"location":"software_examples/other_languages_tools/julia-on-osg/#using-julia-on-the-ospool","text":"","title":"Using Julia on the OSPool"},{"location":"software_examples/other_languages_tools/julia-on-osg/#overview","text":"This guide provides an introduction to running Julia code on the Open Science Pool. The Quickstart Instructions provide an outline of job submission. The following sections provide more details about installing Julia packages ( Install Julia Packages ) and creating a complete job submission ( Submit Julia Jobs ). This guide assumes that you have a script written in Julia and can identify the additional Julia packages needed to run the script. If you are using many Julia packages or have other software dependencies as part of your job, you may want to manage your software via a container instead of using the tar.gz file method described in this guide. The Research Computing Facilitation (RCF) team maintains a Julia container that can be used as a starting point for creating a customized container with added packages. 
See our Docker and Singularity/Apptainer Guide for more details.","title":"Overview"},{"location":"software_examples/other_languages_tools/julia-on-osg/#quickstart-instructions","text":"Download the precompiled Julia software from https://julialang.org/downloads/ . You will need the 64-bit, tarball compiled for general use on a Linux x86 system. The file name will resemble something like julia-#.#.#-linux-x86_64.tar.gz . Tip: use wget to download directly to your /home directory on the access point, OR use transfer_input_files = url in your HTCondor submit files. Install your Julia packages on the access point, else skip to the next step. For more details, see the section on installing Julia packages below: Installing Julia Packages Submit a job that executes a Julia script using the Julia precompiled binary with base Julia and Standard Library, via a shell script like the following as the job's executable: #!/bin/bash # extract Julia tar.gz file tar -xzf julia-#.#.#-linux-x86_64.tar.gz # add Julia binary to PATH export PATH=$_CONDOR_SCRATCH_DIR/julia-#-#-#/bin:$PATH # run Julia script julia my-script.jl For more details on the job submission, see the section below: Submit Julia Jobs","title":"Quickstart Instructions"},{"location":"software_examples/other_languages_tools/julia-on-osg/#install-julia-packages","text":"If your work requires additional Julia packages, you will need to peform a one-time installation of these packages within a Julia project. A copy of the project can then be saved for use in subsequent job submissions. For more details, please see Julia's documentation at Julia Pkg.jl .","title":"Install Julia Packages"},{"location":"software_examples/other_languages_tools/julia-on-osg/#download-julia-and-set-up-a-project","text":"If you have not already downloaded a copy of Julia, download the precompiled Julia software from https://julialang.org/downloads/ . You will need the 64-bit, tarball compiled for general use on a Linux x86 system. The file name will resemble something like julia-#.#.#-linux-x86_64.tar.gz . We will need a copy of the original tar.gz file for running jobs, but to install packages, we also need an unpacked version of the software. Run the following commands to extract the Julia software and add Julia to your PATH : $ tar -xzf julia-#.#.#-linux-x86_64.tar.gz $ export PATH=$PWD/julia-#.#.#/bin:$PATH After these steps, you should be able to run Julia from the command line, e.g. $ julia --version Now create a project directory to install your packages (we've called it my-project/ below) and tell Julia its name: $ mkdir my-project $ export JULIA_DEPOT_PATH=$PWD/my-project If you already have a directory with Julia packages on the login node, you can add to it by skipping the mkdir step above and going straight to setting the JULIA_DEPOT_PATH variable. You can choose whatever name to use for this directory -- if you have different projects that you use for different jobs, you could use a more descriptive name than \"my-project\".","title":"Download Julia and set up a \"project\""},{"location":"software_examples/other_languages_tools/julia-on-osg/#install-packages","text":"We will now use Julia to install any needed packages to the project directory we created in the previous step. Open Julia with the --project option set to the project directory: $ julia --project=my-project Once you've started up the Julia REPL (interpreter), start the Pkg REPL, used to install packages, by typing ] . Then install and test packages by using Julia's add Package syntax. 
_ _ _ _(_)_ | Documentation: https://docs.julialang.org (_) | (_) (_) | _ _ _| |_ __ _ | Type \"?\" for help, \"]?\" for Pkg help. | | | | | | |/ _` | | | | |_| | | | (_| | | Version 1.0.5 (2019-09-09) _/ |\\__'_|_|_|\\__'_| | Official https://julialang.org/ release |__/ | julia> ] (my-project) pkg> add Package (my-project) pkg> test Package If you have multiple packages to install they can be combined into a single command, e.g. (my-project) pkg> add Package1 Package2 Package3 . If you encounter issues getting packages to install successfully, please contact us at support@osg-htc.org Once you are done, you can exit the Pkg REPL by typing the DELETE key and then typing exit() (my-project) pkg> julia> exit() Your packages will have been installed to the my_project directory; we want to compress this folder so that it is easier to copy to jobs. $ tar -czf my-project.tar.gz my-project/","title":"Install Packages"},{"location":"software_examples/other_languages_tools/julia-on-osg/#submit-julia-jobs","text":"To submit a job that runs a Julia script, create a bash script and HTCondor submit file following the examples in this section. These example assume that you have downloaded a copy of Julia for Linux as a tar.gz file and if using packages, you have gone through the steps above to install them and create an additional tar.gz file of the installed packages.","title":"Submit Julia Jobs"},{"location":"software_examples/other_languages_tools/julia-on-osg/#create-executable-bash-script","text":"Your job will use a bash script as the HTCondor executable . This script will contain all the steps needed to unpack the Julia binaries and execute your Julia script ( script.jl below). What follows are two example bash scripts, one which can be used to execute a script with base Julia only, and one that will use packages you installed to a project directory (see Install Julia Packages ).","title":"Create Executable Bash Script"},{"location":"software_examples/other_languages_tools/julia-on-osg/#example-bash-script-for-base-julia-only","text":"If your Julia script can run without additional packages (other than base Julia and the Julia Standard library) use the example script directly below. #!/bin/bash # julia-job.sh # extract Julia tar.gz file tar -xzf julia-#.#.#-linux-x86_64.tar.gz # add Julia binary to PATH export PATH=$_CONDOR_SCRATCH_DIR/julia-#.#.#/bin:$PATH # run Julia script julia script.jl","title":"Example Bash Script For Base Julia Only"},{"location":"software_examples/other_languages_tools/julia-on-osg/#example-bash-script-for-julia-with-installed-packages","text":"#!/bin/bash # julia-job.sh # extract Julia tar.gz file and project tar.gz file tar -xzf julia-#.#.#-linux-x86_64.tar.gz tar -xzf my-project.tar.gz # add Julia binary to PATH export PATH=$_CONDOR_SCRATCH_DIR/julia-#.#.#/bin:$PATH # add Julia packages to DEPOT variable export JULIA_DEPOT_PATH=$_CONDOR_SCRATCH_DIR/my-project # run Julia script julia --project=my-project script.jl","title":"Example Bash Script For Julia With Installed Packages"},{"location":"software_examples/other_languages_tools/julia-on-osg/#create-htcondor-submit-file","text":"After creating a bash script named julia-job.sh to run Julia, then create a submit file to submit the job. 
More details about setting up a submit file, including a submit file template, can be found in our quickstart guide: Quickstart Tutorial # File Name = julia-job.sub executable = julia-job.sh transfer_input_files = julia-#.#.#-linux-x86_64.tar.gz, script.jl should_transfer_files = Yes when_to_transfer_output = ON_EXIT output = job.$(Cluster).$(Process).out error = job.$(Cluster).$(Process).error log = job.$(Cluster).$(Process).log +JobDurationCategory = \"Medium\" requirements = (OSGVO_OS_STRING == \"RHEL 9\") request_cpus = 1 request_memory = 2GB request_disk = 2GB queue 1 If your Julia script needs to use packages installed for a project, be sure to include my-project.tar.gz as an input file in julia-job.sub . For project tarballs that are <1 GB, you can follow the below example: transfer_input_files = julia-#.#.#-linux-x86_64.tar.gz, script.jl, my-project.tar.gz Modify the CPU/memory request lines to match what is needed by the job. Test a few jobs for disk space/memory usage in order to make sure your requests for a large batch are accurate! Disk space and memory usage can be found in the log file after the job completes.","title":"Create HTCondor Submit File"},{"location":"software_examples/python/manage-python-packages/","text":"Run Python Scripts on the OSPool \u00b6 Overview \u00b6 This guide will show you two examples of how to run jobs that use Python in the Open Science Pool. The first example will demonstrate how to submit a job that uses base Python. The second example will demonstrate the workflow for jobs that use specific Python packages, including how to install a custom set of Python packages to your home directory and how to add them to a Python job submission. Before getting started, you should know which Python packages you need to run your job. Running Base Python on the Open Science Pool \u00b6 Create a bash script to run Python \u00b6 To submit jobs that use a module to run base Python, first create a bash executable - for this example we'll call it run_py.sh - which will run our Python script called myscript.py . For example, run_py.sh : #!/bin/bash # Run the Python script python3 myscript.py If you need to use Python 2, replace the python3 above with python2 . Create an HTCondor submit file \u00b6 In order to submit run_py.sh as part of a job, we need to create an HTCondor submit file. This should include the following: run_py.sh specified as the executable use transfer_input_files to bring our Python script myscript.py to wherever the job runs include a standard container image that has Python installed. All together, the submit file will look something like this: universe = vanilla +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-ubuntu-20.04:latest\" executable = run_py.sh transfer_input_files = myscript.py log = job.log output = job.out error = job.error +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 2GB request_disk = 2GB queue 1 Once everything is set up, the job can be submitted in the usual way, by running the condor_submit command with the name of the submit file. Running Python Jobs That Use Additional Packages \u00b6 It's likely that you'll need additional Python packages that are not present in the base Python installations. This portion of the guide describes how to install your packages to a custom directory and then include them as part of your jobs. 
Install Python packages \u00b6 While connected to your login node, start the base Singularity container that has a copy of Python inside: $ singularity shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-ubuntu-20.04:latest Next, create a directory for your files and set the PYTHONPATH Singularity> mkdir my_env Singularity> export PYTHONPATH=$PWD/my_env You can swap out my_env for a more descriptive name like scipy or word-analysis . Now we can use pip to install Python packages. Singularity> pip3 install --target=$PWD/my_env numpy ......some download message... Installing collected packages: numpy Installing collected packages: numpy Successfully installed numpy-1.16.3 Install each package that you need for your job using the pip install command. If you would like to test the package installation, you can run the python3 command and then try importing the packages you just installed. To exit the Python console, type \"quit()\" Once you are done, you can leave the virtual environment: Singularity> exit All of the packages that were just installed should be contained in a sub-directory of the my_env directory. To use these packages in a job, the entire my_env directory will be transfered as a tar.gz file. So our final step is to compress the directory, as follows: $ tar -czf my_env.tar.gz my_env Create executable script to use installed packages \u00b6 In addition to loading the appropriate Python module, we will need to add a few steps to our bash executable to set-up the virtual environment we just created. That will look something like this: #!/bin/bash # Unpack your envvironment (with your packages), and activate it tar -xzf my_env.tar.gz export PYTHONPATH=$PWD/my_env # Run the Python script python3 myscript.py Modify the HTCondor submit file to transfer Python packages \u00b6 The submit file for this job will be similar to the base Python job submit file shown above with one addition - we need to include my_env.tar.gz in the list of files specified by transfer_input_files . As an example: universe = vanilla +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-ubuntu-20.04:latest\" executable = run_py.sh transfer_input_files = myscript.py, my_env.tar.gz log = job.log output = job.out error = job.error +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 2GB request_disk = 2GB queue 1 Other Considerations \u00b6 This guide mainly focuses on the nuts and bolts of running Python, but it's important to remember that additional files needed for your jobs (input data, setting files, etc.) need to be transferred with the job as well. See our Introduction to Data Management on OSG for details on the different ways to deliver inputs to your jobs. When you've prepared a real job submission, make sure to run a test job and then check the log file for disk and memory usage; if you're using significantly more or less than what you requested, make sure you adjust your requests. Getting Help \u00b6 For assistance or questions, please email the OSG Research Facilitation team at support@osg-htc.org or visit the help desk and community forums .","title":"Run Python Scripts on the OSPool "},{"location":"software_examples/python/manage-python-packages/#run-python-scripts-on-the-ospool","text":"","title":"Run Python Scripts on the OSPool"},{"location":"software_examples/python/manage-python-packages/#overview","text":"This guide will show you two examples of how to run jobs that use Python in the Open Science Pool. 
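For instance, if you installed numpy as shown above, a quick check could look like the following (the version number printed is just the one from the earlier example output and will differ for you): Singularity> python3 >>> import numpy >>> numpy.__version__ '1.16.3' If the import succeeds without an error, the packages installed with --target are being found through PYTHONPATH as expected.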
The first example will demonstrate how to submit a job that uses base Python. The second example will demonstrate the workflow for jobs that use specific Python packages, including how to install a custom set of Python packages to your home directory and how to add them to a Python job submission. Before getting started, you should know which Python packages you need to run your job.","title":"Overview"},{"location":"software_examples/python/manage-python-packages/#running-base-python-on-the-open-science-pool","text":"","title":"Running Base Python on the Open Science Pool"},{"location":"software_examples/python/manage-python-packages/#create-a-bash-script-to-run-python","text":"To submit jobs that use a module to run base Python, first create a bash executable - for this example we'll call it run_py.sh - which will run our Python script called myscript.py . For example, run_py.sh : #!/bin/bash # Run the Python script python3 myscript.py If you need to use Python 2, replace the python3 above with python2 .","title":"Create a bash script to run Python"},{"location":"software_examples/python/manage-python-packages/#create-an-htcondor-submit-file","text":"In order to submit run_py.sh as part of a job, we need to create an HTCondor submit file. This should include the following: run_py.sh specified as the executable use transfer_input_files to bring our Python script myscript.py to wherever the job runs include a standard container image that has Python installed. All together, the submit file will look something like this: universe = vanilla +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-ubuntu-20.04:latest\" executable = run_py.sh transfer_input_files = myscript.py log = job.log output = job.out error = job.error +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 2GB request_disk = 2GB queue 1 Once everything is set up, the job can be submitted in the usual way, by running the condor_submit command with the name of the submit file.","title":"Create an HTCondor submit file"},{"location":"software_examples/python/manage-python-packages/#running-python-jobs-that-use-additional-packages","text":"It's likely that you'll need additional Python packages that are not present in the base Python installations. This portion of the guide describes how to install your packages to a custom directory and then include them as part of your jobs.","title":"Running Python Jobs That Use Additional Packages"},{"location":"software_examples/python/manage-python-packages/#install-python-packages","text":"While connected to your login node, start the base Singularity container that has a copy of Python inside: $ singularity shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-ubuntu-20.04:latest Next, create a directory for your files and set the PYTHONPATH Singularity> mkdir my_env Singularity> export PYTHONPATH=$PWD/my_env You can swap out my_env for a more descriptive name like scipy or word-analysis . Now we can use pip to install Python packages. Singularity> pip3 install --target=$PWD/my_env numpy ......some download message... Installing collected packages: numpy Installing collected packages: numpy Successfully installed numpy-1.16.3 Install each package that you need for your job using the pip install command. If you would like to test the package installation, you can run the python3 command and then try importing the packages you just installed. 
To exit the Python console, type \"quit()\" Once you are done, you can leave the virtual environment: Singularity> exit All of the packages that were just installed should be contained in a sub-directory of the my_env directory. To use these packages in a job, the entire my_env directory will be transfered as a tar.gz file. So our final step is to compress the directory, as follows: $ tar -czf my_env.tar.gz my_env","title":"Install Python packages"},{"location":"software_examples/python/manage-python-packages/#create-executable-script-to-use-installed-packages","text":"In addition to loading the appropriate Python module, we will need to add a few steps to our bash executable to set-up the virtual environment we just created. That will look something like this: #!/bin/bash # Unpack your envvironment (with your packages), and activate it tar -xzf my_env.tar.gz export PYTHONPATH=$PWD/my_env # Run the Python script python3 myscript.py","title":"Create executable script to use installed packages"},{"location":"software_examples/python/manage-python-packages/#modify-the-htcondor-submit-file-to-transfer-python-packages","text":"The submit file for this job will be similar to the base Python job submit file shown above with one addition - we need to include my_env.tar.gz in the list of files specified by transfer_input_files . As an example: universe = vanilla +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-ubuntu-20.04:latest\" executable = run_py.sh transfer_input_files = myscript.py, my_env.tar.gz log = job.log output = job.out error = job.error +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 2GB request_disk = 2GB queue 1","title":"Modify the HTCondor submit file to transfer Python packages"},{"location":"software_examples/python/manage-python-packages/#other-considerations","text":"This guide mainly focuses on the nuts and bolts of running Python, but it's important to remember that additional files needed for your jobs (input data, setting files, etc.) need to be transferred with the job as well. See our Introduction to Data Management on OSG for details on the different ways to deliver inputs to your jobs. When you've prepared a real job submission, make sure to run a test job and then check the log file for disk and memory usage; if you're using significantly more or less than what you requested, make sure you adjust your requests.","title":"Other Considerations"},{"location":"software_examples/python/manage-python-packages/#getting-help","text":"For assistance or questions, please email the OSG Research Facilitation team at support@osg-htc.org or visit the help desk and community forums .","title":"Getting Help"},{"location":"software_examples/python/tutorial-ScalingUp-Python/","text":"Scaling Up With HTCondor\u2019s Queue Command \u00b6 Many large scale computations require the ability to process multiple jobs concurrently. Consider the extensive sampling done for a multi-dimensional Monte Carlo integration, parameter sweep for a given model or molecular dynamics simulation with several initial conditions. These calculations require submitting many jobs. About a million CPU hours per day are available to OSG users on an opportunistic basis. Learning how to scale up and control large numbers of jobs is essential to realize the full potential of distributed high throughput computing on the OSG. The HTCondor's queue command can run multiple jobs from a single job description file. 
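For instance, ending a submit file with queue 10 creates ten identical jobs in a single cluster, while queue x_low x_high y_low y_high from job_values.txt creates one job per row of values in job_values.txt , assigning each value to a named variable; a third form, queue arguments from job_values.txt , passes each row directly as the job's arguments. All three forms are demonstrated in the examples below.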
In this tutorial, we will see how to scale up the calculations for a simple python example using HTCondor\u2019s queue command. Once we understand the basic HTCondor script to run a single job, it is easy to scale up. To download the materials for this tutorial, use the command $ git clone https://github.com/OSGConnect/tutorial-ScalingUp-Python Inside the tutorial-ScalingUp-Python directory, all the required files are available. This includes the sample python program, job description file and executable files. Move into the directory with $ cd tutorial-ScalingUp-Python Python script and the optimization function \u00b6 Let us take a look at our objective function that we are trying to optimize. f = (1 - x)**2 + (y - x**2)**2 This is a two dimensional Rosenbrock function. Clearly, the minimum is located at (1,1). The Rosenbrock function is one of the test functions used to test the robustness of an optimization method. Here, we are going to use the brute force optimization approach to evaluate the two dimensional Rosenbrock function on grids of points. The boundary values for the grid points are randomly assigned inside the python script. However, these default values may be replaced by user supplied values. To run the calculations with the random boundary values, the script is executed without any argument: python3 rosen_brock_brute_opt.py To run the calculations with the user supplied values, the script is executed with input arguments: python3 rosen_brock_brute_opt.py x_low x_high y_low y_high where x_low and x_high are low and high values along x direction, and y_low and y_high are the low and high values along the y direction. For example, the command python3 rosen_brock_brute_opt.py -3 3 -2 2 sets the boundary of x direction to (-3, 3) and the boundary of y direction to (-2, 2). The directory Example1 runs the python script with the default random values. The directories Example2 and Example3 deal with supplying the boundary values as input arguments. The python script requires the SciPy package, which is typically not included in standard installations of Python 3. Therefore, we will use a container that has Python 3 and SciPy installed. If you'd like to test the script, you can do so with apptainer shell /cvmfs/singularity.opensciencegrid.org/htc/rocky:8 and then run one of the above commands. Submitting Jobs Concurrently \u00b6 Now let us take a look at the job description file. cd Example1 cat ScalingUp-PythonCals.submit If we want to submit several jobs, we need to track log, out and error files for each job. An easy way to do this is to add the $(Cluster) and $(Process) variables to the file names. You can see this below in the names given to the standard output, standard error and HTCondor log files: +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/htc/rocky:8\" executable = ../rosen_brock_brute_opt.py log = Log/job.$(Cluster).$(Process).log output = Log/job.$(Cluster).$(Process).out error = Log/job.$(Cluster).$(Process).err +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1 GB request_disk = 1 GB queue 10 Note the queue 10 . This tells Condor to queue 10 copies of this job as one cluster. Let us submit the above job $ condor_submit ScalingUp-PythonCals.submit Submitting job(s).......... 10 job(s) submitted to cluster 329837. Apply your condor_q knowledge to see this job progress. After all jobs finished, execute the post_script.sh script to sort the results.
./post_script.sh Note that all ten jobs will have run with random arguments because we did not supply any from the submit file. What if we wanted to supply those arguments so that we could reproduce this analysis if needed? The next example shows how to do this. Providing Different Inputs to Jobs \u00b6 In the previous example, we did not pass any argument to the program and the program generated random boundary conditions. If we have some guess about what could be a better boundary condition, it is a good idea to supply the boundary condition as arguments. It is possible to use a single file to supply multiple arguments. We can take the job description file from the previous example, and modify it to include arguments. The modified job description file is available in the Example2 directory. Take a look at the job description file ScalingUp-PythonCals.submit . $ cd ../Example2 $ cat ScalingUp-PythonCals.submit +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/htc/rocky:8\" executable = ../rosen_brock_brute_opt.py arguments = $(x_low) $(x_high) $(y_low) $(y_high) log = Log/job.$(Cluster).$(Process).log output = Log/job.$(Cluster).$(Process).out error = Log/job.$(Cluster).$(Process).err +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1 GB request_disk = 1 GB queue x_low x_high y_low y_high from job_values.txt A major part of the job description file looks same as the previous example. The main difference is the addition of arguments keyword, which looks like this: arguments = $(x_low) $(x_high) $(y_low) $(y_high) The given arguments $(x_low) , $(x_high) , etc. are actually variables that represent the values we want to use. These values are set in the queue command at the end of the file: queue x_low x_high y_low y_high from job_values.txt Take a look at job_values.txt: $ cat job_values.txt -9 9 -9 9 -8 8 -8 8 -7 7 -7 7 -6 6 -6 6 -5 5 -5 5 -4 4 -4 4 -3 3 -3 3 -2 2 -2 2 -1 1 -1 1 The submit file's queue statement will read in this file and assign each value in a row to the four variables shown in the queue statement. Each row corresponds to the submission of a unique job with those four values. Let us submit the above job to see this: $ condor_submit ScalingUp-PythonCals.submit Submitting job(s).......... 9 job(s) submitted to cluster 329840. Apply your condor_q knowledge to see this job progress. After all jobs finished, execute the post_script.sh script to sort the results. ./post_process.sh Another Example of Different Inputs \u00b6 In the previous example, we split the input information into four variables that were included in the arguments line. However, we could have set the arguments line directly, without intermediate values. This is shown in Example 3: $ cd ../Example3 $ cat ScalingUp-PythonCals.submit +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/htc/rocky:8\" executable = ../rosen_brock_brute_opt.py log = Log/job.$(Cluster).$(Process).log output = Log/job.$(Cluster).$(Process).out error = Log/job.$(Cluster).$(Process).err +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1 GB request_disk = 1 GB queue arguments from job_values.txt Here, arguments has disappeared from the top of the file because we've included it in the queue statement at the end. The job_values.txt file has the same values as before; in this syntax, HTCondor will submit a job for each row of values and the job's arguments will be those four values. Let us submit the above job $ condor_submit ScalingUp-PythonCals.submit Submitting job(s).......... 
9 job(s) submitted to cluster 329839. Apply your condor_q and connect watch knowledge to see this job progress. After all jobs finished, execute the post_script.sh script to sort the results. ./post_process.sh","title":"Scaling Up With HTCondor\u2019s Queue Command"},{"location":"software_examples/python/tutorial-ScalingUp-Python/#scaling-up-with-htcondors-queue-command","text":"Many large scale computations require the ability to process multiple jobs concurrently. Consider the extensive sampling done for a multi-dimensional Monte Carlo integration, parameter sweep for a given model or molecular dynamics simulation with several initial conditions. These calculations require submitting many jobs. About a million CPU hours per day are available to OSG users on an opportunistic basis. Learning how to scale up and control large numbers of jobs is essential to realize the full potential of distributed high throughput computing on the OSG. The HTCondor's queue command can run multiple jobs from a single job description file. In this tutorial, we will see how to scale up the calculations for a simple python example using the HTCondor\u2019s queue command. Once we understand the basic HTCondor script to run a single job, it is easy to scale up. To download the materials for this tutorial, use the command $ git clone https://github.com/OSGConnect/tutorial-ScalingUp-Python Inside the tutorial-ScalingUp-python directory, all the required files are available. This includes the sample python program, job description file and executable files. Move into the directory with $ cd tutorial-ScalingUp-Python","title":"Scaling Up With HTCondor\u2019s Queue Command"},{"location":"software_examples/python/tutorial-ScalingUp-Python/#python-script-and-the-optimization-function","text":"Let us take a look at our objective function that we are trying to optimize. f = (1 - x)**2 + (y - x**2)**2 This a two dimensional Rosenbrock function. Clearly, the minimum is located at (1,1). The Rosenbrock function is one of the test functions used to test the robustness of an optimization method. Here, we are going to use the brute force optimization approach to evaluate the two dimensional Rosenbrock function on grids of points. The boundary values for the grid points are randomly assigned inside the python script. However, these default values may be replaced by user supplied values. To run the calculations with the random boundary values, the script is executed without any argument: python3 rosen_brock_brute_opt.py To run the calculations with the user supplied values, the script is executed with input arguments: python3 rosen_brock_brute_opt.py x_low x_high y_low y_high where x_low and x_high are low and high values along x direction, and y_low and y_high are the low and high values along the y direction. For example, the boundary of x direction is (-3, 3) and the boundary of y direction is (-2, 3). python3 rosen_brock_brute_opt.py -3 3 -2 2 sets the boundary of x direction to (-3, 3) and the boundary of y direction to (-2, 3). The directory Example1 runs the python script with the default random values. The directories Example2 , and Example3 deal with supplying the boundary values as input arguments. The python script requires the SciPy package, which is typically not included in standard installations of Python 3. Therefore, we will use a container that has Python 3 and SciPy installed. 
If you'd like to test the script, you can do so with apptainer shell /cvmfs/singularity.opensciencegrid.org/htc/rocky:8 and then run one of the above commands.","title":"Python script and the optimization function"},{"location":"software_examples/python/tutorial-ScalingUp-Python/#submitting-jobs-concurrently","text":"Now let us take a look at job description file. cd Example1 cat ScalingUp-PythonCals.submit If we want to submit several jobs, we need to track log, out and error files for each job. An easy way to do this is to add the $(Cluster) and $(Process) variables to the file names. You can see this below in the names given to the standard output, standard error and HTCondor log files: +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/htc/rocky:8\" executable = ../rosen_brock_brute_opt.py log = Log/job.$(Cluster).$(Process).log output = Log/job.$(Cluster).$(Process).out error = Log/job.$(Cluster).$(Process).err +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1 GB request_disk = 1 GB queue 10 Note the queue 10 . This tells Condor to queue 10 copies of this job as one cluster. Let us submit the above job $ condor_submit ScalingUp-PythonCals.submit Submitting job(s).......... 10 job(s) submitted to cluster 329837. Apply your condor_q knowledge to see this job progress. After all jobs finished, execute the post_script.sh script to sort the results. ./post_script.sh Note that all ten jobs will have run with random arguments because we did not supply any from the submit file. What if we wanted to supply those arguments so that we could reproduce this analysis if needed? The next example shows how to do this.","title":"Submitting Jobs Concurrently"},{"location":"software_examples/python/tutorial-ScalingUp-Python/#providing-different-inputs-to-jobs","text":"In the previous example, we did not pass any argument to the program and the program generated random boundary conditions. If we have some guess about what could be a better boundary condition, it is a good idea to supply the boundary condition as arguments. It is possible to use a single file to supply multiple arguments. We can take the job description file from the previous example, and modify it to include arguments. The modified job description file is available in the Example2 directory. Take a look at the job description file ScalingUp-PythonCals.submit . $ cd ../Example2 $ cat ScalingUp-PythonCals.submit +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/htc/rocky:8\" executable = ../rosen_brock_brute_opt.py arguments = $(x_low) $(x_high) $(y_low) $(y_high) log = Log/job.$(Cluster).$(Process).log output = Log/job.$(Cluster).$(Process).out error = Log/job.$(Cluster).$(Process).err +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1 GB request_disk = 1 GB queue x_low x_high y_low y_high from job_values.txt A major part of the job description file looks same as the previous example. The main difference is the addition of arguments keyword, which looks like this: arguments = $(x_low) $(x_high) $(y_low) $(y_high) The given arguments $(x_low) , $(x_high) , etc. are actually variables that represent the values we want to use. 
These values are set in the queue command at the end of the file: queue x_low x_high y_low y_high from job_values.txt Take a look at job_values.txt: $ cat job_values.txt -9 9 -9 9 -8 8 -8 8 -7 7 -7 7 -6 6 -6 6 -5 5 -5 5 -4 4 -4 4 -3 3 -3 3 -2 2 -2 2 -1 1 -1 1 The submit file's queue statement will read in this file and assign each value in a row to the four variables shown in the queue statement. Each row corresponds to the submission of a unique job with those four values. Let us submit the above job to see this: $ condor_submit ScalingUp-PythonCals.submit Submitting job(s).......... 9 job(s) submitted to cluster 329840. Apply your condor_q knowledge to see this job progress. After all jobs finished, execute the post_script.sh script to sort the results. ./post_process.sh","title":"Providing Different Inputs to Jobs"},{"location":"software_examples/python/tutorial-ScalingUp-Python/#another-example-of-different-inputs","text":"In the previous example, we split the input information into four variables that were included in the arguments line. However, we could have set the arguments line directly, without intermediate values. This is shown in Example 3: $ cd ../Example3 $ cat ScalingUp-PythonCals.submit +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/htc/rocky:8\" executable = ../rosen_brock_brute_opt.py log = Log/job.$(Cluster).$(Process).log output = Log/job.$(Cluster).$(Process).out error = Log/job.$(Cluster).$(Process).err +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1 GB request_disk = 1 GB queue arguments from job_values.txt Here, arguments has disappeared from the top of the file because we've included it in the queue statement at the end. The job_values.txt file has the same values as before; in this syntax, HTCondor will submit a job for each row of values and the job's arguments will be those four values. Let us submit the above job $ condor_submit ScalingUp-PythonCals.submit Submitting job(s).......... 9 job(s) submitted to cluster 329839. Apply your condor_q and connect watch knowledge to see this job progress. After all jobs finished, execute the post_script.sh script to sort the results. ./post_process.sh","title":"Another Example of Different Inputs"},{"location":"software_examples/python/tutorial-wordfreq/","text":"Wordcount Tutorial for Submitting Multiple Jobs \u00b6 Imagine you have a collection of books, and you want to analyze how word usage varies from book to book or author to author. The type of workflow covered in this tutorial can be used to describe workflows that take have different input files or parameters from job to job. To download the materials for this tutorial, type: $ git clone https://github.com/OSGConnect/tutorial-wordfreq Analyzing One Book \u00b6 Test the Command \u00b6 We can analyze one book by running the wordcount.py script, with the name of the book we want to analyze: $ ./wordcount.py Alice_in_Wonderland.txt If you run the ls command, you should see a new file with the prefix counts which has the results of this python script. This is the output we want to produce within an HTCondor job. For now, remove the output: $ rm counts.Alice_in_Wonderland.tsv Create a Submit File \u00b6 To submit a single job that runs this command and analyzes the Alice's Adventures in Wonderland book, we need to translate this command into HTCondor submit file syntax. The two main components we care about are (1) the actual command and (2) the needed input files. 
The command gets turned into the submit file executable and arguments options: executable = wordcount.py arguments = Alice_in_Wonderland.txt The executable is the script that we want to run, and the arguments is everything else that follows the script when we run it, like the test above. The input file for this job is the Alice_in_Wonderland.txt text file. While we provided the name as in the arguments , we need to explicitly tell HTCondor to transfer the corresponding file. We include the file name in the following submit file option: transfer_input_files = Alice_in_Wonderland.txt There are other submit file options that control other aspects of the job, like where to save error and logging information, and how many resources to request per job. This tutorial has a sample submit file ( wordcount.sub ) with most of these submit file options filled in: $ cat wordcount.sub executable = arguments = transfer_input_files = should_transfer_files = Yes when_to_transfer_output = ON_EXIT log = logs/job.$(Cluster).$(Process).log error = logs/job.$(Cluster).$(Process).error output = logs/job.$(Cluster).$(Process).out +JobDurationCategory = \"Medium\" requirements = (OSGVO_OS_STRING == \"RHEL 7\") request_cpus = 1 request_memory = 512MB request_disk = 512MB queue 1 Open (or create) this file with a terminal-based text editor (like vi or nano ) and add the executable, arguments, and input information described above. Submit and Monitor the Job \u00b6 After saving the submit file, submit the job: $ condor_submit wordcount.sub You can check the job's progress using condor_q , which will print out the status of your jobs in the queue. You can also use the command condor_watch_q to monitor the queue in real time (use the keyboard shortcut Ctrl c to exit). Once the job finishes, you should see the same counts.Alice_in_Wonderland.tsv output when you enter ls . Analyzing Multiple Books \u00b6 Now suppose you wanted to analyze multiple books - more than one at a time. You could create a separate submit file for each book, and submit all of the files manually, but you'd have a lot of file lines to modify each time (in particular, the arguments and transfer_input_files lines from the previous submit file). This would be overly verbose and tedious. HTCondor has options that make it easy to submit many jobs from one submit file. Make a List of Inputs \u00b6 First we want to make a list of inputs that we want to use for our jobs. This should be a list where each item on the list corresponds to a job. In this example, our inputs are the different text files for different books. We want each job to analyze a different book, so our list should just contain the names of these text files. We can easily create this list by using an ls command and sending the output to a file: $ ls *.txt > book.list The book.list file now contains each of the .txt file names in the current directory. $ cat book.list Alice_in_Wonderland.txt Dracula.txt Huckleberry_Finn.txt Pride_and_Prejudice.txt Ulysses.txt Modify the Submit File \u00b6 Next, we will make changes to our submit file so that it submits a job for each book title in our list (seen in the book.list file). Create a copy of our existing submit file, which we will use for this job submission. $ cp wordcount.sub many-wordcount.sub We want to tell the queue keyword to use our list of inputs to submit jobs. The default syntax looks like this: queue from Open the many-wordcount.sub file with a text editor and go to the end. 
Following the syntax above, we modify the queue statement to fit our example: queue book from book.list This statement works like a for loop. For every item in the book.list file, HTCondor will create a job using this submit file but replacing every occurrence of $(book) with the item from book.list . The syntax $(variablename) represents a submit variable whose value will be substituted at the time of submission. Therefore, everywhere we used the name of the book in our submit file should be replaced with the variable $(book) (in the previous example, everywhere you entered \"Alice_in_Wonderland.txt\"). So the following lines in the submit file should be changed to use the variable $(book) : arguments = $(book) transfer_input_files = $(book) Submit and Monitor the Job \u00b6 We're now ready to submit all of our jobs. $ condor_submit many-wordcount.sub This will now submit five jobs (one for each book on our list). Once all five have finished running, we should see five \"counts\" files, one for each book in the directory. If you don't see all five \"counts\" files, consider investigating the log files and see if you can identify what caused that to happen.","title":"Wordcount Tutorial for Submitting Multiple Jobs"},{"location":"software_examples/python/tutorial-wordfreq/#wordcount-tutorial-for-submitting-multiple-jobs","text":"Imagine you have a collection of books, and you want to analyze how word usage varies from book to book or author to author. The type of workflow covered in this tutorial can be used to describe workflows that take have different input files or parameters from job to job. To download the materials for this tutorial, type: $ git clone https://github.com/OSGConnect/tutorial-wordfreq","title":"Wordcount Tutorial for Submitting Multiple Jobs"},{"location":"software_examples/python/tutorial-wordfreq/#analyzing-one-book","text":"","title":"Analyzing One Book"},{"location":"software_examples/python/tutorial-wordfreq/#test-the-command","text":"We can analyze one book by running the wordcount.py script, with the name of the book we want to analyze: $ ./wordcount.py Alice_in_Wonderland.txt If you run the ls command, you should see a new file with the prefix counts which has the results of this python script. This is the output we want to produce within an HTCondor job. For now, remove the output: $ rm counts.Alice_in_Wonderland.tsv","title":"Test the Command"},{"location":"software_examples/python/tutorial-wordfreq/#create-a-submit-file","text":"To submit a single job that runs this command and analyzes the Alice's Adventures in Wonderland book, we need to translate this command into HTCondor submit file syntax. The two main components we care about are (1) the actual command and (2) the needed input files. The command gets turned into the submit file executable and arguments options: executable = wordcount.py arguments = Alice_in_Wonderland.txt The executable is the script that we want to run, and the arguments is everything else that follows the script when we run it, like the test above. The input file for this job is the Alice_in_Wonderland.txt text file. While we provided the name as in the arguments , we need to explicitly tell HTCondor to transfer the corresponding file. We include the file name in the following submit file option: transfer_input_files = Alice_in_Wonderland.txt There are other submit file options that control other aspects of the job, like where to save error and logging information, and how many resources to request per job. 
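For example, the log , error , and output options name the files where HTCondor records job events and captures the job's standard error and standard output, while request_cpus , request_memory , and request_disk set the resources requested for each job.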
This tutorial has a sample submit file ( wordcount.sub ) with most of these submit file options filled in: $ cat wordcount.sub executable = arguments = transfer_input_files = should_transfer_files = Yes when_to_transfer_output = ON_EXIT log = logs/job.$(Cluster).$(Process).log error = logs/job.$(Cluster).$(Process).error output = logs/job.$(Cluster).$(Process).out +JobDurationCategory = \"Medium\" requirements = (OSGVO_OS_STRING == \"RHEL 7\") request_cpus = 1 request_memory = 512MB request_disk = 512MB queue 1 Open (or create) this file with a terminal-based text editor (like vi or nano ) and add the executable, arguments, and input information described above.","title":"Create a Submit File"},{"location":"software_examples/python/tutorial-wordfreq/#submit-and-monitor-the-job","text":"After saving the submit file, submit the job: $ condor_submit wordcount.sub You can check the job's progress using condor_q , which will print out the status of your jobs in the queue. You can also use the command condor_watch_q to monitor the queue in real time (use the keyboard shortcut Ctrl c to exit). Once the job finishes, you should see the same counts.Alice_in_Wonderland.tsv output when you enter ls .","title":"Submit and Monitor the Job"},{"location":"software_examples/python/tutorial-wordfreq/#analyzing-multiple-books","text":"Now suppose you wanted to analyze multiple books - more than one at a time. You could create a separate submit file for each book, and submit all of the files manually, but you'd have a lot of file lines to modify each time (in particular, the arguments and transfer_input_files lines from the previous submit file). This would be overly verbose and tedious. HTCondor has options that make it easy to submit many jobs from one submit file.","title":"Analyzing Multiple Books"},{"location":"software_examples/python/tutorial-wordfreq/#make-a-list-of-inputs","text":"First we want to make a list of inputs that we want to use for our jobs. This should be a list where each item on the list corresponds to a job. In this example, our inputs are the different text files for different books. We want each job to analyze a different book, so our list should just contain the names of these text files. We can easily create this list by using an ls command and sending the output to a file: $ ls *.txt > book.list The book.list file now contains each of the .txt file names in the current directory. $ cat book.list Alice_in_Wonderland.txt Dracula.txt Huckleberry_Finn.txt Pride_and_Prejudice.txt Ulysses.txt","title":"Make a List of Inputs"},{"location":"software_examples/python/tutorial-wordfreq/#modify-the-submit-file","text":"Next, we will make changes to our submit file so that it submits a job for each book title in our list (seen in the book.list file). Create a copy of our existing submit file, which we will use for this job submission. $ cp wordcount.sub many-wordcount.sub We want to tell the queue keyword to use our list of inputs to submit jobs. The default syntax looks like this: queue from Open the many-wordcount.sub file with a text editor and go to the end. Following the syntax above, we modify the queue statement to fit our example: queue book from book.list This statement works like a for loop. For every item in the book.list file, HTCondor will create a job using this submit file but replacing every occurrence of $(book) with the item from book.list . The syntax $(variablename) represents a submit variable whose value will be substituted at the time of submission. 
Therefore, everywhere we used the name of the book in our submit file should be replaced with the variable $(book) (in the previous example, everywhere you entered \"Alice_in_Wonderland.txt\"). So the following lines in the submit file should be changed to use the variable $(book) : arguments = $(book) transfer_input_files = $(book)","title":"Modify the Submit File"},{"location":"software_examples/python/tutorial-wordfreq/#submit-and-monitor-the-job_1","text":"We're now ready to submit all of our jobs. $ condor_submit many-wordcount.sub This will now submit five jobs (one for each book on our list). Once all five have finished running, we should see five \"counts\" files, one for each book in the directory. If you don't see all five \"counts\" files, consider investigating the log files and see if you can identify what caused that to happen.","title":"Submit and Monitor the Job"},{"location":"software_examples/r/tutorial-R/","text":"Run R scripts on the OSPool \u00b6 This tutorial describes how to run a simple R script on the OSPool. We'll first run the program locally as a test. After that we'll create a submit file, submit it to the OSPool using an OSPool Access Point, and look at the results when the jobs finish. Set Up Directory and R Script \u00b6 First we'll need to create a working directory with our materials. You can either run $ git clone https://github.com/OSGConnect/tutorial-R to download the materials, OR create them yourself by typing the following: $ mkdir tutorial-R; cd tutorial-R Let's create a small script to use as a test example. Create the file hello_world.R using a text editor like nano or vim that contains the following: #!/usr/bin/env Rscript print(\"Hello World!\") The header #!/usr/bin/env Rscript indicates that if this script is run on its own, it needs to be executed using the R language (instead of Python, or bash, for example). We will run one more command that makes the script executable , meaning that it can be run directly from the command line: $ chmod +x hello_world.R Access R on the Access Point \u00b6 R is run using containers on the OSPool. To test it out on the Access Point, we can run: $ apptainer shell \\ /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0 Other Supported R Versions \u00b6 To see a list of all containers containing R, look at the list of OSPool Supported Containers The previous command sometimes takes a minute or so to start. Once it starts, you should see the following prompt: Singularity :~/tutorial-R> Now, we can try to run R by typing R in our terminal: Singularity :~/tutorial-R> R R version 3.5.1 (2018-07-02) -- \"Feather Spray\" Copyright (C) 2018 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. Natural language support but running in an English locale R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > You can quit out with q() . > q() Save workspace image? [y/n/c]: n Singularity :~/tutorial-R> Great! R works. We'll leave the container running for the next step. See below on how to exit from the container. 
Test an R Script \u00b6 To run the R script we created earlier , we just need to execute it like so: Singularity :~/tutorial-R> ./hello_world.R If this works, we will have [1] \"Hello World!\" printed to our terminal. Once we have this output, we'll exit the container for now with exit : Singularity :~/tutorial-R> exit $ Build the HTCondor Job \u00b6 Let's build an HTCondor submit file to run our script. Using a text editor, create a file called R.submit with the following text inside it: +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0\" executable = hello_world.R # arguments log = R.log.$(Cluster).$(Process) error = R.err.$(Cluster).$(Process) output = R.out.$(Cluster).$(Process) +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1GB request_disk = 1GB queue 1 The path you put in the +SingularityImage option should match whatever you used to test R above. We list the R script as the executable. The R.submit file may have included a few lines that you are unfamiliar with. For example, $(Cluster) and $(Process) are variables that will be replaced with the job's cluster and process numbers - these are automatically assigned by HTCondor. This is useful when you have many jobs submitted in the same file. Any output and errors will be placed in a separate file for each job. Submit and View Output \u00b6 Finally, submit the job! $ condor_submit R.submit Submitting job(s). 1 job(s) submitted to cluster 3796250. $ condor_q alice -- Schedd: ap40.uw.osg-htc.org: <192.170.227.22:9618?... @ 04/13/23 09:51:04 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS alice ID: 3796250 4/13 09:50 _ _ 1 1 3796250.0 ... You can follow the status of your job cluster with the condor_watch_q command, which shows condor_q output that refreshes every 5 seconds. Press control-C to stop watching. Since our job prints to standard out, we can check the output files. Let's see what one looks like: $ cat R.out.3796250.0 [1] \"Hello World!\" Related Guides for Running R Code \u00b6 Use Custom Libraries with R Scale Up your R jobs","title":"Run R scripts on the OSPool"},{"location":"software_examples/r/tutorial-R/#run-r-scripts-on-the-ospool","text":"This tutorial describes how to run a simple R script on the OSPool. We'll first run the program locally as a test. After that we'll create a submit file, submit it to the OSPool using an OSPool Access Point, and look at the results when the jobs finish.","title":"Run R scripts on the OSPool"},{"location":"software_examples/r/tutorial-R/#set-up-directory-and-r-script","text":"First we'll need to create a working directory with our materials. You can either run $ git clone https://github.com/OSGConnect/tutorial-R to download the materials, OR create them yourself by typing the following: $ mkdir tutorial-R; cd tutorial-R Let's create a small script to use as a test example. Create the file hello_world.R using a text editor like nano or vim that contains the following: #!/usr/bin/env Rscript print(\"Hello World!\") The header #!/usr/bin/env Rscript indicates that if this script is run on its own, it needs to be executed using the R language (instead of Python, or bash, for example). We will run one more command that makes the script executable , meaning that it can be run directly from the command line: $ chmod +x hello_world.R","title":"Set Up Directory and R Script"},{"location":"software_examples/r/tutorial-R/#access-r-on-the-access-point","text":"R is run using containers on the OSPool.
To test it out on the Access Point, we can run: $ apptainer shell \\ /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0","title":"Access R on the Access Point"},{"location":"software_examples/r/tutorial-R/#other-supported-r-versions","text":"To see a list of all containers containing R, look at the list of OSPool Supported Containers The previous command sometimes takes a minute or so to start. Once it starts, you should see the following prompt: Singularity :~/tutorial-R> Now, we can try to run R by typing R in our terminal: Singularity :~/tutorial-R> R R version 3.5.1 (2018-07-02) -- \"Feather Spray\" Copyright (C) 2018 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. Natural language support but running in an English locale R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > You can quit out with q() . > q() Save workspace image? [y/n/c]: n Singularity :~/tutorial-R> Great! R works. We'll leave the container running for the next step. See below on how to exit from the container.","title":"Other Supported R Versions"},{"location":"software_examples/r/tutorial-R/#test-an-r-script","text":"To run the R script we created earlier , we just need to execute it like so: Singularity :~/tutorial-R> ./hello_world.R If this works, we will have [1] \"Hello World!\" printed to our terminal. Once we have this output, we'll exit the container for now with exit : Singularity :~/tutorial-R> exit $","title":"Test an R Script"},{"location":"software_examples/r/tutorial-R/#build-the-htcondor-job","text":"Let's build a HTCondor submit file to run our script. Using a text editor, create a file called R.submit with the following text inside it: +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0\" executable = hello_world.R # arguments log = R.log.$(Cluster).$(Process) error = R.err.$(Cluster).$(Process) output = R.out.$(Cluster).$(Process) +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1GB request_disk = 1GB queue 1 The path you put in the +SingularityImage option should match whatever you used to test R above. We list the R script as the executable. The R.submit file may have included a few lines that you are unfamiliar with. For example, $(Cluster) and $(Process) are variables that will be replaced with the job's cluster and process numbers - these are automatically assigned by. This is useful when you have many jobs submitted in the same file. Any output and errors will be placed in a separate file for each job.","title":"Build the HTCondor Job"},{"location":"software_examples/r/tutorial-R/#submit-and-view-output","text":"Finally, submit the job! $ condor_submit R.submit Submitting job(s). 1 job(s) submitted to cluster 3796250. $ condor_q alice -- Schedd: ap40.uw.osg-htc.org: <192.170.227.22:9618?... @ 04/13/23 09:51:04 OWNER BATCH_NAME SUBMITTED DONE RUN IDLE TOTAL JOB_IDS alice ID: 3796250 4/13 09:50 _ _ 1 1 3796250.0 ... You can follow the status of your job cluster with the condor_watch_q command, which shows condor_q output that refreshes each 5 seconds. 
Press control-C to stop watching. Since our jobs prints to standard out, we can check the output files. Let's see what one looks like: $ cat R.out.3796250.0 [1] \"Hello World!\"","title":"Submit and View Output"},{"location":"software_examples/r/tutorial-R/#related-guides-for-running-r-code","text":"Use Custom Libraries with R Scale Up your R jobs","title":"Related Guides for Running R Code"},{"location":"software_examples/r/tutorial-R-addlibSNA/","text":"Use R Packages in your R Jobs \u00b6 Often we may need to add R external libraries that are not part of the base R installation. This tutorial describes how to create custom R libraries for use in jobs on the OSPool. Background \u00b6 The material in this tutorial builds upon the Run R Scripts on the OSPool tutorial. If you are not already familiar with how to run R jobs on the OSPool, please see that tutorial first for a general introduction. Setup Directory and R Script \u00b6 First we'll need to create a working directory, you can either run $ git clone https://github.com/OSGConnect/tutorial-R-addlib or type the following: $ mkdir tutorial-R-addlib $ cd tutorial-R-addlib Similar to the general R tutorial, we will create a script to use as a test example. If you did not clone the tutorial, create a script called hello_world.R that contains the following: #!/usr/bin/env Rscript library(cowsay) say(\"Hello World!\", \"cow\") We will run one more command that makes the script executable , meaning that it can be run directly from the command line: $ chmod +x hello_world.R Create a Custom Container with R Packages \u00b6 Using the same container that we used for the general R tutorial, we will add the package we want to use (in this case, the cowsay package) to create a new container that we can use for our jobs. The new container will be generated from a \"definition\" file. If it isn't already present, create a file called cowsay.def that has the following lines: Bootstrap: docker From: opensciencegrid/osgvo-r:3.5.0 %post R -e \"install.packages('cowsay', dependencies=TRUE, repos='http://cran.rstudio.com/')\" This file basically says that we want to start with one of the existing OSPool R containers and add the cowsay package from CRAN. To create the new container, set the following variables: $ export TMPDIR=$HOME $ export APPTAINER_CACHE_DIR=$HOME And then run this command: apptainer build cowsay-test.sif cowsay.def It may take 5-10 minutes to run. Once complete, if you run ls , you should see a file in your current directory called cowsay-test.sif . This is the new container. Building containers can be a new skill and slightly different for different packages! We recommend looking at our container guides and container training materials to learn more -- these are both linked from our main guides page. There are also some additional tips at the end of this tutorial on building containers with R packages. Test Custom Container and R Script \u00b6 Start the container you created by running: $ apptainer shell cowsay-test.sif Now we can test our R script: Singularity :~/tutorial-R-addlib> ./hello_world.R If this works, we will have a message with a cow printed to our terminal. Once we have this output, we'll exit the container for now with exit : Singularity :~/tutorial-R-addlib> exit $ Build the HTCondor Job \u00b6 For this job, we want to use the custom container we just created. For efficiency, it is best to transfer this to the job using the OSDF . 
If you want to use the container you just built, copy it to the appropriate directory listed here, based on which Access Point you are using. Our submit file, R.submit should then look like this: +SingularityImage = \"osdf://osgconnect/public/osg/tutorial-R-addlib/cowsay-test.sif\" executable = hello_world.R # arguments log = R.log.$(Cluster).$(Process) error = R.err.$(Cluster).$(Process) output = R.out.$(Cluster).$(Process) +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1GB request_disk = 1GB queue 1 Change the osdf:// link in the submit file to be right for YOUR Access Point and username, if you are using your own container file. Reminder: Files placed in the OSDF can be copied to other data spaces (\"caches\") where they are NOT UPDATED. If you make a new container to use with your jobs, make sure to give it a different name or put it at a different path than the previous container. You will not be able to replace the exact path of the existing container. Submit Jobs and Review Output \u00b6 Now we are ready to submit the job: $ condor_submit R.submit and check the job status: $ condor_q Once the job finished running, check the output file as before. They should look like this: $ cat R.out.0000.0 ----- Hello World! ------ \\ ^__^ \\ (oo)\\ ________ (__)\\ )\\ /\\ ||------w| || || Tips for Building Containers with R Packages \u00b6 There is a lot of variety in how to build custom containers! The two main decisions you need to make are a) what to use as your \"base\" or starting container and what packages to install. There is a useful overview of building containers from our container training, linked on our training page . Base Containers \u00b6 In this guide we used one of the existing OSPool R containers. You can see the other versions of R that we support on our list of OSPool Supported Containers Another good option for a base container are the \"rocker\" Docker containers: Rocker on DockerHub To use a different container as the base container, you just change the top of the definition file. So to use the rocker tidyverse container as my starting point, I would have a definition file header like this: Bootstrap: docker From: rocker/tidyverse:4.1.3 When using containers from DockerHub, it's a good idea to pick a version (look at the \"Tags\" tab for options). Above, this container would be version 4.1.3 of R. Installing Packages \u00b6 The sample definition file from this tutorial installed one package. If you have multiple packages, you can change the \"install.packages\" command to install multiple packages: %post R -e \"install.packages(c('cowsay','here'), dependencies=TRUE, repos='http://cran.rstudio.com/')\" If your base container is one of the \"rocker\" containers , you can use a different tool to install packages that looks like this: %post install2.r cowsay or for multiple packages: %post install2.r cowsay here Remember, you only need to install packages that aren't already in the container. If you start with the tidyverse container, you don't need to install ggplot2 or dplyr - those are already in the container and you would be adding packages on top.","title":"Use External Packages in your R Jobs"},{"location":"software_examples/r/tutorial-R-addlibSNA/#use-r-packages-in-your-r-jobs","text":"Often we may need to add R external libraries that are not part of the base R installation. 
This tutorial describes how to create custom R libraries for use in jobs on the OSPool.","title":"Use R Packages in your R Jobs"},{"location":"software_examples/r/tutorial-R-addlibSNA/#background","text":"The material in this tutorial builds upon the Run R Scripts on the OSPool tutorial. If you are not already familiar with how to run R jobs on the OSPool, please see that tutorial first for a general introduction.","title":"Background"},{"location":"software_examples/r/tutorial-R-addlibSNA/#setup-directory-and-r-script","text":"First we'll need to create a working directory, you can either run $ git clone https://github.com/OSGConnect/tutorial-R-addlib or type the following: $ mkdir tutorial-R-addlib $ cd tutorial-R-addlib Similar to the general R tutorial, we will create a script to use as a test example. If you did not clone the tutorial, create a script called hello_world.R that contains the following: #!/usr/bin/env Rscript library(cowsay) say(\"Hello World!\", \"cow\") We will run one more command that makes the script executable , meaning that it can be run directly from the command line: $ chmod +x hello_world.R","title":"Setup Directory and R Script"},{"location":"software_examples/r/tutorial-R-addlibSNA/#create-a-custom-container-with-r-packages","text":"Using the same container that we used for the general R tutorial, we will add the package we want to use (in this case, the cowsay package) to create a new container that we can use for our jobs. The new container will be generated from a \"definition\" file. If it isn't already present, create a file called cowsay.def that has the following lines: Bootstrap: docker From: opensciencegrid/osgvo-r:3.5.0 %post R -e \"install.packages('cowsay', dependencies=TRUE, repos='http://cran.rstudio.com/')\" This file basically says that we want to start with one of the existing OSPool R containers and add the cowsay package from CRAN. To create the new container, set the following variables: $ export TMPDIR=$HOME $ export APPTAINER_CACHE_DIR=$HOME And then run this command: apptainer build cowsay-test.sif cowsay.def It may take 5-10 minutes to run. Once complete, if you run ls , you should see a file in your current directory called cowsay-test.sif . This is the new container. Building containers can be a new skill and slightly different for different packages! We recommend looking at our container guides and container training materials to learn more -- these are both linked from our main guides page. There are also some additional tips at the end of this tutorial on building containers with R packages.","title":"Create a Custom Container with R Packages"},{"location":"software_examples/r/tutorial-R-addlibSNA/#test-custom-container-and-r-script","text":"Start the container you created by running: $ apptainer shell cowsay-test.sif Now we can test our R script: Singularity :~/tutorial-R-addlib> ./hello_world.R If this works, we will have a message with a cow printed to our terminal. Once we have this output, we'll exit the container for now with exit : Singularity :~/tutorial-R-addlib> exit $","title":"Test Custom Container and R Script"},{"location":"software_examples/r/tutorial-R-addlibSNA/#build-the-htcondor-job","text":"For this job, we want to use the custom container we just created. For efficiency, it is best to transfer this to the job using the OSDF . If you want to use the container you just built, copy it to the appropriate directory listed here, based on which Access Point you are using. 
Our submit file, R.submit should then look like this: +SingularityImage = \"osdf://osgconnect/public/osg/tutorial-R-addlib/cowsay-test.sif\" executable = hello_world.R # arguments log = R.log.$(Cluster).$(Process) error = R.err.$(Cluster).$(Process) output = R.out.$(Cluster).$(Process) +JobDurationCategory = \"Medium\" request_cpus = 1 request_memory = 1GB request_disk = 1GB queue 1 Change the osdf:// link in the submit file to be right for YOUR Access Point and username, if you are using your own container file. Reminder: Files placed in the OSDF can be copied to other data spaces (\"caches\") where they are NOT UPDATED. If you make a new container to use with your jobs, make sure to give it a different name or put it at a different path than the previous container. You will not be able to replace the exact path of the existing container.","title":"Build the HTCondor Job"},{"location":"software_examples/r/tutorial-R-addlibSNA/#submit-jobs-and-review-output","text":"Now we are ready to submit the job: $ condor_submit R.submit and check the job status: $ condor_q Once the job finished running, check the output file as before. They should look like this: $ cat R.out.0000.0 ----- Hello World! ------ \\ ^__^ \\ (oo)\\ ________ (__)\\ )\\ /\\ ||------w| || ||","title":"Submit Jobs and Review Output"},{"location":"software_examples/r/tutorial-R-addlibSNA/#tips-for-building-containers-with-r-packages","text":"There is a lot of variety in how to build custom containers! The two main decisions you need to make are a) what to use as your \"base\" or starting container and what packages to install. There is a useful overview of building containers from our container training, linked on our training page .","title":"Tips for Building Containers with R Packages"},{"location":"software_examples/r/tutorial-R-addlibSNA/#base-containers","text":"In this guide we used one of the existing OSPool R containers. You can see the other versions of R that we support on our list of OSPool Supported Containers Another good option for a base container are the \"rocker\" Docker containers: Rocker on DockerHub To use a different container as the base container, you just change the top of the definition file. So to use the rocker tidyverse container as my starting point, I would have a definition file header like this: Bootstrap: docker From: rocker/tidyverse:4.1.3 When using containers from DockerHub, it's a good idea to pick a version (look at the \"Tags\" tab for options). Above, this container would be version 4.1.3 of R.","title":"Base Containers"},{"location":"software_examples/r/tutorial-R-addlibSNA/#installing-packages","text":"The sample definition file from this tutorial installed one package. If you have multiple packages, you can change the \"install.packages\" command to install multiple packages: %post R -e \"install.packages(c('cowsay','here'), dependencies=TRUE, repos='http://cran.rstudio.com/')\" If your base container is one of the \"rocker\" containers , you can use a different tool to install packages that looks like this: %post install2.r cowsay or for multiple packages: %post install2.r cowsay here Remember, you only need to install packages that aren't already in the container. 
If you start with the tidyverse container, you don't need to install ggplot2 or dplyr - those are already in the container and you would be adding packages on top.","title":"Installing Packages"},{"location":"software_examples/r/tutorial-ScalingUp-R/","text":"Scaling up compute resources \u00b6 Scaling up the computational resources is a big advantage for doing certain large scale calculations on the OSPool. Consider the extensive sampling for a multi-dimensional Monte Carlo integration or molecular dynamics simulation with several initial conditions. These types of calculations require submitting a lot of jobs. About a million CPU hours per day are available to OSPool users on an opportunistic basis. Learning how to scale up and control large numbers of jobs is key to realizing the full potential of distributed high throughput computing on the OSPool. In this tutorial, we will see how to scale up calculations for a simple example. To download the materials for this tutorial, use the command $ git clone https://github.com/OSGConnect/tutorial-ScalingUp-R Background \u00b6 For this example, we will use computational methods to estimate \u03c0. First, we will define a square with a unit circle inscribed in it, from which we will randomly sample points. The ratio of the points inside the circle to the total number of points sampled is calculated, which approaches \u03c0/4. This method converges extremely slowly, which makes it great for a CPU-intensive exercise (but bad for a real estimation!). Set up an R Job \u00b6 If you downloaded the tutorial files, you should see the directory \"tutorial-ScalingUp-R\" when you run the ls command. This directory contains the files used in this tutorial. Alternatively, you can write the necessary files from scratch. In that case, create a working directory using the command $ mkdir tutorial-ScalingUp-R Either way, move into the directory before continuing: $ cd tutorial-ScalingUp-R Create and test an R Script \u00b6 Our code is a simple R script that does the estimation. It takes in a single argument in order to differentiate the jobs. The code for the script is contained in the file mcpi.R . If you didn't download the tutorial files, create an R script called mcpi.R and add the following contents: #!/usr/bin/env Rscript args = commandArgs(trailingOnly = TRUE) iternum = as.numeric(args[[1]]) + 100 montecarloPi <- function(trials) { count = 0 for(i in 1:trials) { if((runif(1,0,1)^2 + runif(1,0,1)^2)<1) { count = count + 1 } } return((count*4)/trials) } montecarloPi(iternum) The header at the top of the file (the line starting with #! ) indicates that this script is meant to be run using R. If we were running a more intensive script, we would want to test our pipeline with a shortened, test script first. If you want to test the script, start an R container, and then run the script using Rscript . For example: $ apptainer shell \\ /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0 Singularity :~/tutorial-ScalingUp-R> Rscript mcpi.R 10 [1] 3.14 Singularity :~/tutorial-ScalingUp-R> exit $ Create a Submit File and Log Directories \u00b6 Now that we have our R script written and tested, we can begin building the submit file for our job. If we want to submit several jobs, we need to track log, output, and error files for each job. An easy way to do this is to use the Cluster and Process ID values assigned by HTCondor to create unique files for each job in our overall workflow. In this example, the submit file is called R.submit .
If you did not download the tutorial files, create a submit file named R.submit and add the following contents: +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0\" executable = mcpi.R arguments = $(Process) #transfer_input_files = should_transfer_files = YES when_to_transfer_output = ON_EXIT log = logs/job.log.$(Cluster).$(Process) error = logs/job.error.$(Cluster).$(Process) output = output/mcpi.out.$(Cluster).$(Process) request_cpus = 1 request_memory = 1GB request_disk = 1GB queue 100 If you did not download the tutorial files, you will also need to create the logs and output directories to hold the files that will be created for each job. You can create both directories at once with the command $ mkdir logs output There are several items to note about this submit file: The queue 100 statement in the submit file. This tells Condor to enqueue 100 copies of this job as one cluster. The submit variables $(Cluster) and $(Process) . These are used to specify unique output files. HTCondor will replace these with the Cluster and Process ID numbers for each individual process within the cluster. The $(Process) variable is also passed as an argument to our R script. Submit the Jobs \u00b6 Now it is time to submit our job! You'll see something like the following upon submission: $ condor_submit R.submit Submitting job(s)......................... 100 job(s) submitted to cluster 837. Apply your condor_q knowledge to see the progress of these jobs. Check your logs folder to see the error and HTCondor log files and the output folder to see the results of the scripts. Post Process \u00b6 Once the jobs are completed, you can use the information in the output files to calculate an average of all of our computed estimates of \u03c0. To see this, we can use the command: $ cat output/mcpi*.out* | awk '{ sum += $2; print $2\" \"NR} END { print \"---------------\\n Grand Average = \" sum/NR }' Key Points \u00b6 Scaling up the number of jobs is crucial for taking full advantage of the computational resources of the OSPool. Changing the queue statement allows the user to scale up the resources. The arguments option can be used to pass parameters to a job script. The submit variables $(Cluster) and $(Process) can be used to name log files uniquely.","title":"Scaling up compute resources"},{"location":"software_examples/r/tutorial-ScalingUp-R/#scaling-up-compute-resources","text":"Scaling up the computational resources is a big advantage for doing certain large scale calculations on OSPool. Consider the extensive sampling for a multi-dimensional Monte Carlo integration or molecular dynamics simulation with several initial conditions. These type of calculations require submitting a lot of jobs. About a million CPU hours per day are available to OSPool users on an opportunistic basis. Learning how to scale up and control large numbers of jobs is key to realizing the full potential of distributed high throughput computing on the OSPool. In this tutorial, we will see how to scale up calculations for a simple example. To download the materials for this tutorial, use the command $ git clone https://github.com/OSGConnect/tutorial-ScalingUp-R","title":"Scaling up compute resources"},{"location":"software_examples/r/tutorial-ScalingUp-R/#background","text":"For this example, we will use computational methods to estimate \u03c0. First, we will define a square inscribed by a unit circle from which we will randomly sample points. 
The ratio of the points inside the circle to the total number of points sampled is calculated, which approaches \u03c0/4. This method converges extremely slowly, which makes it great for a CPU-intensive exercise (but bad for a real estimation!).","title":"Background"},{"location":"software_examples/r/tutorial-ScalingUp-R/#set-up-an-r-job","text":"If you downloaded the tutorial files, you should see the directory \"tutorial-ScalingUp-R\" when you run the ls command. This directory contains the files used in this tutorial. Alternatively, you can write the necessary files from scratch. In that case, create a working directory using the command $ mkdir tutorial-ScalingUp-R Either way, move into the directory before continuing: $ cd tutorial-ScalingUp-R","title":"Set up an R Job"},{"location":"software_examples/r/tutorial-ScalingUp-R/#create-and-test-an-r-script","text":"Our code is a simple R script that does the estimation. It takes in a single argument in order to differentiate the jobs. The code for the script is contained in the file mcpi.R . If you didn't download the tutorial files, create an R script called mcpi.R and add the following contents: #!/usr/bin/env Rscript args = commandArgs(trailingOnly = TRUE) iternum = as.numeric(args[[1]]) + 100 montecarloPi <- function(trials) { count = 0 for(i in 1:trials) { if((runif(1,0,1)^2 + runif(1,0,1)^2)<1) { count = count + 1 } } return((count*4)/trials) } montecarloPi(iternum) The header at the top of the file (the line starting with #! ) indicates that this script is meant to be run using R. If we were running a more intensive script, we would want to test our pipeline with a shortened, test script first. If you want to test the script, start an R container, and then run the script using Rscript . For example: $ apptainer shell \\ /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0 Singularity :~/tutorial-ScalingUp-R> Rscript mcpi.R 10 [1] 3.14 Singularity :~/tutorial-ScalingUp-R> exit $","title":"Create and test an R Script"},{"location":"software_examples/r/tutorial-ScalingUp-R/#create-a-submit-file-and-log-directories","text":"Now that we have our R script written and tested, we can begin building the submit file for our job. If we want to submit several jobs, we need to track log, output, and error files for each job. An easy way to do this is to use the Cluster and Process ID values assigned by HTCondor to create unique files for each job in our overall workflow. In this example, the submit file is called R.submit . If you did not download the tutorial files, create a submit file named R.submit and add the following contents: +SingularityImage = \"/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0\" executable = mcpi.R arguments = $(Process) #transfer_input_files = should_transfer_files = YES when_to_transfer_output = ON_EXIT log = logs/job.log.$(Cluster).$(Process) error = logs/job.error.$(Cluster).$(Process) output = output/mcpi.out.$(Cluster).$(Process) request_cpus = 1 request_memory = 1GB request_disk = 1GB queue 100 If you did not download the tutorial files, you will also need to create the logs and output directories to hold the files that will be created for each job. You can create both directories at once with the command $ mkdir logs output There are several items to note about this submit file: The queue 100 statement in the submit file. This tells Condor to enqueue 100 copies of this job as one cluster. The submit variables $(Cluster) and $(Process) . These are used to specify unique output files.
HTCondor will replace these with the Cluster and Process ID numbers for each individual process within the cluster. The $(Process) variable is also passed as an argument to our R script.","title":"Create a Submit File and Log Directories"},{"location":"software_examples/r/tutorial-ScalingUp-R/#submit-the-jobs","text":"Now it is time to submit our job! You'll see something like the following upon submission: $ condor_submit R.submit Submitting job(s)......................... 100 job(s) submitted to cluster 837. Apply your condor_q knowledge to see the progress of these jobs. Check your logs folder to see the error and HTCondor log files and the output folder to see the results of the scripts.","title":"Submit the Jobs"},{"location":"software_examples/r/tutorial-ScalingUp-R/#post-process","text":"Once the jobs are completed, you can use the information in the output files to calculate an average of all of our computed estimates of \u03c0. To see this, we can use the command: $ cat output/mcpi*.out* | awk '{ sum += $2; print $2\" \"NR} END { print \"---------------\\n Grand Average = \" sum/NR }'","title":"Post Process"},{"location":"software_examples/r/tutorial-ScalingUp-R/#key-points","text":"Scaling up the number of jobs is crucial for taking full advantage of the computational resources of the OSPool. Changing the queue statement allows the user to scale up the resources. The arguments option can be used to pass parameters to a job script. The submit variables $(Cluster) and $(Process) can be used to name log files uniquely.","title":"Key Points"},{"location":"software_examples/r/tutorial-spills-R/","text":"Analyzing Chemical Spills Datasets (.csv files) \u00b6 An OSPool Tutorial \u00b6 Spills of hazardous materials, like petroleum, mercury, and battery acid, that can impact water and land quality are required to be reported to the United State's government by law. In this tutorial, we will analyze records provided by the state of New York on occurrences of spills of hazardous materials that occurred from 1950 to 2019. The data used in this tutorial was collected from https://catalog.data.gov/dataset/spill-incidents/resource/a8f9d3c8-c3fa-4ca1-a97a-55e55ca6f8c0 and modified for teaching purposes. To access all of the materials to complete this tutorial, first log into your OSPool access point and run the following command: git clone https://github.com/OSGConnect/tutorial-spills-R/ . Step 1: Get to Know Hazardous Spills Dataset \u00b6 Let's explore the data files that we will be analyzing. Before we do so, we must make sure we are in the tutorial directory ( tutorial-spills-R/ ). We can do this by printing your working directory ( pwd ): pwd We should see something similar to /home/jovyan/tutorial-spills-R/ , where jovyan could alternatively be your OSG account username. Next, let's navigate to our /data directory and list ( ls ) the files inside of it: cd data/ ls We should see seven .csv files, one for each decade between 1950-2019. To explore the contents of these files, we can use commands like head -n 5 to view the first 5 lines of our data files. head -n 5 spills_1980_1989.csv We can also use the navigation bar on the left side of your notebook to double-click and open each comma-separated value (\"csv\") .csv file and see it in a table format, instead of a traditional command line rendering above. Step 2: Prepare the R Executable \u00b6 Next, we need to create an R script to analyze our datasets. 
An example of an R script can be found in our main tutorial directory, so let's navigate there: cd ../ # change directory to move one up ls # list files cat spill_calculation.r Then let us print the contents of our executable script: cat spill_calculation.r This script will read in different datasets as arguments and then will carry out summary statistics to print out the number of spills recorded per decade and the total size (in gallons) of the hazardous spills. Step 3: Prepare Portable Software \u00b6 Some common software, like R, is provided by OSG using containers. Because of this, you do not need to install R yourself; you will just tell HTCondor what container to use for your jobs. Additionally, this tutorial just uses base-R and no special libraries, but if you need libraries (e.g., tidyverse, ggplot2) you can always install them in your R container. A list of containers and other software provided by OSG staff can be found on our website https://portal.osg-htc.org/documentation/ , along with resources for learning how to add libraries to your container. We will be using the R container for R 3.5.0, which is accessible under /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0 , so we must make sure to tell HTCondor to fetch this container when starting each of our jobs. To learn how to tell HTCondor to do this, see below. Step 4: Prepare and Submit an HTCondor Submit File for One Test Job \u00b6 The HTCondor submit file tells HTCondor how you would like your job to be run on your behalf. For example, you should specify what executable you want to run, if you want a container/the name of that container, the resources you would like available to your job, and any special requirements. Step 4A: Prepare and Submit an HTCondor Submit File \u00b6 A sample submit file to analyze our smallest dataset, spills_1950_1959.csv , might look like: cat R.submit We can submit this job using condor_submit : condor_submit R.submit We can check on the status of our job in HTCondor's queue by running: condor_q Once our job is done running, it will leave HTCondor's queue automatically. Step 4B: Review Test Job Results \u00b6 Once our job is done running, we can check the results by looking in our output folder: cat output/spills.out We should see that from 1950-1959, New York recorded five spills that totalled less than 0 recorded gallons. Step 5: Scale Out Your Workflow to Analyze Many Datasets \u00b6 We just prepared and ran one job analyzing the spills_1950_1959.csv dataset! But now, we want to analyze the remaining 6 datasets. Luckily, HTCondor is very helpful when it comes to rapidly queueing many small jobs! To do so, we will update our submit file to use the queue from syntax. But before we do this, we need to create a list of the files we want to queue a job for: ls data > list_of_datasets.txt cat list_of_datasets.txt Great! Now we have a list of the files we want analyzed, where each file is on its own separate line. Step 5A: Update submit file to queue a job for each dataset \u00b6 Now, let's modify the queue line of our submit file to use the new queue syntax. For this, we can choose almost any variable name, so for simplicity, let's choose dataset such that we have queue dataset from list_of_datasets.txt . We can then call this new variable, dataset , elsewhere in our submit file by wrapping it with $() like so: $(dataset) .
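For example (a sketch only, not necessarily the exact contents of many-R.submit ), lines such as arguments = $(dataset) and transfer_input_files = data/$(dataset) would pass a different .csv file to each queued job.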
Our updated submit file might look like this: cat many-R.submit Step 5B: Submit Many Jobs \u00b6 Now we can submit our new submit file using condor_submit again: condor_submit many-R.submit Notice that we have now queued 7 jobs using one submit file! Step 5C: Analysis Completed! \u00b6 We can check on the status of our 7 jobs using condor_q : condor_q Once our jobs are done, we can also review our output files: cat output/*.csv.out In a few minutes, we were able to take our R script and run several jobs to analyze all of our real-world data. Congratulations!","title":"Analyzing .csv Data with R"},{"location":"software_examples/r/tutorial-spills-R/#analyzing-chemical-spills-datasets-csv-files","text":"","title":"Analyzing Chemical Spills Datasets (.csv files)"},{"location":"software_examples/r/tutorial-spills-R/#an-ospool-tutorial","text":"Spills of hazardous materials, like petroleum, mercury, and battery acid, that can impact water and land quality are required to be reported to the United State's government by law. In this tutorial, we will analyze records provided by the state of New York on occurrences of spills of hazardous materials that occurred from 1950 to 2019. The data used in this tutorial was collected from https://catalog.data.gov/dataset/spill-incidents/resource/a8f9d3c8-c3fa-4ca1-a97a-55e55ca6f8c0 and modified for teaching purposes. To access all of the materials to complete this tutorial, first log into your OSPool access point and run the following command: git clone https://github.com/OSGConnect/tutorial-spills-R/ .","title":"An OSPool Tutorial"},{"location":"software_examples/r/tutorial-spills-R/#step-1-get-to-know-hazardous-spills-dataset","text":"Let's explore the data files that we will be analyzing. Before we do so, we must make sure we are in the tutorial directory ( tutorial-spills-R/ ). We can do this by printing your working directory ( pwd ): pwd We should see something similar to /home/jovyan/tutorial-spills-R/ , where jovyan could alternatively be your OSG account username. Next, let's navigate to our /data directory and list ( ls ) the files inside of it: cd data/ ls We should see seven .csv files, one for each decade between 1950-2019. To explore the contents of these files, we can use commands like head -n 5 to view the first 5 lines of our data files. head -n 5 spills_1980_1989.csv We can also use the navigation bar on the left side of your notebook to double-click and open each comma-separated value (\"csv\") .csv file and see it in a table format, instead of a traditional command line rendering above.","title":"Step 1: Get to Know Hazardous Spills Dataset"},{"location":"software_examples/r/tutorial-spills-R/#step-2-prepare-the-r-executable","text":"Next, we need to create an R script to analyze our datasets. An example of an R script can be found in our main tutorial directory, so let's navigate there: cd ../ # change directory to move one up ls # list files cat spill_calculation.r Then let us print the contents of our executable script: cat spill_calculation.r This script will read in different datasets as arguments and then will carry out summary statistics to print out the number of spills recorded per decade and the total size (in gallons) of the hazardous spills.","title":"Step 2: Prepare the R Executable"},{"location":"software_examples/r/tutorial-spills-R/#step-3-prepare-portable-software","text":"Some common software, like R, is provided by OSG using containers. 
Because of this, you do not need to install R yourself, you will just tell HTCondor what container to use for your jobs. Additionally, this tutorial just uses base-R and no special libraries, but if you need libraries (e.g., tidyverse, ggplot2) you can always install them in your R container. A list of containers and other software provided by OSG staff can be found on our website https://portal.osg-htc.org/documentation/ , along with resources for learning how to add libraries to your container. We will be using the R container for R 3.5.0, which is accessible under /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0 , so we must make sure to tell HTCondor to fetch this container when starting each of our jobs. To learn how to tell HTCondor to do this, see below.","title":"Step 3: Prepare Portable Software"},{"location":"software_examples/r/tutorial-spills-R/#step-4-prepare-and-submit-an-htcondor-submit-file-for-one-test-job","text":"The HTCondor submit file tells the HTCondor how you would like your job to be run on your behalf. For example, you should specify what executable you want to run, if you want a container/the name of that container, the resources you would like available to your job, and any special requirements.","title":"Step 4: Prepare and Submit an HTCondor Submit File for One Test Job"},{"location":"software_examples/r/tutorial-spills-R/#step-4a-prepare-and-submit-an-htcondor-submit-file","text":"A sample submit file to analyze our smallest dataset, spills_1950_1959.csv , might look like: cat R.submit We can submit this job using condor_submit : condor_submit R.submit We can check on the status of our job in HTCondor's queue by running: condor_q Once our job is done running, it will leave HTCondor's queue automatically.","title":"Step 4A: Prepare and Submit an HTCondor Submit File"},{"location":"software_examples/r/tutorial-spills-R/#step-4b-review-test-job-results","text":"Once our job is done running, we can check the results by looking in our output folder: cat output/spills.out We should see that from 1950-1959, New York recorded five spills that totalled less than 0 recorded gallons.","title":"Step 4B: Review Test Job Results"},{"location":"software_examples/r/tutorial-spills-R/#step-5-scale-out-your-workflow-to-analyze-many-datasets","text":"We just prepared and ran one job analyzing the spills_1950_1959.csv dataset! But now, we want to analyze the remaining 6 datasets. Luckily, HTCondor is very helpful when it comes to rapidly queueing many small jobs! To do so, we will update our submit file to use the queue from syntax. But before we do this, we need to create a list of the files we want to queue a job for: ls data > list_of_datasets.txt cat list_of_datasets.txt Great! Now we have a list of the files we want analyzed, where each file is on it's own seperate line.","title":"Step 5: Scale Out Your Workflow to Analyze Many Datasets"},{"location":"software_examples/r/tutorial-spills-R/#step-5a-update-submit-file-to-queue-a-job-for-each-dataset","text":"Now, let's modify the queue line of our submit file to use the new queue syntax. For this, we can choose almost any variable name, so for simplicity, let's choose dataset such that we have queue dataset from list_of_datasets.txt . We can then call this new variable, dataset , elsewhere in our submit file by wrapping it with $() like so: $(dataset) . 
Our updated submit file might look like this: cat many-R.submit","title":"Step 5A: Update submit file to queue a job for each dataset"},{"location":"software_examples/r/tutorial-spills-R/#step-5b-submit-many-jobs","text":"Now we can submit our new submit file using condor_submit again: condor_submit many-R.submit Notice that we have now queued 7 jobs using one submit file!","title":"Step 5B: Submit Many Jobs"},{"location":"software_examples/r/tutorial-spills-R/#step-5c-analysis-completed","text":"We can check on the status of our 7 jobs using condor_q : condor_q Once our jobs are done, we can also review our output files: cat output/*.csv.out In a few minutes, we were able to take our R script and run several jobs to analyze all of our real-world data. Congratulations!","title":"Step 5C: Analysis Completed!"},{"location":"support_and_training/support/getting-help-from-RCFs/","text":"Email, Office Hours, and 1-1 Meetings \u00b6 There are multiple ways to get help from OSG\u2019s Research Computing Facilitators. Get in touch anytime! To help researchers effectively utilize large-scale computing, our Research Computing Facilitators (RCFs) are here to answer questions and provide guidance and support. If we're not able to help with a specific problem, we will do our best to connect you with another group or service that can. We don\u2019t expect that you should be able to address all of your questions by consulting our documentation , searching online, or just working through things on your own. Please utilize the methods below if you are stuck or have questions. Help via Email \u00b6 We provide ongoing support via email to support@osg-htc.org . You can typically expect a first response within a few business hours. support@osg-htc.org Virtual Office Hours \u00b6 Drop-in for live help: Tuesdays, 4-5:30pm ET / 1-2:30pm PT Thursdays, 11:30am-1pm ET / 8:30-10am PT You can find the URL to the Virtual Office Hours meeting room in the welcome message when you log into an OSG-managed Access Point, or in the signature of a support email from an RCF. Once you arrive in the room, please sign in. Sign-in for office hours Cancellations will be announced via email. If the times above don\u2019t work for you, please email us at our usual support address to schedule a separate meeting. Make an Appointment \u00b6 We are happy to arrange meetings outside of designated Office Hours. Email us to schedule a time to meet! support@osg-htc.org Training Opportunities \u00b6 The RCF team runs regular new user training on the first Tuesday of the month and a special topic training on the third Tuesday of the month. See upcoming training dates, registration information, and materials on our training page. OSPool Training page","title":"Email, Office Hours, and 1-1 Meetings "},{"location":"support_and_training/support/getting-help-from-RCFs/#email-office-hours-and-1-1-meetings","text":"There are multiple ways to get help from OSG\u2019s Research Computing Facilitators. Get in touch anytime! To help researchers effectively utilize large-scale computing, our Research Computing Facilitators (RCFs) are here to answer questions and provide guidance and support. If we're not able to help with a specific problem, we will do our best to connect you with another group or service that can. We don\u2019t expect that you should be able to address all of your questions by consulting our documentation , searching online, or just working through things on your own. 
Please utilize the methods below if you are stuck or have questions.","title":"Email, Office Hours, and 1-1 Meetings"},{"location":"support_and_training/support/getting-help-from-RCFs/#help-via-email","text":"We provide ongoing support via email to support@osg-htc.org . You can typically expect a first response within a few business hours. support@osg-htc.org","title":"Help via Email"},{"location":"support_and_training/support/getting-help-from-RCFs/#virtual-office-hours","text":"Drop-in for live help: Tuesdays, 4-5:30pm ET / 1-2:30pm PT Thursdays, 11:30am-1pm ET / 8:30-10am PT You can find the URL to the Virtual Office Hours meeting room in the welcome message when you log into an OSG-managed Access Point, or in the signature of a support email from an RCF. Once you arrive in the room, please sign in. Sign-in for office hours Cancellations will be announced via email. If the times above don\u2019t work for you, please email us at our usual support address to schedule a separate meeting.","title":"Virtual Office Hours"},{"location":"support_and_training/support/getting-help-from-RCFs/#make-an-appointment","text":"We are happy to arrange meetings outside of designated Office Hours. Email us to schedule a time to meet! support@osg-htc.org","title":"Make an Appointment"},{"location":"support_and_training/support/getting-help-from-RCFs/#training-opportunities","text":"The RCF team runs regular new user training on the first Tuesday of the month and a special topic training on the third Tuesday of the month. See upcoming training dates, registration information, and materials on our training page. OSPool Training page","title":"Training Opportunities"},{"location":"support_and_training/training/osg-user-school/","text":"Annual, Week-Long OSG User School \u00b6 OSG School 2024 Group Photo Overview \u00b6 During this week-long training event held at the University of Wisconsin-Madison every summer, students learn to use high-throughput computing (HTC) systems \u2014 at their own campus or using the OSG \u2014 to run large-scale computing applications that are at the heart of today\u2019s cutting-edge science. Through lectures, discussions, and lots of hands-on activities with experienced OSG staff, students will learn how HTC systems work, how to run and manage lots of jobs and huge datasets, how to implement a scientific computing workflow, and where to turn for more information and help. The School is ideal for graduate students in any science or research domain where large-scale computing is a vital part of the research process, plus we will consider applications from advanced undergraduates, post-doctoral students, faculty, and staff. Students accepted to this program will receive financial support for basic travel and local costs associated with the School. Next OSG User School \u00b6 The next OSG User School will be held in the summer of 2025. Applications will likely open in early 2025. Open Materials and Recordings \u00b6 The OSG User School went virtual in 2020 and 2021, which means that we were able to record lectures to complement lecture and exercise materials! 
OSG Virtual School Pilot, August 2021 OSG Virtual School Pilot, July 2020 Past OSG Schools \u00b6 OSG School, August 5-9, 2024 OSG User School, August 7-11, 2023 OSG User School, July 25-29, 2022 OSG User School, July 15-19, 2019 OSG User School, July 9-13, 2018 OSG User School, July 17-21, 2017 OSG User School, July 25-29, 2016 OSG User School, July 27-31, 2015 OSG User School, July 7-10, 2014","title":"Annual, Week-Long OSG User School "},{"location":"support_and_training/training/osg-user-school/#annual-week-long-osg-user-school","text":"OSG School 2024 Group Photo","title":"Annual, Week-Long OSG User School"},{"location":"support_and_training/training/osg-user-school/#overview","text":"During this week-long training event held at the University of Wisconsin-Madison every summer, students learn to use high-throughput computing (HTC) systems \u2014 at their own campus or using the OSG \u2014 to run large-scale computing applications that are at the heart of today\u2019s cutting-edge science. Through lectures, discussions, and lots of hands-on activities with experienced OSG staff, students will learn how HTC systems work, how to run and manage lots of jobs and huge datasets, how to implement a scientific computing workflow, and where to turn for more information and help. The School is ideal for graduate students in any science or research domain where large-scale computing is a vital part of the research process, plus we will consider applications from advanced undergraduates, post-doctoral students, faculty, and staff. Students accepted to this program will receive financial support for basic travel and local costs associated with the School.","title":"Overview"},{"location":"support_and_training/training/osg-user-school/#next-osg-user-school","text":"The next OSG User School will be held in the summer of 2025. Applications will likely open in early 2025.","title":"Next OSG User School"},{"location":"support_and_training/training/osg-user-school/#open-materials-and-recordings","text":"The OSG User School went virtual in 2020 and 2021, which means that we were able to record lectures to complement lecture and exercise materials! OSG Virtual School Pilot, August 2021 OSG Virtual School Pilot, July 2020","title":"Open Materials and Recordings"},{"location":"support_and_training/training/osg-user-school/#past-osg-schools","text":"OSG School, August 5-9, 2024 OSG User School, August 7-11, 2023 OSG User School, July 25-29, 2022 OSG User School, July 15-19, 2019 OSG User School, July 9-13, 2018 OSG User School, July 17-21, 2017 OSG User School, July 25-29, 2016 OSG User School, July 27-31, 2015 OSG User School, July 7-10, 2014","title":"Past OSG Schools"},{"location":"support_and_training/training/osgusertraining/","text":"OSG User Training (regular/monthly) \u00b6 All User Training sessions are offered on Tuesdays from 2:30-4pm ET (11:30am - 1pm PT) , on the third Tuesday of the month. The trainings are designed as stand-alone subjects. You do not need to bring/have your dataset prepared before the training. The only prerequisite is some familiarity with using a command line interface or shell . Having some familiarity with HTCondor job submission is useful but not required. Registration opens a month before the training date, and closes 24 hours before the event. 
You can register for all of our trainings via setmore: Register Here Fall 2024 Training Schedule \u00b6 Tuesday, September 17 OSPool Basics: Get Running on the OSPool Learning Objectives: Topics covered in this workshop include: An introduction to OSG services and the OSPool Basics of HTCondor job submission Hands-on practice submitting HTCondor jobs If you\u2019re new to the OSPool (or been away for awhile) and want to get started, this is an ideal opportunity to go through core concepts and practice hands-on skills. Prerequisites/Audience: There are no prerequisites for this workshop. This workshop is designed for new HTCondor and OSPool users. Tuesday, October 15 Workflows with Pegasus Learning Objectives: An introduction to the Pegasus Workflow Management System, which is a useful tool for researchers needing to execute a large number of jobs or complex workflows. Attendees will learn how to construct and manage workflows, capabilities like automatic data transfers, and higher level tooling to analyze the workflow performance. Prerequisites/Audience: There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Tuesday, November 19 DAGMan: HTCondor\u2019s Workflow Manager Learning Objectives: In this training, you will be guided through hands-on exercises to learn how to use DAGMan to automate your HTCondor job submissions. This training is especially useful for anyone who has constructed different job types and wants to be able to run them in a certain order. Prerequisites/Audience: A basic understanding of HTCondor job submission For a calendar version of these events see: Google Calendar Download and add to your calendar app Materials \u00b6 All of our training materials are public and provided below: [Webinar] Principles of Distributed High Throughput Computing Learning Objectives Have you ever wondered about the \u201cwhy\u201d of HTCondor? Join us to hear about the \u201cphilosophy\u201d of high throughput computing and how HTCondor has evolved to make throughput computing possible. This workshop will be led by a core HTCondor developer, Greg Thain, and is a perfect opportunity for longer-term OSPool users to learn more about our underlying technology. Prerequisites/Audience There are no prerequisites for this webinar. Available Materials Presentation Slides Materials Last Updated Spring 2024 [Webinar] Move Your Data with Pelican (and the OSDF) Learning Objectives Pelican is a platform created to enable easier data sharing - within or beyond your institution! This training will cover how Pelican is used to move data within the OSPool and also how you can use Pelican tools to host, upload and download your data. This training is relevant for researchers with large amounts of data, as well as campus representatives, to learn about how Pelican can help with your data movement needs. Prerequisites/Audience There are no prerequisites for this webinar. Available Materials Presentation Slides Video Recording Materials Last Updated Summer 2024 [Workshop] OSPool Basics: Get Running on the OSPool Learning Objectives Topics covered in this workshop include: An introduction to OSG services and the OSPool Basics of HTCondor job submission Hands-on practice submitting HTCondor jobs Prerequisites/Audience There are no prerequisites for this workshop. This workshop is designed for new HTCondor and OSPool users. 
Available Materials Presentation Slides Video Recording Wordcount Frequency Tutorial Interactive Lesson Materials Last Updated Winter 2023 [Webinar] Learn About the PATh Facility Learning Objectives The PATh Facility provides dedicated throughput computing capacity to NSF-funded researchers for longer and larger jobs. This training will describe its features and how to get started. If you have found your jobs need more resources (cores, memory, time, data) than is typically available in the OSPool, this resource might be for you! Prerequisites/Audience There are no prerequisites for this webinar. Available Materials Presentation Slides Materials Last Updated Winter 2023 [Workshop] DAGMan: HTCondor's Workflow Manager Learning Objectives Presented by an HTCondor DAGMan developer, this workshop is designed for researchers that would like to learn how to implement DAG workflows and automate workflow management on the OSPool. Prerequisites/Audience A basic understanding of HTCondor job submission and of an HTCondor submit file is highly recommended for this workshop. Available Materials Presentation Slides DAGMan Tutorial Materials Last Updated Winter 2023 [Workshop] Organizing and Submitting HTC Workloads Learning Objectives This workshop will present useful HTCondor features to help researchers automatically organize their workspaces on High Throughput Computing systems. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Video Recording Wordcount Frequency Tutorial Materials Last Updated Summer 2023 [Workshop] Using Containerized Software on the Open Science Pool Learning Objectives This workshop is designed to introduce software containers such as Docker, Apptainer, and Singularity. Content covered includes how to create a container, use a container, and techniques for troubleshooting containerized software. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Video Recording Materials Last Updated Fall 2023 [Workshop] Pegasus Workflow Management System on the Open Science Pool Learning Objectives This workshop is designed to introduce Pegasus Workflow Management System, a useful tool for researchers needing to execute a large number of jobs or complex workflows. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Recorded Video Materials Last Updated Fall 2023 [Workshop] Software Portability on the Open Science Pool Learning Objectives This workshop is designed to introduce concepts pertaining to software portability, including containers, different ways to install software, setting file paths, and other important introductory concepts. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. 
Available Materials Presentation Slides List of Commands Tutorials (used in part) Using Julia on the OSPool High Throughput BWA Read Mapping Materials Last Updated Summer 2023 [Workshop] Access the OSPool via Jupyter Interface Learning Objectives This workshop is designed to introduce researchers to the OSPool's new Jupyter interface feature, including how to access and use Jupyter notebooks. Prerequisites/Audience There are no prerequisites for this workshop. Available Materials Presentation Slides Materials Last Updated Fall 2023 [Workshop] Bioinformatics Analyses on the OSPool: A BWA Example Learning Objectives This workshop is designed to show the process of implementing and scaling out a bioinformatics workflow using HTCondor. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Materials Last Updated Summer 2023 [Workshop] HTCondor Tips & Tricks: Using condor_q and condor_history to Learn about Your Jobs Learning Objectives This workshop is designed to introduce researchers to helpful HTCondor tools for learning about their HTCondor jobs. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Materials Last Updated Spring 2023 [Workshop] Special Environments, GPUs Learning Objectives This workshop is designed for researchers interested in learning about using special environments, architectures, or resources such as GPUs. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Materials Last Updated Spring 2023","title":"Monthly OSG User Training (registration+materials)"},{"location":"support_and_training/training/osgusertraining/#osg-user-training-regularmonthly","text":"All User Training sessions are offered on Tuesdays from 2:30-4pm ET (11:30am - 1pm PT) , on the third Tuesday of the month. The training's are designed as stand alone subjects. You do not need to bring/have your dataset prepared before the training. The only prerequisites are some familiarities with using command line inteface or shell . Having some familiarities with HTCondor job submissions are useful but not required. Registration opens a month before the training date, and closes 24 hours before the event. You can register for all of our trainings via setmore: Register Here","title":"OSG User Training (regular/monthly)"},{"location":"support_and_training/training/osgusertraining/#fall-2024-training-schedule","text":"Tuesday, September 17 OSPool Basics: Get Running on the OSPool Learning Objectives: Topics covered in this workshop include: An introduction to OSG services and the OSPool Basics of HTCondor job submission Hands-on practice submitting HTCondor jobs If you\u2019re new to the OSPool (or been away for awhile) and want to get started, this is an ideal opportunity to go through core concepts and practice hands-on skills. Prerequisites/Audience: There are no prerequisites for this workshop. This workshop is designed for new HTCondor and OSPool users. 
Tuesday, October 15 Workflows with Pegasus Learning Objectives: An introduction to the Pegasus Workflow Management System, which is a useful tool for researchers needing to execute a large number of jobs or complex workflows. Attendees will learn how to construct and manage workflows, capabilities like automatic data transfers, and higher level tooling to analyze the workflow performance. Prerequisites/Audience: There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Tuesday, November 19 DAGMan: HTCondor\u2019s Workflow Manager Learning Objectives: In this training, you will be guided through hands-on exercises to learn how to use DAGMan to automate your HTCondor job submissions. This training is especially useful for anyone who has constructed different job types and wants to be able to run them in a certain order. Prerequisites/Audience: A basic understanding of HTCondor job submission For a calendar version of these events see: Google Calendar Download and add to your calendar app","title":"Fall 2024 Training Schedule"},{"location":"support_and_training/training/osgusertraining/#materials","text":"All of our training materials are public and provided below: [Webinar] Principles of Distributed High Throughput Computing Learning Objectives Have you ever wondered about the \u201cwhy\u201d of HTCondor? Join us to hear about the \u201cphilosophy\u201d of high throughput computing and how HTCondor has evolved to make throughput computing possible. This workshop will be led by a core HTCondor developer, Greg Thain, and is a perfect opportunity for longer-term OSPool users to learn more about our underlying technology. Prerequisites/Audience There are no prerequisites for this webinar. Available Materials Presentation Slides Materials Last Updated Spring 2024 [Webinar] Move Your Data with Pelican (and the OSDF) Learning Objectives Pelican is a platform created to enable easier data sharing - within or beyond your institution! This training will cover how Pelican is used to move data within the OSPool and also how you can use Pelican tools to host, upload and download your data. This training is relevant for researchers with large amounts of data, as well as campus representatives, to learn about how Pelican can help with your data movement needs. Prerequisites/Audience There are no prerequisites for this webinar. Available Materials Presentation Slides Video Recording Materials Last Updated Summer 2024 [Workshop] OSPool Basics: Get Running on the OSPool Learning Objectives Topics covered in this workshop include: An introduction to OSG services and the OSPool Basics of HTCondor job submission Hands-on practice submitting HTCondor jobs Prerequisites/Audience There are no prerequisites for this workshop. This workshop is designed for new HTCondor and OSPool users. Available Materials Presentation Slides Video Recording Wordcount Frequency Tutorial Interactive Lesson Materials Last Updated Winter 2023 [Webinar] Learn About the PATh Facility Learning Objectives The PATh Facility provides dedicated throughput computing capacity to NSF-funded researchers for longer and larger jobs. This training will describe its features and how to get started. If you have found your jobs need more resources (cores, memory, time, data) than is typically available in the OSPool, this resource might be for you! Prerequisites/Audience There are no prerequisites for this webinar. 
Available Materials Presentation Slides Materials Last Updated Winter 2023 [Workshop] DAGMan: HTCondor's Workflow Manager Learning Objectives Presented by an HTCondor DAGMan developer, this workshop is designed for researchers that would like to learn how to implement DAG workflows and automate workflow management on the OSPool. Prerequisites/Audience A basic understanding of HTCondor job submission and of an HTCondor submit file is highly recommended for this workshop. Available Materials Presentation Slides DAGMan Tutorial Materials Last Updated Winter 2023 [Workshop] Organizing and Submitting HTC Workloads Learning Objectives This workshop will present useful HTCondor features to help researchers automatically organize their workspaces on High Throughput Computing systems. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Video Recording Wordcount Frequency Tutorial Materials Last Updated Summer 2023 [Workshop] Using Containerized Software on the Open Science Pool Learning Objectives This workshop is designed to introduce software containers such as Docker, Apptainer, and Singularity. Content covered includes how to create a container, use a container, and techniques for troubleshooting containerized software. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Video Recording Materials Last Updated Fall 2023 [Workshop] Pegasus Workflow Management System on the Open Science Pool Learning Objectives This workshop is designed to introduce Pegasus Workflow Management System, a useful tool for researchers needing to execute a large number of jobs or complex workflows. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Recorded Video Materials Last Updated Fall 2023 [Workshop] Software Portability on the Open Science Pool Learning Objectives This workshop is designed to introduce concepts pertaining to software portability, including containers, different ways to install software, setting file paths, and other important introductory concepts. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides List of Commands Tutorials (used in part) Using Julia on the OSPool High Throughput BWA Read Mapping Materials Last Updated Summer 2023 [Workshop] Access the OSPool via Jupyter Interface Learning Objectives This workshop is designed to introduce researchers to the OSPool's new Jupyter interface feature, including how to access and use Jupyter notebooks. Prerequisites/Audience There are no prerequisites for this workshop. Available Materials Presentation Slides Materials Last Updated Fall 2023 [Workshop] Bioinformatics Analyses on the OSPool: A BWA Example Learning Objectives This workshop is designed to show the process of implementing and scaling out a bioinformatics workflow using HTCondor. 
Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Materials Last Updated Summer 2023 [Workshop] HTCondor Tips & Tricks: Using condor_q and condor_history to Learn about Your Jobs Learning Objectives This workshop is designed to introduce researchers to helpful HTCondor tools for learning about their HTCondor jobs. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Materials Last Updated Spring 2023 [Workshop] Special Environments, GPUs Learning Objectives This workshop is designed for researchers interested in learning about using special environments, architectures, or resources such as GPUs. Prerequisites/Audience There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. Available Materials Presentation Slides Materials Last Updated Spring 2023","title":"Materials"},{"location":"support_and_training/training/ospool_for_education/","text":"OSPool Resources for Teaching & Education \u00b6 The OSPool provides a free, ready-to-use platform for instructors who are teaching high throughput computing concepts for academic courses, conference workshops, and other events. Instructors can choose for their students to have Guest or Full Accounts on the OSPool. For Guest Accounts, students/attendees can launch an OSPool Notebook at any time and practice job submission with smaller workflows. For Full Accounts, students/attendees will need to request an account (which will be approved within one business day), but then are able to submit large scale high throughput computing workflows free of charge. The table below outlines suggested steps for bringing OSPool resources to your training or event. Please reach out to the facilitation team at any time if you have questions or want to chat about your goals. Explore our Tools Explore HTCondor job submission with an OSPool Guest Account - To launch a guest OSPool Notebook, go to https://notebook.ospool.osg-htc.org using an internet browser. - Visit our OSPool Jupyter Notebooks guide to learn about Guest and Full Accounts Conduct Initial Testing of your Event Materials Using the Guest Account, we recommend conducting initial testing of your event materials to help inform next steps. We provide supplementary materials that you may use to help teach high throughput computing and OSPool-related concepts. Discuss your event goals with a Research Computing Facilitator (Optional) The Facilitation team is here to help discuss your event goals and provide guidance about how to best leverage existing OSG services and resources. Fill out this form and a Facilitator will contact you within one business day about scheduling a short virtual meeting. Evaluate Guest or Full Account for Attendees Option 1: Guest OSPool Accounts You are welcome to have your attendees use an OSPool Guest Account for the event. 
This is a good option for events that: - may not know registrants in advance - run less than 4 hours, or can easily recreate files that may be lost upon session time out (4 hours) - want to only use a notebook interface Option 2: Full OSPool Accounts You can request that your students have the ability to submit jobs to the OSPool using Full Accounts. This is a good option for events that: - know registrants in advance - will run for more than 1-2 days - with more than 50 participants - would like jobs to access the full capacity of the OSPool - would like to submit jobs using a notebook or classic terminal interface Prior to the Event If using full accounts, the instructor will provide a list of participants to the OSG Research Facilitation Team, and participants should request an account a few days in advance of the event (does not apply to guest accounts). It is also good practice to test your full workshop code and any software on your account of choice. Start of Event We require all events to provide a short (~5 minute) introduction to OSG Policies . Feedback After your event, please email us to let us know how it went, and the number of participants. Teaching Resources \u00b6 Here are some resources you can use for your event: Worksheets for Public Use \u00b6 Scale Out My Computing Brainstorming Worksheet Slide Presentations for Public Use \u00b6 OSG Policies and Intro for Course Use OSPool Training Slides and Recordings Video Recordings \u00b6 OSPool Training Slides and Recordings HTCondor User Tutorials Partnership to Advance Throughput Computing YouTube channel Frequently Asked Questions (FAQs) \u00b6 Why use OSPool resources for my course/event? OSPool resources provide a free, easy-to-use toolkit for you to use to teach computing concepts at your next course/event. Event attendees do not need an account, but can request continued access to use OSPool resources for their own research. The OSPool staff also offer free assistance with helping you convert an existing workflow to work on the OSPool. We provide guidance about using OSG resources, using HTCondor, and are happy to answer any questions you may have regarding our resources. If I request full accounts for my students/attendees, when will their accounts be deactivated? We work with instructors to choose a date that works well for their event, but typically accounts are deactivated several days after the event completes. If attendees are interested in continuing to use OSPool resources for their research, they can request their account remains active by emailing support@osg-htc.org. Do you have slides and video recordings of workshops that used OSPool resources to help me prepare for my event(s)? Yes! We provide hands-on tutorial materials for topics such as running common software or workflows on the OSPool (e.g., python, R, MATLAB, bioinformatic workflows), recordings of tutorials and introductory materials, presentation slides, and other materials. Some of the materials are linked under the Teaching Resources section above. When should I not use OSPool resources for my course/event? Events are typically bound by the same limitations as regular users/jobs. This means that any event needing to use licensed software or submit individual multi-core jobs or jobs running longer than 20 hours may not be a good fit for our system. Who should I contact with questions or concerns? The OSG Research Computing Facilitation Team is happy to answer any questions or concerns you may have about using OSPool resources for your event(s). 
Please direct questions to support@osg-htc.org. A Facilitator will respond within one business day.","title":"OSPool Resources for Teaching & Education"},{"location":"support_and_training/training/ospool_for_education/#ospool-resources-for-teaching-education","text":"The OSPool provides a free, ready-to-use platform for instructors who are teaching high throughput computing concepts for academic courses, conference workshops, and other events. Instructors can choose for their students to have Guest or Full Accounts on the OSPool. For Guest Accounts, students/attendees can launch an OSPool Notebook at any time and practice job submission with smaller workflows. For Full Accounts, students/attendees will need to request an account (which will be approved within one business day), but then are able to submit large scale high throughput computing workflows free of charge. The table below outlines suggested steps for bringing OSPool resources to your training or event. Please reach out to the facilitation team at any time if you have questions or want to chat about your goals. Explore our Tools Explore HTCondor job submission with a OSPool Guest Account - To launch a guest OSPool Notebook, go to https://notebook.ospool.osg-htc.org using an internet browser. - Visit our OSPool Jupyter Notebooks guide to learn about Guest and Full Accounts Conduct Initial Testing of your Event Materials Using the Guest Account, we recommend conducting initial testing of your event materials to help inform next steps. We provide supplementary materials supplementary materials that you may use to help teach high throughput computing and OSPool-related concepts. Discuss your event goals with a Research Computing Facilitator (Optional) The Facilitation team is here to help discuss your event goals and provide guidance about how to best leverage existing OSG services and resources. Fill out this form and a Facilitator will contact you within one business day about scheduling a short virtual meeting. Evaluate Guest or Full Account for Attendees Option 1: Guest OSPool Accounts You are welcome have your attendees use an OSPool Guest Acccount for the event. This is a good option for events that: - may not know registrants in advance - run less than 4 hours, or can easily recreate files that may be lost upon session time out (4 hours) - want to only use a notebook interface Option 2: Full OSPool Accounts You can request that your students have the ability to submit jobs to the OSPool using Full Accounts. This is a good option for events that: - know registrants in advance - will run for more than 1-2 days - with more than 50 participants - would like jobs to access the full capacity of the OSPool - would like to submit jobs using a notebook or classic terminal interface Prior to the Event If using full accounts, the instructor will provide a list of participants to the OSG Research Facilitation Team, and participants should request an account a few days in advance of the event (does not apply to guest accounts). It is also good practice to test your full workshop code and any software on your account of choice. Start of Event We require all events to provide a short (~5 minute) introduction to OSG Policies . 
Feedback After your event, please email us to let us know how it went, and the number of participants.","title":"OSPool Resources for Teaching & Education"},{"location":"support_and_training/training/ospool_for_education/#teaching-resources","text":"Here are some resources you can use for your event:","title":"Teaching Resources"},{"location":"support_and_training/training/ospool_for_education/#worksheets-for-public-use","text":"Scale Out My Computing Brainstorming Worksheet","title":"Worksheets for Public Use"},{"location":"support_and_training/training/ospool_for_education/#slide-presentations-for-public-use","text":"OSG Policies and Intro for Course Use OSPool Training Slides and Recordings","title":"Slide Presentations for Public Use"},{"location":"support_and_training/training/ospool_for_education/#video-recordings","text":"OSPool Training Slides and Recordings HTCondor User Tutorials Partnership to Advance Throughput Computing YouTube channel","title":"Video Recordings"},{"location":"support_and_training/training/ospool_for_education/#frequently-asked-questions-faqs","text":"Why use OSPool resources for my course/event? OSPool resources provide a free, easy-to-use toolkit for you to use to teach computing concepts at your next course/event. Event attendees do not need an account, but can request continued access to use OSPool resources for their own research. The OSPool staff also offer free assistance with helping you convert an existing workflow to work on the OSPool. We provide guidance about using OSG resources, using HTCondor, and are happy to answer any questions you may have regarding our resources. If I request full accounts for my students/attendees, when will their accounts be deactivated? We work with instructors to choose a date that works well for their event, but typically accounts are deactivated several days after the event completes. If attendees are interested in continuing to use OSPool resources for their research, they can request their account remains active by emailing support@osg-htc.org. Do you have slides and video recordings of workshops that used OSPool resources to help me prepare for my event(s)? Yes! We provide hands-on tutorial materials for topics such as running common software or workflows on the OSPool (e.g., python, R, MATLAB, bioinformatic workflows), recordings of tutorials and introductory materials, presentation slides, and other materials. Some of the materials are linked under the Teaching Resources section above. When should I not use OSPool resources for my course/event? Events are typically bound by the same limitations as regular users/jobs. This means that any event needing to use licensed software or submit individual multi-core jobs or jobs running longer than 20 hours may not be a good fit for our system. Who should I contact with questions or concerns? The OSG Research Computing Facilitation Team is happy to answer any questions or concerns you may have about using OSPool resources for your event(s). Please direct questions to support@osg-htc.org. A Facilitator will respond within one business day.","title":"Frequently Asked Questions (FAQs)"},{"location":"support_and_training/training/previous-training-events/","text":"Other Past Training Events \u00b6 Overview \u00b6 We offer on-site training and tutorials on a periodic basis, usually at conferences (including the annual OSG All Hands Meeting) where many researchers and/or research computing staff are gathered. Below are some trainings for which the materials were public. 
(Apologies if any links/materials aren't accessible anymore, as some of these are external to our own web location. Feel free to let us know via support@osg-htc.org, in case we can fix/remove them.) Workshops/Tutorials \u00b6 Empowering Research Computing at Your Organization Through the OSG (PEARC 21) Organizing and Submitting HTC Workloads (OSG User Training pilot, June 2021) Empower Research Computing at your Organization Through the OSG (RMACC 2021) dHTC Campus Workshop (February 2021) Empowering Research Computing at Your Campus Through the OSG (PEARC 20) Deploy jobs on the Open Science Grid (Gateways/eScience 2019) High Throughput Computation on the Open Science Grid (Internet2 2018 Technology Exchange) Open Science Grid Workshop (The Quilt 2018) High Throughput Computation on the Open Science Grid (RMACC 18) Tutorials at Recent OSG All-Hands Meetings \u00b6 The below were offered on-site at OSG All-Hands Meetings. Note that the last on-site AHM in 2020 was canceled due to the pandemic, though we've linked to the materials. User/Facilitator Training at the OSG All Hands Meeting, University of Oklahoma (OU), March 2020 User Training at the OSG All Hands Meeting, Thomas Jefferson National Accelerator Facility (JLAB), March 2019 User Training at the OSG All Hands Meeting, University of Utah, March 2018","title":"Other Past Training Events "},{"location":"support_and_training/training/previous-training-events/#other-past-training-events","text":"","title":"Other Past Training Events"},{"location":"support_and_training/training/previous-training-events/#overview","text":"We offer on-site training and tutorials on a periodic basis, usually at conferences (including the annual OSG All Hands Meeting) where many researchers and/or research computing staff are gathered. Below are some trainings for which the materials were public. (Apologies if any links/materials aren't accessible anymore, as some of these are external to our own web location. Feel free to let us know via support@osg-htc.org, in case we can fix/remove them.)","title":"Overview"},{"location":"support_and_training/training/previous-training-events/#workshopstutorials","text":"Empowering Research Computing at Your Organization Through the OSG (PEARC 21) Organizing and Submitting HTC Workloads (OSG User Training pilot, June 2021) Empower Research Computing at your Organization Through the OSG (RMACC 2021) dHTC Campus Workshop (February 2021) Empowering Research Computing at Your Campus Through the OSG (PEARC 20) Deploy jobs on the Open Science Grid (Gateways/eScience 2019) High Throughput Computation on the Open Science Grid (Internet2 2018 Technology Exchange) Open Science Grid Workshop (The Quilt 2018) High Throughput Computation on the Open Science Grid (RMACC 18)","title":"Workshops/Tutorials"},{"location":"support_and_training/training/previous-training-events/#tutorials-at-recent-osg-all-hands-meetings","text":"The below were offered on-site at OSG All-Hands Meetings. Note that the last on-site AHM in 2020 was canceled due to the pandemic, though we've linked to the materials. 
User/Facilitator Training at the OSG All Hands Meeting, University of Oklahoma (OU), March 2020 User Training at the OSG All Hands Meeting, Thomas Jefferson National Accelerator Facility (JLAB), March 2019 User Training at the OSG All Hands Meeting, University of Utah, March 2018","title":"Tutorials at Recent OSG All-Hands Meetings"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..dca0d7d6 --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,403 @@ + + + + https://portal.osg-htc.org/documentation/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/hpc_administration/test-document/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/hpc_administration/administrators/osg-flock/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/automated_workflows/dagman-simple-example/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/automated_workflows/dagman-workflows/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/automated_workflows/tutorial-dagman-intermediate/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/automated_workflows/tutorial-pegasus/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/managing_data/file-transfer-via-htcondor/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/managing_data/file-transfer-via-http/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/managing_data/osdf/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/managing_data/overview/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/managing_data/scp/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/specific_resource/arm64/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/specific_resource/el9-transition/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/specific_resource/gpu-jobs/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/specific_resource/large-memory-jobs/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/specific_resource/multicore-jobs/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/specific_resource/openmpi-jobs/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/specific_resource/requirements/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/Slurm_to_HTCondor/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/checkpointing-on-OSPool/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/jupyter/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/monitor_review_jobs/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/submit-multiple-jobs/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/tutorial-command/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/tutorial-error101/ + 2024-12-16 + daily + + + 
https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/tutorial-organizing/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/tutorial-osg-locations/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/submitting_workloads/tutorial-quickstart/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/using_software/available-containers-list/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/using_software/compiling-applications/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/using_software/containers-docker/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/using_software/containers-singularity/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/using_software/example-compilation/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/using_software/software-overview/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/using_software/software-request/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/workload_planning/htcondor_job_submission/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/workload_planning/jobdurationcategory/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/workload_planning/preparing-to-scale-up/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/htc_workloads/workload_planning/roadmap/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/account_setup/ap20-ap21-migration/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/account_setup/ap7-access/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/account_setup/comanage-access/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/account_setup/connect-access/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/account_setup/generate-add-sshkey/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/account_setup/is-it-for-you/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/account_setup/registration-and-login/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/account_setup/starting-project/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/references/acknowledgeOSG/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/references/contact-information/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/references/frequently-asked-questions/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/references/gracc/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/overview/references/policy/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/ai/scikit-learn/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/ai/tensorflow/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/ai/tutorial-pytorch/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/bioinformatics/tutorial-blast-split/ + 2024-12-16 + daily + + + 
https://portal.osg-htc.org/documentation/software_examples/bioinformatics/tutorial-bwa/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/bioinformatics/tutorial-fastqc/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/drug_discovery/tutorial-AutoDockVina/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/freesurfer/Introduction/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/machine_learning/tutorial-tensorflow-containers/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/matlab_runtime/tutorial-matlab-HelloWorld/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/other_languages_tools/conda-container/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/other_languages_tools/conda-tarball/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/other_languages_tools/java-on-osg/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/other_languages_tools/julia-on-osg/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/python/manage-python-packages/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/python/tutorial-ScalingUp-Python/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/python/tutorial-wordfreq/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/r/tutorial-R/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/r/tutorial-R-addlibSNA/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/r/tutorial-ScalingUp-R/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/software_examples/r/tutorial-spills-R/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/support_and_training/support/getting-help-from-RCFs/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/support_and_training/training/osg-user-school/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/support_and_training/training/osgusertraining/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/support_and_training/training/ospool_for_education/ + 2024-12-16 + daily + + + https://portal.osg-htc.org/documentation/support_and_training/training/previous-training-events/ + 2024-12-16 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..ffbfa88e Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/software_examples/ai/scikit-learn/index.html b/software_examples/ai/scikit-learn/index.html new file mode 100644 index 00000000..bcded4df --- /dev/null +++ b/software_examples/ai/scikit-learn/index.html @@ -0,0 +1,2488 @@ + + + + + + + + + + + + + + + + + + scikit-learn - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    scikit-learn

    +

    scikit-learn is a machine learning toolkit for Python.

    +

Below you will find an example of how to use an OSG-provided software container that contains scikit-learn. However, it is good to keep in mind that you have two options when it comes to integrating your own code:

    +
      +
1. If the code is simple, send it with the job (this is what the example uses)
2. For more complex code, consider extending the provided containers and integrating the code into the new custom container (see the sketch after the documentation links below)
    +

    Containers are detailed in our general documentation:

    + +
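If you go the custom-container route (option 2 above), the submit file shown further down only needs its container line changed to point at your own image. A minimal sketch, assuming you have built an Apptainer image named my-scikit-learn.sif (a hypothetical file name, not something provided by OSG):

universe = container
container_image = my-scikit-learn.sif

Everything else in the submit file (executable, resource requests, queue statement) can stay the same. For larger images, the container guides linked above describe staging the image elsewhere (for example in the OSDF) rather than transferring it from your home directory with each job.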

    Scikit-learn Python Code

    +

    An example scikit-learn machine learning executable is:

    +
    #!/usr/bin/env python3
    +
    +# example adopted from https://scikit-learn.org/stable/tutorial/basic/tutorial.html
    +
    +from sklearn import datasets
    +from sklearn import svm
    +
    +iris = datasets.load_iris()
    +digits = datasets.load_digits()
    +
    +# learning
    +clf = svm.SVC(gamma=0.001, C=100.)
    +clf.fit(digits.data[:-1], digits.target[:-1])
    +
    +# predicting
    +print(clf.predict(digits.data[-1:]))
    +
    +

    Submit File

    +
    universe = container
    +container_image = /cvmfs/singularity.opensciencegrid.org/htc/scikit-learn:1.3
    +
    +log = job_$(Cluster)_$(Process).log
    +error = job_$(Cluster)_$(Process).err
    +output = job_$(Cluster)_$(Process).out
    +
    +executable = run-scikit-learn.py
    +#arguments =
    +
    +# specify both general requirements and gpu requirements if there are any
    +# requirements = True
    +# require_gpus =
    +
    ++JobDurationCategory = "Medium"
    +
    +request_gpus = 0
    +request_cpus = 1
    +request_memory = 4GB
    +request_disk = 4GB
    +
    +queue 1
    +
    + + +
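The queue 1 statement at the end runs a single instance of the script. If you later want several independent runs, for example sweeping over an input parameter, only the bottom of the submit file needs to change; a minimal sketch, assuming run-scikit-learn.py is modified to read one command-line argument (an assumption, not part of the example above):

arguments = $(Process)
queue 10

This would queue ten jobs with $(Process) taking the values 0 through 9, and the log, error, and output names above already keep the files separate per job via $(Cluster) and $(Process).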
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/ai/tensorflow/index.html b/software_examples/ai/tensorflow/index.html new file mode 100644 index 00000000..0aabd9da --- /dev/null +++ b/software_examples/ai/tensorflow/index.html @@ -0,0 +1,2541 @@ + + + + + + + + + + + + + + + + + + TensorFlow - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    The OSPool enables AI (Artificial Intelligence) workloads by providing + access to GPUs and custom software stacks via containers. An example of this support is the machine learning platform TensorFlow.

    +

    TensorFlow

    +

https://www.tensorflow.org/ describes TensorFlow as:

    +
    +

    TensorFlow is an open source software library for numerical +computation using data flow graphs. Nodes in the graph represent +mathematical operations, while the graph edges represent the +multidimensional data arrays (tensors) communicated between them. The +flexible architecture allows you to deploy computation to one or more +CPUs or GPUs in a desktop, server, or mobile device with a single +API. TensorFlow was originally developed by researchers and engineers +working on the Google Brain Team within Google's Machine Intelligence +research organization for the purposes of conducting machine learning +and deep neural networks research, but the system is general enough to +be applicable in a wide variety of other domains as well.

    +
    +

TensorFlow can be complicated software to install, as it requires many dependencies and specific environment configurations. Software containers solve this problem by defining a full operating system image, containing not only the complex software package, but its dependencies and environment configuration as well. Working with GPUs and containers is detailed in the general documentation:

    + +

    TensorFlow Python Code

    +

    An example TensorFlow executable that builds a machine learning model and evaluates it is:

    +
    #!/usr/bin/env python3
    +
    +# example adopted from https://www.tensorflow.org/tutorials/quickstart/beginner
    +
    +import tensorflow as tf
    +print("TensorFlow version:", tf.__version__)
    +
    +# this will show that the GPU was found
    +tf.debugging.set_log_device_placement(True)
    +
    +# load a dataset
    +mnist = tf.keras.datasets.mnist
    +
    +(x_train, y_train), (x_test, y_test) = mnist.load_data()
    +x_train, x_test = x_train / 255.0, x_test / 255.0
    +
    +# build a machine learning model
    +model = tf.keras.models.Sequential([
    +  tf.keras.layers.Flatten(input_shape=(28, 28)),
    +  tf.keras.layers.Dense(128, activation='relu'),
    +  tf.keras.layers.Dropout(0.2),
    +  tf.keras.layers.Dense(10)
    +])
    +
    +predictions = model(x_train[:1]).numpy()
    +
    +# convert to probabilities
    +tf.nn.softmax(predictions).numpy()
    +
    +# loss function
    +loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    +loss_fn(y_train[:1], predictions).numpy()
    +
    +# compile model
    +model.compile(optimizer='adam',
    +              loss=loss_fn,
    +              metrics=['accuracy'])
    +
    +# train
    +model.fit(x_train, y_train, epochs=5)
    +
    +# evaluate
    +model.evaluate(x_test,  y_test, verbose=2)
    +
    +

    HTCondor Submit File

    +

    To run this TensorFlow script, create an HTCondor submit file to tell HTCondor how you would like it run on your behalf. An example HTCondor submit file for this job is below. Because TensorFlow is optimized to run with GPUs, make sure to tell HTCondor to assign your job to a GPU machine:

    +
    universe = container
    +container_image = /cvmfs/singularity.opensciencegrid.org/htc/tensorflow:2.15
    +
    +log = job_$(Cluster)_$(Process).log
    +error = job_$(Cluster)_$(Process).err
    +output = job_$(Cluster)_$(Process).out
    +
    +executable = run-tf.py
    +#arguments =
    +
    ++JobDurationCategory = "Medium"
    +
    +# specify both general requirements and gpu requirements if needed
    +# requirements = True
    +require_gpus = (Capability > 7.5)
    +
    +request_gpus = 1
    +request_cpus = 1
    +request_memory = 4GB
    +request_disk = 4GB
    +
    +queue 1
    +
    +

    Run TensorFlow

    +

Since we have prepared our executable and submit file, and are using an OSG-provided TensorFlow container, we are ready to submit this job to run on one of the OSPool GPU machines.

    +

    To submit this job to run, type condor_submit TensorFlow.submit. The status of your job can be checked at any time by running condor_q.
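For reference, those two commands from the paragraph above are:

condor_submit TensorFlow.submit
condor_q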

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/ai/tutorial-pytorch/index.html b/software_examples/ai/tutorial-pytorch/index.html new file mode 100644 index 00000000..d6d0b64c --- /dev/null +++ b/software_examples/ai/tutorial-pytorch/index.html @@ -0,0 +1,2637 @@ + + + + + + + + + + + + + + + + + + PyTorch - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    + +
    + + +
    +
    + + + + +

    PyTorch

    + +

    The OSPool can be used as a platform to carry out machine learning and artificial intelligence research. +The following tutorial uses the common machine learning framework, PyTorch.

    +

    Using PyTorch on OSPool

    +

The preferred method of using software on the OSPool is to use a container. This guide shows two ways of running PyTorch on the OSPool: first, pulling the desired version of the PyTorch image from DockerHub; second, using an already-built Singularity/Apptainer container of PyTorch to submit an HTCondor job on the OSPool.

    +

    Pulling an Image from Docker

    +

Please note that docker build will not work on the access point. Apptainer is installed on the access point, and users can use Apptainer to either build an image from a definition file or use apptainer pull to create a .sif file from Docker images. At the time this guide was written, the latest version of PyTorch was 2.1.1. Before pulling the image/software from Docker, it is good practice to set up the Apptainer cache directory. Run the following commands at the command prompt:

    +
    [user@ap]$ mkdir $HOME/tmp
    +[user@ap]$ export TMPDIR=$HOME/tmp
    +[user@ap]$ export APPTAINER_TMPDIR=$HOME/tmp
    +[user@ap]$ export APPTAINER_CACHEDIR=$HOME/tmp
    +
    +

    Now, we pull the image and convert it to a .sif file using apptainer pull

    +
    [user@ap]$ apptainer pull  pytorch-2.1.1.sif docker://pytorch/pytorch:2.1.1-cuda12.1-cudnn8-runtime
    +
    +

    Transfer the image using OSDF

    +

The above command will create a Singularity container named pytorch-2.1.1.sif in your current directory. The image will be reused for each job, and thus the preferred transfer method is OSDF. Store the pytorch-2.1.1.sif file under the "protected" area on your access point (see table here), and then use the OSDF URL directly in the +SingularityImage attribute. Note that you cannot use shell variable expansion in the submit file - be sure to replace the username with your actual OSPool username.

    +
    +SingularityImage = "osdf:///ospool/PROTECTED/<USERNAME>/pytorch-2.1.1.sif" 
    +<other usual submit file lines> 
    +queue
    +
    +
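As a rough sketch, copying the image into the protected area before submitting might look like the following (the destination path is illustrative only; the actual filesystem location of the protected area depends on your access point, so check the table linked above and substitute your own username):

# copy the image into your protected OSDF area (path is an assumed placeholder)
cp pytorch-2.1.1.sif /path/to/protected/area/<USERNAME>/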

    Using an existing PyTorch container

    +

OSG provides a pre-built pytorch-2.1.1.sif container image. To use the OSG-built container, just provide the address of the container, '/ospool/uc-shared/public/OSG-Staff/pytorch-2.1.1.sif', in your submit file:

    +
    +SingularityImage = "osdf:///ospool/uc-shared/public/OSG-Staff/pytorch-2.1.1.sif" 
    +<other usual submit file lines> 
    +queue
    +
    +

    Running an ML job using PyTorch

    +

    For this tutorial, we will see how to use PyTorch to run a machine learning workflow from the MNIST database. To download the materials for this tutorial, use the command

    +
    git clone https://github.com/OSGConnect/tutorial-pytorch
    +
    +

The GitHub repository contains a tarball of the MNIST data (MNIST_data.tar.gz) and a wrapper script (pytorch_cnn.sh) that untars the data and runs the Python script main.py to train a neural network on the MNIST database. The content of the pytorch_cnn.sh wrapper script is given below:

    +
    #!/bin/bash
    +echo "Hello OSPool from Job $1 running on `hostname`"
    +
    +# untar the test and training data
    +tar zxf MNIST_data.tar.gz
    +
    +# run the PyTorch model
    +python main.py --save-model --epochs 20
    +
    +# remove the data directory
    +rm -r data
    +
    +

A submit file, pytorch_cnn.sub, is also included to submit the PyTorch job on the OSPool using the container provided by OSG. The contents of the pytorch_cnn.sub file are:

    +
    # PyTorch test of convolutional neural network
    +# Submit file 
    ++SingularityImage = "osdf:///ospool/uc-shared/public/OSG-Staff/pytorch-2.1.1.sif" 
    +
    +# set the log, error and output files 
    +log = logs/pytorch_cnn.log.txt
    +error = logs/pytorch_cnn.err.txt
    +output = output/pytorch_cnn.out.txt
    +
    +# set the executable to run
    +executable = pytorch_cnn.sh
    +arguments = $(Process)
    +
    +# Transfer the python script and the MNIST database to the compute node
    +transfer_input_files = main.py, MNIST_data.tar.gz
    +
    +should_transfer_files = YES
    +when_to_transfer_output = ON_EXIT
    +
    +# We require a machine with a compatible version of the CUDA driver
    +require_gpus = (DriverVersion >= 10.1)
    +
    +# We must request 1 CPU in addition to 1 GPU
    +request_cpus = 1
    +request_gpus = 1
    +
    +# select some memory and disk space
    +request_memory = 3GB
    +request_disk = 5GB
    +
    +# Tell HTCondor to run 1 instance of our job:
    +queue 1
    +
    +

Please note that if you want to use your own container, you should replace the +SingularityImage attribute accordingly.

    +

    Create Log Directories and Submit Job

    +

    You will need to create the logs and output directories to hold the files that will be created for each job. You can create both directories at once with the command

    +
    mkdir logs output
    +
    +

    Submit the job using

    +
    condor_submit pytorch_cnn.sub
    +
    +

    Output

    +

The output of the code will be the trained CNN model. It will be returned to us as the file mnist_cnn.pt. There are also some output statistics on the training and test error in the pytorch_cnn.out.txt file:

    +
    Test set: Average loss: 0.0278, Accuracy: 9909/10000 (99%)
    +
    +
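Once the job completes, you can quickly confirm that the trained model and the training statistics came back, for example:

# confirm the trained model was transferred back
ls -lh mnist_cnn.pt

# view the final training/test statistics
tail output/pytorch_cnn.out.txt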

    Getting help

    +

    For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the help desk and community forums.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/bioinformatics/tutorial-blast-split/index.html b/software_examples/bioinformatics/tutorial-blast-split/index.html new file mode 100644 index 00000000..639167bb --- /dev/null +++ b/software_examples/bioinformatics/tutorial-blast-split/index.html @@ -0,0 +1,2607 @@ + + + + + + + + + + + + + + + + + + High-Throughput BLAST - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    + +
    + + +
    +
    + + + + +

    High-Throughput BLAST

    +

    This tutorial will put together several OSG tools and ideas - handling a larger +data file, splitting a large file into smaller pieces, and transferring a portable +software program.

    +

    Job components and plan

    +

To run BLAST, we need three things:

1. the BLAST program (specifically the blastx binary)
2. a reference database (this is usually a larger file)
3. the file we want to query against the database

    +

    The database and the input file will each get special treatment. The database we are using +is large enough that we will want to use OSG Connect's stashcache capability (more information +about that here). The input +file is large enough that a) it is near the upper limit of what is practical to transfer, +b) it would take hours to complete a single blastx +analysis for it, and c) the resulting output file would be huge.

    +

    Because the BLAST process is +run over the input file line by line, it is scientifically valid to split up the input query file, analyze the pieces, and then put the results back together at the end! By splitting the input query file into smaller pieces, each of the queries can be run as separate jobs. On the other hand, BLAST databases should not be split, because the blast output includes a score value for each sequence that is calculated relative to the entire length of the database.

    +

    Get materials and set up files

    +

    Run the tutorial command:

    +
    tutorial blast-split
    +
    +

    Once the tutorial has downloaded, move into the folder and run the download_files.sh script to download the remaining files:

    +
    cd tutorial-blast-split
    +./download_files.sh
    +
    +

Running this script downloads and unzips the BLAST program (ncbi-blast-2.9.0+), the file we want to query (mouse_rna.fa), and a set of tools that will split the file into smaller pieces (gt-1.5.10-Linux_x86_64-64bit-complete).

    +

    Next, we will use the command gt from the genome tools package to split our input query file into 2 MB chunks as indicated by the -targetsize flag. To split the file, run this command:

    +
    ./gt-1.5.10-Linux_x86_64-64bit-complete/bin/gt splitfasta -targetsize 2 mouse_rna.fa
    +
    +

    Later, we'll need a list of the split files, so run this command to generate that list:

    +
    ls mouse_rna.fa.* > list.txt
    +
    +

    Examine the submit file

    +

    The submit file, blast.submit looks like this:

    +
    executable = run_blast.sh
    +arguments = $(inputfile)
    +transfer_input_files = ncbi-blast-2.9.0+/bin/blastx, $(inputfile), stash:///osgconnect/public/osg/BlastTutorial/pdbaa.tar.gz
    +
    +output = logs/job_$(process).out
    +error = logs/job_$(process).err
    +log = logs/job_$(process).log
    +
    +requirements = OSGVO_OS_STRING == "RHEL 7" && Arch == "X86_64"
    +
    +request_memory = 2GB
    +request_disk = 1GB
    +request_cpus = 1
    +
    +queue inputfile from list.txt
    +
    +

    The executable run_blast.sh is a script that runs blast and takes in a file to +query as its argument. We'll look at this script in more detail in a minute.

    +

    Our job will need to transfer the blastx executable and the input file being used for +queries, shown in the transfer_input_files line. Because of the size of our database, +we'll be using stash:/// to transfer the database to our job.

    +
    +

    Note on stash:///: In this job, we're copying the file from a particular +/public folder (osg/BlastTutorialV1), but you have your own /public folder that you +could use for the database. If you wanted to try this, you would want to navigate to your /public folder, download the +pdbaa.tar.gz file, return to your /home folder, and change the path in the stash:/// +command above. This might look like:

    +

cd /public/username
wget http://stash.osgconnect.net/public/osg/BlastTutorialV1/pdbaa.tar.gz
cd /home/username

    +
    +

    Finally, you may have already noticed that instead of listing the individual input file +by name, we've used the following syntax: $(inputfile). This is a variable that represents +the name of an individual input file. We've done this so that we can set the variable as +a different file name for each job.

    +

    We can set the variable by using the queue syntax shown at the bottom of the file:

    +
    queue inputfile from list.txt
    +
    +

This statement pulls file names from the list.txt file that we created earlier, submitting one job per file and setting the "inputfile" variable to that file name.
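You can peek at the list to see the entries the queue statement will iterate over (the exact names depend on how gt splitfasta numbered the pieces, but they will match the mouse_rna.fa.* pattern):

# show the first few entries of the job list
head -n 3 list.txt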

    +

    Examine the wrapper script

    +

    The submit file had a script called run_blast.sh:

    +
    #!/bin/bash
    +
    +# get input file from arguments
    +inputfile=$1
    +
    +# Prepare our database and unzip into new dir
    +tar -xzvf pdbaa.tar.gz
    +rm pdbaa.tar.gz
    +
    +# run blast query on input file
    +./blastx -db pdbaa/pdbaa -query $inputfile -out $inputfile.result
    +
    +

    It saves the name of the input file, unpacks our database, and then +runs the BLAST query from the input file we transferred and used as the argument.

    +

    Submit the jobs

    +

    Our jobs should be set and ready to go. To submit them, run this command:

    +
    condor_submit blast.submit
    +
    +

    And you should see that 51 jobs have been submitted:

    +
    Submitting job(s)................................................
    +51 job(s) submitted to cluster 90363.
    +
    +

    You can check on your jobs' progress using condor_q

    +

    Bonus: a BLAST workflow

    +

    We had to go through multiple steps to run the jobs above. There was an initial +step to split the files and generate a list of them; then we submitted the jobs. These +two steps can be tied together in a workflow using the HTCondor DAGMan workflow tool.

    +

    First, we would create a script (split_files.sh) that does the file splitting steps:

    +
    #!/bin/bash
    +
    +filesize=$1
    +./gt-1.5.10-Linux_x86_64-64bit-complete/bin/gt splitfasta -targetsize $filesize mouse_rna.fa
    +ls mouse_rna.fa.* > list.txt
    +
    +

    This script will need executable permissions:

    +
    chmod +x split_files.sh
    +
    +

    Then, we create a DAG workflow file that ties the two steps together:

    +
    ## DAG: blastrun.dag
    +JOB blast blast.submit
    +SCRIPT PRE blast split_files.sh 2
    +
    +

    To submit this DAG, we use this command:

    +
    condor_submit_dag blastrun.dag
    +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/bioinformatics/tutorial-bwa/index.html b/software_examples/bioinformatics/tutorial-bwa/index.html new file mode 100644 index 00000000..b5c48c13 --- /dev/null +++ b/software_examples/bioinformatics/tutorial-bwa/index.html @@ -0,0 +1,2656 @@ + + + + + + + + + + + + + + + + + + High-Throughput BWA Read Mapping - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    + +
    +
    + + +
    +
    + + + + +

    High-Throughput BWA Read Mapping

    +

This tutorial focuses on a subset of the Data Carpentry Genomics workshop curriculum - specifically, this page covers how to run a BWA workflow on OSG resources. It will use the same general flow as the BWA segment of the Data Carpentry workshop with minor adjustments. The goal of this tutorial is to learn how to convert an existing BWA workflow to run on the OSPool.

    +

    Get Tutorial Files

    +

Logged into the submit node, we will run the tutorial command, which will create a folder for our analysis as well as some sample files.

    +
    tutorial bwa
    +
    +

    Install and Prepare BWA

    +

First, we need to install BWA, also called the Burrows-Wheeler Aligner. To do this, we will navigate to the software folder inside our tutorial directory. We will then follow the developer's instructions (https://github.com/lh3/bwa) for using git clone to clone the software and then build the tool using make.

    +
    cd ~/tutorial-bwa
    +cd software
    +git clone https://github.com/lh3/bwa.git
    +cd bwa
    +make
    +
    +

Next, BWA needs to be added to our PATH variable so that we can test whether the installation worked:

    +
    export PATH=$PATH:/home/$USER/tutorial-bwa/software/bwa/
    +
    +

    To check that BWA has been installed correctly, type bwa. You should receive output similar to the following:

    +
    Program: bwa (alignment via Burrows-Wheeler transformation)
    +Version: 0.7.17-r1198-dirty
    +Contact: Heng Li <hli@ds.dfci.harvard.edu>
    +
    +Usage:   bwa <command> [options]
    +
    +Command: index         index sequences in the FASTA format
    +         mem           BWA-MEM algorithm
    +         fastmap       identify super-maximal exact matches
    +...
    +
    +
    +

    Now that we have successfully installed bwa, we will create a portable compressed tarball of this software so that it is smaller and quicker to transport when we submit our jobs to the OSPool.

    +
    cd ~/tutorial-bwa/software
    +tar -czvf bwa.tar.gz bwa
    +
    +

    Checking the size of this compressed tarball using ls -lh bwa.tar.gz reveals the file is approximately 4MB. The tarball should stay in /home.
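For reference, the size check mentioned above is:

ls -lh bwa.tar.gz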

    +

    Download Data to Analyze

    +

    Now that we have installed BWA, we need to download data to analyze. For this tutorial, we will be downloading data used in the Data Carpentry workshop. This data includes both the genome of Escherichia coli (E. coli) and paired-end RNA sequencing reads obtained from a study carried out by Blount et al. published in PNAS. Additional information about how the data was modified in preparation for this analysis can be found on the Data Carpentry's workshop website.

    +
    cd ~/tutorial-bwa
    +./download_data.sh
    +
    +

    Investigating the size of the downloaded genome by typing:

    +
    ls -lh data/ref_genome/
    +
    +

    reveals the file is 1.4 MB. Therefore, this file should remain in /home and does not need to be moved to /public. We should also check the trimmed fastq paired-end read files:

    +
    ls -lh data/trimmed_fastq_small
    +
    +

    Once everything is downloaded, make sure you're still in the tutorial-bwa directory.

    +
    cd ~/tutorial-bwa
    +
    +

    Run a Single Test Job

    +

    Now that we have all items in our analysis ready, it is time to submit a single test job to map our RNA reads to the E. coli genome. For a single test job, we will choose a single sample to analyze. In the following example, we will align both the forward and reverse reads of SRR2584863 to the E. coli genome. Using a text editor such as nano or vim, we can create an example submit file for this test job called bwa-test.sub containing the following information:

    +
    universe    = vanilla
    +executable  = bwa-test.sh
    +# arguments = 
    +
    +# need to transfer bwa.tar.gz file, the reference
    +# genome, and the trimmed fastq files
    +transfer_input_files = software/bwa.tar.gz, data/ref_genome/ecoli_rel606.fasta.gz, data/trimmed_fastq_small/SRR2584863_1.trim.sub.fastq, data/trimmed_fastq_small/SRR2584863_2.trim.sub.fastq
    +should_transfer_files = YES
    +when_to_transfer_output = ON_EXIT
    +
    +log         = logs/bwa_test_job.log
    +output      = logs/bwa_test_job.out
    +error       = logs/bwa_test_job.error
    +
    ++JobDurationCategory = "Medium"
    +request_cpus    = 1
    +request_memory  = 2GB
    +request_disk    = 1GB
    +
    +requirements = (OSGVO_OS_STRING == "RHEL 7")
    +
    +queue 1
    +
    +

    You will notice that the .log, .out, and .error files will be saved to a folder called logs. We need to create this folder using mkdir logs before we submit our job.
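Create the directory before submitting:

mkdir logs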

    +

    We will call the script for this analysis bwa-test.sh and it should contain the following information:

    +
    #!/bin/bash
    +# Script name: bwa-test.sh
    +
    +echo "Unpacking software"
    +tar -xzf bwa.tar.gz
    +
    +echo "Setting PATH for bwa" 
    +export PATH=$_CONDOR_SCRATCH_DIR/bwa/:$PATH
    +
    +echo "Indexing E. coli genome"
    +bwa index ecoli_rel606.fasta.gz
    +
    +echo "Starting bwa alignment for SRR2584863"
    +bwa mem ecoli_rel606.fasta.gz SRR2584863_1.trim.sub.fastq SRR2584863_2.trim.sub.fastq > SRR2584863.aligned.sam
    +
    +echo "Done with bwa alignment for SRR2584863!"
    +
    +echo "Cleaning up files generated from genome indexing"
    +rm ecoli_rel606.fasta.gz.amb
    +rm ecoli_rel606.fasta.gz.ann
    +rm ecoli_rel606.fasta.gz.bwt
    +rm ecoli_rel606.fasta.gz.pac
    +rm ecoli_rel606.fasta.gz.sa
    +
    +

    We can submit this single test job to HTCondor by typing:

    +
    condor_submit bwa-test.sub
    +
    +

    To check the status of the job, we can use condor_q.

    +

    Upon the completion of the test job, we should investigate the output to ensure that it is what we expected and also review the .log file to help optimize future resource requests in preparation for scaling up.

    +

    For example, when we investigate the bwa_test_job.log file created in this analysis, at the bottom of the file we see a resource table:

    +
           Partitionable Resources :    Usage  Request Allocated 
    +           Cpus                 :                 1         1 
    +           Disk (KB)            :   253770  1048576  27945123 
    +           Memory (MB)          :      144     2048      2500
    +
    +

    Here we see that we used less than half of both the disk space and memory we requested. In future jobs, we should request a smaller amount of each resource, such as 0.5 GB of disk space and 0.5 GB of memory. Prior to scaling up our analysis, we should run additional test jobs using these resource requests to ensure that they are sufficient to allow our job to complete successfully.

    +

    Scaling Up to Analyze Multiple Samples

    +

    In preparation for scaling up, please review our guide on how to scale up after a successful test job and how to +easily submit multiple jobs with a single submit file.

    +

    After reviewing how to submit multiple jobs with a single submit file, it is possible to determine that the most appropriate way to submit multiple jobs for this analysis is to use queue <var> from <list.txt>.

    +

    To use this option, we first need to create a file with just the sample names/IDs that we want to analyze. To do this, we want to cut all information after the "_" symbol to remove the forward/reverse read information and file extensions. For example, we want SRR2584863_1.trim.sub.fastq to become just SRR2584863.

    +

    We will save the sample names in a file called samples.txt:

    +
    cd ~/tutorial-bwa
    +cd data/trimmed_fastq_small/
    +ls *.fastq | cut -f 1 -d '_' | uniq > samples.txt
    +cd ~/tutorial-bwa
    +
    +

    Now, we can create a new submit file called bwa-alignment.sub to queue a new job for each sample. To make it simpler to start, you can copy the bwa-test.sub file (cp bwa-test.sub bwa-alignment.sub) and modify it.

    +
    universe    = vanilla
    +executable  = bwa-alignment.sh
    +arguments   = $(sample)
    +
    +transfer_input_files = software/bwa.tar.gz, data/ref_genome/ecoli_rel606.fasta.gz, data/trimmed_fastq_small/$(sample)_1.trim.sub.fastq, data/trimmed_fastq_small/$(sample)_2.trim.sub.fastq
    +transfer_output_remaps = "$(sample).aligned.sam=results/$(sample).aligned.sam"
    +should_transfer_files = YES
    +when_to_transfer_output = ON_EXIT
    +
    +log         = logs/bwa_$(sample)_job.log
    +output      = logs/bwa_$(sample)_job.out
    +error       = logs/bwa_$(sample)_job.error
    +
    ++JobDurationCategory = "Medium"
    +request_cpus    = 1 
    +request_memory  = 0.5GB
    +request_disk    = 0.5GB
    +
    +requirements = (OSGVO_OS_STRING == "RHEL 7")
    +
    +queue sample from data/trimmed_fastq_small/samples.txt
    +
    +

We will need to create an additional folder called results to store our aligned sequencing files:

    +
    mkdir results
    +
    +

To store the aligned sequencing files in the results folder, we can add the transfer_output_remaps feature to our submit file. This feature allows us to specify a name and a path to save our output files in the format of "file1 = path/to/save/file2", where file1 is the original name of the file and file2 is the name that we want to save the file as. In the example above, we do not change the name of the resulting output files. This feature also helps us keep an organized working space, rather than having all of our resulting sequencing files saved to our /home directory.

    +

Once our submit file has been updated, we can update our script to look like the following and call it something like bwa-alignment.sh:

    +
    #!/bin/bash
    +# Script name: bwa-alignment.sh
    +
    +echo "Unpackage software"
    +tar -xzf bwa.tar.gz
    +
    +echo "Set PATH for bwa" 
    +export PATH=$_CONDOR_SCRATCH_DIR/bwa/:$PATH
    +
    +# Renaming first argument
    +SAMPLE=$1
    +
    +echo "Index E.coli genome"
    +bwa index ecoli_rel606.fasta.gz
    +
    +echo "Starting bwa alignment for ${SAMPLE}"
    +bwa mem ecoli_rel606.fasta.gz ${SAMPLE}_1.trim.sub.fastq ${SAMPLE}_2.trim.sub.fastq > ${SAMPLE}.aligned.sam
    +
    +echo "Done with bwa alignment for ${SAMPLE}!"
    +
    +echo "Cleaning up workspace"
    +rm ecoli_rel606.fasta.gz.amb
    +rm ecoli_rel606.fasta.gz.ann
    +rm ecoli_rel606.fasta.gz.bwt
    +rm ecoli_rel606.fasta.gz.pac
    +rm ecoli_rel606.fasta.gz.sa
    +
    +

    Once ready, we can submit our job to HTCondor by using condor_submit bwa-alignment.sub.

    +

    When we type condor_q, we see that three jobs have entered the queue (one for each of our three experimental samples).

    +

    When our jobs are completed, we can confirm that our alignment output results files were created by typing:

    +
    ls -lh results/*
    +
    +

    We can also investigate our log, error, and output files in the logs folder to ensure we obtained the resulting output of these files that we expected.

    +

For more information about running bioinformatics workflows on the OSG, we recommend our BLAST tutorial as well as our Samtools installation guide.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/bioinformatics/tutorial-fastqc/index.html b/software_examples/bioinformatics/tutorial-fastqc/index.html new file mode 100644 index 00000000..4d3a96ac --- /dev/null +++ b/software_examples/bioinformatics/tutorial-fastqc/index.html @@ -0,0 +1,2674 @@ + + + + + + + + + + + + + + + + + + FastQC Quality Control - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Bioinformatics Tutorial: Quality Assessment of Data with FastQC

    +

The first step of most bioinformatic analyses is to assess the quality of the data you have received. In this example, we are working with real DNA sequencing data from a research project studying E. coli. We will use a common software package, FastQC, to assess the quality of the data.

    +

    Before we start, let us download the materials for this tutorial if we don't already have them:

    +
    git clone https://github.com/OSGConnect/tutorial-fastqc
    +
    +

    Then let's navigate inside the tutorial-fastqc directory:

    +
    cd ~/tutorial-fastqc
    +
    +

    We can confirm our location by printing our working directory using pwd:

    +
    pwd
    +
    +

    We should see /home/<username>/tutorial-fastqc.

    +

    Step 1: Download data

    +

First, we need to download the sequencing data that we want to analyze for our research project. For this tutorial, we will be downloading data used in the Data Carpentry workshop. This data includes both the genome of Escherichia coli (E. coli) and paired-end RNA sequencing reads obtained from a study carried out by Blount et al. published in PNAS. Additional information about how the data was modified in preparation for this analysis can be found on the Data Carpentry's workshop website.

    +

    We have a script called download_data.sh that will download our bioinformatic data. Let's go ahead and run this script to download our data.

    +
    ./download_data.sh
    +
    +

Our sequencing data files, all ending in .fastq, can now be found in a folder called data/.

    +

    Step 2: Prepare software

    +

    Now that we have our data, we need to install the software we want to use to analyze it.

    +

There are different ways to install and use software, including installing from source, using pre-compiled binaries, and using containers. In the biology domain, many software packages are already available as pre-built containers. We can fetch one of these containers and have HTCondor set it up for our job, which means we do not have to install the FastQC software or its dependencies.

    +

    We will use a Docker container built by the State Public Health Bioinformatics Community (staphb), and convert it to an apptainer container by creating an apptainer definition file:

    +
    ls software/
    +
    +
    cat software/fastqc.def
    +
    +

And then running a command to build an Apptainer container (which we won't run here, but is listed for future reference):

$ apptainer build fastqc.sif software/fastqc.def

    +

    Instead, we will download our ready-to-go apptainer .sif file:

    +
    ./download_software.sh
    +
    +
    ls software/
    +
    +

    Step 3: Prepare an Executable

    +

    We need to create an executable to pass to our HTCondor jobs, so that HTCondor knows what to run on our behalf.

    +

    Let's take a look at our executable, fastqc.sh:

    +
    cat fastqc.sh
    +
    +



    +

    Step 4: Prepare HTCondor Submit File to Run One Job

    +

    Now we create our HTCondor submit file, which tells HTCondor what to run and how many resources to make available to our job:

    +
    cat fastqc.submit
    +
    +

    Step 5: Submit One HTCondor Job and Check Results

    +

    We are ready to submit our first job!

    +
    condor_submit fastqc.submit
    +
    +

    We can check on the status of our job in HTCondor's queue using:

    +
    condor_q
    +
    +

    By using transfer_output_remaps in our submit file, we told HTCondor to store our FastQC output files in the results directory. Let's take a look at our scientific results:

    +
    ls results/
    +
    +

    It's always good practice to look at our standard error, standard out, and HTCondor log files to catch unexpected output:

    +
    ls logs/
    +
    +

    Step 6: Scale Out Your Analysis

    +

    Create A List of All Files We Want Analyzed

    +

    To queue a job to analyze each of our sequencing data files, we will take advantage of HTCondor's queue statement. First, let's create a list of files we want analyzed:

    +
    ls data/ | cut -f1 -d "." > list_of_samples.txt
    +
    +

    Let us take a look at the contents of this file:

    +
    cat list_of_samples.txt
    +
    +

    Edits the Submit File to Queue a Job to Analyze Each Biological Sample

    +

HTCondor has different queue syntaxes to help researchers automatically queue many jobs. We will use queue <variable> from <list.txt> to queue a job for each of our samples in list_of_samples.txt.

    +

    Once we define <variable>, we can also use it elsewhere in the submit file.

    +

Let's replace each occurrence of the sample identifier with the $(sample) variable, and then iterate through our list of samples as shown in list_of_samples.txt.

    +
    cat many-fastqc.submit
    +
    +
# HTCondor Submit File: many-fastqc.submit
    +
    +# Provide our executable and arguments
    +executable = fastqc.sh
    +arguments = $(sample).trim.sub.fastq
    +
    +# Provide the container for our software
    +universe    = container
    +container_image = software/fastqc.sif
    +
    +# List files that need to be transferred to the job
    +transfer_input_files = data/$(sample).trim.sub.fastq
    +should_transfer_files = YES
    +
    +# Tell HTCondor to transfer output to our /results directory
    +transfer_output_files = $(sample).trim.sub_fastqc.html
    +transfer_output_remaps = "$(sample).trim.sub_fastqc.html = results/$(sample).trim.sub_fastqc.html"
    +
    +# Track job information
    +log = logs/fastqc.log
    +output = logs/fastqc.out
    +error = logs/fastqc.err
    +
    +# Resource Requests
    +request_cpus = 1
    +request_memory = 1GB
    +request_disk = 1GB
    +
+# Tell HTCondor to run one job per sample:
    +queue sample from list_of_samples.txt
    +
    +

    And then submit many jobs using this single submit file!

    +
    condor_submit many-fastqc.submit
    +
    +

    Notice that using a single submit file, we now have multiple jobs in the queue.

    +

    We can check on the status of our multiple jobs in HTCondor's queue by using:

    +
    condor_q
    +
    +

    When ready, we can check our results in our results/ directory:

    +
    ls results/
    +
    +

    Step 7: Return the output to your local computer

    +

    Once you are done with your computational analysis, you will want to move the results to your local computer or to a long term storage location.

    +

    Let's practice copying our .html files to our local laptop.

    +

    First, open a new terminal. Do not log into your OSPool account. Instead, navigate to where you want the files to go on your computer. We will store them in our Downloads folder.

    +
    cd ~/Downloads
    +
    +

Then use the scp ("secure copy") command to copy our results folder and its contents:

    +
    scp -r username@hostname:/home/username/tutorial-fastqc/results ./
    +
    +

    For many files, it will be easiest to create a compressed tarball (.tar.gz file) of your files and transfer that instead of each file individually.

    +

    An example of this could be scp -r username@ap40.uw.osg-htc.org:/home/username/results ./
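If you go the tarball route instead, a rough sketch (hostname and paths follow the example above; adjust them to your own account and directory layout) is:

# on the access point: bundle the results directory into a single file
tar -czf results.tar.gz results/

# on your local computer: copy the tarball and unpack it
scp username@ap40.uw.osg-htc.org:/home/username/tutorial-fastqc/results.tar.gz ./
tar -xzf results.tar.gz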

    +

    Now, open the .html files using your internet browser on your local computer.

    +

    Congratulations on finishing the first step of a sequencing analysis pipeline!

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/drug_discovery/tutorial-AutoDockVina/index.html b/software_examples/drug_discovery/tutorial-AutoDockVina/index.html new file mode 100644 index 00000000..dcb9356b --- /dev/null +++ b/software_examples/drug_discovery/tutorial-AutoDockVina/index.html @@ -0,0 +1,2624 @@ + + + + + + + + + + + + + + + + + + Running a Molecule Docking Job with AutoDock Vina - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Running a Molecule Docking Job with AutoDock Vina

    +

    AutoDock Vina is a molecular docking program useful for computer aided drug design. In this tutorial, we will learn how to run AutoDock Vina on the OSPool.

    +

    Tutorial Files

    +

    It is easiest to start with the git clone command to download the materials for this tutorial. Type:

    +
    $ git clone https://github.com/OSGConnect/tutorial-AutoDockVina
    +
    +

    This will create a directory tutorial-AutoDockVina. Change into the directory and look at the available files:

    +
    $ cd tutorial-AutoDockVina
    +$ ls
    +$ ls data/
    +
    +

    You should see the following:

    +
    data/
    +    receptor_config.txt  # Configuration file (input)
    +    receptor.pdbqt       # Receptor coordinates and atomic charges (input)
    +    ligand.pdbqt         # Ligand coordinates and atomic charges (input)
    +logs/                    # Empty folder for job log files
    +vina_job.submit          # Job submission file
    +vina_run.sh              # Execution script
    +
    +

We need to download the AutoDock program separately into this directory as well. Go to the AutoDock Vina website and click on the Download link at the top of the page. This will then lead you to the GitHub Downloads page. Download the Linux x86_64 version of the program; you can do this directly to the current directory by using the wget command and the download link. If you use the -O option shown below, it will rename the program to match what is used in the rest of the guide.

    +
    $ wget https://github.com/ccsb-scripps/AutoDock-Vina/releases/download/v1.2.5/vina_1.2.5_linux_x86_64 -O vina
    +
    +

    Once downloaded, we also need to give the program executable permissions. We can test that +it worked by running vina with the help flag:

    +
    $ chmod +x vina
    +$ ./vina --help
    +
    +

Files Needed to Submit the Job

    +

    The file vina_job.submit is the job submission file and contains the description of the job in HTCondor language. Specifically, it includes an "executable" (the script HTCondor will use in the job to run vina), a list of the files needed to run the job (shown in "transfer_input_files"), and indications of where to write logging information and what resources and requirements the job needs.

    +

    Change needed: If your downloaded program file has a different name, change the name in the transfer_input_files line below.

    +
    executable = vina_run.sh
    +
    +transfer_input_files    = data/, vina
    +should_transfer_files   = Yes
    +when_to_transfer_output = ON_EXIT
    +
    +output        = logs/job.$(Cluster).$(Process).out
    +error         = logs/job.$(Cluster).$(Process).error
    +log           = logs/job.$(Cluster).$(Process).log
    +
    +request_cpus   = 1
    +request_memory = 1GB
    +request_disk   = 512MB
    +
    +queue 1
    +
    +

    Next we see the execution script vina_run.sh. The execution script and its commands are executed on a worker node out in the Open Science Pool.

    +

    Change needed: If your vina program file has a different name, change it in the +script below:

    +
    #!/bin/bash
    +
    +# Run vina
    +./vina --config receptor_config.txt \
    +     --ligand ligand.pdbqt --out receptor-ligand.pdbqt
    +
    +

    Submit the Docking Job

    +

We submit the job using the condor_submit command as follows:

    +
    $ condor_submit vina_job.submit
    +
    +

Now you have submitted the AutoDock Vina job to the OSPool. This job should finish quickly (in less than 10 minutes). You can check the status of the submitted job by using the condor_q command as follows:

    +
    $ condor_q
    +
    +

    After job completion, you will see the output file receptor-ligand.pdbqt.

    +

    Next Steps

    +

    After running this example, you may want to scale up to testing multiple molecules or ligands.

    +

    What to Consider

    +
      +
    • Decide how many docking runs you want to try per job. If one molecule can be tested in a few seconds, you can probably run a few hundred in a job that runs in about an hour.
    • +
    • How should you divide up the input data in this case? Do you need individual input files for each molecule, or can you use one to share? Should the molecule files all get copied to every job or just the jobs where they're needed? You can separate groups of files by putting them in separate directories or tar.gz files to help with this.
    • +
    • Look at this guide to see different ways that you can use HTCondor to submit multiple jobs at once.
    • +
    +

If you want to use different (or additional) docking programs, you can include them in the same job by downloading and including those software files in your job submission.

    +

    Example of Multiple Runs

    +

    Included in this directory is one approach to analyzing multiple ligands, by +submitting multiple jobs. For the given files we are assuming that there are multiple +directories with input files we want to run (run01, run02, run03, etc.) and each +job will process all of the ligands in one of these "run" folders.

    +

In the script vina_multi.sh, we have added a for loop in order to process all the ligands that were included with the job. We will also place those results into a single folder to make it easier to organize them back on the access point:

    +
    #!/bin/bash
    +
    +# Make a directory for results
    +mkdir results
    +
    +# Run vina on multiple ligands
    +for LIGAND in *ligand.pdbqt
    +do 
    +./vina --config receptor_config.txt \
    +     --ligand ${LIGAND} --out results/receptor-${LIGAND}
    +done
    +
    +

    Note that this for loop assumes that all of the ligands have a naming scheme that we can +match using a wildcard (the * symbol).

    +

    In the submit file, we have added a line called transfer_output_files to transfer +back the results folder from each job. We have also replaced the single input directory data with +a variable inputdir, representing one of the run directories. The value +of that variable is set via the queue statement +at the end of the submit file:

    +
    executable = vina_multi.sh
    +
    +transfer_input_files    = $(inputdir)/, vina
    +transfer_output_files   = results
    +
    +# ... other job options
    +
    +queue inputdir matching run*
    +
    +

    Getting Help

    +

    For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the help desk and community forums.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/freesurfer/Introduction/index.html b/software_examples/freesurfer/Introduction/index.html new file mode 100644 index 00000000..90b2609f --- /dev/null +++ b/software_examples/freesurfer/Introduction/index.html @@ -0,0 +1,2516 @@ + + + + + + + + + + + + + + + + + + FreeSurfer - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    FreeSurfer

    +

    Overview

    +

    FreeSurfer is a software package to analyze MRI scans +of human brains.

    +

    OSG used to have a hosted service, called Fsurf. This is no longer available. Instead, +OSG provides a container image, and one of our collaborators provides an optional +workflow using that container.

    + +

    The container can be used with simple jobs as described below.

    +

    Prerequisites

    +

To use FreeSurfer on the Open Science Pool (OSPool), you need:

    + +

    Privacy and Confidentiality of Subjects

    +

    In order to protect the privacy of your participants’ scans, we require that you +submit only defaced and fully deidentified scans for processing.

    +

    Single Job

    +

    The following example job has three files: job.submit, freesurfer-wrapper.sh and license.txt

    +

    job.submit contents:

    +
    Requirements = HAS_SINGULARITY == True 
    ++SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-freesurfer:latest"
    +
    +executable = freesurfer-wrapper.sh
    +transfer_input_files = license.txt, sub-THP0001_ses-THP0001UCI1_run-01_T1w.nii.gz
    +
    +error = job.$(Cluster).$(Process).error
    +output = job.$(Cluster).$(Process).output
    +log = job.$(Cluster).$(Process).log
    +
    +request_cpus = 1
    +request_memory = 1 GB
    +request_disk = 4 GB
    +
    +queue 1
    +
    +

    freesurfer-wrapper.sh contents:

    +
    #!/bin/bash
    +
    +set -e
    +
    +# freesurfer environment
    +. /opt/setup.sh
    +
    +# license file comes with the job
    +export FS_LICENSE=`pwd`/license.txt
    +
    +export SUBJECTS_DIR=$PWD
    +
    +recon-all -subject THP0001 -i sub-THP0001_ses-THP0001UCI1_run-01_T1w.nii.gz -autorecon1 -cw256
    +
    +# tar up the subjects directory so it gets transferred back
    +tar czf THP0001.tar.gz THP0001
    +rm -rf THP0001
    +
    +

    license.txt should have the license data obtained from the Freesurfer project.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/machine_learning/tutorial-tensorflow-containers/index.html b/software_examples/machine_learning/tutorial-tensorflow-containers/index.html new file mode 100644 index 00000000..a54c935b --- /dev/null +++ b/software_examples/machine_learning/tutorial-tensorflow-containers/index.html @@ -0,0 +1,2597 @@ + + + + + + + + + + + + + + + + + + Working with Tensorflow, GPUs, and containers - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Working with Tensorflow, GPUs, and containers

    +

In this tutorial, we explore GPUs and containers on OSG, using the popular TensorFlow software package. TensorFlow is a good example here as the software is too complex to bundle up and ship with your job. Containers solve this problem by defining a full OS image, containing not only the complex software package, but dependencies and environment configuration as well.

    +

https://www.tensorflow.org/ describes TensorFlow as:

    +
    +

    TensorFlow is an open source software library for numerical +computation using data flow graphs. Nodes in the graph represent +mathematical operations, while the graph edges represent the +multidimensional data arrays (tensors) communicated between them. The +flexible architecture allows you to deploy computation to one or more +CPUs or GPUs in a desktop, server, or mobile device with a single +API. TensorFlow was originally developed by researchers and engineers +working on the Google Brain Team within Google's Machine Intelligence +research organization for the purposes of conducting machine learning +and deep neural networks research, but the system is general enough to +be applicable in a wide variety of other domains as well.

    +
    +

    Defining container images

    +

Defining containers is fully described in the Docker and Singularity Containers section. Here we will just provide an overview of how you could take something like an existing TensorFlow image provided by OSG staff and extend it by adding your own modules to it. Let's assume you want TensorFlow version 2.3. The definition of this image can be found on GitHub: Dockerfile. You don't really need to understand how an image was built in order to use it. As described in the containers documentation, make sure the HTCondor submit file has:

    +
    Requirements = HAS_SINGULARITY == TRUE
    ++SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3"
    +
    +

    If you want to extend an existing image, you can just inherit from the +parent image available on DockerHub here. +For example, if you just need some additional Python packages, your +new Dockerfile could look like:

    +
    FROM opensciencegrid/tensorflow:2.3
    +
    +RUN python3 -m pip install some_package_name
    +
    +

    You can then docker build and docker push it so that your new +image is available on DockerHub. Note that OSG does not provide +any infrastructure for these steps. You will have to complete +them on your own computer or using the DockerHub build +infrastructure.
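A minimal sketch of those two steps, run on your own computer (the repository name yourusername/my-tensorflow is a placeholder for your own DockerHub namespace and image name):

# build the extended image from the Dockerfile above and tag it
docker build -t yourusername/my-tensorflow:2.3 .

# push the image to DockerHub so it can later be synced to CVMFS
docker push yourusername/my-tensorflow:2.3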

    +

    Adding a container to the OSG CVMFS distribution mechanism

    +

How to add a container image to the OSG CVMFS distribution mechanism is also described in Docker and Singularity Containers, but a quick scan of the cvmfs-singularity-sync repository, and specifically the docker_images.txt file, shows us that the tensorflow images are listed as:

    +
    opensciencegrid/tensorflow:*
    +opensciencegrid/tensorflow-gpu:*
    +
    +

Those two lines mean that all tags from those two DockerHub repositories should be mapped to /cvmfs/singularity.opensciencegrid.org/. On the login node, try running:

    +
    ls /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3/
    +
    +

    This is the image in its expanded form - something we can execute with Singularity!

    +

    Testing the container on the submit host

    +

    First, download the files contained in this tutorial to the login node using the git clone command and cd into the tutorial directory that is created:

    +
    git clone https://github.com/OSGConnect/tutorial-tensorflow-containers
    +cd tutorial-tensorflow-containers
    +
    +

Before submitting jobs to the OSG, it is always a good idea to test your code so that you understand its runtime requirements. The containers can be tested on the OSGConnect submit hosts with singularity shell, which will drop you into a container and let you explore it interactively. To explore the TensorFlow 2.3 image, run:

    +
    singularity shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3/
    +
    +

Note how the command line prompt changes, providing an indicator that you are inside the image. You can exit at any time by running exit. Another important thing to note is that your $HOME directory is automatically mounted inside the interactive container, allowing you to access your code and test it out. First, start with a simple python3 import test to make sure tensorflow is available:

    +
    $ python3
    +Python 3.6.9 (default, Jul 17 2020, 12:50:27) 
    +[GCC 8.4.0] on linux
    +Type "help", "copyright", "credits" or "license" for more information.
    +>>> import tensorflow
    +2021-01-15 17:32:33.901607: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
    +2021-01-15 17:32:33.901735: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    +>>>
    +
    +

TensorFlow will warn you that no GPUs were found. This is expected, as we do not have GPUs attached to our login nodes, and it is not a problem: TensorFlow also works with regular CPUs (just more slowly, of course).

    +

Exit out of Python3 with CTRL+D, and then we can run a TensorFlow test script which can be found in this tutorial:

    +
    $ python3 test.py 
    +2021-01-15 17:37:43.152892: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
    +2021-01-15 17:37:43.153021: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    +2021-01-15 17:37:44.899967: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
    +2021-01-15 17:37:44.900063: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
    +2021-01-15 17:37:44.900130: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (login05.osgconnect.net): /proc/driver/nvidia/version does not exist
    +2021-01-15 17:37:44.900821: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
    +To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    +2021-01-15 17:37:44.912483: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2700000000 Hz
    +2021-01-15 17:37:44.915548: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fa0bf0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    +2021-01-15 17:37:44.915645: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    +2021-01-15 17:37:44.921895: I tensorflow/core/common_runtime/eager/execute.cc:611] Executing op MatMul in device /job:localhost/replica:0/task:0/device:CPU:0
    +tf.Tensor(
    +[[22. 28.]
    + [49. 64.]], shape=(2, 2), dtype=float32)
    +
    +

We will again see a bunch of warnings regarding GPUs not being available, but as we can see from the /job:localhost/replica:0/task:0/device:CPU:0 line, the code ran on one of the CPUs. When testing your own code like this, take note of how much memory, disk, and runtime is required - this information is needed in the next step.
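The test.py script itself ships with the tutorial materials and is not reproduced in this guide, but a minimal script consistent with the tf.Tensor output shown above (a small matrix multiplication) could be recreated from the shell like this - a sketch for illustration, not necessarily the tutorial's exact file:

$ cat > test.py << 'EOF'
import tensorflow as tf
# multiply a 2x3 matrix by a 3x2 matrix; TensorFlow picks CPU or GPU automatically
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(tf.matmul(a, b))
EOF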

    +

Once you are done with testing, use CTRL+D or run exit to exit out of the container. Note that you cannot submit jobs from within the container.

    +

    Running a CPU job

    +

Since Tensorflow can run on GPUs, you might be wondering why we would want to run it on slower CPUs. One reason is that CPUs are plentiful while GPUs are still somewhat scarce. If you have a lot of shorter Tensorflow jobs, they might complete faster on available CPUs than if they had to wait in the queue for the faster, less available, GPUs. The good news is that Tensorflow code should work in both environments automatically, so if your code runs too slowly on CPUs, moving to GPUs should be easy.

    +

    To submit our job, we need a submit file and a job wrapper script. The +submit file is a basic OSGConnect flavored HTCondor file, specifying that +we want the job to run in a container. cpu-job.submit contains:

    +
    universe = vanilla
    +
    +# Job requirements - ensure we are running on a Singularity enabled
    +# node and have enough resources to execute our code
    +# Tensorflow also requires AVX instruction set and a newer host kernel
    +Requirements = HAS_SINGULARITY == True && HAS_AVX2 == True && OSG_HOST_KERNEL_VERSION >= 31000
    +request_cpus = 1
    +request_gpus = 0
    +request_memory = 1 GB
    +request_disk = 1 GB
    +
    +# Container image to run the job in
    ++SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow:2.3"
    +
+# Executable is the program your job will run. It's often useful
    +# to create a shell script to "wrap" your actual work.
    +Executable = job-wrapper.sh
    +Arguments =
    +
    +# Inputs/outputs - in this case we just need our python code.
+# If you leave out transfer_output_files, all generated files come back
    +transfer_input_files = test.py
    +#transfer_output_files =
    +
    +# Error and output are the error and output channels from your job
    +# that HTCondor returns from the remote host.
    +Error = $(Cluster).$(Process).error
    +Output = $(Cluster).$(Process).output
    +
    +# The LOG file is where HTCondor places information about your
    +# job's status, success, and resource consumption.
    +Log = $(Cluster).log
    +
    +# Send the job to Held state on failure. 
    +#on_exit_hold = (ExitBySignal == True) || (ExitCode != 0)
    +
    +# Periodically retry the jobs every 1 hour, up to a maximum of 5 retries.
    +#periodic_release =  (NumJobStarts < 5) && ((CurrentTime - EnteredCurrentStatus) > 60*60)
    +
    +# queue is the "start button" - it launches any jobs that have been
    +# specified thus far.
    +queue 1
    +
    +

    And job-wrapper.sh:

    +
    #!/bin/bash
    +
    +set -e
    +
    +# set TMPDIR variable
    +export TMPDIR=$_CONDOR_SCRATCH_DIR
    +
    +echo
    +echo "I'm running on" $(hostname -f)
    +echo "OSG site: $OSG_SITE_NAME"
    +echo
    +
    +python3 test.py 2>&1
    +
    +

The job can now be submitted with condor_submit cpu-job.submit. Once the job is done, check the files named after the job id for the outputs.
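As a sketch of what this looks like (the cluster number below is a placeholder; yours will differ):

$ condor_submit cpu-job.submit
Submitting job(s).
1 job(s) submitted to cluster 12345.
$ condor_q            # check the progress of the job
$ cat 12345.0.output  # standard output, named after the cluster and process ids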

    +

    Running a GPU job

    +

    When moving the job to be run on a GPU, all we have to do is update two lines +in the submit file: set request_gpus to 1 and specify a GPU enabled +container image for +SingularityImage. The updated submit file can be found +in gpu-job.submit with the contents:

    +
    universe = vanilla
    +
    +# Job requirements - ensure we are running on a Singularity enabled
    +# node and have enough resources to execute our code
    +# Tensorflow also requires AVX instruction set and a newer host kernel
    +Requirements = HAS_SINGULARITY == True && HAS_AVX2 == True && OSG_HOST_KERNEL_VERSION >= 31000
    +request_cpus = 1
    +request_gpus = 1
    +request_memory = 1 GB
    +request_disk = 1 GB
    +
    +# Container image to run the job in
    ++SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/tensorflow-gpu:2.3"
    +
+# Executable is the program your job will run. It's often useful
    +# to create a shell script to "wrap" your actual work.
    +Executable = job-wrapper.sh
    +Arguments =
    +
    +# Inputs/outputs - in this case we just need our python code.
+# If you leave out transfer_output_files, all generated files come back
    +transfer_input_files = test.py
    +#transfer_output_files =
    +
    +# Error and output are the error and output channels from your job
    +# that HTCondor returns from the remote host.
    +Error = $(Cluster).$(Process).error
    +Output = $(Cluster).$(Process).output
    +
    +# The LOG file is where HTCondor places information about your
    +# job's status, success, and resource consumption.
    +Log = $(Cluster).log
    +
    +# Send the job to Held state on failure. 
    +#on_exit_hold = (ExitBySignal == True) || (ExitCode != 0)
    +
    +# Periodically retry the jobs every 1 hour, up to a maximum of 5 retries.
    +#periodic_release =  (NumJobStarts < 5) && ((CurrentTime - EnteredCurrentStatus) > 60*60)
    +
    +# queue is the "start button" - it launches any jobs that have been
    +# specified thus far.
    +queue 1
    +
    +

Submit a job with condor_submit gpu-job.submit. Once the job is complete, check the .output file for a line stating that the code was run on a GPU. Something similar to:

    +
    2021-02-02 23:25:19.022467: I tensorflow/core/common_runtime/eager/execute.cc:611] Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
    +
    +

The GPU:0 part shows that a GPU was found and used for the computation.
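If you want to check this quickly without scrolling through the whole file, something like the following should work (the job id in the file name is a placeholder):

$ grep "device:GPU" 12346.0.output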

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/index.html b/software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/index.html new file mode 100644 index 00000000..52afb766 --- /dev/null +++ b/software_examples/matlab_runtime/tutorial-Matlab-ScalingUp/index.html @@ -0,0 +1,2672 @@ + + + + + + + + + + + + + + + + + + Scaling up MATLAB - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Scaling up compute resources

    +

    Scaling up the computational resources is a big advantage for doing +certain large-scale calculations on OSG. Consider the extensive +sampling for a multi-dimensional Monte Carlo integration or molecular +dynamics simulation with several initial conditions. These types of +calculations require submitting a lot of jobs.

    +

    In the previous example, we submitted the job to a single-worker +machine. About a million CPU hours per day are available to OSG users +on an opportunistic basis. Learning how to scale up and control large +numbers of jobs will enable us to realize the full potential of distributed high throughput computing on the OSG.

    +

    In this section, we will see how to scale up the calculations with +a simple example. Once we understand the basic HTCondor script, it is easy +to scale up.

    +

    Background

    +

For this example, we will use computational methods to estimate pi. First, we will define a unit square that encloses a quarter of a unit circle, and randomly sample points from the square. The fraction of sampled points that fall inside the circle approaches pi/4.

    +

    This method converges extremely slowly, which makes it great for a +CPU-intensive exercise (but bad for a real estimation!).

    +

    Set up a Matlab Job

    +

First, we'll need to create a working directory. You can either run $ tutorial Matlab-ScalingUp or $ git clone https://github.com/OSGConnect/tutorial-Matlab-ScalingUp to copy all the necessary files. Otherwise, you can create the files yourself by typing the following:

    +
    $ mkdir tutorial-Matlab-ScalingUp
    +$ cd tutorial-Matlab-ScalingUp
    +
    +

    Matlab Script

    +

Create a MATLAB script by typing the following into a file called mcpi.m:

    +
  % Monte Carlo method for estimating pi
+  % Generate N random points in a unit square
+  function [] = mcpi(N)
+  % When run as a compiled standalone binary, the argument arrives as a string
+  if ischar(N)
+      N = str2double(N);
+  end
+  x = rand(N,1); % x coordinates
+  y = rand(N,1); % y coordinates
+  % Count how many points are inside a unit circle
+  inside = 0; % counter
+  for i = 1:N % loop over points
+    if x(i)^2 + y(i)^2 <= 1 % check if inside circle
+        inside = inside + 1; % increment counter
+    end
+  end
+  % Estimate pi as the ratio of points inside circle to total points
+  pi_est = 4 * inside / N; % pi estimate
+  % Display the result
+  fprintf('%f\n', pi_est);
+  end
    +
    +

    Compilation

    +

    OSG does not have a license to use the MATLAB compiler. On a Linux server with a MATLAB +license, invoke the compiler mcc. We turn off all graphical options (-nodisplay), disable Java (-nojvm), and instruct MATLAB to run this application as a single-threaded application (-singleCompThread):

    +
    mcc -m -R -singleCompThread -R -nodisplay -R -nojvm mcpi.m
    +
    +

    The flag -m means C language translation during compilation, and the flag -R indicates runtime options. The compilation would produce the files:

    +
    `mcpi, run_mcpi.sh, mccExcludedFiles.log` and `readme.txt`
    +
    +

The file mcpi is the standalone executable. The file run_mcpi.sh is a MATLAB-generated shell script. mccExcludedFiles.log is the log file, and readme.txt contains information about the compilation process. We just need the standalone binary file mcpi.
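If you compiled on a separate licensed machine, one way to copy the binary to your working directory on the access point is scp; the username, hostname, and destination path below are placeholders - use your own:

$ scp mcpi alice@<your-access-point>:~/tutorial-Matlab-ScalingUp/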

    +

    Running standalone binary applications on OSG

    +

To see which MATLAB Runtime releases are available on OSG, visit our available containers page.

    +

    Tutorial files

    +

    Let us say you have created the standalone binary mcpi. Transfer the file mcpi to your Access Point. Alternatively, you may also use the readily available files by using the git clone command:

    +
    $ git clone https://github.com/OSGConnect/tutorial-Matlab-ScalingUp # Copies input and script files to the directory tutorial-Matlab-ScalingUp.
    +
    +

This will create a directory tutorial-Matlab-ScalingUp. Inside the directory, you will see the following files:

    +
    mcpi             # compiled executable binary of mcpi.m
    +mcpi.m           # matlab program
    +mcpi.submit      # condor job description file
    +mcpi.sh          # execution script
    +
    +

    Executing the MATLAB application binary

    +

The compilation and execution environments need to be the same. The file mcpi is a standalone binary of the MATLAB program mcpi.m, which was compiled using MATLAB 2020b on a Linux platform. The Access Point and many of the worker nodes on OSG are based on the Linux platform. In addition to the platform requirement, we also need to have the same MATLAB Runtime version.

    +

Load the MATLAB Runtime for the R2020b version via the apptainer/singularity command. At the terminal prompt, type:

    +
    $ apptainer shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020b
    +
    +

The above command sets up the environment to run MATLAB R2020b runtime applications. Now execute the binary:

    +
    $apptainer/singularity> ./mcpi 10
    +
    +

If you get an output showing the estimated value of pi, the binary execution was successful. Now, exit from the apptainer/singularity environment by typing exit. Next, we see how to submit the job to a remote execute point using HTCondor.

    +

    Job execution and submission files

    +

    Let us take a look at mcpi.submit file:

    +
universe = vanilla                          # On OSG Connect, the preferred job universe is "vanilla"
    ++SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2020b"
    +
    +executable =  mcpi                
    +arguments = $(Process)
    +
+Output = Log/job.$(Process).out             # standard output 
    +Error =  Log/job.$(Process).err             # standard error
    +Log =    Log/job.$(Process).log             # log information about job execution
    +
    +requirements = HAS_SINGULARITY == TRUE 
    +queue 100                                   # Submit 100  jobs
    +
    +

Before we submit the job, make sure that the directory Log exists in the current working directory, because HTCondor looks for the Log directory to copy the standard output, error, and log files specified in the job description file.

    +

    From your work directory, type

    +
    $ mkdir -p Log
    +
    +

If the Log directory is missing, the jobs may be sent to the held state.

    +

Job submission

    +

We submit the job using the condor_submit command as follows:

    +
$ condor_submit mcpi.submit   # Submit the condor job description file "mcpi.submit"
    +
    +

Now you have submitted an ensemble of 100 MATLAB jobs. Each job prints its estimate of pi on the standard output. Check the status of the submitted jobs:

    +
    $ condor_q username  # The status of the job is printed on the screen. Here, username is your login name.
    +
    +

Post Process

    +

    Once the jobs are completed, you can use the information in the output files +to calculate an average of all of our computed estimates of Pi.

    +

    To see this, we can use the command:

    +
$ cat Log/job.*.out | awk '{ sum += $1; print $1"   "NR} END { print "---------------\n Grand Average = " sum/NR }'
    +
    +

    Key Points

    +
      +
    • Scaling up the computational resources on OSG is crucial to taking full advantage of distributed computing.
    • +
    • Changing the value of Queue allows the user to scale up the resources.
    • +
    • Arguments allows you to pass parameters to a job script.
    • +
    • $(Cluster) and $(Process) can be used to name log files uniquely.
    • +
    +

    Getting Help

    +

    For assistance or questions, please email the OSG User Support team at +support@osg-htc.org.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/matlab_runtime/tutorial-matlab-HelloWorld/index.html b/software_examples/matlab_runtime/tutorial-matlab-HelloWorld/index.html new file mode 100644 index 00000000..d9dcf820 --- /dev/null +++ b/software_examples/matlab_runtime/tutorial-matlab-HelloWorld/index.html @@ -0,0 +1,2640 @@ + + + + + + + + + + + + + + + + + + Basics of compiled MATLAB applications - Hello World example - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Basics of compiled MATLAB applications - Hello World example

    +

    MATLAB® is a licensed high level language and modeling toolkit. The MATLAB Compiler™ lets you share MATLAB programs as standalone applications. MATLAB Compiler is invoked with mcc. The compiler supports most toolboxes and user-developed +interfaces. For more details, check the list of supported toolboxes +and ineligible programs.

    +

All applications created with MATLAB Compiler use the MATLAB Compiler Runtime™ (MCR), which enables royalty-free deployment and use. We assume you have access to a server that has the MATLAB Compiler, because the compiler is not available on OSG Connect. The MATLAB Runtime, however, is available on OSG Connect.

    +

Although the compiled binaries are portable, they need a compatible, OS-specific MATLAB Runtime to interpret them. We recommend compiling your MATLAB program against a MATLAB version that matches one of the OSG containers, with the compilation executed on a server running Scientific Linux, so that the compiled binaries are portable to OSG machines.

    +

In this tutorial, we learn the basics of compiling MATLAB programs on a licensed Linux machine and running the compiled binaries using the MATLAB Compiler Runtime (MCR) in the OSG containers.

    +

    MATLAB script: hello_world.m

    +

Let's start with a simple MATLAB script hello_world.m that prints Hello World! to standard output.

    +
function hello_world
    +    fprintf('\n=============')
    +    fprintf('\nHello, World!\n')
    +    fprintf('=============\n')
    +end
    +
    +

    Compilation

    +

OSG Connect does not have a license to use the MATLAB compiler. On a Linux server with a MATLAB +license, invoke the compiler mcc. We turn off all graphical options (-nodisplay), disable Java (-nojvm), and instruct MATLAB to run this application as a single-threaded application (-singleCompThread):

    +
    mcc -m -R -singleCompThread -R -nodisplay -R -nojvm hello_world.m
    +
    +

    The flag -m means C language translation during compilation, and the flag -R indicates runtime options. The compilation would produce the files:

    +
    `hello_world, run_hello_world.sh, mccExcludedFiles.log` and `readme.txt`
    +
    +

The file hello_world is the standalone executable. The file run_hello_world.sh is a MATLAB-generated shell script. mccExcludedFiles.log is the log file, and readme.txt contains information about the compilation process. We just need the standalone binary file hello_world.

    +

    Running standalone binary applications on OSG

    +

To see which MATLAB Runtime releases are available on OSG, visit our available containers page.

    +

    Tutorial files

    +

    Let us say you have created the standalone binary hello_world. Transfer the file hello_world to your Access Point. Alternatively, you may also use the readily available files by using the git clone command:

    +
    $ git clone https://github.com/OSGConnect/tutorial-matlab-HelloWorld # Copies input and script files to the directory tutorial-matlab-HelloWorld.
    +
    +

This will create a directory tutorial-matlab-HelloWorld. Inside the directory, you will see the following files:

    +
    hello_world             # compiled executable binary of hello_world.m
    +hello_world.m           # matlab program
    +hello_world.submit      # condor job description file
    +hello_world.sh          # execution script
    +
    +

    Executing the MATLAB application binary

    +

The compilation and execution environments need to be the same. The file hello_world is a standalone binary of the MATLAB program hello_world.m, which was compiled using MATLAB 2018b on a Linux platform. The Access Point and many of the worker nodes on OSG are based on the Linux platform. In addition to the platform requirement, we also need to have the same MATLAB Runtime version.

    +

Load the MATLAB Runtime for the R2018b version via the apptainer/singularity command. At the terminal prompt, type:

    +
    $ apptainer shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2018b
    +
    +

The above command sets up the environment to run MATLAB R2018b runtime applications. Now execute the binary:

    +
    $apptainer/singularity> ./hello_world
    +(would produce the following output)
    +
    +=============
    +Hello, World!
    +=============
    +
    +

If you get the above output, the binary execution was successful. Now, exit from the apptainer/singularity environment by typing exit. Next, we see how to submit the job to a remote execute point using HTCondor.

    +

    Job execution and submission files

    +

    Let us take a look at hello_world.submit file:

    +
universe = vanilla                          # On OSG Connect, the preferred job universe is "vanilla"
    ++SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-matlab-runtime:R2018b"
    +
    +executable =  hello_world
    +
+Output = Log/job.$(Process).out             # standard output 
    +Error =  Log/job.$(Process).err             # standard error
    +Log =    Log/job.$(Process).log             # log information about job execution
    +
    +requirements = HAS_SINGULARITY == TRUE 
    +queue 10                                     # Submit 10  jobs
    +
    +

Before we submit the job, make sure that the directory Log exists in the current working directory, because HTCondor looks for the Log directory to copy the standard output, error, and log files specified in the job description file.

    +

    From your work directory, type

    +
    $ mkdir -p Log
    +
    +

If the Log directory is missing, the jobs may be sent to the held state.

    +

Job submission

    +

We submit the job using the condor_submit command as follows:

    +
$ condor_submit hello_world.submit   # Submit the condor job description file "hello_world.submit"
    +
    +

Now you have submitted an ensemble of 10 MATLAB jobs. Each job prints the hello world message on the standard output. Check the status of the submitted jobs:

    +
    $ condor_q username  # The status of the job is printed on the screen. Here, username is your login name.
    +
    +

    Job outputs

    +

The hello_world.m script sends its message to standard output. In the HTCondor job description file, we specified that the standard output is written to Log/job.$(Process).out. After job completion, ten output files containing the hello world message are produced under the directory Log.

    +

    What's next?

    +

Admittedly, it is not very exciting to print the same message into 10 output files. In the subsequent MATLAB examples, we see how to scale up MATLAB computation in an HTC environment.

    +

    Getting help

    +

    For assistance or questions, please email the OSG User Support team at support@osg-htc.org or visit the help desk and community forums.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/other_languages_tools/conda-container/index.html b/software_examples/other_languages_tools/conda-container/index.html new file mode 100644 index 00000000..574e6d67 --- /dev/null +++ b/software_examples/other_languages_tools/conda-container/index.html @@ -0,0 +1,2618 @@ + + + + + + + + + + + + + + + + + + Conda with Containers - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Conda with Containers

    +

    The Anaconda/Miniconda distribution of Python is a common tool for installing and managing Python-based software and other tools.

    +

    There are two ways of using Conda on the OSPool: with a tarball, or via a +custom Apptainer/Singularity container. Either works well, but the container +solution might be better if your Conda environment contains non-Python tools.

    +

    Overview

    +

    When should you use Miniconda as an installation method in OSG?

    +
      +
    • Your software has specific conda-centric installation instructions.
    • +
    • The above is true and the software has a lot of dependencies.
    • +
    • You mainly use Python to do your work.
    • +
    +

    Notes on terminology:

    +
      +
    • conda is a Python package manager and package ecosystem that exists in parallel with pip and PyPI.
    • +
    • Miniconda is a slim Python distribution, containing the minimum amount of packages necessary for a Python installation that can use conda.
    • +
    • Anaconda is a pre-built scientific Python distribution based on Miniconda that has many useful scientific packages pre-installed.
    • +
    +

    To create the smallest, most portable Python installation possible, we recommend starting with Miniconda and installing only the packages you actually require.

    + +

    To use a Miniconda installation for your jobs, create an Apptainer/Singularity definition file and +build it (general instructions here).

    +

    Apptainer/Singularity Definition File

    +

The definition file tells Apptainer/Singularity how the container should be built, and what environment setup should take place when the container is instantiated. In the following example, the container is based on Ubuntu 22.04. A few base operating system tools are installed, then Miniconda, followed by a set of conda commands to define the Conda environment. The %environment section is used to ensure jobs get the environment activated before the job runs. To build your own custom image, start by modifying the conda install line to include the packages you need.

    +
    Bootstrap: docker
    +From: ubuntu:22.04
    +
    +%environment
    +    # set up environment for when using the container
    +    . /opt/conda/etc/profile.d/conda.sh
    +    conda activate
    +
    +%post
    +    # base os
    +    apt-get update -y
    +    apt-get install -y build-essential wget
    +
    +    # install miniconda
    +    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    +    bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda
    +    rm Miniconda3-latest-Linux-x86_64.sh
    +
    +    # install conda components - add the packages you need here
    +    . /opt/conda/etc/profile.d/conda.sh
    +    conda activate
    +    conda install -y -c conda-forge numpy cowpy
    +    conda update --all
    +
    +

    The next step is to build the image. Run:

    +
    $ apptainer build my-container.sif image.def
    +
    +

    You can explore the container locally to make sure it works as expected with the shell subcommand:

    +
    $ apptainer shell my-container.sif
    +
    +

This example will give you an interactive shell. You can explore the container and test your code with your own inputs from your /home directory, which is automatically mounted (but note - $HOME will not be available to your jobs later). Once you are done exploring, exit the container by running exit or with CTRL+D.
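For example, assuming the numpy package from the definition file above, a quick sanity check inside the container shell might look like this (the prompt may read Apptainer> or Singularity> depending on which tool is installed):

Apptainer> python3 -c "import numpy; print(numpy.__version__)"
Apptainer> exit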

    +

    It is important to use the correct transfer mechanism to get the +image to your job. Please make sure you use OSDF +and version your container in the filename. For example:

    +
    $ cp my-container.sif /ospool/protected/<username>/my-container-v1.sif
    +
    +

    Submit Jobs

    +

    An example submit file could look like:

    +
    # File Name: conda_submission.sub
    +
    +# specify the newly built image
    ++SingularityImage = "osdf:///ospool/protected/<username>/my-container-v1.sif"
    +
    +# Specify your executable (single binary or a script that runs several
    +#  commands) and arguments to be passed to jobs. 
+#  $(Process) will be an integer number for each job, starting with "0"
    +#  and increasing for the relevant number of jobs.
    +executable = science.py
    +arguments = $(Process)
    +
    +# Specify the name of the log, standard error, and standard output (or "screen output") files.
    +
    +log = science_with_conda.log
    +error = science_with_conda.err
    +output = science_with_conda.out
    +
    +# Transfer any file needed for our job to complete. 
    +transfer_input_files = 
    +
    +# Specify Job duration category as "Medium" (expected runtime <10 hr) or "Long" (expected runtime <20 hr). 
++JobDurationCategory = "Medium"
    +
    +# Tell HTCondor requirements your job needs, 
    +# what amount of compute resources each job will need on the computer where it runs.
    +requirements = 
    +request_cpus = 1
    +request_memory = 1GB
    +request_disk = 5GB
    +
    +# Tell HTCondor to run 1 instance of our job:
    +queue 1
    +
    +

    Specifying Exact Dependency Versions

    +

    An important part of improving reproducibility and consistency between runs is to ensure that you use the correct/expected versions of your dependencies.

    +

When you run a command like conda install numpy, conda tries to install the most recent version of numpy. For example, numpy version 1.22.3 was released on Mar 7, 2022. To install exactly this version of numpy, you would run conda install numpy=1.22.3 (the same works for pip if you replace = with ==). We recommend installing with an explicit version to make sure you have exactly the version of a package that you want. This is often called "pinning" or "locking" the version of the package.

    +

    If you want a record of what is installed in your environment, or want to reproduce your environment on another computer, conda can create a file, usually called environment.yml, that describes the exact versions of all of the packages you have installed in an environment. An example environment.yml file:

    +
    channels:
    +  - conda-forge
    +  - defaults
    +dependencies:
    +  - cowpy
    +  - numpy=1.25.0
    +
    +

    To use the environment.yml in the build, modify the image definition to copy the file, and +then replace the conda install with a conda env create. Also note that it is good style +to name the environment. We call it science in this example:

    +
    Bootstrap: docker
    +From: ubuntu:22.04
    +
    +%files
    +    environment.yml
    +
    +%environment
    +    # set up environment for when using the container
    +    . /opt/conda/etc/profile.d/conda.sh
    +    conda activate science
    +
    +%post
    +    # base os
    +    apt-get update -y
    +    apt-get install -y build-essential wget
    +
    +    # install miniconda
    +    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    +    bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/conda
    +    rm Miniconda3-latest-Linux-x86_64.sh
    +
    +    # install conda components - add the packages you need here
    +    . /opt/conda/etc/profile.d/conda.sh
    +    conda activate
    +    conda env create -n science -f environment.yml
    +    conda update --all
    +
    +

    If you use a source control system like git, we recommend checking your environment.yml file into source control and making sure to recreate it when you make changes to your environment. Putting your environment under source control gives you a way to track how it changes along with your own code.

    +

    More information on conda environments can be found in their documentation.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/other_languages_tools/conda-tarball/index.html b/software_examples/other_languages_tools/conda-tarball/index.html new file mode 100644 index 00000000..10c20692 --- /dev/null +++ b/software_examples/other_languages_tools/conda-tarball/index.html @@ -0,0 +1,2712 @@ + + + + + + + + + + + + + + + + + + Conda with Tarballs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Conda with Tarballs

    +

    The Anaconda/Miniconda distribution of Python is a common tool for installing and managing Python-based software and other tools.

    +

    There are two ways of using Conda on the OSPool: with a tarball as described in this guide, or by installing Conda inside a +custom Apptainer/Singularity container. Either works well, but the container +solution might be better if your Conda environment requires access to non-Python tools.

    +

    Overview

    +

    When should you use Miniconda as an installation method in OSG?

    +
      +
    • Your software has specific conda-centric installation instructions.
    • +
    • The above is true and the software has a lot of dependencies.
    • +
    • You mainly use Python to do your work.
    • +
    +

    Notes on terminology:

    +
      +
    • conda is a Python package manager and package ecosystem that exists in parallel with pip and PyPI.
    • +
    • Miniconda is a slim Python distribution, containing the minimum amount of packages necessary for a Python installation that can use conda.
    • +
    • Anaconda is a pre-built scientific Python distribution based on Miniconda that has many useful scientific packages pre-installed.
    • +
    +

    To create the smallest, most portable Python installation possible, we recommend starting with Miniconda and installing only the packages you actually require.

    + +

    To use a Miniconda installation for your jobs, create your installation environment on the access point and send a zipped version to your jobs.

    +

    Install Miniconda and Package for Jobs

    +

    In this approach, we will create an entire software installation inside Miniconda and then use a tool called conda pack to package it up for running jobs.

    +

    1. Create a Miniconda Installation

    +

    After logging into your access point, download the latest Linux miniconda installer and run it. For example,

    +
      [alice@ap00]$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    +  [alice@ap00]$ sh Miniconda3-latest-Linux-x86_64.sh
    +
    +

Accept the license agreement and default options. At the end, you can choose whether or not to "initialize Miniconda3 by running conda init?"

- If you enter "no", you would then run the eval command listed by the installer to "activate" Miniconda. If you choose "no", you'll want to save this command so that you can reactivate the Miniconda installation when needed in the future.
- If you enter "yes", Miniconda will edit your .bashrc file and PATH environment variable so that you do not need to define a path to Miniconda each time you log in. If you choose "yes", before proceeding, you must log off and close your terminal for these changes to go into effect. Once you close your terminal, you can reopen it, log in to your access point, and proceed with the rest of the instructions below.

    +

    2. Create a conda "Environment" With Your Packages

    +

    (If you are using an environment.yml file as described later, you should instead create the environment from your environment.yml file. If you don’t have an environment.yml file to work with, follow the install instructions in this section. We recommend switching to the environment.yml method of creating environments once you understand the “manual” method presented here.)

    +

    Make sure that you’ve activated the base Miniconda environment if you haven’t already. Your prompt should look like this:

    +
      (base)[alice@ap00]$
    +
    +

    To create an environment, use the conda create command and then activate the environment:

    +
      (base)[alice@ap00]$ conda create -n env-name
    +  (base)[alice@ap00]$ conda activate env-name
    +
    +

    Then, run the conda install command to install the different packages and software you want to include in the installation. How this should look is often listed in the installation examples for software (e.g. Qiime2, Pytorch).

    +
      (env-name)[alice@ap00]$ conda install pkg1 pkg2
    +
    +

Some Conda packages are only available via specific Conda channels, which serve as repositories for hosting and managing packages. If Conda is unable to locate the requested packages using the example above, you may need to have Conda search other channels. More details are available at https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/channels.html.
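For example, to install from a specific channel such as conda-forge or bioconda (the package names below are placeholders):

  (env-name)[alice@ap00]$ conda install -c conda-forge pkg1
  (env-name)[alice@ap00]$ conda install -c bioconda pkg2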

    +

    Packages may also be installed via pip, but you should only do this when there is no conda package available.

    +

    Once everything is installed, deactivate the environment to go back to the Miniconda “base” environment.

    +
      (env-name)[alice@ap00]$ conda deactivate
    +
    +

    For example, if you wanted to create an installation with pandas and matplotlib and call the environment py-data-sci, you would use this sequence of commands:

    +
      (base)[alice@ap00]$ conda create -n py-data-sci
    +  (base)[alice@ap00]$ conda activate py-data-sci
    +  (py-data-sci)[alice@ap00]$ conda install pandas matplotlib
    +  (py-data-sci)[alice@ap00]$ conda deactivate
    +  (base)[alice@ap00]$
    +
    +
    +

    More About Miniconda

    +

    See the official conda documentation for more information on creating and managing environments with conda.

    +
    +

    3. Create Software Package

    +

    Make sure that your job’s Miniconda environment is created, but deactivated, so that you’re in the “base” Miniconda environment:

    +
      (base)[alice@ap00]$
    +
    +

    Then, run this command to install the conda pack tool:

    +
      (base)[alice@ap00]$ conda install -c conda-forge conda-pack
    +
    +

    Enter y when it asks you to install.

    +

    Finally, use conda pack to create a zipped tar.gz file of your environment (substitute the name of your conda environment where you see env-name), set the proper permissions for this file using chmod, and check the size of the final tarball:

    +
      (base)[alice@ap00]$ conda pack -n env-name
    +  (base)[alice@ap00]$ chmod 644 env-name.tar.gz
    +  (base)[alice@ap00]$ ls -sh env-name.tar.gz
    +
    +

    When this step finishes, you should see a file in your current directory named env-name.tar.gz.

    +

    4. Check Size of Conda Environment Tar Archive

    +

    The tar archive, env-name.tar.gz, created in the previous step will be used as input for subsequent job submission. As with all job input files, you should check the size of this Conda environment file. If >1GB in size, you should move the file to either your /public or /protected folder, and transfer it to/from jobs using the osdf:/// link, as described in Overview: Data Staging and Transfer to Jobs. This is the most efficient way to transfer large files to/from jobs.
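For example, on an access point (the apXX and alice below are placeholders; the exact OSDF path layout depends on your access point - see the data staging guide linked above):

  (base)[alice@ap00]$ ls -sh env-name.tar.gz
  (base)[alice@ap00]$ mv env-name.tar.gz /ospool/apXX/data/alice/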

    +

    5. Create a Job Executable

    +

The job will need to go through a few steps to use this "packed" conda environment; first, setting the PATH, then unzipping the environment, then activating it, and finally running whatever program you like. The script below is an example of what is needed (customize as indicated to match your choices above). For future reference, let's call this executable science_with_conda.sh.

    +
      #!/bin/bash
    +  # File Name: science_with_conda.sh
    +
    +  # have job exit if any command returns with non-zero exit status (aka failure)
    +  set -e
    +
    +  # replace env-name on the right hand side of this line with the name of your conda environment
    +  ENVNAME=env-name
    +
    +  # if you need the environment directory to be named something other than the environment name, change this line
    +  ENVDIR=$ENVNAME
    +
    +  # these lines handle setting up the environment; you shouldn't have to modify them
    +  export PATH
    +  mkdir $ENVDIR
    +  tar -xzf $ENVNAME.tar.gz -C $ENVDIR
    +  . $ENVDIR/bin/activate
    +
    +  # modify this line to run your desired Python script and any other work you need to do
    +  python3 hello.py
    +
    +

    6. Submit Jobs

    +

    In your HTCondor submit file, make sure to have the following:

    +
      +
• Your executable should be the bash script you created in step 5.
    • +
    • Remember to transfer your Python script and the environment tar.gz file to the job. If the tar.gz file is larger than 1GB, please move the file to either your /protected or /public directories and use the osdf:/// file delivery mechanism as described above.
    • +
    +

    An example submit file could look like:

    +
    # File Name: conda_submission.sub
    +
    +# Specify your executable (single binary or a script that runs several
    +#  commands) and arguments to be passed to jobs. 
+#  $(Process) will be an integer number for each job, starting with "0"
    +#  and increasing for the relevant number of jobs.
    +executable = science_with_conda.sh
    +arguments = $(Process)
    +
    +# Specify the name of the log, standard error, and standard output (or "screen output") files.
    +
    +log = science_with_conda.log
    +error = science_with_conda.err
    +output = science_with_conda.out
    +
    +# Transfer any file needed for our job to complete. 
    +transfer_input_files = osdf:///ospool/apXX/data/alice/env-name.tar.gz, hello.py
    +
+# In the line above, the XX in apXX should be replaced with the numbers corresponding to your access point.
    +# Specify Job duration category as "Medium" (expected runtime <10 hr) or "Long" (expected runtime <20 hr). 
++JobDurationCategory = "Medium"
    +
    +# Tell HTCondor requirements (e.g., operating system) your job needs, 
    +# what amount of compute resources each job will need on the computer where it runs.
    +requirements = (OSGVO_OS_STRING == "RHEL 9")
    +request_cpus = 1
    +request_memory = 1GB
    +request_disk = 5GB
    +
    +# Tell HTCondor to run 1 instance of our job:
    +queue 1
    +
    +

    Specifying Exact Dependency Versions

    +

    An important part of improving reproducibility and consistency between runs is to ensure that you use the correct/expected versions of your dependencies.

    +

When you run a command like conda install numpy, conda tries to install the most recent version of numpy. For example, numpy version 1.22.3 was released on Mar 7, 2022. To install exactly this version of numpy, you would run conda install numpy=1.22.3 (the same works for pip if you replace = with ==). We recommend installing with an explicit version to make sure you have exactly the version of a package that you want. This is often called "pinning" or "locking" the version of the package.

    +

    If you want a record of what is installed in your environment, or want to reproduce your environment on another computer, conda can create a file, usually called environment.yml, that describes the exact versions of all of the packages you have installed in an environment. This file can be re-used by a different conda command to recreate that exact environment on another computer.

    +

    To create an environment.yml file from your currently-activated environment, run

    +
      [alice@ap00]$ conda env export > environment.yml
    +
    +

    This environment.yml will pin the exact version of every dependency in your environment. This can sometimes be problematic if you are moving between platforms because a package version may not be available on some other platform, causing an “unsatisfiable dependency” or “inconsistent environment” error. A much less strict pinning is

    +
      [alice@ap00]$ conda env export --from-history > environment.yml
    +
    +

    which only lists packages that you installed manually, and does not pin their versions unless you yourself pinned them during installation. If you need an intermediate solution, it is also possible to manually edit environment.yml files; see the conda environment documentation for more details about the format and what is possible. In general, exact environment specifications are simply not guaranteed to be transferable between platforms (e.g., between Windows and Linux). We strongly recommend using the strictest possible pinning available to you.

    +

    To create an environment from an environment.yml file, run

    +
      [alice@ap00]$ conda env create -f environment.yml
    +
    +

By default, the name of the environment will be whatever the name of the source environment was; you can change the name by adding a -n <name> option to the conda env create command.
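For example, to recreate the environment under a different name (my-new-env is a placeholder):

  [alice@ap00]$ conda env create -n my-new-env -f environment.yml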

    +

    If you use a source control system like git, we recommend checking your environment.yml file into source control and making sure to recreate it when you make changes to your environment. Putting your environment under source control gives you a way to track how it changes along with your own code.

    +

    If you are developing software on your local computer for eventual use on the Open Science Pool, your workflow might look like this:

    +
      +
    1. Set up a conda environment for local development and install packages as desired (e.g., conda create -n science; conda activate science; conda install numpy).
    2. +
    3. Once you are ready to run on the Open Science Pool, create an environment.yml file from your local environment (e.g., conda env export > environment.yml).
    4. +
    5. Move your environment.yml file from your local computer to the submit machine and create an environment from it (e.g., conda env create -f environment.yml), then pack it for use in your jobs, as per Create Software Package above.
    6. +
    +

    More information on conda environments can be found in their documentation.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/other_languages_tools/java-on-osg/index.html b/software_examples/other_languages_tools/java-on-osg/index.html new file mode 100644 index 00000000..bc78090a --- /dev/null +++ b/software_examples/other_languages_tools/java-on-osg/index.html @@ -0,0 +1,2475 @@ + + + + + + + + + + + + + + + + + + Using Java in Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + +
    +
    +
    + + + +
    +
    +
    + + +
    +
    + + + + +

    Using Java in Jobs

    +

    Overview

    +

    If your code uses Java via a .jar file, it is easy to bring along your +own copy of the Java Development Kit (JDK) which allows you to run +your .jar file anywhere on the Open Science Pool.

    +

    Steps to Use Java in Jobs

    +
      +
1. Get a copy of Java/JDK. You can access the Java Development Kit (JDK) from the JDK website. First select the link to the JDK that is listed as "Ready for Use" and then download the Linux/x64 version of the tar.gz file using a Unix command such as wget from your /home directory. For example,
      $ wget https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_linux-x64_bin.tar.gz
      +
      +
    2. +
    +

    The downloaded file should end up in your +home directory on the OSPool access point.

    +
      +
    1. +

      Include Java in Input Files. Add the downloaded tar file to the transfer_input_files line of your +submit file, along with the .jar file and any other input files the job needs:

      +
      transfer_input_files = openjdk-17.0.1_linux-x64_bin.tar.gz, program.jar, other_input
      +
      +
    2. +
    3. +

Set up Java inside the job. Write a script that unpacks the JDK tar file, sets the environment to find the java software, and then runs your program. This script will be your job's executable. See this example for what the script should look like:

      +
      #!/bin/bash
      +
      +# unzip the JDK
      +tar -xzf openjdk-17.0.1_linux-x64_bin.tar.gz
      +# Add the unzipped JDK folder to the environment
      +export PATH=$PWD/jdk-17.0.1/bin:$PATH
      +export JAVA_HOME=$PWD/jdk-17.0.1
      +
      +# run your .jar file
      +java -jar program.jar
      +
      +

      Note that the exact name of the unzipped JDK folder and the JDK tar.gz file will +vary depending on the version you downloaded. You should unzip the JDK tar.gz +file in your home directory to find out the correct directory name to add to +the script.

      +
    4. +
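For reference, a quick way to check the top-level folder name inside the JDK tarball mentioned above, without fully unpacking it (using the example file name from this guide):

$ tar -tzf openjdk-17.0.1_linux-x64_bin.tar.gz | head -n 1
jdk-17.0.1/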
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/other_languages_tools/julia-on-osg/index.html b/software_examples/other_languages_tools/julia-on-osg/index.html new file mode 100644 index 00000000..4cf501a1 --- /dev/null +++ b/software_examples/other_languages_tools/julia-on-osg/index.html @@ -0,0 +1,2764 @@ + + + + + + + + + + + + + + + + + + Using Julia on the OSPool - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + +
    + + +
    + +
    + + + + + + + + + +
    +
    + + + +
    +
    +
    + + + + + + +
    +
    +
    + + + + + + +
    +
    + + + + +

    Using Julia on the OSPool

    +

    Overview

    +

    This guide provides an introduction to running Julia code on the Open +Science Pool. The Quickstart Instructions provide +an outline of job submission. The following sections provide more details about +installing Julia packages (Install Julia Packages) and creating a complete +job submission (Submit Julia Jobs). This guide assumes that +you have a script written in Julia and can identify the additional Julia packages +needed to run the script.

    +

    If you are using many Julia packages or have other software dependencies as +part of your job, you may want to manage your software via a container instead +of using the tar.gz file method described in this guide. The Research Computing Facilitation (RCF) team +maintains a Julia container that can be used as a starting point +for creating a customized container with added packages. See +our Docker and Singularity/Apptainer Guide for more details.

    +

    Quickstart Instructions

    +
      +
    1. +

      Download the precompiled Julia software from https://julialang.org/downloads/. +You will need the 64-bit, tarball compiled for general use on a Linux x86 system. The +file name will resemble something like julia-#.#.#-linux-x86_64.tar.gz.

      +
        +
      • Tip: use wget to download directly to your /home directory on the +access point, OR use transfer_input_files = url in your HTCondor submit files.
      • +
      +
    2. +
    3. +

If your script needs additional packages, install your Julia packages on the access point; otherwise skip to the next step.

      + +
    4. +
    5. +

      Submit a job that executes a Julia script using the Julia precompiled binary +with base Julia and Standard Library, via a shell script like the following as +the job's executable:

      +
      #!/bin/bash
      +
      +# extract Julia tar.gz file
      +tar -xzf julia-#.#.#-linux-x86_64.tar.gz
      +
      +# add Julia binary to PATH
+export PATH=$_CONDOR_SCRATCH_DIR/julia-#.#.#/bin:$PATH
      +
      +# run Julia script
      +julia my-script.jl
      +
      +
        +
      • For more details on the job submission, see the section +below: Submit Julia Jobs
      • +
      +
    6. +
    +

    Install Julia Packages

    +

    If your work requires additional Julia packages, you will need to peform a one-time +installation of these packages within a Julia project. A copy of the project +can then be saved for use in subsequent job submissions. For more details, +please see Julia's documentation at Julia Pkg.jl.

    +

    Download Julia and set up a "project"

    +

    If you have not already downloaded a copy of Julia, download the +precompiled Julia software from https://julialang.org/downloads/. +You will need the 64-bit, tarball compiled for general use on a Linux x86 system. The +file name will resemble something like julia-#.#.#-linux-x86_64.tar.gz.
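For example, to download directly to your /home directory with wget (the version number and URL below are illustrative - copy the link for the current Linux x86_64 tarball from the downloads page):

$ wget https://julialang-s3.julialang.org/bin/linux/x64/1.9/julia-1.9.3-linux-x86_64.tar.gz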

    +

    We will need a copy of the original tar.gz file for running jobs, but to install +packages, we also need an unpacked version of the software. Run the following commands +to extract the Julia software and add Julia to your PATH:

    +
    $ tar -xzf julia-#.#.#-linux-x86_64.tar.gz
    +$ export PATH=$PWD/julia-#.#.#/bin:$PATH
    +
    +

    After these steps, you should be able to run Julia from the command line, e.g.

    +
    $ julia --version
    +
    +

    Now create a project directory to install your packages (we've called +it my-project/ below) and tell Julia its name:

    +
    $ mkdir my-project
    +$ export JULIA_DEPOT_PATH=$PWD/my-project
    +
    +
    +

    If you already have a directory with Julia packages on the login node, you can +add to it by skipping the mkdir step above and going straight to setting the +JULIA_DEPOT_PATH variable.

    +
    +

    You can choose whatever name to use for this directory -- if you have +different projects that you use for different jobs, you could +use a more descriptive name than "my-project".

    +

    Install Packages

    +

    We will now use Julia to install any needed packages to the project directory +we created in the previous step.

    +

    Open Julia with the --project option set to the project directory:

    +
    $ julia --project=my-project
    +
    +

    Once you've started up the Julia REPL (interpreter), start the Pkg REPL, used to +install packages, by typing ]. Then install and test packages by using +Julia's add Package syntax.

    +
                   _
    +   _       _ _(_)_     |  Documentation: https://docs.julialang.org
    +  (_)     | (_) (_)    |
    +   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
    +  | | | | | | |/ _` |  |
    +  | | |_| | | | (_| |  |  Version 1.0.5 (2019-09-09)
    + _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
    +|__/                   |
    +
    +julia> ]
    +(my-project) pkg> add Package
    +(my-project) pkg> test Package
    +
    +

    If you have multiple packages to install they can be combined +into a single command, e.g. (my-project) pkg> add Package1 Package2 Package3.

    +

    If you encounter issues getting packages to install successfully, please +contact us at support@osg-htc.org

    +

    Once you are done, you can exit the Pkg REPL by typing the DELETE key and then +typing exit()

    +
    (my-project) pkg> 
    +julia> exit()
    +
    +

Your packages will have been installed to the my-project directory; we want to compress this folder so that it is easier to copy to jobs.

    +
    $ tar -czf my-project.tar.gz my-project/
    +
    +

    Submit Julia Jobs

    +

To submit a job that runs a Julia script, create a bash script and HTCondor submit file following the examples in this section. These examples assume that you have downloaded a copy of Julia for Linux as a tar.gz file and, if using packages, that you have gone through the steps above to install them and create an additional tar.gz file of the installed packages.

    +

    Create Executable Bash Script

    +

    Your job will use a bash script as the HTCondor executable. This script +will contain all the steps needed to unpack the Julia binaries and +execute your Julia script (script.jl below). What follows are two example bash scripts, +one which can be used to execute a script with base Julia only, and one that +will use packages you installed to a project directory (see Install Julia Packages).

    +

    Example Bash Script For Base Julia Only

    +

    If your Julia script can run without additional packages (other than base Julia and +the Julia Standard library) use the example script directly below.

    +
    #!/bin/bash
    +
    +# julia-job.sh
    +
    +# extract Julia tar.gz file
    +tar -xzf julia-#.#.#-linux-x86_64.tar.gz
    +
    +# add Julia binary to PATH
    +export PATH=$_CONDOR_SCRATCH_DIR/julia-#.#.#/bin:$PATH
    +
    +# run Julia script
    +julia script.jl
    +
    +

    Example Bash Script For Julia With Installed Packages

    +
    #!/bin/bash
    +
    +# julia-job.sh
    +
    +# extract Julia tar.gz file and project tar.gz file
    +tar -xzf julia-#.#.#-linux-x86_64.tar.gz
    +tar -xzf my-project.tar.gz
    +
    +# add Julia binary to PATH
    +export PATH=$_CONDOR_SCRATCH_DIR/julia-#.#.#/bin:$PATH
    +# add Julia packages to DEPOT variable
    +export JULIA_DEPOT_PATH=$_CONDOR_SCRATCH_DIR/my-project
    +
    +# run Julia script
    +julia --project=my-project script.jl
    +
    +

    Create HTCondor Submit File

    +

    After creating a bash script named julia-job.sh to run Julia, then create a submit file to submit the job.

    +

    More details about setting up a submit file, including a submit file template, +can be found in our quickstart guide: Quickstart Tutorial

    +
    # File Name = julia-job.sub
    +
    +executable = julia-job.sh
    +
    +transfer_input_files = julia-#.#.#-linux-x86_64.tar.gz, script.jl
    +should_transfer_files   = Yes
    +when_to_transfer_output = ON_EXIT
    +
    +output        = job.$(Cluster).$(Process).out
    +error         = job.$(Cluster).$(Process).error
    +log           = job.$(Cluster).$(Process).log
    +
    ++JobDurationCategory = "Medium"
    +
    +requirements   = (OSGVO_OS_STRING == "RHEL 9")
    +request_cpus   = 1
    +request_memory = 2GB
    +request_disk   = 2GB
    +
    +queue 1
    +
    +

    If your Julia script needs to use packages installed for a project, +be sure to include my-project.tar.gz as an input file in julia-job.sub. +For project tarballs that are <1 GB, you can follow the example below:

    +
    transfer_input_files = julia-#.#.#-linux-x86_64.tar.gz, script.jl, my-project.tar.gz
    +
    +

    Modify the CPU/memory request lines to match what is needed by the job. +Test a few jobs for disk space/memory usage in order to make sure your +requests for a large batch are accurate! Disk space and memory usage can be found in the +log file after the job completes.
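    As a point of reference, the resource-usage table near the end of the HTCondor job log looks roughly like the following (the numbers here are illustrative, not from a real job); compare the Usage column against your Request values when tuning request_memory and request_disk:

    Partitionable Resources :    Usage  Request Allocated
       Cpus                 :        1        1         1
       Disk (KB)            :   912342  2097152   2097152
       Memory (MB)          :      781     2048      2048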

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/python/manage-python-packages/index.html b/software_examples/python/manage-python-packages/index.html new file mode 100644 index 00000000..be81baa9 --- /dev/null +++ b/software_examples/python/manage-python-packages/index.html @@ -0,0 +1,2703 @@ + + + + + + + + + + + + + + + + + + Run Python Scripts on the OSPool - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Run Python Scripts on the OSPool

    +

    Overview

    +

    This guide will show you two examples of how to run jobs that use Python in the Open Science Pool. +The first example will demonstrate how to submit a job that uses base Python. +The second example will demonstrate the workflow for jobs that use specific Python packages, including +how to install a custom set of Python packages to your home directory and how to add them to a Python job submission.

    +

    Before getting started, you should know which Python packages you need to run your job.

    +

    Running Base Python on the Open Science Pool

    +

    Create a bash script to run Python

    +

    To submit jobs that run base Python, first create a bash executable - for +this example we'll call it run_py.sh - which will run our Python script called myscript.py.

    +

    For example, run_py.sh:

    +
    #!/bin/bash
    +
    +# Run the Python script 
    +python3 myscript.py
    +
    +
    +

    If you need to use Python 2, +replace the python3 above with python2.

    +
    +

    Create an HTCondor submit file

    +

    In order to submit run_py.sh as part of a job, we need to create an HTCondor +submit file. This should include the following:

    +
      +
    • run_py.sh specified as the executable
    • +
    • use transfer_input_files to bring our Python script myscript.py to wherever the job runs
    • +
    • include a standard container image that has Python installed.
    • +
    +

    All together, the submit file will look something like this:

    +
    universe    = vanilla
    +
    ++SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-ubuntu-20.04:latest"
    +
    +executable  = run_py.sh
    +
    +transfer_input_files = myscript.py
    +
    +log         = job.log
    +output      = job.out
    +error       = job.error
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus    = 1 
    +request_memory  = 2GB
    +request_disk    = 2GB
    +
    +queue 1
    +
    +

    Once everything is set up, the job can be submitted in the usual way, by running +the condor_submit command with the name of the submit file.
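    For example, assuming you saved the submit file above as python-job.sub (the file name is up to you), the job would be submitted with:

    $ condor_submit python-job.sub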

    +

    Running Python Jobs That Use Additional Packages

    +

    It's likely that you'll need additional Python packages that are not +present in the base Python installations. This portion of the +guide describes how to install your packages to a custom directory and +then include them as part of your jobs.

    +

    Install Python packages

    +

    While connected to your Access Point, start the base Singularity container that has a +copy of Python inside:

    +
     $ singularity shell /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-ubuntu-20.04:latest
    +
    +

    Next, create a directory for your files and set the PYTHONPATH

    +
     Singularity> mkdir my_env
    + Singularity> export PYTHONPATH=$PWD/my_env
    +
    +
    +

    You can swap out my_env for a more descriptive name like scipy or word-analysis.

    +
    +

    Now we can use pip to install Python packages.

    +
    Singularity> pip3 install --target=$PWD/my_env numpy
    +......some download message...
    +Installing collected packages: numpy
    +Successfully installed numpy-1.16.3
    +
    +

    Install each package that you need for your job using the pip install command.
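    For example, to install several packages at once into the same target directory (the package names below are only placeholders; substitute the ones your script needs):

    Singularity> pip3 install --target=$PWD/my_env numpy pandas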

    +
    +

    If you would like to test the package installation, you can run the python3 command +and then try importing the packages you just installed. To exit the Python console, +type "quit()"
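    For example, if you installed numpy, a quick check might look like this:

    Singularity> python3
    >>> import numpy
    >>> quit()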

    +
    +

    Once you are done, you can exit the container:

    +
    Singularity> exit
    +
    +

    All of the packages that were just installed should be contained in a sub-directory +of the my_env directory. To use these packages in a job, the entire my_env directory +will be transferred as a tar.gz file. So our final step is to compress the +directory, as follows:

    +
    $ tar -czf my_env.tar.gz my_env
    +
    +

    Create executable script to use installed packages

    +

    In addition to running inside the appropriate container, we will need to add a few +steps to our bash executable to set up the package directory we +just created. That will look something like this:

    +
    #!/bin/bash
    +
    +# Unpack the directory with your packages and point PYTHONPATH at it
    +tar -xzf my_env.tar.gz
    +export PYTHONPATH=$PWD/my_env
    +
    +# Run the Python script 
    +python3 myscript.py
    +
    +

    Modify the HTCondor submit file to transfer Python packages

    +

    The submit file for this job will be similar to the base Python job submit file shown above +with one addition - we need to include my_env.tar.gz in the list of files specified by transfer_input_files. +As an example:

    +
    universe    = vanilla
    +
    ++SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-ubuntu-20.04:latest"
    +
    +executable  = run_py.sh
    +
    +transfer_input_files = myscript.py, my_env.tar.gz
    +
    +log         = job.log
    +output      = job.out
    +error       = job.error
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus    = 1 
    +request_memory  = 2GB
    +request_disk    = 2GB
    +
    +queue 1
    +
    +

    Other Considerations

    +

    This guide mainly focuses on the nuts and bolts of running Python, but it's important +to remember that additional files needed for your jobs (input data, setting files, etc.) +need to be transferred with the job as well. See our Introduction to Data Management +on OSG for details on the different ways to deliver inputs to your jobs.

    +

    When you've prepared a real job submission, make sure to run a test job and then check +the log file for disk and memory usage; if you're using significantly more or less +than what you requested, make sure you adjust your requests.
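    One quick way to find that information is to search the log for the resource-usage table that HTCondor writes when the job terminates; for example, assuming the log file is named job.log as in the submit files above:

    $ grep -A 3 "Partitionable Resources" job.log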

    +

    Getting Help

    +

    For assistance or questions, please email the OSG Research Facilitation +team at support@osg-htc.org or visit the help desk and community forums.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/python/tutorial-ScalingUp-Python/index.html b/software_examples/python/tutorial-ScalingUp-Python/index.html new file mode 100644 index 00000000..76e72d5a --- /dev/null +++ b/software_examples/python/tutorial-ScalingUp-Python/index.html @@ -0,0 +1,2653 @@ + + + + + + + + + + + + + + + + + + Scaling Up With HTCondor’s Queue Command - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Scaling Up With HTCondor’s Queue Command

    +

    Many large-scale computations require the ability to process multiple jobs concurrently. Consider the extensive +sampling done for a multi-dimensional Monte Carlo integration, a parameter sweep for a given model, or a molecular +dynamics simulation with several initial conditions. These calculations require +submitting many jobs. About a million CPU hours per day are available to OSG users +on an opportunistic basis. Learning how to scale up and control large +numbers of jobs is essential to realizing the full potential of distributed high +throughput computing on the OSG.

    +

    fig 1

    +

    HTCondor's queue command can run multiple jobs from a single job description file. In this tutorial, we will see how to scale up the calculations for a simple Python example using the queue command.

    +

    Once we understand the basic HTCondor script to run a single job, it is easy +to scale up.

    +

    To download the materials for this tutorial, use the command

    +
    $ git clone https://github.com/OSGConnect/tutorial-ScalingUp-Python
    + +

    Inside the tutorial-ScalingUp-Python directory, all the required files are available. This includes the sample Python program, job description files, and executables.
    +Move into the directory with

    +
    $ cd tutorial-ScalingUp-Python
    + +

    Python script and the optimization function

    +

    Let us take a look at our objective function that we are trying to optimize.

    +
    f = (1 - x)**2 + (y - x**2)**2
    +
    +

    This is the two-dimensional Rosenbrock function, whose minimum is located at (1,1). +The Rosenbrock function is one of the standard test functions used to evaluate the robustness of an optimization method.

    + + +

    Here, we are going to use the brute force optimization approach to evaluate the two dimensional Rosenbrock function on grids of points. +The boundary values for the grid points are randomly assigned inside the python script. However, these default values may be replaced by +user supplied values.

    +

    To run the calculations with the random boundary values, the script is executed without any argument:

    +
    python3 rosen_brock_brute_opt.py
    + +

    To run the calculations with the user supplied values, the script is executed with input arguments:

    +
    python3 rosen_brock_brute_opt.py x_low x_high y_low y_high
    +
    +

    where x_low and x_high are low and high values along x direction, and y_low and y_high are the low and high values along the y direction.

    +

    For example, the command

    +
    python3 rosen_brock_brute_opt.py  -3 3 -2 2
    + +

    sets the boundary of the x direction to (-3, 3) and the boundary of the y direction to (-2, 2).

    +

    The directory Example1 runs the Python script with the default random values. The directories Example2 and Example3 deal with supplying the boundary values as input arguments.

    +

    The Python script requires the SciPy package, which is typically not included in standard installations of Python 3. +Therefore, we will use a container that has Python 3 and SciPy installed. +If you'd like to test the script, you can do so with

    +
    apptainer shell /cvmfs/singularity.opensciencegrid.org/htc/rocky:8
    + +

    and then run one of the above commands.

    + + +

    Submitting Jobs Concurrently

    +

    fig 3

    +

    Now let us take a look at job description file.

    +
    cd Example1
    +cat ScalingUp-PythonCals.submit
    + +

    If we want to submit several jobs, we need to track log, output, and error files for each job. An easy way to do this is to add the $(Cluster) and $(Process) variables to the file names. You can see this below in the names given to the standard output, standard +error and HTCondor log files:

    +
    +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/htc/rocky:8"
    +executable = ../rosen_brock_brute_opt.py
    +
    +log = Log/job.$(Cluster).$(Process).log
    +output = Log/job.$(Cluster).$(Process).out
    +error = Log/job.$(Cluster).$(Process).err
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus = 1
    +request_memory = 1 GB
    +request_disk = 1 GB
    +
    +queue 10
    + +

    Note the queue 10. This tells Condor to queue 10 copies of this job as one cluster.

    +

    Let us submit the above job

    +
    $ condor_submit ScalingUp-PythonCals.submit
    +Submitting job(s)..........
    +10 job(s) submitted to cluster 329837.
    + +

    Apply your condor_q knowledge to see this job progress. After all +jobs have finished, execute the post_script.sh script to sort the results.

    +
    ./post_script.sh
    + +

    Note that all ten jobs will have run with random arguments because we did not supply +any from the submit file. What if we wanted to supply those arguments so that we could +reproduce this analysis if needed? The next example shows how to do this.

    +

    Providing Different Inputs to Jobs

    +

    In the previous example, we did not pass +any arguments to the program, so the program generated random boundary conditions. If we have some idea of what a better boundary condition might be, it is a good idea to supply the boundary +conditions as arguments.

    +

    It is possible to use a single file to supply multiple arguments. We can take the job description file from the previous example, and modify it to include arguments. The modified job description file is available in the Example2 directory. Take a look at the job description file ScalingUp-PythonCals.submit.

    +
    $ cd ../Example2
    +$ cat ScalingUp-PythonCals.submit
    + +
    +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/htc/rocky:8"
    +executable = ../rosen_brock_brute_opt.py
    +arguments = $(x_low) $(x_high) $(y_low) $(y_high)
    +
    +log = Log/job.$(Cluster).$(Process).log
    +output = Log/job.$(Cluster).$(Process).out
    +error = Log/job.$(Cluster).$(Process).err
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus = 1
    +request_memory = 1 GB
    +request_disk = 1 GB
    +
    +queue x_low x_high y_low y_high from job_values.txt
    + +

    Most of the job description file looks the same as in the previous example. The main +difference is the addition of the arguments keyword, which looks like this:

    +
    arguments = $(x_low) $(x_high) $(y_low) $(y_high)
    + +

    The given arguments $(x_low), $(x_high), etc. are actually variables that represent +the values we want to use. These values are set in the queue command at the end of the +file:

    +
    queue x_low x_high y_low y_high from job_values.txt
    + +

    Take a look at job_values.txt:

    +
    $ cat job_values.txt
    + +
    -9 9 -9 9
    +-8 8 -8 8
    +-7 7 -7 7
    +-6 6 -6 6
    +-5 5 -5 5
    +-4 4 -4 4
    +-3 3 -3 3
    +-2 2 -2 2
    +-1 1 -1 1
    + +

    The submit file's queue statement will read in this file and assign each value in +a row to the four variables shown in the queue statement. Each row corresponds to the +submission of a unique job with those four values.
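    For instance, the first row of job_values.txt sets x_low=-9, x_high=9, y_low=-9, and y_high=9, so that job runs as if the submit file contained:

    arguments = -9 9 -9 9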

    +

    Let us submit the above job to see this:

    +
    $ condor_submit ScalingUp-PythonCals.submit
    +Submitting job(s)..........
    +9 job(s) submitted to cluster 329840.
    + +

    Apply your condor_q knowledge to see this job progress. After all +jobs have finished, execute the post_process.sh script to sort the results.

    +
    ./post_process.sh
    + +

    Another Example of Different Inputs

    +

    In the previous example, we split the input information into four variables +that were included in the arguments line. However, we could have set the +arguments line directly, without intermediate values. This is shown in +Example 3:

    +
    $ cd ../Example3
    +$ cat ScalingUp-PythonCals.submit
    + +
    +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/htc/rocky:8"
    +executable = ../rosen_brock_brute_opt.py
    +
    +log = Log/job.$(Cluster).$(Process).log
    +output = Log/job.$(Cluster).$(Process).out
    +error = Log/job.$(Cluster).$(Process).err
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus = 1
    +request_memory = 1 GB
    +request_disk = 1 GB
    +
    +queue arguments from job_values.txt
    + +

    Here, arguments has disappeared from the top of the file because we've included +it in the queue statement at the end. The job_values.txt file has the same values +as before; in this syntax, HTCondor will submit a job for each row of values and the +job's arguments will be those four values.

    +

    Let us submit the above job

    +
    $ condor_submit ScalingUp-PythonCals.submit
    +Submitting job(s)..........
    +9 job(s) submitted to cluster 329839.
    + +

    Apply your condor_q and condor_watch_q knowledge to see this job progress. After all +jobs have finished, execute the post_process.sh script to sort the results.

    +
    ./post_process.sh
    +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/python/tutorial-wordfreq/index.html b/software_examples/python/tutorial-wordfreq/index.html new file mode 100644 index 00000000..5b77b0c3 --- /dev/null +++ b/software_examples/python/tutorial-wordfreq/index.html @@ -0,0 +1,2670 @@ + + + + + + + + + + + + + + + + + + Wordcount Tutorial for Submitting Multiple Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Wordcount Tutorial for Submitting Multiple Jobs

    +

    Imagine you have a collection of books, and you want to analyze how word usage +varies from book to book or author to author. The type of workflow covered in +this tutorial can be used for workflows that have different input +files or parameters from job to job.

    +

    To download the materials for this tutorial, type:

    +
    $ git clone https://github.com/OSGConnect/tutorial-wordfreq
    +
    +

    Analyzing One Book

    +

    Test the Command

    +

    We can analyze one book by running the wordcount.py script, with the +name of the book we want to analyze:

    +
    $ ./wordcount.py Alice_in_Wonderland.txt
    +
    +

    If you run the ls command, you should see a new file with the prefix counts +which has the results of this Python script. This is the output we want to +produce within an HTCondor job. For now, remove the output:

    +
    $ rm counts.Alice_in_Wonderland.tsv
    +
    +

    Create a Submit File

    +

    To submit a single job that runs this command and analyzes the +Alice's Adventures in Wonderland book, we need to translate this command +into HTCondor submit file syntax. The two main components we care about +are (1) the actual command and (2) the needed input files.

    +

    The command gets turned into the submit file executable and arguments options:

    +
    executable = wordcount.py
    +arguments = Alice_in_Wonderland.txt
    +
    +

    The executable is the script that we want to run, and the arguments is +everything else that follows the script when we run it, as in the test above. +The input file for this job is the Alice_in_Wonderland.txt +text file. While we provided the file name in the arguments, we need +to explicitly tell HTCondor to transfer the corresponding file. +We include the file name in the following submit file option:

    +
    transfer_input_files = Alice_in_Wonderland.txt
    +
    +

    There are other submit file options that control other aspects of the job, like +where to save error and logging information, and how many resources to request per +job.

    +

    This tutorial has a sample submit file (wordcount.sub) with most of these submit file options filled in:

    +
    $ cat wordcount.sub
    +executable = 
    +arguments =
    +
    +transfer_input_files =
    +
    +should_transfer_files   = Yes
    +when_to_transfer_output = ON_EXIT
    +
    +log           = logs/job.$(Cluster).$(Process).log
    +error         = logs/job.$(Cluster).$(Process).error
    +output        = logs/job.$(Cluster).$(Process).out
    +
    ++JobDurationCategory = "Medium"
    +requirements   = (OSGVO_OS_STRING == "RHEL 7")
    +
    +request_cpus   = 1
    +request_memory = 512MB
    +request_disk   = 512MB
    +
    +queue 1
    +
    +

    Open (or create) this file with a terminal-based text editor (like vi or nano) and +add the executable, arguments, and input information described above.
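    After filling in those three options, the top of wordcount.sub should look like this:

    executable = wordcount.py
    arguments = Alice_in_Wonderland.txt

    transfer_input_files = Alice_in_Wonderland.txt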

    +

    Submit and Monitor the Job

    +

    After saving the submit file, submit the job:

    +
    $ condor_submit wordcount.sub
    +
    +

    You can check the job's progress using condor_q, which will print out the status of +your jobs in the queue. You can also use the command condor_watch_q to monitor the +queue in real time (use the keyboard shortcut Ctrl c to exit). Once the job finishes, you +should see the same counts.Alice_in_Wonderland.tsv output when you enter ls.

    +

    Analyzing Multiple Books

    +

    Now suppose you wanted to analyze multiple books - more than one at a time. +You could create a separate submit file for each book, and submit all of the +files manually, but you'd have a lot of file lines to modify each time +(in particular, the arguments and transfer_input_files lines from the +previous submit file).

    +

    This would be overly verbose and tedious. HTCondor has options that make it easy to +submit many jobs from one submit file.

    +

    Make a List of Inputs

    +

    First we want to make a list of inputs that we want to use for our jobs. This +should be a list where each item on the list corresponds to a job.

    +

    In this example, our inputs are the different text files for different books. We +want each job to analyze a different book, so our list should just contain the +names of these text files. We can easily create this list by using an ls command and +sending the output to a file:

    +
    $ ls *.txt > book.list
    +
    +

    The book.list file now contains each of the .txt file names in the current directory.

    +
    $ cat book.list
    +Alice_in_Wonderland.txt
    +Dracula.txt
    +Huckleberry_Finn.txt
    +Pride_and_Prejudice.txt
    +Ulysses.txt
    +
    +

    Modify the Submit File

    +

    Next, we will make changes to our submit file so that it submits a job for +each book title in our list (seen in the book.list file).

    +

    Create a copy of our existing submit file, which we will use for this job submission.

    +
    $ cp wordcount.sub many-wordcount.sub
    +
    +

    We want to tell the queue keyword to use our list of inputs to submit jobs. +The default syntax looks like this:

    +
    queue <item> from <list>
    +
    +

    Open the many-wordcount.sub file with a text editor and go to the end. +Following the syntax above, we modify the queue statement to fit our example:

    +
    queue book from book.list
    +
    +

    This statement works like a for loop. For every item in the book.list +file, HTCondor will create a job using this submit file but replacing every +occurrence of $(book) with the item from book.list.

    +
    +

    The syntax $(variablename) represents a submit variable whose value +will be substituted at the time of submission.

    +
    +

    Therefore, everywhere we used the name of the book in our submit file should be +replaced with the variable $(book) (in the previous example, everywhere you entered +"Alice_in_Wonderland.txt").

    +

    So the following lines in the submit file should be changed to use the variable $(book):

    +
    arguments = $(book)
    +
    +transfer_input_files = $(book)
    +
    +

    Submit and Monitor the Job

    +

    We're now ready to submit all of our jobs.

    +
    $ condor_submit many-wordcount.sub
    +
    +

    This will now submit five jobs (one for each book on our list). Once all five +have finished running, we should see five "counts" files, one for each book in the directory.

    +

    If you don't see all five "counts" files, consider investigating the log files and see if +you can identify what caused that to happen.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/r/tutorial-R-addlibSNA/index.html b/software_examples/r/tutorial-R-addlibSNA/index.html new file mode 100644 index 00000000..b472fa7f --- /dev/null +++ b/software_examples/r/tutorial-R-addlibSNA/index.html @@ -0,0 +1,2695 @@ + + + + + + + + + + + + + + + + + + Use External Packages in your R Jobs - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Use R Packages in your R Jobs

    +

    Often we may need to add external R libraries that are not part of +the base R installation. +This tutorial describes how to create custom R libraries for use in jobs +on the OSPool.

    +

    Background

    +

    The material in this tutorial builds upon the +Run R Scripts on the OSPool +tutorial. If you are not already familiar with how to run R jobs on +the OSPool, please see that tutorial first for a general introduction.

    +

    Setup Directory and R Script

    +

    First we'll need to create a working directory. You can either run +$ git clone https://github.com/OSGConnect/tutorial-R-addlib or type the following:

    +
    $ mkdir tutorial-R-addlib
    +$ cd tutorial-R-addlib
    +
    +

    Similar to the general R tutorial, we will create a script to use as a test +example. If you did not clone the tutorial, create a script called +hello_world.R that contains the following:

    +
    #!/usr/bin/env Rscript
    +
    +library(cowsay)
    +
    +say("Hello World!", "cow")
    +
    +

    We will run one more command that makes the script executable, meaning that it +can be run directly from the command line:

    +
    $ chmod +x hello_world.R
    +
    +

    Create a Custom Container with R Packages

    +

    Using the same container that we used for the general R tutorial, we will +add the package we want to use (in this case, the cowsay package) to create +a new container that we can use for our jobs.

    +

    The new container will be generated from a "definition" file. If it isn't already +present, create a file called cowsay.def that has the following lines:

    +
    Bootstrap: docker
    +From: opensciencegrid/osgvo-r:3.5.0
    +
    +%post
    +    R -e "install.packages('cowsay', dependencies=TRUE, repos='http://cran.rstudio.com/')"
    +
    +

    This file basically says that we want to start with one of the existing OSPool R +containers and add the cowsay package from CRAN.

    +

    To create the new container, set the following variables:

    +
    $ export TMPDIR=$HOME
    +$ export APPTAINER_CACHE_DIR=$HOME
    +
    +

    And then run this command:

    +
    apptainer build cowsay-test.sif cowsay.def
    +
    +

    It may take 5-10 minutes to run. Once complete, if you run ls, you should see a +file in your current directory called cowsay-test.sif. This is the new container.

    +
    +

    Building containers can be a new skill and slightly different for different +packages! We recommend looking at our container guides and container training +materials to learn more -- these are both linked from our main guides page. +There are also some additional tips at the end of this tutorial on building +containers with R packages.

    +
    +

    Test Custom Container and R Script

    +

    Start the container you created by running:

    +
    $ apptainer shell cowsay-test.sif
    +
    +

    Now we can test our R script:

    +
    Singularity :~/tutorial-R-addlib> ./hello_world.R
    +
    +

    If this works, we will have a message with a cow printed to our terminal. Once we have this output, we'll exit the container for now with exit:

    +
    Singularity :~/tutorial-R-addlib>  exit
    +$
    +
    +

    Build the HTCondor Job

    +

    For this job, we want to use the custom container we just created. For +efficiency, it is best to transfer this to the job using the OSDF. +If you want to use the container you just built, copy it to the appropriate +directory listed here, based on which Access Point you are using.
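    For example, on an ap40 Access Point the copy might look like the following (the exact destination depends on your Access Point and username, so check the directory listed in the OSDF guide):

    $ cp cowsay-test.sif /ospool/ap40/data/<username>/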

    +

    Our submit file, R.submit should then look like this:

    +
    +SingularityImage = "osdf://osgconnect/public/osg/tutorial-R-addlib/cowsay-test.sif"
    +executable        = hello_world.R
    +# arguments
    +
    +log    = R.log.$(Cluster).$(Process)
    +error  = R.err.$(Cluster).$(Process)
    +output = R.out.$(Cluster).$(Process)
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus   = 1
    +request_memory = 1GB
    +request_disk   = 1GB
    +
    +queue 1
    +
    +

    Change the osdf:// link in the submit file to be right for YOUR Access Point and +username, if you are using your own container file.

    +
    +

    Reminder: Files placed in the OSDF can be copied to other data spaces ("caches") +where they are NOT UPDATED. If you make a new container to use with your jobs, +make sure to give it a different name or put it at a different path than the +previous container. You will not be able to replace the exact path of the existing +container.

    +
    +

    Submit Jobs and Review Output

    +

    Now we are ready to submit the job:

    +
    $ condor_submit R.submit
    +
    +

    and check the job status:

    +
    $ condor_q
    +
    +

    Once the job finished running, check the output file as before. They should look like this:

    +
    $ cat R.out.0000.0
    + ----- 
    +Hello World! 
    + ------ 
    +    \   ^__^ 
    +     \  (oo)\ ________ 
    +        (__)\         )\ /\ 
    +             ||------w|
    +             ||      ||
    +
    +

    Tips for Building Containers with R Packages

    +

    There is a lot of variety in how to build custom containers! The two main decisions +you need to make are a) what to use as your "base" or starting container and b) what +packages to install.

    +

    There is a useful overview of building containers from our container training, +linked on our training page.

    +

    Base Containers

    +

    In this guide we used one of the existing OSPool R containers. You +can see the other versions of R that we support on our list of OSPool Supported Containers

    +

    Another good option for a base container are the "rocker" Docker containers: +Rocker on DockerHub

    +

    To use a different container as the base container, you just change the top of +the definition file. So to use the rocker tidyverse container as my starting point, I would +have a definition file header like this:

    +
    Bootstrap: docker
    +From: rocker/tidyverse:4.1.3
    +
    +

    When using containers from DockerHub, it's a good idea to pick a version (look at +the "Tags" tab for options). Above, this container would be version 4.1.3 of R.

    +

    Installing Packages

    +

    The sample definition file from this tutorial installed one package. If you have +multiple packages, you can change the "install.packages" command to install +multiple packages:

    +
    %post
    +  R -e "install.packages(c('cowsay','here'), dependencies=TRUE, repos='http://cran.rstudio.com/')"
    +
    +

    If your base container is one of the "rocker" containers, you can use a different +tool to install packages that looks like this:

    +
    %post
    +  install2.r cowsay
    +
    +

    or for multiple packages:

    +
    %post
    +  install2.r cowsay here
    +
    +

    Remember, you only need to install packages that aren't already in the container. If +you start with the tidyverse container, you don't need to install ggplot2 or dplyr - +those are already in the container and you would be adding packages on top.

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/r/tutorial-R/index.html b/software_examples/r/tutorial-R/index.html new file mode 100644 index 00000000..5cf35fb0 --- /dev/null +++ b/software_examples/r/tutorial-R/index.html @@ -0,0 +1,2624 @@ + + + + + + + + + + + + + + + + + + Run R scripts on the OSPool - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Run R scripts on the OSPool

    +

    This tutorial describes how to run a simple R script on the OSPool. We'll first run the program locally as a test. After that we'll create a submit file, submit it to the OSPool using an OSPool Access Point, and look at the results when the jobs finish.

    +

    Set Up Directory and R Script

    +

    First we'll need to create a working directory with our materials. You can either

    +
      +
    1. run $ git clone https://github.com/OSGConnect/tutorial-R to download the materials, OR
    2. create them yourself by typing the following: $ mkdir tutorial-R; cd tutorial-R
    +

    Let's create a small script to use as a test example. Create the file hello_world.R using a text editor like nano or vim that contains the following:

    +
    #!/usr/bin/env Rscript
    +
    +print("Hello World!")
    +
    +

    The header #!/usr/bin/env Rscript indicates that if this script is run on its +own, it needs to be executed using the R language (instead of Python, or bash, for example).

    +

    We will run one more command that makes the script executable, meaning that it +can be run directly from the command line:

    +
    $ chmod +x hello_world.R
    +
    +

    Access R on the Access Point

    +

    R is run using containers on the OSPool. To test it out on the Access Point, we can run:

    +
    $ apptainer shell \
    +   /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0
    +
    +
    +

    Other Supported R Versions

    +

    To see a list of all containers containing R, look at the +list of OSPool Supported Containers

    +
    +

    The previous command sometimes takes a minute or so to start. Once it starts, you +should see the following prompt:

    +
    Singularity :~/tutorial-R>
    +
    +

    Now, we can try to run R by typing R in our terminal:

    +
    Singularity :~/tutorial-R>  R
    +
    +R version 3.5.1 (2018-07-02) -- "Feather Spray"
    +Copyright (C) 2018 The R Foundation for Statistical Computing
    +Platform: x86_64-pc-linux-gnu (64-bit)
    +
    +R is free software and comes with ABSOLUTELY NO WARRANTY.
    +You are welcome to redistribute it under certain conditions.
    +Type 'license()' or 'licence()' for distribution details.
    +
    +  Natural language support but running in an English locale
    +
    +R is a collaborative project with many contributors.
    +Type 'contributors()' for more information and
    +'citation()' on how to cite R or R packages in publications.
    +
    +Type 'demo()' for some demos, 'help()' for on-line help, or
    +'help.start()' for an HTML browser interface to help.
    +Type 'q()' to quit R.
    +
    +>
    +
    +

    You can quit out with q().

    +
    > q()
    +Save workspace image? [y/n/c]: n
    +Singularity :~/tutorial-R>
    +
    +

    Great! R works. We'll leave the container running for the next step. See below +on how to exit from the container.

    +

    Test an R Script

    +

    To run the R script we created earlier, we just need to execute it like so:

    +
    Singularity :~/tutorial-R> ./hello_world.R
    +
    +

    If this works, we will have [1] "Hello World!" printed to our terminal. Once we have this output, we'll exit the container for now with exit:

    +
    Singularity :~/tutorial-R>  exit
    +$
    +
    +

    Build the HTCondor Job

    +

    Let's build a HTCondor submit file to run our script. Using a text editor, create a file called R.submit with the following text inside it:

    +
    +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0"
    +executable        = hello_world.R
    +# arguments
    +
    +log    = R.log.$(Cluster).$(Process)
    +error  = R.err.$(Cluster).$(Process)
    +output = R.out.$(Cluster).$(Process)
    +
    ++JobDurationCategory = "Medium"
    +
    +request_cpus   = 1
    +request_memory = 1GB
    +request_disk   = 1GB
    +
    +queue 1
    +
    +

    The path you put in the +SingularityImage option should match whatever you used +to test R above. We list the R script as the executable.

    +

    The R.submit file may include a few lines that you are unfamiliar with. For example, $(Cluster) and $(Process) are variables that will be replaced with the job's cluster and process numbers - these are automatically assigned by HTCondor when the job is submitted. This is useful when you have many jobs submitted from the same file. Any output and errors will be placed in a separate file for each job.

    +

    Submit and View Output

    +

    Finally, submit the job!

    +
    $ condor_submit R.submit
    +Submitting job(s).
    +1 job(s) submitted to cluster 3796250.
    +$ condor_q alice
    +-- Schedd: ap40.uw.osg-htc.org: <192.170.227.22:9618?... @ 04/13/23 09:51:04
    +OWNER      BATCH_NAME     SUBMITTED   DONE   RUN    IDLE  TOTAL JOB_IDS
    +alice      ID: 3796250   4/13 09:50      _      _      1      1 3796250.0
    +...
    +
    +

    You can follow the status of your job cluster with the condor_watch_q command, which shows condor_q output that refreshes every 5 seconds. Press control-C to stop watching.

    +

    Since our job prints to standard out, we can check the output files. Let's see what one looks like:

    +
    $ cat R.out.3796250.0
    +[1] "Hello World!"
    +
    + + + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/r/tutorial-ScalingUp-R/index.html b/software_examples/r/tutorial-ScalingUp-R/index.html new file mode 100644 index 00000000..2fb40bb6 --- /dev/null +++ b/software_examples/r/tutorial-ScalingUp-R/index.html @@ -0,0 +1,2618 @@ + + + + + + + + + + + + + + + + + + Scaling up compute resources - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Scaling up compute resources

    +

    Scaling up the computational resources is a big advantage for doing +certain large-scale calculations on the OSPool. Consider the extensive +sampling for a multi-dimensional Monte Carlo integration or a molecular +dynamics simulation with several initial conditions. These types of +calculations require submitting a lot of jobs.

    +

    About a million CPU hours per day are available to OSPool users +on an opportunistic basis. Learning how to scale up and control large +numbers of jobs is key to realizing the full potential of distributed high +throughput computing on the OSPool.

    +

    In this tutorial, we will see how to scale up calculations for a +simple example. To download the materials for this tutorial, use the command

    +
    $ git clone https://github.com/OSGConnect/tutorial-ScalingUp-R
    +
    +

    Background

    +

    For this example, we will use computational methods to estimate π. First, +we will define a unit square with an inscribed quarter circle, and we will +randomly sample points inside the square. The ratio of the points that fall inside the circle to +the total number of points is calculated, which approaches π/4.

    +

    This method converges extremely slowly, which makes it great for a +CPU-intensive exercise (but bad for a real estimation!).

    +

    Set up an R Job

    +

    If you downloaded the tutorial files, you should see the directory +"tutorial-ScalingUp-R" when you run the ls command. +This directory contains the files used in this tutorial. +Alternatively, you can write the necessary files from scratch. +In that case, create a working directory using the command

    +
    $ mkdir tutorial-ScalingUp-R
    +
    +

    Either way, move into the directory before continuing:

    +
    $ cd tutorial-ScalingUp-R
    +
    +

    Create and test an R Script

    +

    Our code is a simple R script that does the estimation. +It takes in a single argument in order to differentiate the jobs. +The code for the script is contained in the file mcpi.R. +If you didn't download the tutorial files, create an R script +called mcpi.R and add the following contents:

    +
    #!/usr/bin/env Rscript
    +
    +args = commandArgs(trailingOnly = TRUE)
    +iternum = as.numeric(args[[1]]) + 100
    +
    +montecarloPi <- function(trials) {
    +  count = 0
    +  for(i in 1:trials) {
    +    if((runif(1,0,1)^2 + runif(1,0,1)^2)<1) {
    +      count = count + 1
    +    }
    +  }
    +  return((count*4)/trials)
    +}
    +
    +montecarloPi(iternum)
    +
    +

    The header at the top of the file (the line starting with #!) indicates that this script is +meant to be run using R.

    +

    If we were running a more intensive script, we would want to test our pipeline +with a shortened test script first.

    +
    +

    If you want to test the script, start an R container, and then run +the script using Rscript. For example:

    +
    $ apptainer shell \
    + /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0
    +Singularity :~/tutorial-ScalingUp-R> Rscript mcpi.R 10
    +[1] 3.14
    +Singularity :~/tutorial-ScalingUp-R> exit
    +$
    +
    +
    +

    Create a Submit File and Log Directories

    +

    Now that we have our R script written and tested, +we can begin building the submit file for our job. If we want to submit several +jobs, we need to track log, output, and error files for each +job. An easy way to do this is to use the Cluster and Process ID +values assigned by HTCondor to create unique files for each job in our +overall workflow.

    +

    In this example, the submit file is called R.submit. +If you did not download the tutorial files, create a submit file named R.submit +and add the following contents:

    +
    +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0"
    +
    +executable = mcpi.R
    +arguments = $(Process)
    +
    +#transfer_input_files = 
    +should_transfer_files = YES
    +when_to_transfer_output = ON_EXIT
    +
    +log = logs/job.log.$(Cluster).$(Process)
    +error = logs/job.error.$(Cluster).$(Process)
    +output = output/mcpi.out.$(Cluster).$(Process)
    +
    +request_cpus = 1
    +request_memory = 1GB
    +request_disk = 1GB
    +
    +queue 100
    +
    +

    If you did not download the tutorial files, you will also need to create the +logs and output directories to hold the files that will be created for each job. +You can create both directories at once with the command

    +
    $ mkdir logs output
    +
    +

    There are several items to note about this submit file:

    +
      +
    • The queue 100 statement in the submit file. This tells Condor to enqueue 100 copies + of this job as one cluster.
    • +
    • The submit variables $(Cluster) and $(Process). These are used to specify unique output files. + HTCondor will replace these with the Cluster and Process ID numbers for each individual process + within the cluster. The $(Process) variable is also passed as an argument to our R script.
    • +
    +

    Submit the Jobs

    +

    Now it is time to submit our job! You'll see something like the following upon submission:

    +
    $ condor_submit R.submit
    +Submitting job(s).........................
    +100 job(s) submitted to cluster 837.
    +
    +

    Apply your condor_q knowledge to see the progress of these jobs. +Check your logs folder to see the error and HTCondor log +files and the output folder to see the results of the scripts.

    +

    Post Process

    +

    Once the jobs are completed, you can use the information in the output files +to calculate an average of all of our computed estimates of π.

    +

    To see this, we can use the command:

    +
    $ cat output/mcpi*.out* | awk '{ sum += $2; print $2"   "NR} END { print "---------------\n Grand Average = " sum/NR }'
    +
    +

    Key Points

    +
      +
    • Scaling up the number of jobs is crucial for taking full advantage of the computational resources of the OSPool.
    • +
    • Changing the queue statement allows the user to scale up the number of jobs.
    • +
    • The arguments option can be used to pass parameters to a job script.
    • +
    • The submit variables $(Cluster) and $(Process) can be used to name log files uniquely.
    • +
    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/software_examples/r/tutorial-spills-R/index.html b/software_examples/r/tutorial-spills-R/index.html new file mode 100644 index 00000000..f88d4679 --- /dev/null +++ b/software_examples/r/tutorial-spills-R/index.html @@ -0,0 +1,2656 @@ + + + + + + + + + + + + + + + + + + Analyzing .csv Data with R - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Analyzing Chemical Spills Datasets (.csv files)

    +

    An OSPool Tutorial

    +

    Spills of hazardous materials, like petroleum, mercury, and battery acid, that can impact water and land quality are required by law to be reported to the United States government. In this tutorial, we will analyze records provided by the state of New York on spills of hazardous materials that occurred from 1950 to 2019.

    +

    The data used in this tutorial was collected from https://catalog.data.gov/dataset/spill-incidents/resource/a8f9d3c8-c3fa-4ca1-a97a-55e55ca6f8c0 and modified for teaching purposes.

    +

    To access all of the materials to complete this tutorial, first log into your OSPool access point and run the following command: git clone https://github.com/OSGConnect/tutorial-spills-R/.

    +

    Step 1: Get to Know Hazardous Spills Dataset

    +

    Let's explore the data files that we will be analyzing. Before we do so, we must make sure we are in the tutorial directory (tutorial-spills-R/). We can do this by printing your working directory (pwd):

    +
    pwd
    +
    +

    We should see something similar to /home/jovyan/tutorial-spills-R/, where jovyan would be replaced by your own account username.

    +

    Next, let's navigate to our /data directory and list (ls) the files inside of it:

    +
    cd data/
    +ls
    +
    +

    We should see seven .csv files, one for each decade between 1950-2019.

    +

    To explore the contents of these files, we can use commands like head -n 5 <fileName> to view the first 5 lines of our data files.

    +
    head -n 5 spills_1980_1989.csv  
    +
    +

    If you are working in a Jupyter notebook, you can also use the navigation bar on the left side to double-click and open each comma-separated value (.csv) file and view it in a table format, instead of the command line rendering above.

    +

    Step 2: Prepare the R Executable

    +

    Next, we need to create an R script to analyze our datasets. An example of an R script can be found in our main tutorial directory, so let's navigate there:

    +
    cd ../ # change directory to move one up
    +ls # list files
    +
    +

    Then let us print the contents of our executable script:

    +
    cat spill_calculation.r
    +
    +

    This script will read in different datasets as arguments and then will carry out summary statistics to print out the number of spills recorded per decade and the total size (in gallons) of the hazardous spills.

    +

    Step 3: Prepare Portable Software

    +

    Some common software, like R, is provided by OSG using containers. Because of this, you do not need to install R yourself; you will just tell HTCondor what container to use for your jobs. Additionally, this tutorial uses only base R and no special libraries, but if you need libraries (e.g., tidyverse, ggplot2) you can always install them in your R container.

    +

    A list of containers and other software provided by OSG staff can be found on our website https://portal.osg-htc.org/documentation/, along with resources for learning how to add libraries to your container.

    +

    We will be using the R container for R 3.5.0, which is accessible under /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0, so we must make sure to tell HTCondor to fetch this container when starting each of our jobs. To learn how to tell HTCondor to do this, see below.
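    Specifically, the submit file will point HTCondor at the container with a line like the one used throughout these guides:

    +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0"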

    +

    Step 4: Prepare and Submit an HTCondor Submit File for One Test Job

    +

    The HTCondor submit file tells HTCondor how you would like your job to be run on your behalf.

    +

    For example, you should specify what executable you want to run, whether you want a container (and which one), the resources you would like available to your job, and any special requirements.

    +

    Step 4A: Prepare and Submit an HTCondor Submit File

    +

    A sample submit file to analyze our smallest dataset, spills_1950_1959.csv, might look like:

    +
    cat R.submit
    +
    +
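    The exact contents ship with the tutorial materials, so inspect the output of the cat command above; broadly, a submit file for this step would have a shape along these lines (an illustrative sketch, not necessarily identical to the file in the repository):

    +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/opensciencegrid/osgvo-r:3.5.0"

    executable = spill_calculation.r
    arguments = spills_1950_1959.csv
    transfer_input_files = data/spills_1950_1959.csv

    log    = logs/job.$(Cluster).$(Process).log
    error  = logs/job.$(Cluster).$(Process).err
    output = output/spills.out

    +JobDurationCategory = "Medium"

    request_cpus   = 1
    request_memory = 1GB
    request_disk   = 1GB

    queue 1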

    We can submit this job using condor_submit <SubmitFile>:

    +
    condor_submit R.submit
    +
    +

    We can check on the status of our job in HTCondor's queue by running:

    +
    condor_q
    +
    +

    Once our job is done running, it will leave HTCondor's queue automatically.

    +

    Step 4B: Review Test Job Results

    +

    Once our job is done running, we can check the results by looking in our output folder:

    +
    cat output/spills.out
    +
    +

    We should see that from 1950-1959, New York recorded five spills that totalled less than 0 recorded gallons.

    +

    Step 5: Scale Out Your Workflow to Analyze Many Datasets

    +

    We just prepared and ran one job analyzing the spills_1950_1959.csv dataset! But now, we want to analyze the remaining 6 datasets. Luckily, HTCondor is very helpful when it comes to rapidly queueing many small jobs!

    +

    To do so, we will update our submit file to use the queue <variable> from <list> syntax. But before we do this, we need to create a list of the files we want to queue a job for:

    +
    ls data > list_of_datasets.txt
    +cat list_of_datasets.txt
    +
    +

    Great! Now we have a list of the files we want analyzed, where each file is on its own separate line.

    +

    Step 5A: Update submit file to queue a job for each dataset

    +

    Now, let's modify the queue line of our submit file to use the new queue syntax. For this, we can choose almost any variable name, so for simplicity, let's choose dataset such that we have queue dataset from list_of_datasets.txt.

    +

    We can then call this new variable, dataset, elsewhere in our submit file by wrapping it with $() like so: $(dataset).

    +

    Our updated submit file might look like this:

    +
    cat many-R.submit
    +
    +
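    Again, check the cat output above for the exact contents; relative to R.submit, the key changes would look something like this sketch (the use of the $(dataset) variable is the point, while the specific file and directory names are assumptions):

    arguments = $(dataset)
    transfer_input_files = data/$(dataset)

    output = output/$(dataset).out

    queue dataset from list_of_datasets.txt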

    Step 5B: Submit Many Jobs

    +

    Now we can submit our new submit file using condor_submit again:

    +
    condor_submit many-R.submit
    +
    +

    Notice that we have now queued 7 jobs using one submit file!

    +

    Step 5C: Analysis Completed!

    +

    We can check on the status of our 7 jobs using condor_q:

    +
    condor_q
    +
    +

    Once our jobs are done, we can also review our output files:

    +
    cat output/*.csv.out
    +
    +

    In a few minutes, we were able to take our R script and run several jobs to analyze all of our real-world data. Congratulations!

    + + +
    +
    +
    + +
    + + + +
    +
    +
    +
    + + + + + + + + \ No newline at end of file diff --git a/stylesheets/code-highlight.css b/stylesheets/code-highlight.css new file mode 100644 index 00000000..6eb1402d --- /dev/null +++ b/stylesheets/code-highlight.css @@ -0,0 +1,7 @@ +span.hll { + background-color: rgba(255,255,0,.5) +} + +button.md-clipboard.md-icon { + display: none; +} \ No newline at end of file diff --git a/stylesheets/osg.css b/stylesheets/osg.css new file mode 100644 index 00000000..f367b4a7 --- /dev/null +++ b/stylesheets/osg.css @@ -0,0 +1,84 @@ +:root { + --md-default-fg-color: #000000; + --md-primary-fg-color: #F1A52C; + --md-primary-bg-color: #1F2C36; + --md-accent-fg-color: #FFC364; + --md-default-fg-color--light: #000000; +} + +:root>* { + --md-footer-bg-color: #3a3a3a; + + --md-typeset-a-color: #ad510c; + --md-text-font-family: "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif; +} + +.md-typeset h1, .md-typeset h2 { + font-weight: 500; +} + +.md-tabs__link { + font-size: .8rem; + color: black; +} + +pre.term { + border-left: 0px solid #323232; + background: #484848; + color: #F0F0F0; +} + +pre.term > code { + background: #484848; + color: #F0F0F0; +} + +pre.sub { + border-left: 5px solid var(--md-primary-fg-color); +} + +pre.file { + border-left: 5px solid var(--md-primary-bg-color); +} + +/* Tweak the headers */ +.md-typeset h1 { + font-size: 2.4em; +} + +.md-typeset h2 { + font-size: 1.8em; + border-bottom: solid 2px var(--md-primary-fg-color); +} + +.md-typeset h3 { + font-size: 1.3em; +} + +.md-typeset h6 { + font-size: .8em; +} + + +/*$text: #3a3a3a;*/ +/*$primary: */ +/*$secondary: #F1A52C;*/ +/*$info: #FFC364;*/ +/*$warning: #fff1c7;*/ +/*$white-offset: #fff7ea;*/ +/*$nav-link-color: $text;*/ + + +/*$nav-link-font-weight: 500;*/ +/*$navbar-light-color: #3a3a3a;*/ + +/*$border-radius: 1rem;*/ + +/*$btn-border-radius: .25rem;*/ + +/*$card-border-color: black;*/ +/*$card-border-radius: $border-radius;*/ + +/*$font-family-base: "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif;*/ +/*$headings-font-family: "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif;*/ +/*$headings-font-weight: 600;*/ \ No newline at end of file diff --git a/support_and_training/support/getting-help-from-RCFs/index.html b/support_and_training/support/getting-help-from-RCFs/index.html new file mode 100644 index 00000000..ef397721 --- /dev/null +++ b/support_and_training/support/getting-help-from-RCFs/index.html @@ -0,0 +1,2482 @@ + + + + + + + + + + + + + + + + + + Email, Office Hours, and 1-1 Meetings - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Email, Office Hours, and 1-1 Meetings

    +

    There are multiple ways to get help from OSG’s Research Computing Facilitators. Get in touch anytime!

    +

    To help researchers effectively utilize large-scale computing, our Research Computing +Facilitators (RCFs) are here to answer questions and provide guidance and support. +If we're not able to help with a specific problem, we will +do our best to connect you with another group or service that can.

    +

We don't expect you to be able to address all of your questions by consulting our documentation, searching online, or just working through things on your own. Please use the methods below if you are stuck or have questions.

    +

    Help via Email

    +

    We provide ongoing support via email to support@osg-htc.org. You can typically expect a first response within a few business hours.

    +

    support@osg-htc.org

    +

    Virtual Office Hours

    +

    Drop-in for live help:

    +
      +
• Tuesdays, 4-5:30pm ET / 1-2:30pm PT
• Thursdays, 11:30am-1pm ET / 8:30-10am PT
    +

    You can find the URL to the Virtual Office Hours meeting room in the welcome message when you log into an OSG-managed Access Point, or in the signature of a support email from an RCF.

    +

    Once you arrive in the room, please sign in.

    +

    Sign-in for office hours

    +

    Cancellations will be announced via email. If the times above don’t work for you, please email us at our usual support address to schedule a separate meeting.

    +

    Make an Appointment

    +

    We are happy to arrange meetings outside of designated Office Hours. Email us to schedule a time to meet!

    +

    support@osg-htc.org

    +

    Training Opportunities

    +

    The RCF team runs regular new user training on the first Tuesday of the month and a special topic training on the third Tuesday of the month. See upcoming training dates, registration information, and materials on our training page.

    +

    OSPool Training page

    + + + + + + + + \ No newline at end of file diff --git a/support_and_training/training/osg-user-school/index.html b/support_and_training/training/osg-user-school/index.html new file mode 100644 index 00000000..b34395be --- /dev/null +++ b/support_and_training/training/osg-user-school/index.html @@ -0,0 +1,2495 @@ + + + + + + + + + + + + + + + + + + Annual, Week-Long OSG User School - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Annual, Week-Long OSG User School

    +

    +

    OSG School 2024 Group Photo

    +

    Overview

    +

During this week-long training event held at the University of Wisconsin-Madison every summer, students learn to use high-throughput computing (HTC) systems — at their own campus or using the OSG — to run large-scale computing applications that are at the heart of today's cutting-edge science. Through lectures, discussions, and lots of hands-on activities with experienced OSG staff, students will learn how HTC systems work, how to run and manage lots of jobs and huge datasets, how to implement a scientific computing workflow, and where to turn for more information and help.

    +

The School is ideal for graduate students in any science or research domain where large-scale computing is a vital part of the research process; we will also consider applications from advanced undergraduates, post-doctoral researchers, faculty, and staff. Students accepted to this program will receive financial support for basic travel and local costs associated with the School.

    +

    Next OSG User School

    +

    The next OSG User School will be held in the summer of 2025. +Applications will likely open in early 2025.

    +

    Open Materials and Recordings

    +

The OSG User School went virtual in 2020 and 2021, which meant we were able to record lectures to complement the lecture and exercise materials!

    + +

    Past OSG Schools

    + + + + + + + + \ No newline at end of file diff --git a/support_and_training/training/osgusertraining/index.html b/support_and_training/training/osgusertraining/index.html new file mode 100644 index 00000000..4d8947c3 --- /dev/null +++ b/support_and_training/training/osgusertraining/index.html @@ -0,0 +1,2808 @@ + + + + + + + + + + + + + + + + + + Monthly OSG User Training (registration+materials) - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    OSG User Training (regular/monthly)

    +

All User Training sessions are offered on Tuesdays from 2:30-4pm ET (11:30am-1pm PT), on the third Tuesday of the month. The trainings are designed as stand-alone subjects, so you do not need to bring or prepare your dataset before the training. The only prerequisite is some familiarity with using a command line interface or shell. Some familiarity with HTCondor job submission is useful but not required.

    +

    Registration opens a month before the training date, and closes 24 hours before the event. You can register for all of our trainings via setmore:

    +

    Register Here

    +

    Fall 2024 Training Schedule

    + + + + + + + + + + + + + +
Tuesday, September 17 - OSPool Basics: Get Running on the OSPool +

    Learning Objectives: Topics covered in this workshop include:

    +
      +
    • An introduction to OSG services and the OSPool +
    • Basics of HTCondor job submission +
    • Hands-on practice submitting HTCondor jobs +
    +

If you're new to the OSPool (or have been away for a while) and want to get started, this is an ideal opportunity to go through core concepts and practice hands-on skills; see the minimal submit-file sketch after this schedule table.

    +

    Prerequisites/Audience: There are no prerequisites for this workshop. This workshop is designed for new HTCondor and OSPool users.

    +
Tuesday, October 15 - Workflows with Pegasus +

+ Learning Objectives: An introduction to the Pegasus Workflow Management System, which is a useful tool for researchers needing to execute a large number of jobs or complex workflows. Attendees will learn how to construct and manage workflows, about capabilities like automatic data transfers, and about higher-level tooling for analyzing workflow performance. +

    +

    + Prerequisites/Audience: There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. +

    +
Tuesday, November 19 - DAGMan: HTCondor's Workflow Manager +

    + Learning Objectives: In this training, you will be guided through hands-on exercises to learn how to use DAGMan to automate your HTCondor job submissions. This training is especially useful for anyone who has constructed different job types and wants to be able to run them in a certain order. +

    +

    + Prerequisites/Audience: A basic understanding of HTCondor job submission +

    +
    + +
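As a companion to the OSPool Basics session in the schedule above, here is a minimal HTCondor submit file of the kind that workshop walks through. It is a generic sketch rather than official workshop material; hello_world.sh is a placeholder for whatever program you want to run, and the resource requests are illustrative.

    # hello.submit -- minimal submit file illustrating the basics covered in the workshop
    executable     = hello_world.sh       # placeholder script; swap in your own program
    output         = hello.$(ProcId).out  # stdout from each job
    error          = hello.$(ProcId).err  # stderr from each job
    log            = hello.log            # HTCondor's event log for these jobs

    request_cpus   = 1
    request_memory = 500MB
    request_disk   = 500MB

    queue 3                               # submit three identical jobs

Submitting it with condor_submit hello.submit and watching it with condor_q is roughly the kind of hands-on exercise the workshop covers.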

    For a calendar version of these events see:

    + +

    Materials

    +

    All of our training materials are public and provided below:

    +
    +[Webinar] Principles of Distributed High Throughput Computing +
    +Learning Objectives +
    +Have you ever wondered about the “why” of HTCondor? Join us to hear about the “philosophy” of high throughput computing and how HTCondor has evolved to make throughput computing possible. This workshop will be led by a core HTCondor developer, Greg Thain, and is a perfect opportunity for longer-term OSPool users to learn more about our underlying technology. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this webinar. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Spring 2024 +
    +
    + +
    +[Webinar] Move Your Data with Pelican (and the OSDF) +
    +Learning Objectives +
    +Pelican is a platform created to enable easier data sharing - within or beyond your institution! This training will cover how Pelican is used to move data within the OSPool and also how you can use Pelican tools to host, upload and download your data. This training is relevant for researchers with large amounts of data, as well as campus representatives, to learn about how Pelican can help with your data movement needs. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this webinar. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Summer 2024 +
    +
    + +
    +[Workshop] OSPool Basics: Get Running on the OSPool +
    +Learning Objectives +
    +Topics covered in this workshop include: +
      +
1. An introduction to OSG services and the OSPool
2. Basics of HTCondor job submission
3. Hands-on practice submitting HTCondor jobs
    +Prerequisites/Audience +
    +There are no prerequisites for this workshop. This workshop is designed for new HTCondor and OSPool users. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Winter 2023 +
    +
    + +
    +[Webinar] Learn About the PATh Facility +
    +Learning Objectives +
    +The PATh Facility provides dedicated throughput computing capacity to NSF-funded researchers for longer and larger jobs. This training will describe its features and how to get started. If you have found your jobs need more resources (cores, memory, time, data) than is typically available in the OSPool, this resource might be for you! +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this webinar. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Winter 2023 +
    +
    + +
    +[Workshop] DAGMan: HTCondor's Workflow Manager +
    +Learning Objectives +
+Presented by an HTCondor DAGMan developer, this workshop is designed for researchers who would like to learn how to implement DAG workflows and automate workflow management on the OSPool (a minimal DAG sketch appears after this materials list). +
    +
    +Prerequisites/Audience +
    +A basic understanding of HTCondor job submission and of an HTCondor submit file is highly recommended for this workshop. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Winter 2023 +
    +
    + +
    +[Workshop] Organizing and Submitting HTC Workloads +
    +Learning Objectives +
    +This workshop will present useful HTCondor features to help researchers automatically organize their workspaces on High Throughput Computing systems. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Summer 2023 +
    +
    + +
    +[Workshop] Using Containerized Software on the Open Science Pool +
    +Learning Objectives +
    +This workshop is designed to introduce software containers such as Docker, Apptainer, and Singularity. Content covered includes how to create a container, use a container, and techniques for troubleshooting containerized software. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Fall 2023 +
    +
    + +
    +[Workshop] Pegasus Workflow Management System on the Open Science Pool +
    +Learning Objectives +
    +This workshop is designed to introduce Pegasus Workflow Management System, a useful tool for researchers needing to execute a large number of jobs or complex workflows. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Fall 2023 +
    +
    + +
    +[Workshop] Software Portability on the Open Science Pool +
    +Learning Objectives +
    +This workshop is designed to introduce concepts pertaining to software portability, including containers, different ways to install software, setting file paths, and other important introductory concepts. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Summer 2023 +
    +
    + +
    +[Workshop] Access the OSPool via Jupyter Interface +
    +Learning Objectives +
    +This workshop is designed to introduce researchers to the OSPool's new Jupyter interface feature, including how to access and use Jupyter notebooks. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this workshop. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Fall 2023 +
    +
    + +
    +[Workshop] Bioinformatics Analyses on the OSPool: A BWA Example +
    +Learning Objectives +
    +This workshop is designed to show the process of implementing and scaling out a bioinformatics workflow using HTCondor. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Summer 2023 +
    +
    + +
    +[Workshop] HTCondor Tips & Tricks: Using condor_q and condor_history to Learn about Your Jobs +
    +Learning Objectives +
    +This workshop is designed to introduce researchers to helpful HTCondor tools for learning about their HTCondor jobs. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Spring 2023 +
    +
    + +
    +[Workshop] Special Environments, GPUs +
    +Learning Objectives +
    +This workshop is designed for researchers interested in learning about using special environments, architectures, or resources such as GPUs. +
    +
    +Prerequisites/Audience +
    +There are no prerequisites for this workshop, however, a basic understanding of HTCondor job submission and HTCondor submit files will make it easier to understand the content presented. +
    +
    +Available Materials +
    + +Materials Last Updated +
    +Spring 2023 +
    +
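To make the DAGMan entries above more concrete, here is a minimal DAG description of the kind discussed in the DAGMan workshop and training session. It is a hypothetical sketch: prepare.sub and analyze.sub are placeholder submit files, and the linked workshop materials should be consulted for complete, tested examples.

    # pipeline.dag -- hypothetical two-step workflow: run "prepare" before "analyze"
    JOB prepare  prepare.sub
    JOB analyze  analyze.sub

    # DAGMan submits "analyze" only after "prepare" finishes successfully
    PARENT prepare CHILD analyze

    # optionally retry a failed node up to two times
    RETRY analyze 2

The whole workflow is submitted with condor_submit_dag pipeline.dag; condor_q then shows the DAGMan scheduler job along with whichever node is currently running.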
    + + + + + + + + \ No newline at end of file diff --git a/support_and_training/training/ospool_for_education/index.html b/support_and_training/training/ospool_for_education/index.html new file mode 100644 index 00000000..b3a57544 --- /dev/null +++ b/support_and_training/training/ospool_for_education/index.html @@ -0,0 +1,2620 @@ + + + + + + + + + + + + + + + + + + OSPool Resources for Teaching & Education - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    OSPool Resources for Teaching & Education

    +

    The OSPool provides a free, ready-to-use platform for instructors who are teaching high throughput computing concepts for academic courses, conference workshops, and other events.

    +

    Instructors can choose for their students to have Guest or Full Accounts on the OSPool. For Guest Accounts, students/attendees can launch an OSPool Notebook at any time and practice job submission with smaller workflows. For Full Accounts, students/attendees will need to request an account (which will be approved within one business day), but then are able to submit large scale high throughput computing workflows free of charge.

    +

    The table below outlines suggested steps for bringing OSPool resources to your training or event. Please reach out to the facilitation team at any time if you have questions or want to chat about your goals.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Explore our Tools - Explore HTCondor job submission with an OSPool Guest Account +
    + - To launch a guest OSPool Notebook, go to https://notebook.ospool.osg-htc.org using an internet browser. +
    + - Visit our OSPool Jupyter Notebooks guide to learn about Guest and Full Accounts +
    +
Conduct Initial Testing of your Event Materials - Using the Guest Account, we recommend conducting initial testing of your event materials to help inform next steps. +
    +
+ We provide supplementary materials that you may use to help teach high throughput computing and OSPool-related concepts. +
Discuss your event goals with a Research Computing Facilitator (Optional) - The Facilitation team is here to help discuss your event goals and provide guidance about how to best leverage existing OSG services and resources. Fill out this form and a Facilitator will contact you within one business day about scheduling a short virtual meeting. +
Evaluate Guest or Full Account for Attendees - Option 1: Guest OSPool Accounts +
    +
+ You are welcome to have your attendees use an OSPool Guest Account for the event. +
    +
    + This is a good option for events that: +
    +
    + - may not know registrants in advance +
    + - run less than 4 hours, or can easily recreate files that may be lost upon session time out (4 hours) +
    +
    + - want to only use a notebook interface +
    Option 2: Full OSPool Accounts +
    +
    + You can request that your students have the ability to submit jobs to the OSPool using Full Accounts. +
    +
    + This is a good option for events that: +
    +
    + - know registrants in advance +
    + - will run for more than 1-2 days +
    + - with more than 50 participants +
+ - would like jobs to access the full capacity of the OSPool
+ - would like to submit jobs using a notebook or classic terminal interface +
    Prior to the Event + If using full accounts, the instructor will provide a list of participants to the OSG Research Facilitation Team, and participants should request an account a few days in advance of the event (does not apply to guest accounts). It is also good practice to test your full workshop code and any software on your account of choice. +
Start of Event - We require all events to provide a short (~5 minute) introduction to OSG Policies.
Feedback - After your event, please email us to let us know how it went, and the number of participants. +
    + +

    Teaching Resources

    +

    Here are some resources you can use for your event:

    +

    Worksheets for Public Use

    +

    Scale Out My Computing Brainstorming Worksheet

    +

    Slide Presentations for Public Use

    +

    OSG Policies and Intro for Course Use

    +

    OSPool Training Slides and Recordings

    +

    Video Recordings

    +

    OSPool Training Slides and Recordings

    +

    HTCondor User Tutorials

    +

    Partnership to Advance Throughput Computing YouTube channel

    +

    Frequently Asked Questions (FAQs)

    +
    +Why use OSPool resources for my course/event? +
    + OSPool resources provide a free, easy-to-use toolkit for you to use to teach computing concepts at your next course/event. Event attendees do not need + an account, but can request continued access to use OSPool resources for their own research. +
    +
    + The OSPool staff also offer free assistance with helping you convert an existing workflow to work on the OSPool. We provide guidance about using OSG + resources, using HTCondor, and are happy to answer any questions you may have regarding our resources. +
    +
    + +
    +If I request full accounts for my students/attendees, when will their accounts be deactivated? +
    + We work with instructors to choose a date that works well for their event, but typically accounts are deactivated several days after the event + completes. If attendees are interested in continuing to use OSPool resources for their research, they can request their account remains active by + emailing support@osg-htc.org. +
    +
    + +
    +Do you have slides and video recordings of workshops that used OSPool resources to help me prepare for my event(s)? +
    + Yes! +
    +
    + We provide hands-on tutorial materials for topics such as running common software or workflows on the OSPool (e.g., python, R, MATLAB, bioinformatic + workflows), recordings of tutorials and introductory materials, presentation slides, and other materials. Some of the materials are linked under the + Teaching Resources section above. +
    + +
    +When should I not use OSPool resources for my course/event? +
    + Events are typically bound by the same limitations as regular users/jobs. This means that any event needing to use licensed software or submit + individual multi-core jobs or jobs running longer than 20 hours may not be a good fit for our system. +
    +
    + +
    +Who should I contact with questions or concerns? +
    + The OSG Research Computing Facilitation Team is happy to answer any questions or concerns you may have about using OSPool resources for your event(s). Please direct questions to support@osg-htc.org. A Facilitator will respond within one business day. +
    +
    + + + + + + + + \ No newline at end of file diff --git a/support_and_training/training/previous-training-events/index.html b/support_and_training/training/previous-training-events/index.html new file mode 100644 index 00000000..c9e6b140 --- /dev/null +++ b/support_and_training/training/previous-training-events/index.html @@ -0,0 +1,2464 @@ + + + + + + + + + + + + + + + + + + Other Past Training Events - OSPool Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Other Past Training Events

    +

    Overview

    +

We offer on-site training and tutorials on a periodic basis, usually at conferences (including the annual OSG All-Hands Meeting) where many researchers and/or research computing staff are gathered. Below are some trainings for which the materials are public. (Apologies if any links/materials aren't accessible anymore, as some of these are external to our own web location. Feel free to let us know via support@osg-htc.org so we can fix or remove them.)

    +

    Workshops/Tutorials

    + +

    Tutorials at Recent OSG All-Hands Meetings

    +

The tutorials below were offered on-site at OSG All-Hands Meetings. Note that the 2020 on-site AHM was canceled due to the pandemic, though we've still linked to the materials.

    + + + + + + + + \ No newline at end of file