content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Glare Node¶ The Glare node is used to add lens flares, fog, glows around exposed parts of an image and much more. Eigenschaften¶ - Glare Type - Ghosts Creates a haze over the image. - Streaks Creates bright streaks used to simulate lens flares. - Streaks Total number of streaks. - Angle Offset The rotation offset factor of the streaks. - Fade Fade out factor for the streaks. - Fog Glow Looks similar to Ghost. However, it is much smaller in size and gives more of an atmospheric haze or „glow“ around the image. - Size Scale of the glow relative to the size of the original bright pixels. - Simple Star Works similar to Streaks but gives a simpler shape looking like a star. - Fade Fade out factor for the streaks. - Rotate 45 Rotate the streaks by 45°. - Quality If not set to something other the High, then the glare effect will only be applied to a low resolution copy of the image. This can be helpful to save render times while only doing preview renders. - Iterations The number of times to run through the filter algorithm. Higher values will give more accurate results but will take longer to compute. Note that, this is not available for Fog Glow as it does not use an iterative-based algorithm. - Color Modulation Used for Streaks and Ghosts to create a special dispersion effect. Johannes Itten describes this effect, Color Modulation, as subtle variations in tones and chroma. - Mix Value to control how much of the effect is added on to the image. A value of -1 would give just the original image, 0 gives a 50/50 mix, and 1 gives just the effect. - Threshold Pixels brighter than this value will be affected by the glare filter.
https://docs.blender.org/manual/de/dev/compositing/types/filter/glare.html
2021-05-06T08:55:01
CC-MAIN-2021-21
1620243988753.91
[]
docs.blender.org
Reduce Game Size Bitmap2Material allows to generate a full material from a single diffuse map in a few milliseconds. As a consequence, you can create full materials in your game while storing only the diffuse map + B2M (a few kb) in the game data, and generate the whole material at runtime. This feature can be used in Game Engine compatible with Substance: - Unity (for web / stand alone, iOS/Android) - UE3/UDK (PC) - Any other game engine which is using Substance Engine If you wish to use Bitmap2Material to save space on your mobile game, you should use the bitmap2material_mobile version (included with B2M product) in order to have better performances. Benchmark
https://docs.substance3d.com/b2m3/reduce-game-size-67797041.html
2021-05-06T09:13:17
CC-MAIN-2021-21
1620243988753.91
[]
docs.substance3d.com
Lite Branded Splash Screen When launching a ZapWorks Designer or ZapWorks Studio experience in the mobile browser (WebAR), it is a essential to have a splash screen. The splash screen requires a ‘user-initiated event’ in order to: - To request permission to access device motion data - To request permission to access the camera (on some platforms) - To play audio Zappar Branded Splash Screen The standard Zappar branded splash screen is available to all users free of charge by default through their workspace. As you can see it has minimal branding and does not support any customisations. The Zappar branded splash screen is perfect for many use cases, but is not appropriate for some customers’ projects. Lite Branded Splash Screen The Lite branded splash screen has a neutral white and black branding by default, and allows customisation of the logo and title of the text. In the example below, we have uploaded a generic logo, and included ‘Company name’ as the title. The Lite branded splash screen is only available on Pro and Enterprise plans. To gain access to this feature, you can upgrade your ZapWorks subscription here. Configuring your Lite Branded Splash Screen In order to launch your project via a Lite branded splash screen, you first need to create and open a ZapWorks Studio or Designer project on my.zap.works. Once created and opened, follow the steps below. The Lite branded splash screen uses our standard privacy policy. If you require a custom privacy policy, check out our custom WebAR sites. 1) On the project overview page, in the bottom right corner you will see the option to update the Project icon and title. We recommend using a JPEG image when uploading your icon. 2) Once you have updated the Project icon and title, you should navigate to the project's triggers page. Your triggers can be accessed either using the left hand navigation or clicking ‘Go to Triggers’ on the project overview page. Your project icon and title will store in your cached data and may not reflect any updates you make. To see any updates, be sure to clear your browser's cached data. 3) Open the trigger settings for your QR code by clicking on the tools icon, highlighted below. 4) This will bring up the Trigger settings modal, from where you can select ‘WebAR - Lite branded splash screen’. This option will only appear if you are on a Pro or Enterprise plan. 5) After you have selected the correct configurations for your QR code, save the options to close the modal. From here, you can scan the QR code to test the Custom Lite Splash Screen or download the QR code to share it. Setting Lite Branded Splash Screen as default You can set the Lite Branded Splash Screen as default, meaning all future triggers added to both new and existing projects will automatically launch into the Lite Branded Splash Screen without needing to change their configuration manually. To do this, select Distribution hub on the left-hand navigation of your workspace. Hover over the WebAR - Lite branded splash screen and click 'Make default'. Custom Branded Splash Screen In addition to the lite branded splash screen, Zappar also offer customers the option of full customization of their screen splash screen. You can see some examples of the Customer branded splash screen below. The custom branded splash screen is available as an add on to your subscription. Prices start at £5000 per project per year. In order to learn more about a custom splash screen, you can request a call with a member of the sales team here: or send a request from the distribution hub on your workspace. If you have purchased a Custom Branded Splash Screen, these will also appear in your workspace's WebAR splash screens, found in the distribution hub.
https://docs.zap.works/general/platform/lite-branded-splash-screen/
2021-05-06T10:40:35
CC-MAIN-2021-21
1620243988753.91
[]
docs.zap.works
Getting Started¶ Warning: PaaSTA is an opinionated way to integrate a collection of open source components in a holistic way to build a PaaS. It is not optimized to be simple to deploy for operators. It is optimized to not reinvent the wheel and utilizes existing solutions to problems where possible. PaaSTA has many dependencies. This document provides documentation on installing some of these dependencies, but some of them are left as an exercise to the reader. PaaSTA is used in production at Yelp, and has never been designed to be easy to deploy or installable from a single command ( curl paasta.sh | sudo bash). We don’t install things that way at Yelp, and we don’t expect others to install things like that either. At Yelp we happen to use Puppet to deploy PaaSTA and the related components. Currently all of the Puppet code is not open source, but we hope to eventually have a fully working example deployment. We do have an example cluster which uses docker-compose to create containers running the necessary components of a PaaSTA cluster. However, it is not a recommended production configuration. paasta_tools¶ The paasta_tools package contains the PaaSTA CLI and other extra integration code that interacts with the other components. Binary packages of paasta_tools are currently not available, so one must build them and install them manually: git clone [email protected]:Yelp/paasta.git # Assuming you are on Ubuntu Xenial make itest_xenial sudo dpkg -i dist/paasta-tools*.deb This package must be installed anywhere the PaaSTA CLI and on the Mesos/Marathon masters. If you are using SmartStack for service discovery, then the package must be installed on the Mesos Slaves as well so they can query the local API. Once installed, paasta_tools reads global configuration from /etc/paasta/. This configuration is in key/value form encoded as JSON. All files in /etc/paasta are merged together to make it easy to deploy files with configuration management. For example, one essential piece of configuration that must be deployed to servers that are a member of a particular cluster is the cluster setting: # /etc/paasta/cluster.json { "cluster": "test-cluster" } It is not necessary to define this config option for servers that only require the PaaSTA CLI tools (as they may not technically be part of any particular PaaSTA cluster). See more documentation for system paasta configs soa-configs¶ soa-configs are the shared configuration storage that PaaSTA uses to hold the description and configuration of what services exist and how they should be deployed and monitored. This directory needs to be deployed globally in the same location to every server that runs any PaaSTA component. See the dedicated documentation on how to build your own soa-configs. soa-configs also transport the deployments.json files for each service. This file contains a mapping for which shas should be deployed where. These files are generated by the generate_all_deployments command. This method allows PaaSTA to inspect the deployments for each service once, and deploy that information in soa-configs, as opposed to having each cluster inspecting git directly. Docker and a Docker Registry¶ PaaSTA uses Docker to build and distribute code for each service. PaaSTA assumes that a single registry is available and that the associated components (Docker commands, unix users, mesos slaves, etc) have the correct credentials to use it. The docker registry needs to be defined in a config file in /etc/paasta/. PaaSTA merges all json files in /etc/paasta/ together, so the actual filename is irrelevant, but here would be an example /etc/paasta/docker.json: { "docker_registry": "private-docker-registry.example.com:443" } There are many registries available to use, or you can host your own. Mesos¶ PaaSTA uses Mesos to do the heavy lifting of running the actual services on pools of machines. See the official documentation on how to get started with Mesos. Marathon¶ PaaSTA uses Marathon for supervising long-running services running in Mesos. See the official documentation for how to get started with Marathon. Then, see the PaaSTA documentation for how to define Marathon jobs. Once Marathon jobs are defined in soa-configs, there are a few tools provided by PaaSTA that interact with the Marathon API: deploy_marathon_services: Does the initial sync between soa-configs and the Marathon API. This is the tool that handles “bouncing” to new version of code, and resizing Marathon applications when autoscaling is enabled. This is idempotent, and should be run periodically on a box with a marathon.jsonfile in the system paasta config directory (Usually /etc/paasta). We recommend running this frequently - delays between runs of this command will limit how quickly new versions of services or changes to soa-configs are picked up. cleanup_marathon_jobs: Cleans up lost or abandoned services. This tool looks for Marathon jobs that are not defined in soa-configs and removes them. check_marathon_services_replication: Iterates over all Marathon services and inspects their health. This tool integrates with the monitoring infrastructure and will alert the team responsible for the service if it becomes unhealthy to the point where manual intervention is required. SmartStack and Hacheck¶ SmartStack is a dynamic service discovery system that allows clients to find and route to healthy mesos tasks for a particular service. Smartstack consists of two agents: nerve and synapse. Nerve is responsible for health-checking services and registering them in ZooKeeper. Synapse then reads that data from ZooKeeper and configures an HAProxy instance. To manage the configuration of nerve (detecting which services are running on a node and what port they are using, etc.), we have a package called nerve-tools. This repo builds a .deb package, and should be installed on all slaves. Each slave should run configure_nerve periodically. We recommend this runs quite frequently (we run it every 5s), since Marathon tasks created by Paasta are not available to clients until nerve is reconfigured. Similarly, to manage the configuration of synapse, we have a package called synapse-tools. Each slave should have this installed, and should run configure_synapse periodically. configure_synapse can run less frequently than configure_nerve – it only limits how quickly a new service, service instance, or haproxy option changes in smartstack.yaml will take effect. Alongside SmartStack, we run hacheck. Hacheck is a small HTTP service that handles health checks for services. nerve-tools and synapse-tools configure nerve and HAProxy, respectively, to send its health check requests through hacheck. Hacheck provides several behaviors that are useful for Paasta: - It caches health check results for a short period of time (1 second, by default). This avoids overloading services if many health check requests arrive in a short period of time. - It can preemptively return error codes for health checks, allowing us to remove a task from load balancers before shutting it down. (This is implemented in the HacheckDrainMethod.) Packages for nerve-tools and synapse-tools are available in our bintray repo. Sensu¶ Sensu is a flexible and scalable monitoring system that allows clients to send alerts for arbitrary events. PaaSTA uses Sensu to allow individual teams to get alerts for their services. The official documentation has instructions on how to set it up. Out of the box Sensu doesn’t understand team-centric routing, and must be combined with handlers that are “team aware” it it is installed in a multi-tenant environment. We to do that, we have written some custom Sensu handlers to do that. Sensu is an optional but highly recommended component. Jenkins / Build Orchestration¶ Jenkins is the suggested method for orchestrating build pipelines for services, but it is not a hard requirement. The actual method that Yelp uses to integrate Jenkins with PaaSTA is not open source. In practice, each organization will have to decide how they want to actually run the paasta cli tool to kick off the building and deploying of images. This may be something as simple as a bash script: #!/bin/bash service=my_service sha=$(git rev-parse HEAD) paasta itest --service $service --commit $sha paasta push-to-registry --service $service --commit $sha paasta mark-for-deployment --git-url $(git config --get remote.origin.url) --commit $sha --deploy-group prod.main --service $service PaaSTA can integrate with any existing orchestration tool that can execute commands like this. Logging¶ Paasta can use one of several backends to centrally log events about what is happening in the infrastructure and to power paasta logs. The backends that are available are listed in the system config docs under log_writer and log_reader. At Yelp, we use Scribe for log writing, so we use the scribe log writer. For reading logs, we have some in-house tools that are unfortunately not open source. The code that reads from these in-house tools are the scribereader log_reader driver, but this code relies on some not-open-source code, so we do not expect that logging via Scribe will work outside of Yelp. The file log writer driver may be useful for getting log data into your logging system, but files are not generally aggregated across the whole cluster in a way that is useful for paasta logs. We are in need of alternate log reader driver, so please file an issue (or better yet, a pull request).
https://paasta.readthedocs.io/en/latest/installation/getting_started.html
2021-05-06T10:37:22
CC-MAIN-2021-21
1620243988753.91
[]
paasta.readthedocs.io
- ) User Roles Two types of user roles are provided in Hevo: - Owner - Member Both roles have complete privileges for working in Hevo, such as: - Creating and managing Pipelines, Models, and Workflows. - Deleting Pipelines and Models created by their team. - Inviting members to their team. The Owner role has additional administrative privileges for managing the Hevo account and the team, such as: - Modifying team members’ permissions. - Purchasing and modifying the Hevo subscription, Add-On plans, and On-Demand Events. - Configuring payment methods. - Receiving billing details and usage summary e-mails. - Updating the notification settings and Slack integration. - Deleting the Hevo account. Last updated on 02 Mar 2021
https://docs.hevodata.com/getting-started/user-roles/
2021-05-06T08:53:23
CC-MAIN-2021-21
1620243988753.91
[]
docs.hevodata.com
RollingUpgradeMode enum type: string The mode used to monitor health during a rolling upgrade. The values are UnmonitoredAuto, UnmonitoredManual, and Monitored. Possible values are: Invalid- Indicates the upgrade mode is invalid. All Service Fabric enumerations have the invalid type. The value is zero. UnmonitoredAuto- The upgrade will proceed automatically without performing any health monitoring. The value is 1 UnmonitoredManual- The upgrade will stop after completing each upgrade domain, giving the opportunity to manually monitor health before proceeding. The value is 2 Monitored- The upgrade will stop after completing each upgrade domain and automatically monitor health before proceeding. The value is 3
https://docs.microsoft.com/en-us/rest/api/servicefabric/v72/sfclient-v72-model-rollingupgrademode
2021-05-06T11:27:48
CC-MAIN-2021-21
1620243988753.91
[]
docs.microsoft.com
EnableOnlyConsentCapableClients method of the Win32_TSGatewayServerSettings class Sets the OnlyConsentCapableClients property. Syntax uint32 EnableOnlyConsentCapableClients( [in] boolean OnlyConsentCapableClients ); Parameters OnlyConsentCapableClients [in] Type: boolean Specifies the new value for the OnlyConsentCapableClients property. Return value Type: uint32 If the method succeeds, it returns zero. If the method is unsuccessful, it returns a nonzero value. For a list of error codes, see Remote Desktop Services WMI Provider Error Codes. Remarks You must be a member of the Administrators group to call this method.).
https://docs.microsoft.com/en-us/windows/win32/termserv/enableonlyconsentcapableclients-win32-tsgatewayserversettings
2021-05-06T09:18:31
CC-MAIN-2021-21
1620243988753.91
[]
docs.microsoft.com
Filtering and hash keys Filtering can be combined with hash keys to segment the traffic into multiple sets of streams. For example, IP packets, non-IP packets, and UDP, TCP and SCTP packets could all be delivered to separate groups of streams. Combined filtering and hash key distribution example This NTPL example distributes UDP and TCP frames to streams 0 to 7 using the 5-tuple hash key, the remaining IP frames to stream 8 to 11 using the 2-tuple hash key, and all non-IP frames to stream 12: HashMode[Priority=0; Layer4Type=UDP,TCP] = Hash5Tuple HashMode[Priority=1; Layer3Type=IP] = Hash2Tuple Assign[Priority=0; StreamId=(0..7)] = Layer4Protocol==UDP,TCP Assign[Priority=1; StreamId=(8..11)] = Layer3Protocol==IP Assign[Priority=2; StreamId=12] = All
https://docs.napatech.com/r/oVaiCGX1STzQ4JECpQ5LeQ/5v1yw5DHdDtE284HcERwqA
2021-05-06T09:03:04
CC-MAIN-2021-21
1620243988753.91
[]
docs.napatech.com
For Canonical Openstack it is not necessary to spin up the TrilioVault VM. The TrilioVault Appliance is delivered as qcow2 image and runs as VM on top of a KVM Hypervisor. This guide shows the tested way to spin up the TrilioVault Appliance on a RHV Cluster. Please contact a RHV Administrator and Trilio Customer Success Agent in case of incompatibility with company standards. The TrilioVault appliance is utilizing cloud-init to provide the initial network and user configuration. Cloud-init is reading it's information either from a metadata server or from a provided cd image. TrilioVault is utilizing the cd image. To create the cloud-init image it is required to have genisoimage available. #For RHEL and centosyum install genisoimage#For Ubuntuapt-get install genisoimage Cloud-init is using two files for it's metadata. The first file is called meta-data and contains the information about the network configuration. Below is an example of this file. [[email protected]]# cat meta-datainstance-id: triliovaultnetwork-interfaces: |auto ens3iface ens3 inet staticaddress 158.69.170.20netmask 255.255.255.0gateway 158.69.170.30dns-nameservers 11.11.0.51local-hostname: tvault-controller The instance-id has to match the VM name in virsh The second file is called user-data and contains little scripts and information to set up for example the user passwords. Below is an example of this file. Both files meta-data and user-data are needed. Even when one is empty, is it needed to create a working cloud-init image. The image is getting created using genisoimage follwing this general command: genisoimage -output <name>.iso -volid cidata -joliet -rock </path/user-data> </path/meta-data> An example of this command is shown below. genisoimage -output tvault-firstboot-config.iso -volid cidata -joliet -rock user-data meta-data After the cloud-init image has been created the TriloVault appliance can be spun up on the desired KVM server. Extract the Triliovault QCOW2 tar file using the following command : tar Jxvf TrilioVault_file.tar.xz See below an example command, how to spin up the TrilioVault appliance using virsh and the created iso image. virt-install -n triliovault-vm --memory 24576 --vcpus 8 \--os-type linux \--disk tvault-appliance-os-3.0.154.qcow2,device=disk,bus=virtio,size=40 \--network bridge=virbr0,model=virtio \--network bridge=virbr1,model=virtio \--graphics none \--import \--disk path=tvault-firstboot-config.iso,device=cdrom It is of course possible to spin up the TrilioVault appliance without a cloud-init iso-image. It will spin up with default values. Once the TrilioVault appliance is up and running with it's initial configuration is it recommended to uninstall cloud-init. If cloud-init is not installed it will rerun the network configuration upon every boot. Setting the network configuration back to DHCP, if no metadata is provided. To uninstall cloud-init, follow the example below. sudo apt-get purge cloud-init
https://docs.trilio.io/openstack/deployment/spinning-up-the-triliovault-vm
2021-05-06T09:25:21
CC-MAIN-2021-21
1620243988753.91
[]
docs.trilio.io
If you are planning to deploy your IntraWeb application as ISAPI extension or ASPX Library, and you have file uploads (using the new IWFileUploader control), probably this topic is for you. IIS comes with a default upload limit of 30 million bytes (approx. 28.6 Mb). If you are able to upload a 28 Mb file, but can't upload a 30 Mb file, then the default limit is active and you may have to change it. This default limit is a security measure of IIS. Internet Information Services (IIS) 7 and 7.5 Internet Information Services (IIS) 7 and 7.5 2.1. Open IIS manager console. 2.2. Expand the default web site and double click on "Request Filtering" icon: 2.3. On the right panel, click on "Edit Feature Settings..." 2.4. Now change the value named "Maximum allowed content length (Bytes)" to the desired value. Please note that you should have the default 30,000,000 bytes value in place. 2.5. Click OK and then restart IIS. The file upload should work now, respecting the limit you have specified in step 2.4 of course. 3.1. If you also have Microsoft URLScan installed in this same IIS, you have an additional step. You also have to configure the URLScan.ini file and change the same setting there. Below is part of my URLScan.ini file. Please note that the URLScan.ini file is usually under C:\WINDOWS\system32\inetsrv\urlscan folder: 3.2. Save the URLScan.ini file and again, restart IIS.
http://docs.atozed.com/docs.dll/deployment/Uploading%20Large%20files%20in%20IIS.html
2021-05-06T09:50:02
CC-MAIN-2021-21
1620243988753.91
[]
docs.atozed.com
Managing Data in DERIVA with deriva-client¶ The deriva-client package bundles an application suite of Python-based client software for use with the DERIVA platform. These tools provide functions such as: - Authentication services for programmatic and non browser-based application access. - Bulk import and export of catalog assets and (meta) data. - Catalog configuration, mutation and administration. - Tools for working with bdbags, a file container format used by DERIVA for the import and export of data. Installed Applications¶ Installer packages for Windows and MacOSX¶ Pre-packaged installers of deriva-client for Windows and MacOSX are available. These installer packages include a bundled Python interpreter and all other software dependencies and are recommended for Windows and MacOSX users who are looking for a more traditional “turnkey” installation that does not require them to install Python and manage Python software package installations. Download the installer packages here. Installing deriva-client from PyPi via pip¶ For users who already have the base Python interpreter installed and are comfortable installing Python software via the pip application, deriva-client can be easily installed along with all of it’s dependencies directly from PyPi using basic pip commands. For those users who wish to write programs against the various APIs included in deriva-client, this is the recommended installation method. Installation Prerequisites¶ - A Python 3.5.4 or greater system installation is required. The latest stable version of Python is recommended. - Verify that the appropriate Python 3 interpreter can be invoked from a command shell using the python3command. This can be tested simply with the following command: python3 --version Installation Quickstart¶ The following commands can be used to perform a venv-based virtual environment installation to the current working directory. Mac/Linux¶ The following commands assume a BASH (or compatible) command shell is used. For a different command interpreter (e.g. CSH), invoke the source command on the appropriate activation script in the virtual environment’s bin directory. python3 -m venv ./deriva-client-venv source ./deriva-client-venv/bin/activate python3 -m pip install --upgrade pip setuptools wheel pip install deriva-client Important Note: For MacOSX users running Python 3.5.x with pip version < 9.0.3¶ If you encounter the following error: Could not fetch URL: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:720) - skipping This error means that you cannot update pip, setuptools, and wheel via the command provided above. You can work around this error by issuing the following commands instead, and then continue with the installation procedure as described. curl | python3 pip install --upgrade setuptools Windows¶ The following commands assume a Windows Command Prompt command shell is used. For a Powershell shell, the activate.ps1 activation script should be invoked instead. python3 -m venv .\deriva-client-venv .\deriva-client-venv\Scripts\activate python3 -m pip install --upgrade pip setuptools wheel pip install deriva-client IMPORTANT NOTE: Python virtual environments versus user environments¶ While a virtual environment installation is generally the safest way to install and isolate multiple software packages, it also must be activated before use and deactivated after use. If this requirement is too cumbersome, the recommended alternative is to install the software into a user environment instead. See the complete installation procedure below for more information. Installation Procedure¶ - For MacOSX and Linux systems which include Python as a core part of the operating system, it is highly recommended to install this software into a virtual environment or a user environment, so that it does not interfere or conflict with the operating system’s Python installation. The native Python3 venvmodule, the virtualenvpackage from PyPi, or the Anaconda Distribution environment are all suitable for use as virtual environments. - Instead of using a virtual environment, it is also possible to install the software into a user environment using the --userargument when invoking pip install. - Recent versions of pip, setuptools, and wheelare recommended. If these components are already installed, updating them to the latest versions available is optional. Installation Sequence¶ Create and/or activate the target virtual environment, if any. This step is specific to the type of virtual environment being used. Update pip, setuptools, and wheel(optional). For virtual environments execute the following (ensure the environment is active): python -m pip install --upgrade pip setuptools wheel For user environments execute the following: python3 -m pip install --user --upgrade pip setuptools wheel For Linux system python installations it is recommended to use the system’s package manager such as dnf, apt, or yumto update the following packages: python3-pip, python3-setuptools, and python3-wheel. Install deriva-clientdirectly from PyPi using the pip installcommand. For virtual environments execute the following (ensure the environment is active): pip install deriva-client For user environments execute the following: pip3 install --user deriva-client For system-wide python installations (only do this if you understand the complexities involved): pip3 install deriva-client IMPORTANT NOTES: Using pip to install software into system-wide Python locations¶ - Many newer Linux (as well as MacOSX) distributions contain both Python2 and Python3 installed alongside each other. In these environments, both the python interpreter and pipare symbolically linked to the system default version, which in general results in pythonand pipbeing linked to the Python2 versions. - Python3 versions are commonly accessed via python3and pip3. If you are working outside of a Python3 virtual environment and installing either to the system-wide Python location (not recommended) or a user-based location (e.g. with the pip --userargument), then you must substitute pip3for pipwhen issuing pipinstallation commands. - Also note that when installing into the system Python location via pipon Linux/MacOSX, the commands must be run as root or the sudocommand must be prefixed to the command line. Managing data with the datapath API (deriva-py)¶ The deriva-py package (part of deriva-client) also includes a Python API for a programmatic interface for ERMRest. The datapath module in particular is an interface for building ERMRest “data paths” and retrieving data from ERMRest catalogs. It also supports data manipulation (insert, update, delete). In its present form, the module provides a limited programmatic interface to ERMRest. Reference Documentation¶ Source Code¶ The source code for the primary components of deriva-client can be found at the links below:
http://docs.derivacloud.org/users-guide/managing-data.html
2021-05-06T09:51:44
CC-MAIN-2021-21
1620243988753.91
[]
docs.derivacloud.org
Networking Config Version 2¶ Cloud-init’s support for Version 2 network config is a subset of the version 2 format defined for the netplan tool. Cloud-init supports both reading and writing of Version 2; the latter support requires a distro with netplan present. The network key has at least two required elements. First it must include version: 2 and one or more of possible device types.. Cloud-init will read this format from system config. For example the following could be present in /etc/cloud/cloud.cfg.d/custom-networking.cfg: network: version: 2 ethernets: [] It may also be provided in other locations including the NoCloud, see Default Behavior for other places. Supported device types values are as follows: - Ethernets ( ethernets) - Bonds ( bonds) - Bridges ( bridges) - VLANs ( vlans) Each type block contains device definitions as a map where the keys (called “configuration IDs”). Each entry under the types may include IP and/or device configuration. Cloud-init does not current support wifis type that is present in native netplan. Device configuration IDs¶ The key names below the per-device-type definition maps (like ethernets:) are called “ID”s. They must be unique throughout the entire set of configuration files. Their primary purpose is to serve as anchor names for composite devices, for example to enumerate the members of a bridge that is currently being defined. There are two physically/structurally different classes of device definitions, and the ID field has a different interpretation for each: - Physical devices (Examples: ethernet, wifi): These can dynamically come and go between reboots and even during runtime (hotplugging). In the generic case, they can be selected by match:rules on desired properties, such as name/name pattern, MAC address, driver, or device paths. In general these will match any number of devices (unless they refer to properties which are unique such as the full path or MAC address), so without further knowledge about the hardware these will always be considered as a group. It is valid to specify no match rules at all, in which case the ID field is simply the interface name to be matched. This is mostly useful if you want to keep simple cases simple, and it’s how network device configuration has been done for a long time. If there are match: rules, then the ID field is a purely opaque name which is only being used for references from definitions of compound devices in the config. - Virtual devices (Examples: veth, bridge, bond): - These are fully under the control of the config file(s) and the network stack. I. e. these devices are being created instead of matched. Thus match:and set-name:are not applicable for these, and the ID field is the name of the created virtual device. Common properties for physical device types¶ match: <(mapping)> This selects a subset of available physical devices by various hardware properties. The following configuration will then apply to all matching devices, as soon as they appear. All specified properties must match. The following properties for creating matches are supported:. Note MAC addresses must be strings. As MAC addresses which consist of only the digits 0-9 (i.e. no hex a-f) can be interpreted as a base 60 integer per the YAML 1.1 spec it is best practice to quote all MAC addresses to ensure they are parsed as strings regardless of value. driver: <(scalar)> Kernel driver name, corresponding to the DRIVER udev property. Globs are supported. Matching on driver is only supported with networkd. property can be used to give that device a more specific/desirable/nicer name than the default from udev’s ifnames. Any additional device that satisfies the match rules will then fail to get renamed and keep the original kernel name (and dmesg will show an error). wakeonlan: <(bool)> Enable wake on LAN. Off by default. Common properties for all device types¶ renderer: <(scalar)> Use the given networking backend for this definition. Currently supported are networkd and NetworkManager. This property can be specified globally in networks:, for a device type (in e. g. ethernets:) or for a particular device definition. Default is networkd. Note Cloud-init only supports networkd backend if rendering version2 config to the instance. dhcp4: <(bool)> Enable DHCP for IPv4. Off by default. dhcp6: <(bool)> Enable DHCP for IPv6. Off by default.: or mtu: <MTU SizeBytes> The MTU key represents a device’s Maximum Transmission Unit, the largest size packet or frame, specified in octets (eight-bit bytes), that can be sent in a packet- or frame-based network. Specifying mtu is optional. nameservers: <(mapping)> Set DNS servers and search domains, for manual address configuration. There are two supported fields: addresses: is a list of IPv4 or IPv6 addresses similar to gateway*, and search: is a list of search domains. Example: nameservers: search: [lab, home] addresses: [8.8.8.8, FEDC::1] routes: <(sequence of mapping)> Add device specific routes. Each mapping includes a to, via key with an IPv4 or IPv6 address as value. metric is an optional value. Example: routes: - to: 0.0.0.0/0 via: 10.23.2.1 metric: 3 Ethernets¶ Ethernet device definitions do not support any specific properties beyond the common ones described above. Bonds¶ interfaces <(sequence of scalars)> All devices matching this ID list will be added to the bond. Example: ethernets: switchports: match: {name: "enp2*"} [...] bonds: bond0: interfaces: [switchports] parameters: <(mapping)> Customization parameters for special bonding options. Time values are specified in seconds unless otherwise specified.. down-delay: <(scalar)> Specify the delay before disabling a link once the link has been lost. The default value is 0. fail-over-mac-policy: <(scalar)> Set whether to set all slaves to the same MAC address when adding them to the bond, or how else the system should handle MAC addresses. The possible values are none, active, and. learn-packet-interval: <(scalar)> Specify the interval between sending learning packets to each slave. The value range is between 1 and 0x7fffffff. The default value is 1. This option only affects balance-tlb and balance-alb modes. Bridges¶ interfaces: <(sequence of scalars)> All devices matching this ID list will be added to the bridge. Example: ethernets: switchports: match: {name: "enp2*"} [...] bridges: br0: interfaces: [switchports] parameters: <(mapping)> Customization parameters for special bridging options. Time values are specified in seconds unless otherwise specified. ageing-time: <(scalar)> Set the period of time to keep a MAC address in the forwarding database after a packet is received. priority: <(scalar)> Set the priority value for the bridge. This value should be an number between 0 and 65535. Lower values mean higher priority. The bridge with the higher priority will be elected as the root bridge. forward-delay: <(scalar)> Specify the period of time the bridge will remain in Listening and Learning states before getting to the Forwarding state. This value should be set in seconds for the systemd backend, and in milliseconds for the NetworkManager backend. hello-time: <(scalar)> Specify the interval between two hello packets being sent out from the root and designated bridges. Hello packets communicate information about the network topology. max-age: <(scalar)> Set the maximum age of a hello packet. If the last hello packet is older than that value, the bridge will attempt to become the root bridge.. Examples¶ Configure an ethernet device with networkd, identified by its name, and enable DHCP: network: version: 2 ethernets: eno1: dhcp4: true This is a complex example which shows most available features: network: version: 2 ethernets: # opaque ID for physical interfaces, only referred to by other stanzas id0: match: macaddress: '00:11:22:33:44:55' wakeonlan: true dhcp4: true addresses: - 192.168.14.2/24 - 2001:1::1/64 gateway4: 192.168.14.1 gateway6: 2001:1::2 nameservers: search: [foo.local, bar.local] addresses: [8.8.8.8] lom: match: driver: ixgbe # you are responsible for setting tight enough match rules # that only match one device if you use set-name set-name: lom1 dhcp6: true switchports: # all cards on second PCI bus; unconfigured by themselves, will be added # to br0 below match: name: enp2* mtu: 1280 bonds: bond0: interfaces: [id0, lom] bridges: # the key name is the name for virtual (created) interfaces; no match: and # set-name: allowed br0: # IDs of the components; switchports expands into multiple interfaces interfaces: [wlp1s0, switchports] dhcp4: true vlans: en-intra: id: 1 link: id0 dhcp4: yes # static routes routes: - to: 0.0.0.0/0 via: 11.0.0.1 metric: 3
https://cloudinit.readthedocs.io/en/20.4/topics/network-config-format-v2.html
2021-05-06T09:05:32
CC-MAIN-2021-21
1620243988753.91
[]
cloudinit.readthedocs.io
Name Sanitization On This Page In order to provide consistent user experience across many warehouses, Hevo uses a Name Sanitizer system. It applies to the names of the table and columns created in the Hevo ecosystem. It does this by encouraging the use of a simple, consistent, readable vocabulary when naming the tables and columns. To achieve that, Hevo’s Name Sanitizer removes all non-alphanumeric characters, spaces in between the names and replaces them with rather a suitable character, underscore. We’ll go deep into how it works with each warehouse in particular. Before that let us understand when it is applied to these names. When are Names Sanitized? Names are sanitized while mapping Source events to a table in the warehouse via Hevo AutoMapper or when a user tries to create a table manually using Hevo UI, the name is validated before actually trying to create the table in the warehouse. If the validation fails, a proper message is shown on the Hevo UI. There is a provision to switch off name sanitization in the case it is not required. You can switch it off while creating a Destination on Hevo UI. However, when it is switched off and auto-mapping is enabled, force sanitization comes into effect(only applicable to AWS Redshift). You’ll find examples below with specific warehouse behavior. How are Names Sanitized? In this section, we’ll look into how Hevo’s Name Sanitizer performs with each warehouse. AWS Redshift In the case where Hevo AutoMapper is trying to create a table in AWS Redshift, the name sanitizer converts the table name into lowercase, replaces all non-alpha numeric characters with an underscore, and removes the trailing underscores. For example, if the Source Event Type name is _Table$namE_05_, it is converted to _table_name_05, replacing the special character $ with an underscore and removing trailing underscore. However, when a user tries to create the table manually, name sanitization does not apply and it is converted to table$name_05_, making it to lowercase. When a Redshift Destination is configured with sanitization being switched off, it doesn’t do anything apart from making it in lowercase. The table, in this case, is created with the name table$name_05_. This is important to Redshift because a table can’t be created with upper case characters by default. You can check their Developer Guide to know more about naming database objects. The same behavior is followed for columns. Google BigQuery In the case where Hevo AutoMapper is trying to create a table in Google BigQuery, the sanitizer will convert the table name into lowercase, replace all non-alpha numeric characters with an underscore and remove the trailing underscores. For example, if the Source Event Type name is Table$namE_05_ it is sanitized to table_name_05, replacing the special character $ with an underscore and removing trailing underscore. According to the Google BigQuery reference guide for naming tables, it is not possible to create a table name with special characters. Hevo prompts with an error message when a table with the name, Table$namE_05_ is tried manually on Hevo UI. The same behavior is followed for columns. Snowflake Hevo’s Name Sanitizer for Snowflake is designed in such a fashion that the Hevo AutoMapper creates all the tables and columns in uppercase, replacing special characters with an underscore and removing trailing underscores. For example, if the Source Event Type name is Table$namE_05_ it is sanitized to TABLE_NAME_05, replacing the special character $ with an underscore and removing trailing underscore. However, when it is tried manually a table/column will be created in uppercase. MySQL/ PostgreSQL/ MS-SQL/ SQL Server/ AWS Aurora When name sanitization is switched ON, Hevo’s Name Sanitizer replaces all non-alphanumeric characters with underscores, removes the trailing underscores, and converts it to lower case. For example, if the Source Event Type name is Table$namE_05_ it is converted to table_name_05, replacing the non-alphanumeric character $ with an underscore and removing trailing underscore. When name sanitization is switched OFF for the Destination, Hevo’s Name Sanitizer doesn’t affect naming and it doesn’t replace the non-alphanumeric character or the trailing underscores, and a table with name, Table__$namE_05_ is created. However, when it is tried manually, a table with name, table$name_05_ will be created. What Happens to Field Separators and Delimiters? If a . is used as a field separator, for example, first.list, this becomes first_list post sanitization, and is read as two words by Hevo, first and list. This is an important consideration for deciding the table name compression strategy. If name sanitization is disabled, first.list is read as one word since the compression strategy does not recognize . as a separator. Read Table and Column Name Compression.
https://docs.hevodata.com/destinations/name-sanitization/
2021-05-06T10:22:49
CC-MAIN-2021-21
1620243988753.91
[array(['https://res.cloudinary.com/hevo/image/upload/v1602870341/hevo-docs/TableCompression2862/delimiter.png', 'Treatment of delimiter'], dtype=object) ]
docs.hevodata.com
Typing in Makaira Index This topic is interesting for all those who want to use their own fields in their store for searching and filtering. So that the new fields also fulfill the desired function they should be provided with the correct typing (also called casting or mapping). For example, if you want to be able to set filters by slider, you should choose a numeric type float or int. The whole thing gets a bit more complicated when str fields are created for the search. Here we distinguish in the Makaira different cases like short fields, long fields or fields with proper names. Only with the right choice of the type you will get the desired result. In the following we list all types and give beside a short explanation of the type also examples for the use Type: date Fields with date formatting Necessary suffix _date Type: float Fields with floating point numbers Necessary suffix _float Type: int Fields with integer values Necessary suffix _int Type: bool Fields with boolean values Necessary suffix _bool Type: str Fields with text content. Makaira distinguishes between different types of text fields. Depending on the length and type of content, the text is then analyzed in different ways: Short text field The analyzer for most text fields. Decompound, stemming and synonyms are part of the analysis processes, in addition to the standard lowercasing. Fuzzy search can be activated in the Makaira backend. Useful for e.g.: Short description of a product Necessary suffix _str_short Long text field For long text fields, the Decompound and Fuzzy Search option is disabled to keep the search accuracy as sharp as possible. Useful for e.g.: Long description of a product Necessary suffix _str_long Key Felder Decompound, stemming is not used in the analysis of these fields. Useful for e.g.: Manufacturer names, proper names that should only be found in this spelling. Necessary suffix _str_short_key Keywords The analysis consists only of lowercasing. Useful for special applications where case-sensitive proper names are important. Necessary suffix _str_keyword Deactivate typing "data storage only Useful for large objects that are not to be searched and, for example, are only used for output in the frontend. Necessary suffix _data_only
https://docs.makaira.io/books/shop-integrations/page/typing-in-makaira-index
2021-05-06T10:45:30
CC-MAIN-2021-21
1620243988753.91
[]
docs.makaira.io
Step 6) Register a CDP environment When you register an environment, you set properties related to data lake scaling, networking, security, and storage. You will need your Azure environment name, resource group name, storage account name, and virtual network name from your resource group. - In the CDP Management Console, navigate to Environments and click Register Environment. - Provide an Environment Name and description. The name can be any valid name. - Choose Azure as the cloud provider. - Under Microsoft Azure Credential, choose the credential you created in the previous task. - Click Next. - Under Data Lake Settings, give your new data lake a name. The name can be any valid name. Choose the latest data lake version. - Under Data Access and Audit, choose the following: - Assumer Identity: <resourcegroup-name> -<envName>-AssumerIdentity - Storage Location Base: data@<storageaccount-name> - Data Access Identity: <resourcegroup-name> -<envName>-DataAccessIdentity - Ranger Audit Role: <resourcegroup-name> -<envName>-RangerIdentity For example: - For Data Lake Scale, choose Light Duty. - Click Next. - Under Select Region, choose your desired region. This should be the same region you created an SSH key in previously. - For the Select Network field, select the name of the "Virtual Network" resource that was created when you deployed the ARM template to create the resource group. The name of the Virtual Network should be the same as your environment name, but you can verify this in the Azure portal on the Overview page of your resource group. - Under Security Access Settings, select Create New Security Groups for the Security Access Type. - Under SSH Settings, paste the public SSH key that you created earlier. - Optionally, under Add Tags, provide any tags that you'd like the resources to be tagged with in your Azure account. - Click Next. - Under Logs, choose the following: - Logger Identity: <resourcegroup-name> -<envName>-LoggerIdentity - Logs Location Base: logs@<storageaccount-name> For example: - Click Register Environment.
https://docs.cloudera.com/cdp/latest/azure-quickstart/topics/mc-azure-quickstart-environment.html
2021-05-06T10:18:54
CC-MAIN-2021-21
1620243988753.91
[]
docs.cloudera.com
Determining the Cause of Slow and Failed Queries Identifying the cause of slow query run times and queries that fail to complete. Steps with examples are included that explain how to further investigate and troubleshoot the cause of a slow and failed query. - In a supported browser, log in to Workload XM. - In the Clusters page do one of the following: - In the Search field, enter the name of the cluster whose workloads you want to analyze. - From the Cluster Name column, locate and click on the name of the cluster whose workloads you want to analyze. - From the navigation panel under Data Engineering, select Jobs. - From the Health Check list in the Jobs page, select Task Wait Time, which filters the list to display a list of jobs with longer than average wait times to execute a process. - To view more details, from the Job column, select a job's name and then click the Health Checks tab.The Baseline Health checks are displayed. - From the Health Checks panel, select the Task Wait Time health check.The following reveals that for this example the long wait time occurred in the Map Stage of the job process due to insufficient resources: - To display more information about the Map Stage tasks that are experiencing longer than average wait times to execute, click one of the tasks listed under Outlier Tasks.The following reveals that for this outlier task example, the Wait Duration time is above average, as confirmed by comparing this time with the time taken when the task successfully completes. Where, the successful value is displayed in the Successful Attempt Duration field and is significantly better than the average time. This indicates that insufficient resources are allocated for this job.
https://docs.cloudera.com/workload-xm/2.1.3/cluster-management/topics/wxm-determining-cause-of-slow-failed-queries.html
2021-05-06T08:41:39
CC-MAIN-2021-21
1620243988753.91
[]
docs.cloudera.com
Configuring Access to Azure on CDP Public Cloud IDBroker is a REST API built as part of Apache Knox’s authentication services. It allows an authenticated user to exchange a set of credentials or a token for cloud vendor access tokens. IDBroker manages mapping LDAP users to FreeIPA cloud identities for data access. It performs identity mapping for access to object stores. For information on how IDBroker works in CDP, see ADLS Gen2 and managed identities in the Management Console documentation.
https://docs.cloudera.com/runtime/7.2.9/cloud-data-access/topics/cr-cda-configuring-access-to-azure-on-cdp-public-cloud.html
2021-05-06T09:31:52
CC-MAIN-2021-21
1620243988753.91
[]
docs.cloudera.com
Orders Orders page displays the orders engaged by clients on the platform. It enables to manage restaurant’s orders. There are two types of orders : - Foodtech orders, engaged by end customers - Orders of deliveries, engaged with the integrated form for instance. An order can have these following status : - New, the order was not accepted by the restaurant yet - Accepted, the order is in preparation - Refused, the order has been refused - Ready, the order is waiting a bike messenger or its delivery is underway - Done, the order was delivered - Cancelled, the order was cancelled by the client or the restaurant List of orders The list of orders displays orders which are underway on the platform. You can display the cancelled orders by ticking the option : “Display cancelled orders”. The administrator can cancel an order by clicking on the button Cancel d’une commande. Displayed informations are : - id, order’s number - The type of order - Customer who engaged the order - The order’s statut - Total amount charged for the customer - The associated bill, that you can download in pdf format - Date of creation - The Cancelbutton for an order
https://docs.coopcycle.org/en/admin/orders/
2021-05-06T09:51:22
CC-MAIN-2021-21
1620243988753.91
[array(['/assets/images/orders_fr.png', 'Commandes'], dtype=object)]
docs.coopcycle.org
Contents - 1 Application Groups and Thresholds - 1.1 Adding or Deleting an Application Group - 1.2 Configuring an Application Group’s Attributes in Advisors - 1.3 Removing an Application Group from Advisors Configuration - 1.4 Thresholds, Threshold Violations, and Alerts - 1.5 Configuring Thresholds - 1.6 System Maintenance of Expired Alerts - 1.7 Alerts and E-Mail Notifications Application Groups and Thresholds This section describes how to configure application groups and thresholds. The following screenshot shows the Application Groups/Thresholds page in the Administration module. Adding or Deleting an Application Group New application groups must be added in Genesys Administrator. Adding and deleting application groups cannot be performed in the Advisors administration module. However, you can make an application group inactive or remove it from the Advisors configuration. To add a new application group in Genesys Administrator, or to delete an application group, see Advisors Business Objects. Configuring an Application Group’s Attributes in Advisors Use the General tab to maintain application groups. To edit an application group’s configuration attributes, select it in the upper panel and edit these details in the Edit panel. Alternatively, type the first few letters of its name in the Search field, click the icon beside the Search field , and then select from the list. When your edits are complete, click Save. The Name field cannot be edited. This value is configured in Genesys Administrator. Complete the fields in the Edit panel as follows: - Active: Select whether the status of the application group is active or inactive. The first time you make an application group active, it becomes part of the Advisors configuration. After this, you can use it to configure applications and contact groups. When you change such an application group to inactive, it remains available to use in configuration, and the configurations in which it is used do not change. However, CCAdv and WA do not use the application group when calculating data for the dashboards. - Zero Suppressed: Select Yes for application groups where little or no activity is expected. See Zero Suppression for details. When you have made the Edit panel selections and saved them, the following happens: - If the application group has been newly created in Genesys Administrator, the Configured field changes to Yes to indicate that the configuration is now complete on the Advisors side. - An Updated Successfully message displays at the top of the page. - The Remove from Advisors configuration button is activated. Removing an Application Group from Advisors Configuration To remove the application group from the Advisors configuration, click on the Remove from Advisors Configuration button. This removal is not synchronized back to Configuration Server. You cannot remove an application group if: - A metric threshold is defined in the context of the application group. - An active alert exists created by such a threshold. Thresholds, Threshold Violations, and Alerts Thresholds You can create thresholds on a metric's value to alert users to unacceptable values of that metric. The thresholds exist in the context of an application group. That is, for base objects related to one application group, the thresholds can be different than for the same objects related to a different application group. A threshold can have two or four values. The complete four values are low critical, low warning, high warning, and high critical. Either the two low thresholds can be empty, or the two high thresholds can be empty. Threshold Violations When a metric's value violates a threshold, the background to the metric's cell in the dashboard changes color. When a warning threshold is violated, the color is yellow. Violation of a critical threshold changes the color to red. These threshold violations appear in the Applications pane of the CCAdv dashboard, and the Contact Groups pane of the WA dashboard. If a threshold is constantly in a violated state, then it is probably set too tight for the current capabilities of the operating environment. Thresholds should be set carefully and periodically reviewed for tuning requirements. Threshold violations also appear in the Contact Centers pane in each dashboard. A violation appearing in the row for a business object in the Contact Centers pane means that an object related to that business object is reporting a threshold violation. Alerts A threshold violation escalates to an official alert when the metric's value remains above or below a threshold for a specific period of time. The duration to wait before creating an alert is set in the System Configuration page. Alerts appear in the Alerts pane in either dashboard. Thresholds therefore drive alerts. If, when an alert is triggered, no action will be taken or, at the least, no immediate value is delivered in knowing about that alert, it might be better to change the threshold or delete its values. You cannot delete or reset a threshold's values if the threshold is currently causing an active alert. To end the alert and make it inactive, change the threshold's values so that the metric will no longer causes a violation. When the alert ends, and CCAdv or WA has deleted it from the Advisors database, you can reset the threshold or delete its values. Configuring Thresholds The Application Groups/Thresholds page allows you to: - Define critical (red) thresholds, warning (yellow) thresholds, and normal conditions for each metric in the context of an application group, using the Application Thresholds tab. - Define critical (red) thresholds, warning (yellow) thresholds, and normal conditions for each metric in the context of an application group, using the Contact Group Thresholds tab. The Application Thresholds page and the Contact Group Thresholds page display the threshold rule details including: - Metric: Display name of the metric to which the threshold will be applied, when the metric belongs to an object related to the application group - Min and Max: Minimum and maximum permissible values for the threshold. Change these in the Report Metrics page. - Decimal Places: The number of decimal places that the metric's value will display. Set this in the Report Metrics page. This does not affect that values you enter for the threshold. - Lower-Bound Warning, Lower-Bound Critical, Upper-Bound Warning, Upper-Bound Critical: The threshold limits for warning and critical violations. See Adding or Updating Thresholds for details.ImportantYou cannot delete or reset a threshold's values if the threshold is causing an active alert, or caused an alert that is now expired but has not been deleted from the Advisors database. To end the alert and make it inactive, change the threshold's values so that the metric will no longer causes a violation. When the alert ends, and CCAdv or WA has deleted it from the Advisors database, you can reset the threshold or delete its values. - # of Exceptions: The number of exceptions. Exceptions You can add time-based alternative thresholds (that is, exceptions) for the calculation of violations to vary your performance objectives. To do this, see Threshold Exceptions. System Maintenance of Expired Alerts Contact Center Advisor XML Generator uses the following process to remove expired alerts from storage for currently active alerts: - During every processing cycle for the Short time profile group, XML Generator examines threshold violations and alerts. It creates new alerts, updates alerts that existed previously, and ends (expires) alerts that are no longer being caused. - Then, XML Generator deletes the alerts that it has set to expired, and also the manual alerts whose end time indicates they are expired. Workforce Advisor uses the following process to remove expired alerts from the storage for currently active alerts: - During every processing cycle, WA examines threshold violations and alerts. It creates new alerts, updates alerts that existed previously, and ends (expires) alerts that are no longer being caused. - After WA has processed all the alerts in this way, it deletes the alerts that it has set to expired. Alerts and E-Mail Notifications You can configure CCAdv and WA to send e-mail about alerts. Two parameters are important for managing these notifications, the Alert Creation Delay Interval and the Notification Refresh Rate. These are set on the System Configuration page. E-mail about alerts is sent by Distribution Lists that you configure to target your desired audience for the e-mail about a particular alert.
https://docs.genesys.com/Documentation/PMA/8.5.2/CCAWAUser/ApplicationGroupsandThresholds
2021-05-06T10:50:20
CC-MAIN-2021-21
1620243988753.91
[]
docs.genesys.com
TLS port sharing is enabled by default on Unified Access Gateway whenever multiple edge services are configured to use TCP port 443. Supported edge services are VMware Tunnel (Per-App VPN), Content Gateway and Web reverse proxy. Note: If you want TCP port 443 to be shared, ensure that each configured edge service has a unique external hostname pointing to Unified Access Gateway.
https://docs.vmware.com/en/Unified-Access-Gateway/3.3/com.vmware.uag-33-deploy-config.doc/GUID-0679AADA-457F-4688-AE46-AA91C327A90B.html
2021-05-06T10:52:07
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
The NSX-V virtual networking solution includes the capability of deploying an Edge gateway as a load balancer. Currently, the NSX-V load balancer has basic load balancing functionality and it should not be considered a full-fledged load balancer with advanced configuration like F5 LTM. Use NSX-V version 6.1.3 and higher for all deployments as many issues with the load balancers have been resolved in this release. Prerequisites The following are the prerequisites for a functional NSX-V load balancer in front of a vRealize Operations Manager cluster: This document assumes that NSX-V deployment is already deployed in the environment and is fully functional. The NSX-V deployment is of version 6.1.3 or higher. NSX-V Edge is deployed and has access to the network on which vRealize Operations Manager cluster is deployed. Edge can be enabled for high availability, however it is not a requirement Currently, there are 2 types of modes the load balancer can be used: Accelerated and Non-Accelerated. Difference between Acceleration enabled/disabled is the LB will passthrough TCP connection (enabled) or terminate the TCP connection (disabled), and then send once the TCP connection is done, it will do open a TCP connection to the pool member.
https://docs.vmware.com/en/vRealize-Operations-Manager/services/vrops-manager-load-balancing/GUID-A303CCB0-B809-48B8-9721-D77ED6892C1E.html
2021-05-06T11:04:05
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
You must create an application profile to define the behavior of a particular type of network traffic. After configuring a profile, you should associate the profile with a virtual server. The virtual server then processes traffic according to the values specified in the profile. Using profiles enhances your control over managing network traffic and makes traffic-management tasks easier and more efficient. Procedure - Log in to the vSphere Web Client. - Click Networking & Security and then click NSX Edges. - Double-click an NSX Edge. - Click Manage and then click the Load Balancer tab. - In the left navigation panel, click Application Profiles. - Click the Add ( ) icon. - Enter a name for the profile and select the traffic type for which you are creating the profile. For example: vrops_https. - Select the Type: TCP - Select Persistence as Source IP. - Enter 1800 for Expires in (seconds). - Select Ignore for Client Authentication. - Click OK to save the Profile Results When the encrypted traffic is balanced, the load balancer cannot differentiate between the traffic for vRealize Operations Manager analytics and EPOps.If you plan to use two load balancers, one for vRealize Operations Manager analytics and one for EPOps, you could use the same profile as both the profiles are identical. If you create two different profiles, only the name of the profiles is different, but the configurations for both the profiles are identical. Example Example:
https://docs.vmware.com/en/vRealize-Operations-Manager/services/vrops-manager-load-balancing/GUID-C1B2B27A-9FAB-4BD2-99B8-4FFAAB9532BC.html
2021-05-06T11:15:42
CC-MAIN-2021-21
1620243988753.91
[array(['images/GUID-FE67A69B-1DA1-4A07-9882-8230AA4106E9-low.png', 'image028'], dtype=object) ]
docs.vmware.com
How to Rename Git Local and Remote Branches Sometimes it is necessary to rename local or remote branches in Git when collaborating with a team on a project. In this tutorial, we are going to show how to rename Git local and remote branches. Steps to renaming local and remote branches¶ Let’s achieve the result with the steps described below: Renaming local branch to the new name¶ To rename the local branch to the new name, use the git branch command followed by the -m option: git branch -m <old-name> <new-name> To delete the old branch on remote (suppose, the name of remote is origin, which is by default), use the following command: git push origin --delete <old-name> Or you can shorten the process of deleting the remote branch like this: git push origin :<old-name> Pushing the new branch to remote¶ Then you should push the new branch to remote: git push origin <new-name> To reset the upstream branch for the new-name local branch use the -u flag with the git push command: git push origin -u <new-name> Branching¶ Git branches are an important part of the everyday workflow. Branches are a pointer to a snapshot of the changes you have made in Git. Branching helps cleaning up the history before merging it. Branches represent an isolated line of development. They are considered as a way to request a new working directory, staging area, and project history. The isolated lines of development for two features in branches make it possible to work on them in parallel and make the master branch free from questionable code. The git branch command creates, lists and deletes branches not allowing to switch between branches or put a forked history back together. Local and Remote Branches¶ The local branch is a branch existing on the local machine. It can be seen only by the local user. The remote branch is a branch on a remote location. A remote-tracking branch is a local copy of a remote branch. Assuming a newly-created <NewBranch> is pushed to origin using the git push command and -u option, a remote-tracking branch named <origin/NewBranch> is created on your machine. The remote-tracking branch tracks the remote branch <NewBranch> on the origin. Update and sync the remote-tracking branch with the remote branch using the git fetch or git pull commands. A local tracking branch is a local branch tracking another branch. Local tracking branches mostly track a remote-tracking branch. When pushing a local branch to the origin with git push -u, the local branch <NewBranch> tracks the remote-tracking branch <origin/NewBranch>.
https://www.w3docs.com/snippets/git/how-to-rename-git-local-and-remote-branches.html
2021-05-06T08:52:16
CC-MAIN-2021-21
1620243988753.91
[]
www.w3docs.com
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here. Class: Aws::WAFRegional::Types::ListByteMatchSetsRequest - Defined in: - (unknown) Overview When passing ListByteMatchSetsRequest as input to an Aws::Client method, you can use a vanilla Hash: { next_marker: "NextMarker", limit: 1, } Instance Attribute Summary collapse - #limit ⇒ Integer Specifies the number of ByteMatchSetobjects that you want AWS WAF to return for this request. - #next_marker ⇒ String If you specify a value for Limitand you have more ByteMatchSetsthan the value of Limit, AWS WAF returns a NextMarkervalue in the response that allows you to list another group of ByteMatchSets. Instance Attribute Details #limit ⇒ objects. #next_marker ⇒.
https://docs.amazonaws.cn/sdk-for-ruby/v2/api/Aws/WAFRegional/Types/ListByteMatchSetsRequest.html
2021-05-06T10:46:29
CC-MAIN-2021-21
1620243988753.91
[]
docs.amazonaws.cn
honeybee Pick the right study status based on your current recruitment needs After submitting your study, it's status will be "Pending Approval". We internally check every study for quality and consistency before it can go live on our platform. If there are any concerns, we will notify you via email. If the study approval process is taking more than 3 days, please contact us at [email protected] All studies are hidden by default. While hidden, your study will not be searchable or joinable by participants. The only exception to this is if you invite participants to join your study via referral code (see Invite Participants for more information). Typical use cases for "Hidden" studies are: You must manually change your study's status to "Active" if you want it to searchable and joinable by the public. While active, your study will appear on Honeybee's search page as well as the Honeybee mobile app. These statuses are used to mark studies as completed, canceled or temporarily halted. Studies with this status are able to be changed back to any other status.
https://docs.honeybeehub.io/researcher/study_dashboard/study_statuses
2021-05-06T09:14:07
CC-MAIN-2021-21
1620243988753.91
[]
docs.honeybeehub.io
FolderItem.OpenResourceMovie From Xojo Documentation Method FolderItem.OpenResourceMovie(ResID as Integer) As Movie Supported for all project types and targets. Supported for all project types and targets. Opens the movie specified by ResID as a movie (Macintosh only). Notes Used only if the movie is stored as a MooV resource. Use OpenAsMovie for QT 4.0 (and greater) files. Example The following code accesses the resource movie numbered 1. The video/mp4 file type has been added to the project by the File Types Editor.
https://docs.xojo.com/FolderItem.OpenResourceMovie
2021-05-06T09:13:41
CC-MAIN-2021-21
1620243988753.91
[]
docs.xojo.com
Profiles and Writing Files¶ How to use profiles when opening files. Like Python’s built-in open() function, rasterio.open() has two primary arguments: a path (or URL) and an optional mode ( 'r', 'w', 'r+', or 'w+'). In addition there are a number of keyword arguments, several of which are required when creating a new dataset: driver width, height count dtype crs transform These same parameters surface in a dataset’s profile property. Exploiting the symmetry between a profile and dataset opening keyword arguments is good Rasterio usage. with rasterio.open('first.jp2') as src_dataset: # Get a copy of the source dataset's profile. Thus our # destination dataset will have the same dimensions, # number of bands, data type, and georeferencing as the # source dataset. kwds = src_dataset.profile # Change the format driver for the destination dataset to # 'GTiff', short for GeoTIFF. kwds['driver'] = 'GTiff' # Add GeoTIFF-specific keyword arguments. kwds['tiled'] = True kwds['blockxsize'] = 256 kwds['blockysize'] = 256 kwds['photometric'] = 'YCbCr' kwds['compress'] = 'JPEG' with rasterio.open('second.tif', 'w', **kwds) as dst_dataset: # Write data to the destination dataset. The rasterio.profiles module contains an example of a named profile that may be useful in applications: class DefaultGTiffProfile(Profile): """Tiled, band-interleaved, LZW-compressed, 8-bit GTiff.""" defaults = { 'driver': 'GTiff', 'interleave': 'band', 'tiled': True, 'blockxsize': 256, 'blockysize': 256, 'compress': 'lzw', 'nodata': 0, 'dtype': uint8 } It can be used to create new datasets. Note that it doesn’t count bands and that a count keyword argument needs to be passed when creating a profile. from rasterio.profiles import DefaultGTiffProfile with rasterio.open( 'output.tif', 'w', **DefaultGTiffProfile(count=3)) as dst_dataset: # Write data to the destination dataset.
https://rasterio.readthedocs.io/en/stable/topics/profiles.html
2021-05-06T10:34:07
CC-MAIN-2021-21
1620243988753.91
[]
rasterio.readthedocs.io
The first cases of influenza A (H3N2) variant (H3N2v) virus infection this year were reported in June 2013. Health Risk Assessment (HRA) is now being offered to all medicare beneficiaries at no costs to them. The heath risk assessment is a comprehensive evaluation of risk factors for an individual as determined by your physician in a face to face interview with the patient. Determination of ways to lower or mitigate these risks is the ultimate goal of the assessment process. This is an annual benefit for all medicare beneficiaries and is offered at no costs or co-payment to the individual. Call now to make your appointment to receive your free health risk assessment.
http://family-docs.com/
2021-05-06T08:53:25
CC-MAIN-2021-21
1620243988753.91
[]
family-docs.com
Advanced Zapcode Customization One of the key benefits of zapcodes is that the design of the code is more appealing to the viewer than the industrial, purely functional design of traditional barcodes. However, the default black & white style might not be the best choice for every design. Before trying out different designs, it's important to understand what's happening when a zapcode is being scanned with Zappar. The Bolts and the Bits Color Change To work correctly, the code must have good contrast between the background (light colour) and foreground (dark colour). Black on white provides the maximum contrast, but it’s ok to change these colours. To best mimic the way that Zappar sees grayscale, use a ‘Vibrance’ adjustment layer in Photoshop. Leave the Vibrance setting at 0 but reduce the Saturation down to -100. Bit Extension Another useful thing to note is that the ring around the outside is purely aesthetic, so these two zapcodes are both valid: Integrated Design Now let’s take a look at how we can use this knowledge to deeply integrate the zapcode into some artwork. This is the front cover of our ‘Little Book of Zaps’, and the character on the front is sporting an unconventional looking code. This code has 3 main colours (which are close enough to two when viewed in grayscale). This code also displays two more key changes. The bits are being sent in different directions once they’re far enough away from the bolt; the app still sees them when expected. In addition, the small gaps between the bits have been merged together - this will only work on bits that are immediate neighbours. Give it a zap and see what happens! Then try modifying your own zapcode’s colours and design.
https://docs.zap.works/general/design/advanced-zapcode-customization/
2021-05-06T09:53:06
CC-MAIN-2021-21
1620243988753.91
[array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/bits-and-bolt.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/bolt-small.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/bits-small.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/Zapcode-small.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/Zapcode-1-small.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/Zapcode-2-small.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/Zapcode-3-small.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/Zapcode-2.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/ps-options.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/Zapcode-3.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/Zapcode-2.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/Zapcode-4.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/zaps-cover.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/zaps-cover-bw.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/hidden-bits.png', None], dtype=object) array(['/static/v-b19a50f3/img/zapcode-creator/zapcode-customization/hidden-bits-shown.png', None], dtype=object) ]
docs.zap.works
- Chaplin Chaplin.Router → Source This module is responsible for observing URL changes and matching them against a list of declared routes. If a declared route matches the current URL, a router:match event is triggered. Chaplin.Router is a replacement for Backbone.Router and does not inherit from it. It is a stand-alone implementation with several advantages over Backbone’s default. Why change the router implementation completely? In Backbone there are no controllers. Instead, Backbone’s Router maps routes to its own methods, serving two purposes and being more than just a router. Chaplin on the other hand delegates the handling of actions related to a specific route to controllers. Consequently, the router is really just a router. While the router has been rewritten for this purpose, Chaplin is using Backbone.History in the background. That is, Chaplin relies upon Backbone for handling hash URLs and interacting with the HTML5 History API ( pushState). Declaring routes in the routes file By convention, all application routes should be declared in a separate file, the routes module. This is a simple module in which a list of match statements serve to declare corresponding routes. For example: match '', 'home#index' match 'likes/:id', controller: 'controllers/likes', action: 'show' match('', 'home#index'); match('likes/:id', {controller: 'controllers/likes', action: 'show'}); Ruby on Rails developers may find match intuitively familiar. For more information on its usage, see below. Internally, route objects representing each entry are created. If a route matches, a router:match event is published, passing the route object and a params hash which contains name-value pairs for named placeholder parts of the path description (like id in the example above), as well as additional GET parameters. Methods createHistory() Creates the Backbone.History instance. startHistory() Starts Backbone.History instance. This method should be called only after all routes have been registered. stopHistory() Stops the Backbone.History instance from observing URL changes. match([pattern], [target], [options={}]) Connects a path with a controller action. - pattern (String): A pattern to match against the current path. - target (String): Specifies the controller action which is called if this route matches. Optionally, replaced by an equivalent description through the optionshash. - options (Object): optional The pattern argument may contain named placeholders starting with a colon ( :) followed by an identifier. For example, 'products/:product_id/ratings/:id' will match the paths /products/vacuum-cleaner/ratings/jane-doe as well as /products/8426/ratings/72. The controller action will be passed the parameter hash {product_id: "vacuum-cleaner", id: "jane-doe"} or {product_id: "8426", id: "72"}, respectively. The target argument is a string with the controller name and the action name separated by the # character. For example, 'likes#show' denotes the show action of the LikesController. You can also equivalently specify the target via the action and controller properties of the options hash. The following properties of the options hash are recognized: params (Object): Constant parameters that will be added to the params passed to the action and overwrite any values coming from a named placeholder match 'likes/:id', 'likes#show', params: {foo: 'bar'} match('likes/:id', 'likes#show', {params: {foo: 'bar'}}); In this example, the LikesControllerwill receive a paramshash which has a fooproperty. constraints (Object): For each placeholder you would like to put constraints on, pass a regular expression of the same name: match 'likes/:id', 'likes#show', constraints: {id: /^\d+$/} match('likes/:id', 'likes#show', {constraints: {id: /^\d+$/}}); The idregular expression enforces the corresponding part of the path to be numeric. This route will match the path /likes/5636, but not /likes/5636-icecream. For every constraint in the constraints object, there must be a corresponding named placeholder, and it must satisfy the constraint in order for the route to match. For example, if you have a constraints object with three constraints: x, y, and z, then the route will match if and only if it has named parameters :x, :y, and :z and they all satisfy their respective regex. name (String): Named routes can be used when reverse-generating paths using Chaplin.utils.reversehelper: match 'likes/:id', 'likes#show', name: 'like' Chaplin.utils.reverse 'like', id: 581 # => likes/581 match('likes/:id', 'likes#show', {name: 'like'}); Chaplin.utils.reverse('like', {id: 581}); // => likes/581 If no name is provided, the entry will automatically be named by the scheme controller#action, e.g. likes#show. route([path]) Route a given path manually. Returns a boolean after it has been matched against the registered routes, corresponding to whether or not a match occurred. Updates the URL in the browser. - path can be an object describing a route by - controller: name of the controller, - action: name of the action, - name: name of a named route, can replace controller and action, - params: params hash. For routing from other modules, Chaplin.utils.redirectTo can be used. All of the following would be valid use cases. Chaplin.utils.redirectTo 'messages#show', id: 80 Chaplin.utils.redirectTo controller: 'messages', action: 'show', params: {id: 80} Chaplin.utils.redirectTo url: '/messages/80' Chaplin.utils.redirectTo('messages#show', {id: 80}); Chaplin.utils.redirectTo({controller: 'messages', action: 'show', params: {id: 80}}); Chaplin.utils.redirectTo({url: '/messages/80'}); changeURL([url]) Changes the current URL and adds a history entry without triggering any route actions. Handler for the globalized router:changeURL request-response handler. - url: string that is going to be pushed as the page’s URL dispose() Stops the Backbone.history instance and removes it from the router object. Also unsubscribes any events attached to the Router. On compliant runtimes, the router object is frozen, see Object.freeze. Request-response handlers of Chaplin.Router Chaplin.Router sets these global request-response: router:route path[, options] router:reverse name, params[, options], callback router:changeURL url[, options] Usage Chaplin.Router is a dependency of Chaplin.Application which should be extended by your main application class. Within your application class you should initialize the Router by calling initRouter (passing your routes module as an argument) followed by start. define [ 'chaplin', 'routes' ], (Chaplin, routes) -> 'use strict' class MyApplication extends Chaplin.Application title: 'The title for your application' initialize: -> super @initRouter routes @start() define([ 'chaplin', 'routes' ], function(Chaplin, routes) { 'use strict'; var MyApplication = Chaplin.Application.extend({ title: 'The title for your application', initialize: function() { Chaplin.Application.prototype.initialize.apply(this, arguments); this.initRouter(routes); this.start(); } }); return MyApplication; });
http://docs.chaplinjs.org/chaplin.router.html
2021-05-06T10:14:29
CC-MAIN-2021-21
1620243988753.91
[]
docs.chaplinjs.org
Data science is a team sport involving multiple stakeholders. Each player has specific pieces of information that together turn into a solution. Therefore, being able to collaborate 🤝 is not just an efficiency booster or a fancy add-on, but a necessity for data teams. The Atlan Chat 💬 feature helps you work better together! It helps data teams easily collaborate and stay on the same page when tackling a problem. On each data asset and even on the Glossary page, you can start a Chat to discuss issues, or ask questions about the asset, and resolve issues quickly. 🌟 Pro Tip: You can easily tag users in a Chat using "@" to send them a quick notification. Similarly, you can use "#" to tag any data asset in your Chat. Since the Chat is right next to the data asset, you can have a meaningful conversation with all the context you need right in front of you. ✨ Spotlight: Chat on the Glossary page is super helpful as it lets you easily discuss and update your glossary terms.
https://docs.atlan.com/collaborating-on-your-data/chat
2021-05-06T09:09:17
CC-MAIN-2021-21
1620243988753.91
[]
docs.atlan.com
Despeckle Node¶ The Despeckle node is used to smooth areas of an image in which noise is noticeable, while leaving complex areas untouched. This works by the standard deviation of each pixel and its neighbors is calculated to determine if the area is one of high complexity or low complexity. If the complexity is lower than the threshold then the area is smoothed using a simple mean filter. Eigenschaften¶ - Threshold The threshold to control high/low complexity. - Neighbor The threshold to control the number of pixels that must match.
https://docs.blender.org/manual/de/dev/compositing/types/filter/despeckle.html
2021-05-06T10:47:44
CC-MAIN-2021-21
1620243988753.91
[]
docs.blender.org
The goal of this tutorial is to show some methods creating a sky or an evironment for your scene. First, create a new empty map by clicking “File”, then “New” in the menubar. Alternatively, you can open an existing map, to which you want to add a skybox. Create a block. For this, click the “New Brush” Icon in the left toolbar. In the appearing green bar at the top of the main-window check if “block” is selected as “New brush shape”. In the orthogonal windows (those with the front view, left view, down view), draw a cube. Make it's dimensions bigger than your scene shall be, so that it surrounds the entire scene. Also, be sure that the size of height, width and depht is the same. Press enter. Don't worry, if the size is not really exact now, you can simply correct it later. Next, choose your texture for your sky-map. You can select a texture from the skydome-folder in the textures-directory coming with your Cafu SDK, or you can make your own skybox, textures. How this can be done, I will explain later in another tutorial. To select your sky-texture, press the “Browse”-Button, which is located under the Texture-Preview-Window. A new window opens, showing all texures the CaWe can find in it's pathes. In the filter input box, at the bottom of this window, type sky, so that only the textures related to this word are shown. For example, select the “PK_Autumn”-Texture, by double-clicking it. The window will close, and the selected texture will be shown in the preview window. Now, select your cube with the selection tool. For this, click the upper left blue icon in the left toolbar, then click on one face of your cube. The whole cube will be drawn in red color. Alternatively, you can draw a select box around your cube, and it will be highlighted in red too. If you have a map loaded before, be sure that nothing els is selected. In the blue toolbar at the top of the main-window, click “Apply Material”. With your mouse pointer in one of the orthogonal windows, press enter. The sky-texture will be projected on your cube immediately in a perfect way. Don't worry, if your 3D-window will stayed black. Probably your camera is inside your cube, and you are not able to see it right now. This will change during the simple next, last step. Again, select your Box with the select tool. Then, in the menu-bar, first click “Tools”, then “Make hollow”. In the opening window, confirm the entry of 32 units, press “Enter”, and you are done. Your skybox will be shown also in the 3D-Window, and you are ready to compile your map.
https://docs.cafu.de/mapping:cawe:tutorials:sky
2021-05-06T09:36:10
CC-MAIN-2021-21
1620243988753.91
[]
docs.cafu.de
Int Buffer. Threshold Type Property Definition This API supports the Mono for Android infrastructure and is not intended to be used directly from your code. protected override Type ThresholdType { get; } member this.ThresholdType : Type Property Value A Type which provides the declaring type. Remarks Portions of this page are modifications based on work created and shared by the
https://docs.microsoft.com/en-us/dotnet/api/java.nio.intbuffer.thresholdtype?view=xamarin-android-sdk-9
2021-05-06T11:15:02
CC-MAIN-2021-21
1620243988753.91
[]
docs.microsoft.com
AutoSupport sends messages to different recipients, depending on the type of message. Learning when and where AutoSupport sends messages can help you understand messages that you receive through email or view on the Active IQ (formerly known as My AutoSupport) web site. Unless specified otherwise, settings in the following tables are parameters of the system node autosupport modify command. When events occur on the system that require corrective action, AutoSupport automatically sends an event-triggered message. AutoSupport automatically sends several messages on a regular schedule. You can manually initiate or resend an AutoSupport message. Technical support can request messages from AutoSupport using the AutoSupport OnDemand feature.
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-sag/GUID-5D5E1628-188C-4972-B06D-FAE63FC072EC.html
2021-05-06T10:15:22
CC-MAIN-2021-21
1620243988753.91
[]
docs.netapp.com
TrilioVault enables Openstack administrators to set Project Quotas against the usage of TrilioVault. The following Quotas can be set: Number of Workloads a Project is allowed to have Number of Snapshots a Project is allowed to have Number of VMs a Project is allowed to protect Amount of Storage a Project is allowed to use on the Backup Target The TrilioVault Quota feature is available for all supported Openstack versions and distributions, but only Train and higher releases include the Horizon integration of the Quota feature. Workload Quotas are managed like any other Project Quotas. Login into Horizon as user with admin role Navigate to Identity Navigate to Projects Identify the Project to modify or show the quotas on Use the small arrow next to "Manage Members" to open the submenu Choose "Modify Quotas" Navigate to "Workload Manager" Edit Quotas as desired Click "Save" TrilioVault is providing several different Quotas. The following command allows listing those. TrilioVault 4.1 do not yet have the Quota Type Volume integrated. Using this will not generate any Quotas a Tenant has to apply to. workloadmgr project-quota-type-list The following command will show the details of a provided Quota Type. workloadmgr project-quota-type-show <quota_type_id> <quota_type_id> ➡ID of the Quota Type to show The following command will create a Quota for a given project and set the provided value. workloadmgr project-allowed-quota-create --quota-type-id quota_type_id--allowed-value allowed_value--high-watermark high_watermark--project-id project_id <quota_type_id> ➡ID of the Quota Type to be created <allowed_value>➡ Value to set for this Quota Type <high_watermark>➡ Value to set for High Watermark warnings <project_id>➡ Project to assign the quota to The high watermark is automatically set to 80% of the allowed value when set via Horizon. A created Quota will generate an allowed_quota_object with its own ID. This is ID is needed when continuing to work with the created Quota. The following command lists all TrilioVault Quotas set for a given project. workloadmgr project-allowed-quota-list <project_id> <project_id>➡ Project to list the Quotas from The following command shows the details about a provided allowed Quota. workloadmgr project-allowed-quota-show <allowed_quota_id> <allowed_quota_id> ➡ID of the allowed Quota to show. The following command shows how to update the value of an already existing allowed Quota. workloadmgr project-allowed-quota-update [--allowed-value <allowed_value>][--high-watermark <high_watermark>][--project-id <project_id>]<allowed_quota_id> <allowed_value>➡ Value to set for this Quota Type <high_watermark>➡ Value to set for High Watermark warnings <project_id>➡ Project to assign the quota to <allowed_quota_id> ➡ID of the allowed Quota to update The following command will delete an allowed Quota and sets the value of the connected Quota Type back to unlimited for the affected project. workloadmgr project-allowed-quota-delete <allowed_quota_id> <allowed_quota_id> ➡ID of the allowed Quota to delete
https://docs.trilio.io/openstack/admin-guide/workload-quotas
2021-05-06T09:57:55
CC-MAIN-2021-21
1620243988753.91
[]
docs.trilio.io
This page contains information related to an old version of the Read-Only Grid Column. The Paging Grid Text Column Component has been deprecated. Function: a!gridTextColumn() Displays a column of text within a paging grid. To display text in an editable grid, use a text component. Parameters Notes Examples This example needs to be used with the a!gridField() function for it to display anything. Examples that you can see in action are available in the Grid Field section. On This Page
https://docs.appian.com/suite/help/19.4/Grid_Text_Column_Component.html
2020-03-28T23:26:25
CC-MAIN-2020-16
1585370493121.36
[]
docs.appian.com
Cloning Clone is one of the most useful tools available to you - it saves you time and it's kind to your thermometer. To make a clone, first press and hold Secondary motion controller Triangle button.the secondary triangle button Hover your primary imp Primary motion controller Imp over the object to be cloned and grab it with the primary T button Primary motion controller T button. Move your primary controller Primary motion controller and you'll see you are holding the clone. Release the secondary triangle button Secondary motion controller Triangle button as soon as you've made the clone. Release the primary T button Primary motion controller T button to place the clone.Next: Clone-from-Clone The Dreams User Guide is a work-in-progress. Keep an eye out for updates as we add more learning resources and articles over time.
https://docs.indreams.me/en/guide/dreams-workshop/motion-controller-gestures/universal/cloning
2020-03-29T00:59:40
CC-MAIN-2020-16
1585370493121.36
[]
docs.indreams.me
Contents - Functional architecture Organizational structures can be imported and exported: Synchronization of the organizational structure of the Platform with the data on the organizational and staff structure of the organization is produced in two stages: Data import to the organizational structure is performed by adding and updating data. You can import an organizational structure in two ways: In the window that opens, you must specify the name of the update job, the import type, the site and the path to the file to import. You can also enable or disable the task using the check box. There is an import with full update employee information (only new objects are added and existing organizational objects are updated) and an import with full update employee information with the movement between units. (only new ones are added and existing organizational structure objects are updated. At the same time, employees are removed from roles and divisions which is not listed in the “DepartmentUniqueNames” and “RoleUniqueNames” items of the import file). With both methods of import, divisions and roles are not removed from the organizational structure. In the case of importing an organizational structure file to an already existing hierarchy, data is merged (supplemented). If merging data is unacceptable, delete the existing organizational structure before importing. If the file for import does not contain any division that is in the organizational structure, then only employees will be removed from this division, the division will remain empty in the organizational structure. When you add roles and divisions on the site, corresponding SharePoint groups are created. When a role or division is completely removed from the organizational structure, the groups created from the site are also deleted. Information about deleted departments and roles from the document forms will also be deleted and the fields in which this information was displayed are cleared. Described in document import organizational structure may be either an initial loading means organizational structure and adjusting means. In the settings you can specify any frequency of data import. The frequency of updating the organizational structure can be configured in the “Platform: Updating organization structure” task menu in the task settings in the Administration Center: You can also adjust the organizational structure manually, by highlighting the required attribute and clicking the «Edit» button on the top panel: The organizational structure should be described in four data formats: Each organizational structure element has its own UniqueName, which can be generated in two ways: either by the system automatically (32x-digit number – guid is formed during creation), or you can manually specify the desired name. Table 1. The list of attributes of the organizational structure All employees you would like to see in the organizational structure should be in the unloading. All departments are imported in accordance with the import file. If a department has its parent department, then it will be within the specified parent department.
https://docs.systemz.io/en/platform-sp-2016/create-an-organizational-structure/orgstructure-import-requirements/functional-architecture/
2020-03-28T23:24:09
CC-MAIN-2020-16
1585370493121.36
[]
docs.systemz.io
Contents - Platform Edit Condition The condition editor is used to configure: On the form of the condition editor, you must select the behavior that will be applied if the specified conditions are met, as well as the behavior if the conditions are not met. Conditions can be composite, consisting of several simple, combined by logical AND / OR operations: Click the «Add “OR” condition» or «Add “AND” condition» button to add a new condition. Specify the condition in the form that opens: Select the field of the current list or one of the functions in the left or right part of the condition: The list of available conditions is formed depending on the selected field type or function in the left part of the condition: Select the current list field or function in the right part of the condition.
https://docs.systemz.io/en/platform-sp-2016/work-with-the-platform-list/platform-edit-condition/
2020-03-28T23:55:57
CC-MAIN-2020-16
1585370493121.36
[]
docs.systemz.io
Table of Contents Product Index Xana is a character for Genesis 2 Female, with highly detailed skin and makeups, green elf skin with glitter and tribal tattoos, and many options to customize her appearance. Includes a beautiful custom sculpted morph for the head, and also sculpted genitalia, elven ears, and two extra green elf skins, one with tribal tattoos and other without them, and also makeups,. Also includes 10 eye colors (eight natural + two fantasy colors), seven natural makeups, four fantasy makeups, two makeup options for the green elf skin (with green or pink lip color), seven eyelashes colors that will look great with the makeups, eight lip colors, and seven hands and feet nails.
http://docs.daz3d.com/doku.php/public/read_me/index/19346/start
2020-03-29T01:11:33
CC-MAIN-2020-16
1585370493121.36
[]
docs.daz3d.com
Before8887777 to When someone calls 9998887777. Example didML Code Example didML Code <?xml version="1.0" encoding="UTF-8"?><Response><dial>19499300360</dial></Response>
https://docs.didforsale.com/voice-and-sms-apis/voice-and-sms
2020-03-28T23:36:11
CC-MAIN-2020-16
1585370493121.36
[]
docs.didforsale.com
Hands-on Introduction for Getting Started with F# in the Browser using tryfsharp.org Guest blog by Jin Yun Soo Microsoft Student Partner at Imperial College About Me Hello, I am Jin. EnJineering is my tonic. I study Electrical and Electronic Engineering at Imperial College London, and have been involved in tech education and outreach through teaching children and teens coding. I am also a student champion/volunteer at the Imperial College Advanced Hackspace. You can connect with me via LinkedIn and find out more about me from my personal website. I designed and coded my website using Angular, and then deployed it as a cloud app using Microsoft Azure App Service with the free Microsoft Imagine subscription (yay!) that they offer for students. I set up a Continuous Integration (CI) pipeline such that pushing changes to GitHub will trigger the build process on VSTS and update the Azure App automatically. But of course, this blog is not about my story. It is about YOUR journey with technology. And in this article, I shall share with you how you can get started with the .NET functional programming language, F#, using tryfsharp.org. Introduction Based on my personal experience and what I hear from my friends, students in engineering courses are not usually exposed to functional programming in the core curriculum, whereas students in computer science are typically introduced to different types of programming languages including functional in their first year. I find it rather odd because computation in functional programming works like an evaluation of mathematical functions, which should make it very intuitive for students in mathematics and engineering. One of the key features that I like about functional programming is immutability. After being created, the state of an object cannot be modified. We need not worry about side effects. This gives us a peace of mind when working on multi-threaded applications. Particularly, F# is a typed language that is designed to be functional and includes .NET features such as runtime support, object model, and libraries. You can use F# with, for instance, Parallel Extensions for .NET. Parallel and asynchronous I/O programming can be made easier with F# asynchronous workflows. F# for Fun and Profit and Phillip Carter and Mads Torgersen from Microsoft's .NET team explained why you should use F# better than I could. In this article, we will explore how we can use tryfsharp.org to, well, try F#. This platform offers a quick, easy, and effective way to learn and code F# using the browser. The tutorials are well structured and do not appear to be convoluted nor overwhelming. If you go to the Scientific and Numerical Computing section, you will be amazed by how complex logic for things that we are familiar with such as statistics, linear algebra, differential equations, and Fast Fourier Transform can be written so simply and elegantly in F#. Figure 1: Six categories of tutorials in the Learn section of tryfsharp.org. Requirements To use tryfsharp.org, you only need a browser and the Microsoft Silverlight plugin. I asked a couple of friends to visit the site without giving them any heads-up so that I can understand what issues people who are just getting started might face. If you are using Chrome, the warning in the Output Window will not ask you to install the needed components because Chrome is not supported anymore. If you are using Firefox, you will be prompted with the link to install the plugin but you will not be able to enable the plugin in the latest Firefox. Figure 2: Two possible warnings. In short, it is probably safest to use Internet Explorer and install the Microsoft Silverlight plugin. If the problem persists, make sure the plugin is enabled for the tryfsharp.org website through your browser settings. Fun stuff! We shall use a very simple example, the recursive factorial function, to go through the core features of tryfsharp.org and pick up bits and pieces of the F# language syntax or style along the way. Learn Interface 1. Go here . On the left, you will see the Content Window which shows explanations and instructions for the tutorial. On the right, you will be able to type your code in the Script Window, and then see the results of running the code in the Output Window. 2. If you are lazy to type, I mean, eager to go through it quickly, click ‘load and run’ to load the recursive factorial code to the Script Window, and run the code. Note that if you click ‘load and run’ for, say, the recursive power function code below it now, the factorial code that is already present in the Script Window will be overwritten. Figure 3: ‘Load and run’ code from tutorials in the Learn section. Figure 4: When running the code without calling the function, the Output Window will show you that the function takes an input n of type integer and returns an integer. According to the website, in cases where a code example has external dependencies, the code and references are loaded automatically with the example code. This feature is intended to allow each individual example to run independently. 3. Instead of following along the other examples or navigating to the next page (which you can do on your own), let’s continue to work on the factorial function. You can call the function like this: Figure 5: Calling the factorial function. I am serious. No brackets. No semi-colons. So clean and simple. 4. Let’s check the code with different types of input. Type the following into the Script Window without running the code. Hovering on the red curly underlined argument of the first line in Figure 6 shows ‘The value or constructor 'ilovefood' is not defined’, the second one shows ‘This expression was expected to have type int but here has type string’, whereas the third one shows ‘This expression was expected to have type int but here has type float’. Figure 6: You will be alerted before even running the code. Neat. Unlike languages like Python, when you use the wrong type in your F# code, you will be alerted before even running it . This makes debugging in F# easy. 5. Now, call ‘factorial’ using a negative integer like -10 as the input. The browser would show an error message and close. As you probably know already, this is because the base case of n=0 can never be reached, and thus causes a stack overflow exception. 6. We can continue to experiment and build on top of the given code. Use the ‘failwith’ function to raise an exception when n < 0. This makes the code more robust. Figure 7: Use the ‘failwith’ function to raise an exception when n < 0. 7. Remember to use the keyword ‘rec’ when defining recursive functions. One advantage of the keyword ‘rec’ is reducing the probability of using recursion unintentionally. 8. Now, you may be thinking, ‘hmm, sounds good so far. But that is just a simple implementation of recursion. What about tail call optimisation?’ Let’s add the following chunk of code to the Script Window without removing the previous code: Figure 8: Add tail recursive factorial function. Note that in F#, ‘=’ instead of ‘==’ is used as a comparison operator to indicate equality. 9. If you keep experimenting like this, whether on the Learn or Create Interface, your code could become a lot longer and at some point, you might feel you only want to run and test out a certain part of the code. Typically, one would comment out and/or de-comment different parts of the code when one is experimenting with it. But this F# environment makes things even more convenient: if, say, you only want to run ‘factorialTail 10’, simply highlight that line and run it! Create Interface 1. Copy the ‘factorialTail’ function in Figure 8 and proceed here. You can add and upload multiple files on the left, and then load them on the Script Window. This means you can download code samples from the web, and then upload them here to experiment with them. You can also save your code with the button above the Output Window. The rest is similar to the Learn Interface. I will, however, introduce a few new features here. 2. When using the features on the Create Interface, you will be prompted to get a nickname and sign in with your Microsoft or Facebook account. 3. Paste and save the ‘factorialTail’ function code. Let’s produce an array containing results for the factorials of 0 to 10 inclusive. For a typical imperative language, you would use a for loop or while loop. In F#, you can do it in one line, while being very clear about what the code is trying to do. Figure 9: The map function applies the function ‘factorialTail’ to each element in the array of integers from 0 to 10 inclusive, and returns the array of results in the same order. Figure 10: Resulting array displayed in the Output Window upon running the code. Some other common and useful functions besides ‘map’ include ‘Reduce’, ‘Fold’, ‘Scan’, and ‘Filter’. 4. I find the following features very helpful when one is new to the functions available in F#. Figure 11: When you type ‘.’ after ‘Array’, you will be prompted with a list of available functions that you can apply to it. Figure 12: The Output Window will display information about the function as you scroll through the list of available functions. 5. Notice the ‘show canvas’ button on top of the Script Window? The canvas is where we can see plots and visualisations generated by our code. Figure 13: One way to plot the graph for factorial. Figure 14: Now we can visualise how bad algorithms of O(n!) complexity are! 6. Since this is a browser-based environment, we will not have full access to the .NET library. However, we can use all the F# language features including asynchronous and queries. 7. On the left window, click on your file to download or share it! You can view my file with this link Explore Section The Explore section provides some great information and resources! Now that you have had a glimpse of tryfsharp.org and F#, I hope you are excited to go through the tutorials as well as the resources. ‘But I am not sure if I want to do tech. I might venture into finance/consulting/business …’ If you are someone who thinks, ‘I am not confident in coding because it’s not intuitive to me’ or ‘I need to earn enough to pay off my student loans quickly, and I’m not sure I am good enough to be able to do so in the tech industry’, I recommend that you give F# a go. Many people I know find it intuitive due to its functional nature. You might surprise yourself. Besides, functional programming has become very popular in various fields including science, data science, and financial computing. Some thoughts about tryfsharp.org and my experience with functional programming tryfsharp.org lowers the barrier of entry by providing useful tutorials and a browser-based environment that allows you to experiment with a lot of language syntax and features. Based on my experience teaching kids coding and talking to people in my university, some people (including myself) have moments of doubt and thoughts like ‘I’m not good enough’ or ‘I started coding really late so I’m not sure I can do this right’. There might be this one guy in your school who wears a hoodie, has a lot of confidence in what he does, and started coding since the age of 7. But there are also lot of good techies who do not fit into the media stereotype. Besides, technology is evolving so quickly these days that it is a learning process every day even for the most experienced programmers. I started off my internship at a financial technology company knowing nothing about functional programming. Yet I had to use a proprietary visual programming language (VPL) that is similar to F#. The projects assigned to me were experimental and rather different from what the team usually does, so there were no relevant past projects that I could refer to. Nevertheless, such circumstances allowed me to approach the problems with a fresh outlook. I discovered and documented some quirks of the VPL, and wrote sample use cases for some functions that were rarely used in their usual projects such that they can be applied in the context of the team’s work. By sheer hard work, I completed the internship with good reviews. My point is, newbies can still contribute in some ways, in different ways. Just keep thinking in terms of learning and growth. Want more? Keyboard shortcuts for tryfsharp.org It’s easy to use F# on Mac
https://docs.microsoft.com/en-us/archive/blogs/uk_faculty_connection/hands-on-introduction-for-getting-started-with-f-on-the-browser-using-tryfsharp-org
2020-03-29T01:11:25
CC-MAIN-2020-16
1585370493121.36
[]
docs.microsoft.com
and Marko Slyz of the OSG VO Forum - SURAgrid Resource and Application Discovery - Documentation of a proposed solution that use's OSG's BDII database. Participating Conference Bridge: 800-377-8846 Pin: 14421498
https://docs.uabgrid.uab.edu/sgw/index.php?title=SG_Call_March_5,_2012&oldid=225
2020-03-29T00:13:13
CC-MAIN-2020-16
1585370493121.36
[]
docs.uabgrid.uab.edu
Office Communications Server Launch Date For.
https://docs.microsoft.com/en-us/archive/blogs/tarpara/office-communications-server-launch-date
2020-03-29T00:44:15
CC-MAIN-2020-16
1585370493121.36
[]
docs.microsoft.com
Writing a Post-Authentication Handler¶ WSO2 Identity Server authentication framework facilitates you with the pluggable architecture of multiple inbound/outbound protocols as well as local and federated authenticators including a large number of extension points. The Post Authentication Handler is one such extension point which allows you to do a task upon successful authentication. Authentication to the system is only successful once the execution of post-authentication handlers is completed. The following handlers are examples of post-authentication handlers that are available by default. Application authorization handler - Once the user successfully authenticates to a service provider, this authorization handler will check whether the given user is entitled to login by evaluating a xacml policy. This happens during authorization. Missing mandatory claim handler - When mandatory claims are configured in a service provider under claim configurations, the user is prompted to fill in mandatory claim values if the values are not already known at the point of authentication. Consent handler / disclaimer dialog - This handler requests for either consent or disclaimer approval. Once the authentication steps are completed, the user is prompted for consent or disclaimer approval and the user is only able to proceed once it is accepted or approved. Writing a post-authentication handler¶ Writing a custom post-authentication handler is fairly simple. For a sample implementation consisting of the disclaimer dialog, see the sample post-authentication handler . Follow the instructions on the readme to try it out. Extend the org.wso2.carbon.identity.application.authentication.framework.handler.request.AbstractPostAuthnHandler to write a custom handler. This allows you to enable/disable or change the priority of this handler via configurations in identity. xml configuration file. The which you need to implement in order to implement a post-authentication handler. The post-authentication operations can be done within the implementation of this handler. The response can be coveyed on the interface using one of the following two methods. By returning a PostAuthnHandlerFlowStatus¶ This method of returning the response can have multiple flow statuses: As seen in the sample implementation, the disclaimer page is redirected and it stores the “consentPoppedUp” state so that next time the post handler continues upon the response, it can look for the disclaimer response and proceed. By throwing a PostAuthenticationFailedException¶ A post-authentication exception along with an error code and message can be thrown if you wish to break the login flow or do not need to continue the login flow. The error code will be displayed in an error page. For example, this exception can be used for failing a login attempt due to an authorization failure. Follow the steps given in the sample post-authentication handler readme to install this sample and get it working with WSO2 Identity Server. You can enable and disable this newly written handler using the configuration shown below in the <IS_HOME>/repository/conf/deployment.toml . You can also change the execution order using the order parameter. The handler with the lesser value for the order parameter will be executed first. [[event_listener]] id = "custom_post_auth_listener" type = "org.wso2.carbon.identity.core.handler.AbstractIdentityHandler" name = "org.wso2.carbon.identity.post.authn.handler.disclaimer.DisclaimerPostAuthenticationHandler" order = 899 Note Note: These configurations will not be effective if the getPriority and isEnable methods of your post authentication handler are overridden.
https://is.docs.wso2.com/en/next/develop/writing-a-post-authentication-handler/
2020-03-28T23:26:58
CC-MAIN-2020-16
1585370493121.36
[]
is.docs.wso2.com
GNAT Pro User's Guide The GNAT Pro Ada Compiler GNAT Pro Version 7.1.0w Document revision level 248664 Date: 2012/05/23", and with no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". -- Creating Unit Tests Using gnattest Other Utility Programs Code Coverage and Profiling Running and Debugging Ada Programs Platform-Specific Information for the Run-Time Libraries Example of Binder Output File Elaboration Order Handling in GNAT Conditional Compilation Inline Assembler Compatibility and Porting Guide Microsoft Windows Topics
http://docs.adacore.com/gnat-unw-docs/html/gnat_ugn.html
2012-05-25T17:49:16
crawl-003
crawl-003-010
[]
docs.adacore.com
Often,. Under> where you can supply any directory you like for the --home option. Lazy typists can just type a tilde ( ~); the install command will expand this to your home directory: python setup.py install --home=~ The --home option defines the installation base directory. Files are installed to the following directories under the installation base as follows:.)\Python directory on the current drive. The installation base is defined by the --prefix option; the --exec-prefix option is not supported under Windows. Files are installed as follows: Like Windows, Mac OS has no notion of home directories (or even of users), and a fairly simple standard Python installation. Thus, only a --prefix option is needed. It defines the installation base, and files are installed under it as follows: See section 2.1 for information on supplying command-line arguments to the setup script with MacPython. See About this document... for information on suggesting changes.See About this document... for information on suggesting changes.
http://docs.python.org/release/2.3.3/inst/alt-install-windows.html
2012-05-25T17:49:28
crawl-003
crawl-003-010
[]
docs.python.org
TTable encapsulates a database table. TTable = class(TDBDataSet); class TTable : public TDBDataSet; Use TTable to access data in a single database table using the Borland Database Engine (BDE). TTable provides direct access to every record and field in an underlying database table, whether it is from Paradox, dBASE, Access, FoxPro, an ODBC-compliant database, or an SQL database on a remote server, such as InterBase, Oracle, Sybase, MS-SQL Server, Informix, or DB2. A table component can also work with a subset of records within a database table using ranges and filters. At design time, create, delete, update, or rename the database table connected to a TTable by right-clicking on the TTable and using the pop-up menu.
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/DBTables_TTable.html
2012-05-25T17:45:40
crawl-003
crawl-003-010
[]
docs.embarcadero.com
TStoredProc encapsulates a stored procedure in a BDE-based application. TStoredProc = class(TDBDataSet); class TStoredProc : public TDBDataSet; Use a TStoredProc object in BDE-based applications to use a stored procedure on a database server. A stored procedure is a grouped set of statements, stored as part of a database server's metadata (just like tables, indexes, and domains), that performs a frequently repeated, database-related task on the server and passes results to the client. TStoredProc reuses the Params property to hold the results returned by a stored procedure. Params is an array of values. Depending on server implementation, a stored procedure can return either a single set of values, or a result set similar to the result set returned by a query.
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/DBTables_TStoredProc.html
2012-05-25T17:45:21
crawl-003
crawl-003-010
[]
docs.embarcadero.com
. 그림 3.5. 두 개의 도킹 유형. 탭 메뉴의 세부 메뉴 : At the top of each Tab menu is an entry that opens into is a toggle. If it is checked, then an Image Menu is shown at the top of the dock (see 그림 3.8. “A dock with an Image Menu highlighted.”). It is not available for dialogs docked below the Toolbox. This option is interesting only if you have several open images on your screen.).
http://docs.gimp.org/ko/gimp-concepts-docks.html
2012-05-25T18:52:45
crawl-003
crawl-003-010
[]
docs.gimp.org
Sends all updated, inserted, and deleted records from the client dataset to the provider for writing to the database. function ApplyUpdates(MaxErrors: Integer): Integer; virtual; virtual __fastcall int ApplyUpdates(int MaxErrors); 1.Generates a BeforeApplyUpdates event. (This event may not be public on some TCustomClientDataSet descendants.) 2.Calls the provider to apply the updates in the Delta property and receives any records returned by the provider because they generated errors when it attempted to apply them to the database. 3.Generates an AfterApplyUpdates event. (This event may not be public on some TCustomClientDataSet descendants.) 4.Calls the client dataset's Reconcile method to reconcile any records that are returned in step 2.;
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/DBClient_TCustomClientDataSet_ApplyUpdates.html
2012-05-25T16:36:09
crawl-003
crawl-003-010
[]
docs.embarcadero.com
Figuur 16.258. An example of Make Seamless. Original
http://docs.gimp.org/nl/plug-in-make-seamless.html
2012-05-25T18:10:37
crawl-003
crawl-003-010
[]
docs.gimp.org
Figuur 16.99. The same image, before and after applying Lighting filter Original image Filter “Lighting Effects” applied This filter simulates the effect you get when you light up a wall with a spot. It doesn't produce any drop shadows and, of course, doesn't reveal any new details in dark zones. When Interactive is checked, parameter setting results are interactively displayed in preview without modifying the image until you click on OK button. If Interactive is not checked, changes are displayed in preview only when you click on the button. This option is useful with a slow computer. Any other options are organized in tabs: Makes destination image transparent when bumpmap height is zero (height is zero in black areas of the bumpmapped image). Creates a new image when applying filter. For quick CPU... You can specify the distance of the light source from the center of the image with this slider. The range of values is from 0.0 to 2.0. In this tab, you can set light parameters. With Light 1 ... Light 6 you can create six light sources and work on each of them separately. The filter provides several light types in a drop-down list: Displays a blue point at center of preview. You can click and drag it to move light all over the preview. The blue point is linked to preview center by a line which indicates the direction of light. This deletes the light source (light may persist...). When you click on the color swatch, you bring a dialog up where you can select the light source color. With this option, you can set light intensity. Determines the light point position according to three coordinates: X coordinate for horizontal position, Y for vertical position, Z for source distance (the light darkens when distance increases). Values are from -1 to +1. This option should allow you to fix the light direction in its three X, Y and Z coordinates. With this option, you can decide whether all light sources must appear in the Preview, or only the source you are working on. You can save your settings with theand get them back later with the . These options don't concern light itself, but light reflected by objects. Small spheres, on both ends of the input boxes, represent the action of every option, from its minimum (on the left) to its maximum (on the right). Help pop ups are more useful. With these option, you can set the amount of original color to show where no direct light falls. With this option, you can set the intensity of original color when hit directly by a light source. This option controls how intense the highlight will be. With this option, higher values make the highlight more focused. When this option is checked, surfaces look metallic. In this tab, you can set filter options that give relief to the image. See Bump mapping is only one (very effective) way of simulating surface irregularities which are not actually contained in the geometry of the model. . With this option, bright parts of the image will appear raised and dark parts will appear depressed. The aspect depends on the light source position. You have to select there the grey-scale image that will act as a bump map. See Bump Map plug-in for additional explanations. This option defines the method that will be used when applying the bump map; that is, the bump height is a function of the specified curve. Four curve types are available: Linear, Logarithmic, Sinusoidal and Spherical. This is the maximum height of bumps. When you check this box, the following option is enabled: You have to select there a RGB image, present on your screen. Please note that for this option to work you should load another image with GIMP before using it. An example can be found at [BUDIG01].
http://docs.gimp.org/nl/plug-in-lighting.html
2012-05-25T18:10:31
crawl-003
crawl-003-010
[]
docs.gimp.org
Figuur 16.202. From left to right: original image, map, resulting image Map has three stripes: a solid black area, a vertical gradient area, a solid white area. One can see, on the resulting image, that image zones corresponding to solid areas of the map, are not blurred. Only the image zone corresponding to the gradient area of the map is blurred. “LIC” stands for Line Integral Convolution, a mathematical method. The plug-in author uses mathematical terms to name his options... This filter is used to apply a directional blur to an image, or to create textures. It could be called “Astigmatism” as it blurs certain directions in the image. It uses a blur map. Unlike other maps, this filter doesn't use grey levels of this blur map. Filter takes in account only gradient direction(s). Image pixels corresponding to solid areas of the map are ignored. By selecting Hue, Saturation or Brightness (=Value), filter will use this channel to treat image. The “Derivative” option reverses “Gradient ” direction: Figuur 16.204. Derivative option example Using a square gradient map, Effect operator is on “Gradient” on the left, on “Derivative” on the right: what was sharp is blurred and conversely. You can use two types of convolution. That's the first parameter you have to set: White noise is an acoustics name. It's a noise where all frequencies have the same amplitude. Here, this option is used to create patterns. The source image will be blurred. That's the map for blur or pattern direction. This map must have the same dimensions as the original image. It must be preferably a grayscale image. It must be present on your screen when you call filter so that you can choose it in the drop-list. Figuur 16.205. Blurring with vertical gradient map With a vertical gradient map, vertical lines are blurred. Figuur 16.206. Blurring with a square gradient map The gradient map is divided into four gradient triangles: each of them has its own gradient direction. In every area of the image corresponding to gradient triangles, only lines with the same direction as gradient are blurred. Figuur 16.207. Texture example The “With white noise” option is checked. Others are default. With a vertical gradient map, texture “fibres” are going horizontally. When applying blur, this option controls how important blur is. When creating a texture, it controls how rough texture is: low values result in smooth surface; high values in rough surface. Figuur 16.208. Action example of Filter Length on blur On the left: a vertical line, one pixel wide (zoom 800%). On the right: the same line, after applying a vertical blur with a Filter Length to 3. You can see that blur width is 6 pixels, 3 pixels on both sides. Figuur 16.209. Filter Length example on texture On the left: a texture with Filter Length=3. On the right, the same texture with Filter Length=24. This options controls the amount and size of White Noise. Low values produce finely grained surfaces. High values produce coarse-grained textures. This options controls the influence of gradient map on texture. Figuur 16.211. Action example of Integration Steps on texture On the left: Integration Steps = 2. On the right: Integration Steps = 4. Both values determine a range controlling texture contrast: shrunk range results in high contrast and enlarged range results in low contrast. Figuur 16.212. Action example of min/max values on texture Minimum value = -4.0. Maximum value = 5.0.
http://docs.gimp.org/nl/plug-in-lic.html
2012-05-25T18:10:26
crawl-003
crawl-003-010
[]
docs.gimp.org
With this filter, you can create fractals and multicolored pictures verging to chaos. Unlike the IFS Fractal filter, with which you can fix the fractal structure precisely, this filter lets you perform fractals simply. The Fractal Explorer window contains two panes: on the left there is the Preview pane with a Zoom feature, on the right you find the main options organized in tabs: Parameters, Colors, and Fractals. Uncheck the Realtime preview only if your computer is slow. In this case, you can update preview by clicking on the button. By clicking-dragging mouse pointer on preview, you can draw a rectangle delimiting an area which will be zoomed. You have there some options to zoom in or zoom out. Theallows you to return to previous state, before zooming. The allows you to reestablish the zoom you had undone, without having to re-create it with the or buttons. This tab contains some options to set fractal calculation and select fractal type. Here, you have sliders and input boxes to set fractal spreading, repetition and aspect. You can set fractal spreading between a minimum and a maximum, in the horizontal and/or vertical previously saved fractal, or return to the initial state before all modifications. You can choose what fractal type will be, for instance Mandelbrot, Julia, Barnsley or Sierpinski. This tab contains options for fractal color setting. Number of Colors Density” and “Color Function” options. Fractal colors don't depend on colors of the original image (you can use a white image for fractals as well). If this option is checked, the band effect is smoothed. Color density These three sliders and their text-boxes let you set the color intensity in the three color channels. Values vary from 0.0 to 1.0. Color Function For the Red, Green and Blue color channels, you can select how color will be treated: Color variations will be modulated according to the sine function. Color densities will vary according to cosine function. Color densities will vary linearly. If you check this option, function values will be inverted. Color Mode These options allow you to set where color values must be taken from. Color values will be taken from the Color Density options. Used colors will be that of active gradient. You should be able to select another gradient by clicking on the gradient source button. This tab contains a big list of fractals with their parameters, that you can use as a model: only click on the wanted one. Theallows you to update the list if you have saved your work, without needing to re-start GIMP. You can delete the selected fractal from the list by clicking on the .
http://docs.gimp.org/nl/plug-in-fractalexplorer.html
2012-05-25T20:31:36
crawl-003
crawl-003-010
[]
docs.gimp.org
This filter transforms the image with the Mandelbrot fractal: it maps the image to the fractal. Mandelbrot parameters These parameters are similar to X/YMIN, X/YMAX and ITER parameters of the Fractal Explorer filter. They allow you to vary fractal spreading and detail depth. Mapping image to fractal may reveal empty areas. You can select to fill them with Black, White, Transparency or make what disappears on one side reappear on the opposite side with Wrap option.
http://docs.gimp.org/nl/plug-in-fractal-trace.html
2012-05-25T20:31:31
crawl-003
crawl-003-010
[]
docs.gimp.org
Film filter lets you merge several pictures into a photographic film drawing. Film already opened in GIMP. Shows the pictures chosen to be merged. This button allows the user to put an available image in the “On film” section. This button allows to bring a picture from “On film ” to “Available images”. After that, the picture will not be used anymore in the resulting document. Defines the height of each pictures in the resulting image. Defines the space between the pictures as they will be inserted in the future image. Defines the hole position from image border. Defines the width of the holes in the resulting image. Defines the height of the holes in the resulting image. Defines the space between holes Defines the height of the index number, proportionally to the height of the picture.
http://docs.gimp.org/nl/plug-in-film.html
2012-05-25T20:31:23
crawl-003
crawl-003-010
[]
docs.gimp.org
This filter produces an engraving effect: the image is turned black and white and some horizontal lines of varying height are drawn depending on the value of underlying pixels. The resulting effect reminds of engravings found in coins and old book illustrations. The result of your settings will appear in the Preview without affecting the image until you click on. This option specifies the height of the engraving lines. The value goes from 2 to 16. If this option is enabled thin lines are not drawn on contiguous color areas. See the figure below for an example of this option result. Figuur 16.43. Example result of Limit line width option Original image Limit line width option enabled Limit line width option disabled
http://docs.gimp.org/nl/plug-in-engrave.html
2012-05-25T20:31:14
crawl-003
crawl-003-010
[]
docs.gimp.org
The: The WSDL for a given version of the Mechanical Turk Service API can be found at a URL that corresponds to the API version. For example, the WSDL for the 2007-06-21 version of the API can be found here: The XML Schema for the messages of a given version of the Mechanical Turk Service API can be found at a URL that corresponds to the API version. For example, the XML Schema for the 2007-06-21 version of the API can be found here: The Mechanical Turk Service has several parameters and return values that contain XML data. The XML content must validate against the appropriate XML schema. For more information, see QuestionForm, QuestionFormAnswers, and AnswerKey. 2007-06-21.=2007-06-21 &Operation=GetHIT &HITId=123RVWYBAZW00EXAMPLE Older AWS services supported requests that did not specify an API version. This behavior is still supported for legacy reasons, but its use is discouraged. When the.
http://docs.amazonwebservices.com/AWSMechTurk/2007-06-21/AWSMechanicalTurkRequester/ApiReference_WsdlLocationArticle.html
2012-05-26T01:57:58
crawl-003
crawl-003-010
[]
docs.amazonwebservices.com
Converts a string from the ANSI character set to the character set associated with a given locale. Call AnsiToNativeBuf to convert a null-terminated string in the ANSI character set (used internally by Windows) to the character set associated with the database locale specified by the Locale parameter. The resulting string is copied into the buffer pointed to by the Dest parameter. Use the Len parameter to specify the size of this buffer. Use AnsiToNativeBuf to convert strings typed by the user into the character set used by a database table.
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/DBTables_AnsiToNativeBuf.html
2012-05-25T17:24:18
crawl-003
crawl-003-010
[]
docs.embarcadero.com
Specifies the position of the up-down control relative to its companion control. property AlignButton: TUDAlignButton; __property TUDAlignButton AlignButton; Set AlignButton to indicate where to position the up-down control. The up-down control appears attached to the outer edge of the control specified by the Associate property. These are the possible values: The up-down control resizes itself to the height of the companion control.
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/ComCtrls_TUpDown_AlignButton.html
2012-05-25T15:03:16
crawl-003
crawl-003-010
[]
docs.embarcadero.com
If. The trade-off is that the less space an image takes, the more detail from the original image you lose. You should also be aware that repeated saving in the JPG format causes more and more image degradation. To. Abschnitt 4.7, „Den Modus ändern“., a Quality setting of 10 produces a very poor quality image that uses very little disk space. The figure below shows a more reasonable image. A quality of 75 produces a reasonable image using much less disk space, which will, in turn, load much faster on a web page. Although the image is somewhat degraded, it is acceptable for the intended purpose. Finally, here is a comparison of the same picture with varying degrees of compression: Abbildung 3.24. Beispiele für starke JPEG-Kompression Quality: 10; Size: 3.4 KiloBytes Quality: 40; Size: 9.3 KiloBytes Abbildung 3.25. Beispiele für moderate JPEG-Kompression Quality: 70; Size: 15.2 KiloBytes Quality: 100; Size: 72.6 KiloBytes
http://docs.gimp.org/de//gimp-tutorial-quickie-jpeg.html
2012-05-25T15:20:26
crawl-003
crawl-003-010
[]
docs.gimp.org
As with anything else, images come in different kinds and serve different purposes. Sometimes, a small size is important (for web sites) and at other times, retaining a high color depth (e.g., images, and able to display millions of colors. This is also the mode for most of your image work including scaling, cropping, and even flipping. In RGB mode, each pixel consists of three different components: R->Red, G->Green, B->Blue. Each of these in turn can have an intensity value of 0-255. What you see at every pixel is an additive combination of these three components. Indexed- This is the mode usually used when file size is of concern, or when you are working with images with few colors. It involves using a fixed number of colors (256 or less) for the entire image to represent colors. By default, when you change an image to a palleted image, GIMP generates an „optimum palette“ to best represent your image. As you might expect, since the information needed to represent the color at each pixel is less, the file size is smaller. However, sometimes, there are options in the various menus that are grayed-out for no apparent reason. This usually means that the filter or option cannot be applied when your image is in its current mode. Changing the mode to RGB, as outlined above, should solve this issue. If RGB mode doesn't work either, perhaps the option you're trying requires your layer to have the ability to be transparent. This can be done just as easily via→ → . Grayscale- Grayscale images have only shades of gray. This mode has some specific uses and takes less space on the hard drive in some formats, but is not recommended for general use as reading it is not supported by many applications. There is no need to convert an image to a specific mode before saving it in your favorite format, as GIMP is smart enough to properly export the image.
http://docs.gimp.org/de//gimp-tutorial-quickie-change-mode.html
2012-05-25T15:20:20
crawl-003
crawl-003-010
[]
docs.gimp.org
This filter adds “cow spots” to the active layer alpha channel. The horizontal (X) and vertical (Y) spots density will be used by the Solid Noise filter as X Size and Y Size options. So these values range from 1 to 16, with high values resulting in many spots in the respective dimension, low values resulting in few spots. 그림 16.385. “Spots density” examples Maximum X density, minimum Y density Maximum Y density, minimum X density This is the color used to fill the “Background” layer; it defaults to white. When you click on the color button, you may choose any other color in the color selector dialog. The filter fills the alpha channel with Solid Noise: ... and maximizes the Contrast: Besides, the filter adds a Blur layer as a light gray shadow and uses this layer as a Bump Map. Finally a (by default) white “Background” layer is added below. So the filter will end up with these layers:[19] [19] If the active layer is not the top layer, it might happen that the filter messes up the layers. Then you will have to raise the active layer.
http://docs.gimp.org/ko/script-fu-bovinated-logo-alpha.html
2012-05-25T20:11:06
crawl-003
crawl-003-010
[]
docs.gimp.org
These filters add a gradient effect to the alpha channel of active layer as well as a drop shadow and a background layer. The “Basic II” also adds a highlight layer. The filters are derived from the “Basic I” and “Basic II” logo scripts (see → → ), which draw a text with the filter effect, e.g. The “Basic I” logo script. This color is used to fill the background layer created by the filter. It defaults to white. When you click on the color swatch button, a color selector pops up where you can select any other color. The name of this option refers to the text color of the logo scripts that were mentioned above. Here this color — by default blue (6,6,206) for “Basic I” and red (206,6,50) for “Basic II” — sets the basic color of the gradient effect: this is the color the alpha channel will be filled with before the gradient effect will be applied. You can reproduce the gradient effect manually by using the Blend tool with the following options: Mode: Multiply, Gradient: FG to BG (RGB), where FG is white and BG is black, Offset: 20, Shape: Radial, Dithering: checked.
http://docs.gimp.org/ko/script-fu-basic-logo-alpha.html
2012-05-25T20:10:53
crawl-003
crawl-003-010
[]
docs.gimp.org
This filter just does what its name says: it adds a border to the image. You can specify the thickness of the border as well as the color. The four sides of the border are colored in different shades, so the image area will appear raised. The image will be enlarged by the border size, it won't be painted over. Here you can select the thickness of the added border, in pixels. X size (left an right) and Y size (top and bottom) may be different. Maximum is 250 pixels. Clicking on this button brings up the color selector dialog that allows you to choose an “average” border color (see below, Delta value on color). This option makes the border sides to be colored in different shades and thus makes the image to appear raised. The actual color of the respective border side is computed for every color component red, green, and blue[15] from the “average” Border color as follows (resulting values less than 0 are set to 0, values greater than 255 are set to 255): Top shade = Border color + Delta Right shade = Border color - ½ Delta Bottom shade = Border color - Delta Left shade = Border color + ½ Delta 그림 16.221. Delta examples “Add Border” filter applied with Delta value 25, then with 75, 125, 175, and 225. Example: the default color is blue (38,31,207), default delta is 25. So the shades of the borders are: top: (38,31,207) + (25,25,25) = (63,56,232), right: (38,31,207) + (-13,-13,-13) = (25,18,194), etc. [15] See image types or YUV .
http://docs.gimp.org/ko/script-fu-addborder.html
2012-05-25T20:10:44
crawl-003
crawl-003-010
[]
docs.gimp.org
Addalias.exe Text File Example Create a text file named test.txt that contains the following lines. test1=me test2=test1 test3=test2 -h virtual001 test1=me test3=me -m test2=him -d test3 At the MS-DOS prompt, enter: addalias < test.txt The < symbol tells addalias to use test.txt as output. You then get the following messages: current host is wks003.augusta.ipswitch.com added [wks003.augusta.ipswitch.com ] test1 -> me added [wks003.augusta.ipswitch.com ] test2 -> test1 added [wks003.augusta.ipswitch.com ] test3 -> test2 current host is virtual001 alias exists [virtual001] test1 -> someone added [virtual001] test3 -> me modified [virtual001] test2 -> him deleted [virtual001] test3 -> me
http://docs.ipswitch.com/_Messaging/IMailServer/v10/Help/Admin/aliases_add_w_txt_file_ex.htm
2012-05-25T23:11:10
crawl-003
crawl-003-010
[]
docs.ipswitch.com
from the image menu, or use the keyboard shortcut, Ctrl+Y. It is often helpful to judge the effect of an action by repeatedly undoing and redoing it. This is usually very quick, and does not consume any extra resources or alter the undo history, so there is never any harm in it. If you often find yourself undoing and redoing many steps at a time, it may be more convenient to work with the Undo History dialog, a dockable dialog that shows you a small sketch of each point in the Undo History, allowing you to go back or forward to that point by clicking. Undo is performed on an image-specific basis: the "Undo History" is one of the components of an image. GIMP allocates a certain amount of memory to each image for this purpose. You can customize your Preferences to increase or decrease the amount, using the Environment page of the Preferences dialog. There are two important variables: the minimal number of undo levels, which GIMP will maintain regardless of how much memory they consume, and the maximum undo memory, beyond which GIMP will begin to delete the oldest items from the Undo History. is no way to implement Undo except by memorizing the entire contents of the affected layer before and after the operation. You might only be able to perform a few such operations before they drop out of the Undo History. Most actions that alter an image can be undone. Actions that do not alter the image generally cannot be undone. Examples include saving the image to a file, duplicating the image, copying part of the image to the clipboard, etc. It also includes most actions that affect the image display without altering the underlying image data. The most important example is zooming. There are, however, exceptions: toggling QuickMask on or off can be undone, even though it does not alter the image data. There are a few important actions that do alter an image but cannot be undone: The Undo History is a component of the image, so when the image is closed and all of its resources are freed, the Undo asks you to confirm that you really want to revert the image. Some tools require you to perform a complex series of manipulations before they take effect, but only allow you to undo the whole thing rather than the individual elements. For example, the Intelligent Scissors require you to create a closed path by clicking at multiple points in the image, and then clicking inside the path to create a selection. You cannot undo the individual clicks: undoing after you are finished takes you all the way back to the starting point. For another example, when you are working with the Text tool, you cannot undo individual letters, font changes, etc.: undoing after you are finished removes the newly created text layer. Filters, and other actions performed by plugins or scripts, can be undone just like actions implemented by the GIMP core, but this requires them to make correct use of GIMP's Undo functions. If the code is not correct, a plugin can potentially corrupt the Undo History, so that not only the plugin but also previous actions can no longer properly be undone. The plugins and scripts distributed with GIMP are all believed to be set up correctly, but obviously no guarantees can be given for plugins you obtain from other sources. Also, even if the code is correct, canceling a plugin while it is running may corrupt the Undo History, so it is best to avoid this unless you have accidentally done something whose consequences are going to be very harmful.
http://docs.gimp.org/sv/gimp-concepts-undo.html
2012-05-25T21:19:38
crawl-003
crawl-003-010
[]
docs.gimp.org
The Copy Visible command is similar to the Copy command. However, it does not just copy the contents of the current layer; it copies the contents of the visible layers (or the selection of the visible layers), that is, the ones that are marked with an ”eye”. Please note that the information about the layers is lost when the image data is put in the clipboard. When you later paste the clipboard contents, there is only one layer, which is the fusion of all the marked layers. You can access this command from the image menubar through Edit → Copy Visible.
http://docs.gimp.org/sv/gimp-edit-copy-visible.html
2012-05-25T21:19:42
crawl-003
crawl-003-010
[]
docs.gimp.org
When first run, GIMP performs a series of steps to configure options and directories. The configuration process creates a subdirectory in your home directory called .gimp-2.6. All of the configuration information is stored in this directory. If you remove or rename the directory, GIMP will repeat the initial configuration process, creating a new .gimp-2.6 directory. Use this capability to explore different configuration options without destroying your existing installation, or to recover if your configuration files are damaged. Just a couple of suggestions before you start, though: First, GIMP provides tips you can read at any time using the menu command Getting Unstuck may help you out. Happy Gimping!→ . The tips provide information that is considered useful, but not easy to learn by experimenting; so they are worth reading. Please read the tips when you have the time. Second, if at some point you are trying to do something, and GIMP seems to have suddenly stopped functioning, the section
http://docs.gimp.org/sv/gimp-concepts-setup.html
2012-05-25T21:19:34
crawl-003
crawl-003-010
[]
docs.gimp.org
Event ID 30018 — RRAS DHCP Relay Agent Request and Response Operations Applies To: Windows Server 2008 R2 The DHCP Relay Agent is a Bootstrap Protocol (BOOTP) relay agent that relays Dynamic Host Configuration Protocol (DHCP) messages between DHCP clients and DHCP servers on different IP networks. The DHCP Relay Agent relays DHCP/BOOTP requests and responses from different networks. Event Details Resolve Reconfigure DHCP Relay Agent, check memory status, or restart Routing and Remote Access service Possible resolutions: - Remove and reinstall the DHCP Relay Agent. For more information, see the "Remove and reinstall the DHCP Relay Agent" section. - Check that the hop count field is correctly configured. For more information, see the "Configure DHCP Relay Agent hop count". Remove and reinstall the Request and Response Operations Routing and Remote Access Service Infrastructure
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd315791(v=ws.10)
2018-02-18T00:23:39
CC-MAIN-2018-09
1518891808539.63
[array(['images/ee406008.red%28ws.10%29.jpg', None], dtype=object)]
docs.microsoft.com
Turn.Troubleshoot a report on a homepageCreate a widget that displays a ServiceNow UI pageRelated ConceptsRestrict content additions to a homepageTop Searches homepageCustom homepage widgetsHomepage customizationHomepage and content page layoutsHomepage cachingHomepage splash pageManage a service catalog homepageHomepage administrationRelated ReferenceHomepage user preferences
https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/homepage_administration/task/t_TurnOnHomepageRenderTime.html
2018-02-17T23:41:12
CC-MAIN-2018-09
1518891808539.63
[]
docs.servicenow.com
Contents Now Platform User Interface Previous Topic Next Topic Generate an encoded query string through a filter Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents If you are in split mode in List v3, right-click the blue filter text in the left pane. Copy the query to your system clipboard. Use the query string to Navigate to a record or module using a URL or an advanced reference qualifier. When you use the CONTAINS operator on a list filter, the system translates the filter to a LIKE query. For example, if you filter for active records with numbers that contain 123, the URL is. Related tasksNavigate to a record or module using a URLRelated conceptsReference qualifiers On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/madrid-platform-user-interface/page/use/using-lists/task/t_GenEncodQueryStringFilter.html
2019-10-14T03:55:47
CC-MAIN-2019-43
1570986649035.4
[]
docs.servicenow.com
The number of searches that the Splunk platform runs at a time to generate summary files for data models has changed. When you upgrade to Splunk Enterprise 6. Results for unaccelerated data models now match results from accelerated data models. You must now enable access to Splunk Enterprise debugging endpoints When you upgrade to Splunk Enterprise 6.. The Splunk Web visualizations editor changes take precedence over existing 'rangemap' configurations for single-value. as of version 6.2 of Splunk Enterprise. You can only modify the footer of the login page after an upgrade. Windows-specific changes The Windows host monitoring input no longer monitors application state Beginning with Splunk Enterprise v6.3, the Windows version of Splunk Enterprise." This feature was introduced in Splunk Enterprise 6.2, but we retain it here for those who upgrade to 6.3 from earlier versions.. This change was introduced in Splunk Enterprise 6.2, but we retain it here for those who upgrade to 6.3 from earlier versions.: 6.3.0, 6.3.1, 6.3.2, 6.3.3 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/Splunk/6.3.0/Installation/Aboutupgradingto6.3READTHISFIRST
2019-10-14T03:46:07
CC-MAIN-2019-43
1570986649035.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Create an overlay chart and explore visualization options In this example, you create a chart that overlays two data series as lines over three data series as columns. The overlay chart will show the Actions and the Conversion Rates. You will use the stats command to count the user actions. The eval command is used to calculate the conversion rates for those actions. For example, how often someone who viewed a product also added the product to their cart. Prerequisite This example uses the productName field from the Enabling field lookups section of this tutorial. You must complete all of those steps before continuing with this section. Steps -. - The next few steps reformat the chart visualization to overlay the two data series for the conversion rates, onto the three data series for the actions. - Click the Visualization tab. - This is the same chart in Create a basic chart, with two additional data series, viewsToPurchase and cartToPurchase. - Click Format and X-Axis. - Look at the numbers on the Y-Axis. They range from 1000 to 3000. Click Format and Y-Axis. - To make the chart easier to read, add a label and specify different number intervals on the Y-Axis. - Look at the legend. It shows that some of the columns represent actions and some columns represent conversion rates. - To fix this issue, click Format and Chart Overlay. - To separate the actions (views, adds to cart, and purchases) from the conversion rates (viewToPurchases and cartToPurchases), the second Y-Axis. The label and values for the line series appear on this axis. - In the Save Report As dialog box, for Title type Comparison of Actions and Conversion Rates by Product. - For Description, type The number of times a product is viewed, added to cart, and purchased and the rates of purchases from these actions. Next step Create a report from a custom chart See also stats command in the Search Reference eval command in the Search Reference Chart overview in Dashboards and Visualizations!
https://docs.splunk.com/Documentation/Splunk/7.0.3/SearchTutorial/Chartoverlays
2019-10-14T03:54:44
CC-MAIN-2019-43
1570986649035.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Follow this procedure if you're currently using SVN MultiSite 4.2. If you are using a different version of SVN MultiSite you should review this instructions in the chapter 5. Upgrading Subversion MultiSite. in a read-only state so either complete the upgrade out of development hours and ensure that your developers are aware of the brief Subversion outage. access-control Create an archive of the svn-replicator directory. 2 into the utils directory ( <:INSTALL-DIR>/svn-replicator/utils/) which contains three conversions scripts. You will need to apply the script that corresponds with the version of SVN MultiSite from which you are upgrading. i.e. perl convertac42-42-latest.pl backup.xml localOR perl convertac42-42-latest.pl backup.xml ldap Running the conversion script will create a new file in the utils folder. So you backed up your data, installed the latest version MultiSite, if it was neccessary, undated-convert-access-control.xml file,. Check that your settings such as scheduled DN rotation and consistency checks are still setup to your requirements. Complete some test commits and check the dashboard to ensure that replication is working. This product is protected by copyright and distributed under licenses restricting copying, distribution and decompilation.
https://docs.wandisco.com/svn/ms/v4.2/upgrade42.html
2019-10-14T02:59:41
CC-MAIN-2019-43
1570986649035.4
[]
docs.wandisco.com
Bing Maps SDK for Android and iOS Welcome to the Bing Maps SDK for Android and iOS! The Bing Maps SDK for Android and Bing Maps SDK for iOS are libraries for building mapping applications for Android and iOS. The SDKs feature a native map control and an accompanying map services API set. The map control is powered by a full vector 3D map engine with a number of standard mapping capabilities including displaying icons, drawing polylines and polygons, and overlaying texture sources. The engine brings in the same 3D Native support you know from the Xaml Map Control in Windows 10, including worldwide 3D elevation data (via our Digital Elevation Model). The map control shares much in common with the Map Control in Windows 10, so many of the concepts from the Windows 10 control apply as well. For more information, please see the Windows UWP Map Control documentation. Bing Maps Key You must obtain a Bing Maps Key from Bing Maps Dev Center in order to use the Bing Maps SDK for Android and iOS. Your app must be authenticated to use the map controls and map services. To authenticate your app, you must specify a Bing Maps key through the API. Visit the Bing Maps Dev Center Help page for detailed steps on obtaining one. License and Terms of Service By using the Bing Maps SDK for Android and iOS you accept the Bing Maps Platform APIs Terms of Use. Please review our TOU carefully. It describes in detail what you can and can't do with the SDK.
https://docs.microsoft.com/en-us/bingmaps/sdk-native/
2019-10-14T03:46:01
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
Write a plug-in The process of writing, registering, and debugging a plug-in is: - Create a .NET Framework Class library project in Visual Studio - Add the Microsoft.CrmSdk.CoreAssembliesNuGet package to the project - Implement the IPlugin interface on classes that will be registered as steps. - Add your code to the Execute method required by the interface - Get references to services you need - Add your business logic - Sign & build the assembly - Test the assembly - Register the assembly in a test environment - Add your registered assembly and steps to an unmanaged solution - Test the behavior of the assembly - Verify expected trace logs are written - Debug the assembly as needed Content in this topic discusses the steps in bold above and supports the following tutorials: Assembly constraints When creating assemblies keep the following constraints in mind. Use .NET Framework 4.6.2 Plug-ins and custom workflow assemblies should use .NET Framework 4.6.2. While assemblies built using later versions should generally work, if they use any features introduced after 4.6.2 an error will occur. Optimize assembly development The assembly should include multiple plug-in classes (or types), but can be no larger than 16 MB. It is recommended to consolidate plug-ins and workflow assemblies into a single assembly as long as the size remains below 16 MB. More information: Optimize assembly development Assemblies must be signed All assemblies must be signed before they can be registered. This can be done using Visual Studio Signing tab on the project or by using Sn.exe (Strong Name Tool). Do not depend on .NET assemblies that interact with low-level Windows APIs Plug-in assemblies must contain all the necessary logic within the respective dll. Plugins may reference some core .Net assemblies. However, we do not support dependencies on .Net assemblies that interact with low-level Windows APIs, such as the graphics design interface. IPlugin interface A plug-in is a class within an assembly created using a .NET Framework Class library project using .NET Framework 4.6.2 in Visual Studio. Each class in the project that will be registered as a step must implement the IPlugin interface which requires the Execute method. Important When implementing IPlugin, the class should be stateless. This is because the platform caches a class instance and re-uses it for performance reasons. A simple way of thinking about this is that you shouldn't add any properties or methods to the class and everything should be included within the Execute method. There are some exceptions to this. For example you can have a property that represents a constant and you can have methods that represent functions that are called from the Execute method. The important thing is that you never store any service instance or context data as a property in your class. These change with every invocation and you don't want that data to be cached and applied to subsequent invocations. More information: Develop IPlugin implementations as stateless The Execute method accepts a single IServiceProvider parameter. The IServiceProvider has a single method: GetService. You will use this method to get several different types of services that you can use in your code. More information: Services you can use in your code Pass configuration data to your plug-in When you register a plug-in you have the ability to pass configuration data to it. Configuration data allows you to define how a specific instance of a registered plug-in should behave. This information is passed as string data to parameters in the constructor of your class. There are two parameters: unsecure and secure. Use the first unsecure parameter for data that you don't mind if people can see. Use the second secure parameter for sensitive data. The following code shows the three possible signatures for a plug-in class named SamplePlugin. public SamplePlugin() public SamplePlugin(string unsecure) public SamplePlugin(string unsecure, string secure) The secure configuration data is stored in a separate entity which only system administrators have privileges to read. More information: Register plug-in step > Set configuration data Services you can use in your code Within your plug-in you will need to: - Access the contextual information about what is happening in the event your plug-in was registered to handle. This is called the execution context. - Access the Organization web service so you can write code to query data, work with entity records, use messages to perform operations. - Write messages to the Tracing service so you can evaluate how your code is executing. The IServiceProvider.GetService method provides you with a way to access these services as needed. To get an instance of the service you invoke the GetService method passing the type of service. Note When you write a plug-in that uses Azure Service Bus integration, you will use a notification service that implements the IServiceEndpointNotificationService interface, but this will not be described here. More information: Azure Integration Organization Service To work with data within a plug-in you use the organization service. Do not try to use the Web API. Plug-ins are optimized to use the .NET SDK assemblies. To gain access to a svc variable that implements the IOrganizationService interface, use the following code: // svc = serviceFactory.CreateOrganizationService(context.UserId); The context.UserId variable used with IOrganizationServiceFactory.CreateOrganizationService(Nullable<Guid>) comes from execution context the UserId property, so this is call is done after the execution context has been accessed. More information: - Entity Operations - Query data - Create entities - Retrieve an entity - Update and Delete entities - Associate and disassociate entities - Use messages - Late-bound and Early-bound programming You can use early bound types within a plug-in. Just include the generated types file in your project. But you should be aware that all entity types that are provided by the execution context input parameters will be late-bound types. You will need to convert them to early bound types. For example you can do the following when you know the Target parameter represents an account entity. Account acct = context.InputParameters["Target"].ToEntity<Account>(); But you should never try to set the value using an early bound type. Don't try to do this: context.InputParameters["Target"] = new Account() { Name = "MyAccount" }; // WRONG: Do not do this. This will cause an SerializationException to occur. Use the tracing service Use the tracing service to write messages to the PluginTraceLog Entity so that you can review the logs to understand what occurred when the plug-in ran. To write to the tracelog, you need to get an instance of the tracing service. The following code shows how to get an instance of the tracing service using the IServiceProvider.GetService method. // Obtain the tracing service ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); To write to the trace, use the ITracingService.Trace method. tracingService.Trace("Write {0} {1}.", "your", "message"); More information: Use Tracing, Logging and tracing. Performance considerations When you add the business logic for your plug-in you need to be very aware of the impact they will have on overall performance. Important The business logic in plug-ins registered for synchronous steps should take no more than 2 seconds to complete. Time and resource constraints There is a 2-minute time limit for message operations to complete. There are also limitations on the amount of CPU and memory resources that can be used by extensions. If the limits are exceeded an exception is thrown and the operation will be cancelled. If the time limit is exceeded, an TimeoutException will be thrown. If any custom extension exceeds threshold CPU, memory, or handle limits or is otherwise unresponsive, that process will be killed by the platform. At that point any current extension in that process will fail with exceptions. However, the next time that the extension is executed it will run normally. Monitor Performance Run-time information about plug-ins and custom workflow extensions is captured and store in the PluginTypeStatistic Entity. These records are populated within 30 minutes to one hour after the custom code executes. This entity provides the following data points: This data is also available for you to browse using the Power Platform Admin Center. Select Analytics > Common Data Service > Plug-ins. Next steps Register a plug-in Debug Plug-ins See also Write plug-ins to extend business processes Best practices and guidance regarding plug-in and workflow development Handle exceptions Impersonate a user Tutorial: Write and register a plug-in Tutorial: Debug a plug-in Tutorial: Update a plug-in Comentários A carregar comentários...
https://docs.microsoft.com/pt-pt/powerapps/developer/common-data-service/write-plug-in
2019-10-14T05:09:02
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
Feature: Session Persistence¶ The session persistence feature allows drools kie sessions to be persisted in a database surviving pdp-d restarts. The configuration is located at: - $POLICY_HOME/config/feature-session-persistence.properties Each controller that wants to be started with persistence should contain the following line in its <controller-name>-controller.properties Facts will survive PDP-D restart using the native drools capabilities and introduce a performance overhead. End of Document
https://docs.onap.org/en/dublin/submodules/policy/engine.git/docs/platform/feature_sesspersist.html
2019-10-14T04:34:07
CC-MAIN-2019-43
1570986649035.4
[]
docs.onap.org
Zend Server Web API Note: This section is being actively updated. Once done, this note will be removed. Thank you for your understanding. The Zend Server Web API allows external systems to connect to a programmatic, restful API that allows access to all of Zend Server’s management features. Using the Web API, a 3rd party system can automate cluster management, application deployment, and other development and integration tasks. The Zend Server UI is both an example and a test case for the use of the Zend Server Web API. Almost every functionality in the UI is executed via the Web API. How does it work? The Web API is a restful gateway that relies on a signature-based authentication solution for identity control. Seated behind the authentication mechanism is also a detailed permissions system that handles access control and allows you to limit access to your Zend Server. After a response is generated by the Web API it will be parsed into either in XML or JSON, and returned to the requesting entity. Contents This reference guide includes the following sections: Versions The following table lists the various versions of the Zend Server Web API and their corresponding product version. *Current version Note: For a versioning of available Web API methods, see Available API Methods. Please note that support of methods depends on the Zend Server edition you are using.
https://docs.roguewave.com/en/zend/current/content/web_api_reference_guide.htm
2019-10-14T03:37:15
CC-MAIN-2019-43
1570986649035.4
[]
docs.roguewave.com
Contents IT Business Management Previous Topic Next Topic Release management Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Release management Releases contain user stories, sometimes from multiple products or projects, that form the release backlog. A product owner creates the releases. A release is bounded by start and end times and is used to organize the effort of the assigned groups working on user stories. A release can use multiple assignment groups. Typically, the product owners select the prioritized stories from the backlog to be completed in a given release. The set of stories in a release are referred as release backlog. Agile Development 2.0 allows the release backlog to be executed in two ways: Project-based execution - Allows release backlog to be executed as one or more projects. Non project-based execution - Allows release backlog to be executed by one or more assignment groups using their sprint schedules within a release. Create a release Create a release, and then select the prioritized stories to be completed in that release. Before you beginRole required: scrum_release_planner, scrum_admin About this taskBefore attempting to create a release, make sure that you have created the appropriate stories and scrum tasks and associated them with one or more products. Procedure Create a release using one of these methods: OptionAction From a product record Select the Releases related list and click New. From the Releases list Navigate to Agile Development > Releases. Click New in the record list. Fill in the fields, as appropriate. Table 1. Release form fields Field Description Number A system generated number for the release. State Current state of the release. The default is Draft. Total committed points Displays the sum of all story points from the stories assigned to the release. Release capacity Sum of group capacity of all the assignment groups associated with the release.Group capacity of an assignment group for a release is calculated as: Group capacity * Number of sprints in the release for that groupRelease capacity is updated only when the Start sprint and End sprint are populated for the groups in the Groups related list in the release record. Planned start date The estimated date for the release to start. Planned end date The estimated date for the release to end. Assigned to The scrum user assigned to the release.It must be a scrum user, such as a release planner or product owner, whose role allows rights to create and edit releases. Short Description A brief description of the release. Description A detailed description of the release. Work notes Notes about the work being performed on the release. Click Submit. What to do nextAfter a release record is created, perform release planning by selecting a product and moving stories from a product backlog to a release backlog. You can add products, stories, or groups using the following related lists. Table 2. Release form related list Field Description Related list Products Lists the products associated with the release. Click New to create a product. Click Edit to add an existing product to the release. Stories Lists the stories associated with the release. The stories you add create the release backlog.Click New to create a story.Click Edit to add an existing story to the release. Groups Lists the groups assigned to the release. Click Edit to assign an existing agile group to the release. When you associate a product to a release, the groups assigned to the product are automatically added to the release.Select Start sprint and End sprint for which the group is assigned to the release. The Group capacity of the assignment group for a release is calculated as: Group capacity * Number of sprints in the release for that group On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/madrid-it-business-management/page/product/agile-development/concept/manage-releases.html
2019-10-14T03:53:20
CC-MAIN-2019-43
1570986649035.4
[]
docs.servicenow.com
The server's internal settings are reported on the Settings tab, along with a number of important editable settings. The Resource Monitoring Data settings provide a basic tool for monitoring available disk storage for MultiSite's resources. /opt/wandisco/svn-multisite-plus/replicator/properties/application.properties resourcemonitor.period.min=10Value is in minutes, and only run through the UI, it is not handled directly by the replicator. For more information about setting up monitors, read 22. Setting up resource monitoring. The notifications system provides SVN administrators with the ability to create event-driven alert emails. Set up one more more gateway (mail servers), add destination emails to specify recipients, create email templates for the content of the alert emails, then set the rules for which event should trigger a specific email. The Gateways section stores the details of those email relay servers that your organization uses for internal mail delivery. You can add any number of gateways, SVN MultiSite Plus will attempt to delivery notification emails using each gateway in their order on the list, #0, #1, #2 etc. SVN MultiSite Plus will attempt delivery via the next gateway server once it has attempted delivery a number of times equal to the Tries number. It will repeat a try after waiting a number of settings equal to the Internval setting. How SVN MultiSite Plus gives up on delivering to a gateway. Example. Gateway #0 is offline. With Tries set to 5 and Interval set to 600, MultiSite will attempt delivery using the next gateway (#1) after 600s x 5 = 50 minutes. keystores? If you're not familiar with the finer points of setting up SSL keystores and truststores it is recommended that you read the following article: Using Java Keytool to manage keystores. In the Destinations panel is used to store email address for notification recipients. Add, Edit or remove email addresses. The templates panel is used to store email content. You create messaging to match those events for which you want to send user notifications. Use the Rules panel to actually setup up your notification emails. Here you'll associate email templates and destination emails with a particular system event. For example, you may create an email message to send to a particular group mailing list in the event that a repository goes into Read-only mode. Selecting descriptive subjects for your templates will help you to select the right templates here. The are used to override the default values. Changes are applied instantly but in-memory only and are forgotten after a restart of the replicator (unless they are saved). For information about adding or changing loggers, see 34. Logging Setting Tool. The System Data table provides a list of read-only settings that were either provided during setup or have since been applied: an only copy of the latest API documenation available in this admin guide, note though that it has been lifted from an installation and will link to resources that will not be available on the website (resulting in dead links). The module versions provides a list of the component parts of the SVN MultiSite application. This is useful if you need to verify what version of a component you are using - such as if you need to contact WANdisco for support. This product is protected by copyright and distributed under licenses restricting copying, distribution and decompilation. SVN MultiSite Plus Last doc build: 12:48 - 08th January 2014
https://docs.wandisco.com/svn/archive/ms-plus1.2/reference_settings.html
2019-10-14T03:40:56
CC-MAIN-2019-43
1570986649035.4
[]
docs.wandisco.com
Security¶ This chapter introduces you to the security configuration in eXo Platform: - JAAS Realm configuration Instructions on how to configure JAAS Realm. - Gadget proxy configuration How to configure the ProxyFilterService, and how the proxy service works. - Enabling HTTPS To enable security access, you can either run eXo Platform itself in HTTPS, or more commonly, use a reverse proxy like Apache. - Password encryption key of RememberMe Information about the file location and steps to update the “Remember My Login” password encryption key. - XSS protection To activate XSS protection mechanisms. - Securing the MongoDB Database How to secure eXo chat database. - Rest Api exposure List of REST API exposed by eXo Platform. JAAS Realm configuration¶ eXo Platform relies on JAAS for propagating the user identity and roles to the different applications deployed on the server. The JAAS realm is used by all eXo Platform applications and even propagated to the JCR for Access Control. Therefore, if you need to change the JAAS configuration, consider that your change impacts a lot and it may require you to unpackage and modify some .war files. This section explains: What is JAAS Realm?¶ The JAAS configuration requires a login.config file. This file contains one (or more) entry which is called a “Realm”. Each entry declares a Realm name and at least one login module. Each login module consists of a Java class and some parameters which are specified by the class. Below is the default Realm in the Tomcat bundle. In JBoss, it looks different but basically, the explanation is right for both. gatein-domain { org.gatein.sso.integration.SSODelegateLoginModule required enabled="#{gatein.sso.login.module.enabled}" delegateClassName="#{gatein.sso.login.module.class}" portalContainerName=portal realmName=gatein-domain password-stacking=useFirstPass; org.exoplatform.services.security.j2ee.TomcatLoginModule required portalContainerName=portal realmName=gatein-domain; }; In which: gatein-domainis the Realm name which will be refered by applications. If you change this default name, you need to re-configure all the applications that use the Realm (listed later). Two required login modules are: org.gatein.sso.integration.SSODelegateLoginModule and org.exoplatform.services.security.j2ee.TomcatLoginModule. The first, if authentication succeeds, will create an Identity object and save it into a shared state map, then the object can be used by the second. These are some login modules available in eXo Platform. Refer to Existing login modules to understand how they match the login scenarios. Declaring JAAS Realm in eXo Platform¶ In the Tomcat bundle The default Realm is declared in the $PLATFORM_TOMCAT_HOME/conf/jaas.conffile. Its content is exactly the above example. A “security domain” property in $PLATFORM_TOMCAT_HOME/gatein/conf/exo.properties(about this file, see Configuration overview) needs to be set equal to the Realm name: exo.security.domain=gatein-domain In the JBoss package The default Realm is declared in the $PLATFORM_JBOSS_HOME/standalone/configuration/standalone-exo.xmlfile, at the following lines: <security-domain <authentication> <!-- <login-module <module-option <module-option <module-option <module-option <module-option </login-module> --> <login-module <module-option <module-option </login-module> </authentication> </security-domain> A “security domain” property in $PLATFORM_JBOSS_HOME/standalone/configuration/gatein/exo.properties(about this file, see Configuration overview) needs to be set equal to the Realm name: exo.security.domain=gatein-domain List of applications using Realm¶ If an application (.war) uses the Realm for authentication and authorization, it will refer to the Realm name with either of the following lines. In WEB-INF/jboss-web.xml: <security-domain>java:/jaas/gatein-domain</security-domain> In WEB-INF/web.xml: <realm-name>gatein-domain</realm-name> In META-INF/context.xml: appName='gatein-domain' As mentioned above, if you change “ gatein-domain”, you need to re-configure all the applications that use the Realm to refer to the new Realm. Here is the list of webapps and the files you need to re-configure: In the Tomcat bundle: portal.war: /WEB-INF/jboss-web.xml, /WEB-INF/web.xml, /META-INF/context.xml. rest.war: /WEB-INF/jboss-web.xml, /WEB-INF/web.xml. ecm-wcm-extension.war: /WEB-INF/jboss-web.xml. calendar-extension.war: /WEB-INF/jboss-web.xml. forum-extension.war: /WEB-INF/jboss-web.xml. wiki-extension.war: /WEB-INF/jboss-web.xml. ecm-wcm-core.war: /WEB-INF/jboss-web.xml. Note The .war files are located under the $PLATFORM_TOMCAT_HOME/webapps folder. In the JBoss package: exo.portal.web.portal.war: /WEB-INF/jboss-web.xml, /WEB-INF/web.xml, /META-INF/context.xml. exo.portal.web.rest.war: /WEB-INF/jboss-web.xml, /WEB-INF/web.xml. calendar-extension-webapp.war: /WEB-INF/jboss-web.xml. forum-extension-webapp.war: /WEB-INF/jboss-web.xml. wiki-extension-webapp.war: /WEB-INF/jboss-web.xml. ecms-core-webapp.war: /WEB-INF/jboss-web.xml. ecms-packaging-wcm-webapp.war: /WEB-INF/jboss-web.xml. Note The .war files are located under the $PLATFORM_JBOSS_HOME/standalone/deployments/platform.ear folder. Gadget proxy configuration¶ In eXo Platform, you could allow gadgets to load remote resources. However, this could be a potential security risk, as it will make the Gadget deployed as an open web proxy. So, you can set up the anonymous proxy to accept or deny certain hosts by configuring the ProxyFilterService. Configuring the ProxyFilterService¶ By default, the proxy denies any host except the domain on which the gadget server is installed. To specify domains that you want to allow or deny, modify the file: $PLATFORM_TOMCAT_HOME/webapps/portal.war/WEB-INF/conf/common/common-configuration.xml(in Tomcat). $PLATFORM_JBOSS_HOME/standalone/deployments/platform.ear/exo.portal.web.portal.war/WEB-INF/conf/common/common-configuration.xml(in JBoss). The default configuration is: <component> <key>org.exoplatform.web.security.proxy.ProxyFilterService</key> <type>org.exoplatform.web.security.proxy.ProxyFilterService</type> <init-params> <values-param> <!-- The white list --> <name>white-list</name> <!-- We accept anything not black listed --> <value>*</value> </values-param> <values-param> <name>black-list</name> <value>*.evil.org</value> </values-param> </init-params> </component> How does it work?¶ - Any domain name in black list is denied. - Any domain name NOT in white list is denied. - Only domain names in white list and NOT in black list are allowed. Multiple values can be added (by adding more value tags) and wildcards can be used, as in the following example: <component> <key>org.exoplatform.web.security.proxy.ProxyFilterService</key> <type>org.exoplatform.web.security.proxy.ProxyFilterService</type> <init-params> <values-param> <name>white-list</name> <value>*.example.com</value> <value></value> </values-param> <values-param> <name>black-list</name> <value>evil.example.com</value> </values-param> </init-params> </component> Enabling HTTPS¶ In order to enable HTTPS, you can either: - Use a reverse proxy, such as Apache HTTPd or Nginx, to set up an HTTPS virtual host that runs in front of eXo Platform. Or: - Run eXo Platform itself over HTTPS. In both cases, you must have a valid SSL certificate. For testing purpose, you can generate a self-signed SSL certificate should be used. Generating a self-signed certificate¶ Generating a self-signed certificate can be done with OpenSSL. Once again, a self-signed certificate must be used only for testing purpose, never in production. Use the following command to generate the certificate: openssl req -x509 -nodes -newkey rsa:2048 -keyout cert-key.pem -out cert.pem -subj '/O=MYORG/OU=MYUNIT/C=MY/ST=MYSTATE/L=MYCITY/CN=proxy1.com' -days 730 You will use cert-key.pem to certificate the Apache/Nginx server proxy1.com, so the part “CN=proxy1.com” is important. Note When using a self-signed certificate, users will need to point their browser to and accept the security exception. Importing an SSL certificate in the JVM’s trust store¶ For gadgets to work, the SSL certificate must be imported in the JVM trust store: - Because Java keytool does not accept PEM file format, you will need to convert cert-key.peminto DER format. openssl x509 -outform der -in cert-key.pem -out cert-key.der - Import your certificate to the JVM trust store using the following command: keytool -import -trustcacerts -file cert-key.der -keystore $JAVA_HOME/jre/lib/security/cacerts -alias proxy1.com Note The default password of the JVM’s trust store is “changeit”. Using a reverse proxy for HTTPS in front of eXo Platform¶: Note. Configuring Apache¶ Before you start, note that for clarity, not all details of the Apache server configuration are described here. The configuration may vary depending on Apache version and your OS, so consult Apache documentation if you need. Note> Configuring Nginx¶¶: " proxyName="proxy1.com" proxyPort="443" scheme="https" /> - In JBoss Set the following property in $PLATFORM_JBOSS_HOME/standalone/configuration/gatein/exo.propertiesfile:. Running eXo Platform itself under HTTPS¶ In the previous section you learnt to configure a reverse proxy in front of eXo Platform, and it is the proxy which encrypts the requests and responses. Alternatively you can configure eXo Platform to allow HTTPS access directly, so no proxy between browsers and eXo Platform. See the following diagram : Configuring eXo Platform’s Tomcat¶ Set the following property in $PLATFORM_TOMCAT_HOME/gatein/conf/exo.propertiesfile: exo.base.url= Edit the $PLATFORM_TOMCAT_HOME/conf/server.xmlfile by commenting the following lines: " /> Uncomment the following lines and edit with your keystoreFileand keystorePassvalues: <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="/path/to/file/serverkey.jks" keystorePass="123456"/> After starting eXo Platform, you can connect to. If you are testing with dummy server names, make sure you created the host “exo1.com” in the file /etc/hosts. Configuring eXo Platform’s JBoss¶ To configure JBoss to run under HTTPS, you just need to set the following property in $PLATFORM_JBOSS_HOME/standalone/configuration/gatein/exo.properties file: exo.base.url= After starting JBoss, you can connect to eXo Platform at. If you are testing with dummy server names, make sure you created the host “exo1.com” in the file /etc/hosts. Password encryption key of RememberMe¶ eXo Platform supports the “Remember My Login” feature. This guideline explains how the feature works, and how to update the password encryption key in server side for security purpose. How the feature works?¶ If users select “Remember My Login” when they log in, their login information will be saved in both client and server sides: - A token is saved in the server side. The user password is encrypted and saved along with the token. - The token ID is sent back to the browser and saved in the “rememberme” cookie. When the users visit the website for next time from the same browser on the same machine, they do not need to type their username and password. The browser sends the cookies, and the server validates it using the token. By that way, the login step is automatically completed. Symmetric encryption of passwords¶, eXo Platform allows you to configure and use your own keystore to conform to your security policy. How to customize the password¶ As you can see, the customization involves properties in exo parameter keystore (“customStore” in the example). The valid value of algorithms and other parameters can be found here. Then, place the generated file under gatein/conf/codec (in Tomcat) or standalone/configuration/gatein/codec (in JBoss). Update the jca-symmetric-codec.propertiesfile exo.properties. Updating password encryption key¶ The password encryption uses a keystore file. By default, the file is: $PLATFORM_TOMCAT_HOME/gatein/conf/codec/codeckey.txt(in Tomcat). $PLATFORM_JBOSS_HOME/standalone/configuration/gatein/codec/codeckey.txt(in JBoss). To update the password encryption key, just remove the file, then restart the server. The keystore file will be re-created at the startup time. Note Updating the password encryption key causes the invalidation of existing tokens, so the users must re-login. XSS Protection¶ Even if the XSS protection is handled in the PRODUCT development, some protections can be added on the server side to protect against external threats. They are essentially based on HTTP headers added to the responses to ask the modern browsers to avoid such attacks. Additional configuration options can be found on the Content-security-Policy header definition. Add XSS protection headers on Apache¶ To manipulate the response headers, the Apache module mod_headers must be activated and the following lines added on your configuration : <VirtualHost *:80> ... # XSS Protection Header always append X-Frame-Options SAMEORIGIN Header always append X-XSS-Protection 1 Header always append Content-Security-Policy "frame-ancestors 'self'" ... </VirtualHost> Secured MongoDB¶ For a quick setup, the add-on by default uses a local and none-authorization connection. However, in production it is likely you will secure your MongoDB, so authorization is required. Below are steps to do this. Note Read MongoDB documentation for MongoDB security. This setup procedure is applied for MongoDB 3.2. Start MongoDB and connect to the shell to create a database named admin. Add a user with role userAdminAnyDatabase. $ mongo >use admin >db.createUser({user: "admin", pwd: "admin", roles: [{role: "userAdminAnyDatabase", db: "admin"}]}) >exit Edit MongoDB configuration to turn on authentication, then restart the server. # mongodb.conf # Your MongoDB host. bind_ip = 192.168.1.81 # The default MongoDB port port = 27017 # Turn on authentication auth=true Create a user having readWrite role in the database chat (you can name the database as your desire). $ mongo -port 27017 -host 192.168.1.81 -u admin -p admin -authenticationDatabase admin >use chat >db.createUser({user: "exo", pwd: "exo", roles: [{role: "readWrite", db: "chat"}]}) >exit Verify the authentication/authorization of the new user: $ mongo -port 27017 -host 192.168.1.81 -u exo -p exo -authenticationDatabase chat >use chat >db.placeholder.insert({description: "test"}) >db.placeholder.find() Create a configuration file containing these below parameters. dbName=chat dbServerHost=192.168.1.81 dbServerPort=27017 dbAuthentication=true dbUser=exo dbPassword=exo Note The parameters above correspond with the values used during creating authorization for MongoDB. Rest Api exposure¶ eXo Platform exposes a list of Rest API methods. They are used internally by the deployed components but can also be used by your users. Depending on your use cases, it could be (highly) recommanded to block the public access to some of them. /rest/loginhistory/loginhistory/AllUsers: to avoid information disclosure and for performance issue. /rest/private/loginhistory/loginhistory/AllUsers/*: to avoid information disclosure and for performance issue. /rest/jcr/repository/collaboration/Trash: to avoid information disclosure. /rest/: Avoid rest services discovery. /portal/rest: Avoid rest services discovery. The following configuraton examples will allow you to block the previously listed Rest URLs with Apache or Nginx. Block sensitive Rest urls with Apache¶ ... # Block login history for performance and security reasons RewriteRule "/rest/loginhistory/loginhistory/AllUsers" - [L,NC,R=403] RewriteRule "/rest/private/loginhistory/loginhistory/AllUsers/*" - [L,NC,R=403] # Block access to trash folder RewriteRule "/rest/jcr/repository/collaboration/Trash" - [L,NC,R=403] # Don't expose REST APIs listing RewriteRule "^/rest/?$" - [NC,F,L] RewriteRule "^/portal/rest/?$" - [NC,F,L] ... Block sensitive Rest urls with Nginx¶ You can create redirection rules in several ways with nginx, this is one of the possibles : ... # Block login history for performance and security reasons location /rest/loginhistory/loginhistory/AllUsers { return 403; } location /rest/private/loginhistory/loginhistory/AllUsers { return 403; } # Block access to trash folder location /rest/jcr/repository/collaboration/Trash { return 403; } # Don't expose REST APIs listing location ~ ^/rest/?$ { return 403; } location ~ ^/portal/rest/?$ { return 403; } ...
https://exo-documentation.readthedocs.io/en/latest/Security.html
2019-10-14T03:04:37
CC-MAIN-2019-43
1570986649035.4
[array(['_images/https_reverse_prx_diagram.png', 'image0'], dtype=object) array(['_images/https_direct_access_diagram.png', 'image1'], dtype=object)]
exo-documentation.readthedocs.io
The NodeDefLocale=locale definition for node name is obsolete and will be ignored because name is an application node. NodeDefLocale is valid only for voice response nodes. This message is information only. NodeDefLocale does not apply to an application node. Remove the NodeDefLocale=locale definition. To set a default locale, use the NodeDefLocale keyword in the voice response node configuration entry.
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.probdet.doc/dtxprobdet1189.html
2019-10-14T04:41:25
CC-MAIN-2019-43
1570986649035.4
[]
docs.blueworx.com
WHMpress is a unique plugin, it will help you quickly build your webhosting website with WordPress. It offers WHMCS elements that you can insert into your pages without coding a single line. If you have been manually inserting your hosting plans and prices in past, or taking help of a programmer to link WHMCS, those days are over. We have prepared this Quick Start video for you to get you up and running. See yourself how easy it is to set up a webhosting website with WHMpress.
http://docs.whmpress.com/docs/whmpress/getting-started/quick-start-guide/
2019-10-14T04:46:55
CC-MAIN-2019-43
1570986649035.4
[]
docs.whmpress.com
Client Interface Commands¶ Client Commands¶ Bounty-Related Commands¶ Keyboard shortcuts¶ Tip: How to use the Search feature CTRL + Fto trigger display the search field - Enter your search keyword (not case sensitive) - Hit Enterto jump to the next matching keyword (incremental search) - When you are done. Press CTRL + Fagain to go back to reset.
https://docs.hummingbot.io/cheatsheets/client/
2019-10-14T05:07:38
CC-MAIN-2019-43
1570986649035.4
[]
docs.hummingbot.io
Log Shipping Transaction Log Backup Settings SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse Use this dialog box to configure and modify the transaction log backup settings for a log shipping configuration. For an explanation of log shipping concepts, see About Log Shipping (SQL Server). Options. Note.... Backup. Compression SQL Server 2008 Enterprise (or a later version) supports backup compression. Set backup compression In SQL Server 2008 Enterprise (or a later version), select one the following backup compression values for the log backups of this log shipping configuration: See Also Configure a User to Create and Manage SQL Server Agent Jobs About Log Shipping (SQL Server) Feedback
https://docs.microsoft.com/en-us/sql/relational-databases/databases/log-shipping-transaction-log-backup-settings?redirectedfrom=MSDN&view=sql-server-ver15
2019-10-14T03:41:04
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
IKFast Kinematics Solver¶ In this section, we will walk through configuring an IKFast plugin for MoveIt! What is IKFast?¶ From Wikipedia: IKFast, the Robot Kinematics Compiler, is a powerful inverse kinematics solver provided within Rosen Diankov’s OpenRAVE motion planning software. Unlike most inverse kinematics solvers, IKFast can analytically solve the kinematics equations of any complex kinematics chain, and generate language-specific files (like C++) for later use. The end result is extremely stable solutions that can run as fast as 5 microseconds on recent processors MoveIt! IKFast¶ MoveIt! IKFast is a tool that generates a IKFast kinematics plugin for MoveIt using OpenRAVE generated cpp files. This tutorial will step you through setting up your robot to utilize the power of IKFast. MoveIt! IKFast is tested on ROS Kinetic with Catkin using OpenRAVE 0.8 with a 6DOF and 7DOF robot arm manipulator. While it works in theory, currently the IKFast plugin generator tool does not work with >7 degree of freedom arms. Getting Started¶ If you haven’t already done so, make sure you’ve completed the steps in Getting Started. You should have MoveIt! configuration package for your robot that was created by using the Setup Assistant Installing OpenRAVE on Ubuntu 16.04 is tricky. Here are 2 blog posts that give slightly different recipes for installing OpenRAVE. Make sure you have these programs installed: sudo apt-get install cmake g++ git ipython minizip python-dev python-h5py python-numpy python-scipy qt4-dev-tools You may also need the following libraries: sudo apt-get install libassimp-dev libavcodec-dev libavformat-dev libavformat-dev libboost-all-dev libboost-date-time-dev libbullet-dev libfaac-dev libglew-dev libgsm1-dev liblapack-dev liblog4cxx-dev libmpfr-dev libode-dev libogg-dev libpcrecpp0v5 libpcre3-dev libqhull-dev libqt4-dev libsoqt-dev-common libsoqt4-dev libswscale-dev libswscale-dev libvorbis-dev libx264-dev libxml2-dev libxvidcore-dev To enable the OpenRAVE viewer you may also need to install OpenSceneGraph-3.4$(nproc) sudo make install For IkFast to work correctly, you must have the correct version of sympy installed: pip install --upgrade --user sympy==0.7.1 You should not have mpmath installed: sudo apt remove python-mpmath MoveIt! IKFast Installation¶ Install the MoveIt! IKFast package either from debs or from source. Binary Install: sudo apt-get install ros-kinetic-moveit-kinematics Source Inside your catkin workspace: git clone OpenRAVE Installation¶ Binary Install (only Indigo / Ubuntu 14.04): sudo apt-get install ros-indigo-openrave Note: you have to set: export PYTHONPATH=$PYTHONPATH:`openrave-config --python-dir` Source Install: git clone --branch latest_stable cd openrave && mkdir build && cd build cmake -DODE_USE_MULTITHREAD=ON -DOSG_DIR=/usr/local/lib64/ .. make -j$(nproc) sudo make install Working commit numbers 5cfc7444... confirmed for Ubuntu 14.04 and 9c79ea26... confirmed for Ubuntu 16.04, according to Stéphane Caron. Please report your results with this on this GitHub repository. Create Collada File For Use With OpenRAVE¶ Parameters¶ - MYROBOT_NAME - name of robot as in your URDF - PLANNING_GROUP - name of the planning group you would like to use this solver for, as referenced in your SRDF and kinematics.yaml - MOVEIT_IK_PLUGIN_PKG - name of the new package you just created - IKFAST_OUTPUT_PATH - file path to the location of your generated IKFast output.cpp file To make using this tutorial copy/paste friendly, set a MYROBOT_NAME environment variable with the name of your robot: export MYROBOT_NAME="panda_arm" First you will need robot description file that is in Collada or OpenRAVE robot format. If your robot is not in this format we recommend you create a ROS URDF file. If your robot is in xacro format you can convert it to urdf using the following command: rosrun xacro xacro --inorder -o "$MYROBOT_NAME".urdf "$MYROBOT_NAME".urdf.xacro Once you have your robot in URDF format, you can convert it to Collada (.dae) file using the following command: rosrun collada_urdf urdf_to_collada "$MYROBOT_NAME".urdf "$MYROBOT_NAME".dae Often floating point issues arise in converting a URDF file to Collada file, so a script has been created to round all the numbers down to x decimal places in your .dae file. Its probably best if you skip this step initially and see if IKFast can generate a solution with your default values, but if the generator takes longer than, say, an hour, try the following: export IKFAST_PRECISION="5" cp "$MYROBOT_NAME".dae "$MYROBOT_NAME".backup.dae # create a backup of your full precision dae. rosrun moveit_kinematics round_collada_numbers.py "$MYROBOT_NAME".dae "$MYROBOT_NAME".dae "$IKFAST_PRECISION" From experience we recommend 5 decimal places, but if the OpenRAVE IKFast generator takes to long to find a solution, lowering the number of decimal places should help. To see the links in your newly generated Collada file You may need to install package libsoqt4-dev to have the display working: openrave-robot.py "$MYROBOT_NAME".dae --info links This is useful if you have a 7-dof arm and you need to fill in a –freeindex parameter, discussed later. To test your newly generated Collada file in OpenRAVE: openrave "$MYROBOT_NAME".dae You should see your robot. Create IKFast Solution CPP File¶ Once you have a numerically rounded Collada file its time to generate the C++ .h header file that contains the analytical IK solution for your robot. Select IK Type¶ You need to choose which sort of IK you want. See this page for more info. The most common IK type is transform6d. Choose Planning Group¶ If your robot has more than one arm or “planning group” that you want to generate an IKFast solution for, choose one to generate first. The following instructions will assume you have chosen one <planning_group_name> that you will create a plugin for. Once you have verified that the plugin works, repeat the following instructions for any other planning groups you have. For example, you might have 2 planning groups: <planning_group_name> = "left_arm" <planning_group_name> = "right_arm" To make it easy to use copy/paste for the rest of this tutorial. Set a PLANNING_GROUP environment variable. eg: export PLANNING_GROUP="panda_arm" Identify Link Numbers¶ You also need the link index numbers for the base_link and end_link between which the IK will be calculated. You can count the number of links by viewing a list of links in your model: openrave-robot.py "$MYROBOT_NAME".dae --info links A typical 6-DOF manipulator should have 6 arm links + a dummy base_link as required by ROS specifications. If no extra links are present in the model, this gives: baselink=0 and eelink=6. Often, an additional tool_link will be provided to position the grasp/tool frame, giving eelink=7. The manipulator below also has another dummy mounting_link, giving baselink=1 and eelink=8. Set the base link and EEF link to the desired index: export BASE_LINK="0" export EEF_LINK="8" If you have a 7 DOF arm you will need to specify a free link: export FREE_INDEX="1" Generate IK Solver¶ To generate the IK solution between the manipulator’s base and tool frames for a 6DOF arm, use the following command format. We recommend you name the output ikfast61_”$PLANNING_GROUP”.cpp: export IKFAST_OUTPUT_PATH=`pwd`/ikfast61_"$PLANNING_GROUP".cpp For a 6DOF arm: python `openrave-config --python-dir`/openravepy/_openravepy_/ikfast.py --robot="$MYROBOT_NAME".dae --iktype=transform6d --baselink="$BASE_LINK" --eelink="$EEF_LINK" --savefile="$IKFAST_OUTPUT_PATH" For a 7 dof arm, you will need to specify a free link: python `openrave-config --python-dir`/openravepy/_openravepy_/ikfast.py --robot="$MYROBOT_NAME".dae --iktype=transform6d --baselink="$BASE_LINK" --eelink="$EEF_LINK" --freeindex="$FREE_INDEX" --savefile="$IKFAST_OUTPUT_PATH" The speed and success of this process will depend on the complexity of your robot. A typical 6 DOF manipulator with 3 intersecting axis at the base or wrist will take only a few minutes to generate the IK. Known issue –freeindex argument is known to have a bug that it cannot handle tree index correctly. Say –baselink=2 –eelink=16 and links index from 3 to 9 is not related to current planning group chain. In that case –freeindex will expect index 2 as link 2, but index 3 as link 10 ... and index 9 as link 16. You should consult the OpenRAVE mailing list and ROS Answers for information about 5 and 7 DOF manipulators. Create Plugin¶ Create the package that will contain the IK plugin. We recommend you name the package “$MYROBOT_NAME”_ikfast_”$PLANNING_GROUP”_plugin.: export MOVEIT_IK_PLUGIN_PKG="$MYROBOT_NAME"_ikfast_"$PLANNING_GROUP"_plugin cd ~/catkin_ws/src catkin_create_pkg "$MOVEIT_IK_PLUGIN_PKG" Build your workspace so the new package is detected (can be ‘roscd’): catkin build Create the plugin source code: rosrun moveit_kinematics create_ikfast_moveit_plugin.py "$MYROBOT_NAME" "$PLANNING_GROUP" "$MOVEIT_IK_PLUGIN_PKG" "$IKFAST_OUTPUT_PATH" Or without ROS: python /path/to/create_ikfast_moveit_plugin.py "$MYROBOT_NAME" "$PLANNING_GROUP" "$MOVEIT_IK_PLUGIN_PKG" "$IKFAST_OUTPUT_PATH" Usage¶ The IKFast plugin should function identically to the default KDL IK Solver, but with greatly increased performance. The MoveIt configuration file is automatically edited by the moveit_ikfast script but>_kinematics/IKFastKinematicsPlugin -INSTEAD OF- kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin Updating the Plugin¶ If any future changes occur with MoveIt! or IKFast, you might need to re-generate this plugin using our scripts. To allow you to easily do this, a bash script is automatically created in the root of your IKFast package, named update_ikfast_plugin.sh. This does the same thing you did manually earlier, but uses the IKFast solution header file that is copied into the ROS package. Open Source Feedback See something that needs improvement? Please open a pull request on this GitHub page
http://docs.ros.org/kinetic/api/moveit_tutorials/html/doc/ikfast/ikfast_tutorial.html
2019-10-14T04:46:37
CC-MAIN-2019-43
1570986649035.4
[array(['../../_images/openrave_panda.png', '../../_images/openrave_panda.png'], dtype=object) array(['../../_images/openrave_panda.png', '../../_images/openrave_panda.png'], dtype=object)]
docs.ros.org
3.1.1 Abstract Data Model This section describes a conceptual model of possible data organization an implementation maintains to participate in this protocol. The described organization is provided to facilitate the explanation of how the protocol behaves. This document does not mandate that implementations adhere to this model as long as their external behavior is consistent with that described in this document. The protocol server maintains a table of useful URLs and descriptive details for each URL.
https://docs.microsoft.com/en-us/openspecs/sharepoint_protocols/ms-plsp/b4299f49-d803-4906-b0f7-3eb8c04a1386?redirectedfrom=MSDN
2019-10-14T03:42:10
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
SDK Warning When using this prefab, you'll get a warning about GameObjects having the same name. This is a known bug. Fix it by renaming one of the Cube objects in the Prefab something else. An example of how to use VRC_Station, found in Assets > VRCSDK > Prefabs > World.
https://docs.vrchat.com/docs/vrcchair
2019-10-14T04:12:55
CC-MAIN-2019-43
1570986649035.4
[]
docs.vrchat.com
Innovative Report on Quantum Computing In Aerospace & Defense Market 2019-2026 Profiling key players like D-Wave Systems Inc. (US), Qxbranch LLC (US), IBM Corporation (US), Cambridge Quantum Computing Ltd (UK) Market Research Scoop Oct 04, 2019 09:56 UTC Global Quantum Computing Market Trends, Regulations And Competitive Landscape Outlook To 2028 Business Broker Oct 04, 2019 06:33 UTC Quantum Computing Market to Grow at 24.9% CAGR to 2024 The Chicago Sentinel Oct 03, 2019 11:20 UTC Market Trends: Quantum Computing Market - WhaTech WhaTech Oct 03, 2019 07:33 UTC Quantum Computing Market Development Trends, Key Manufacturers And Competitive Analysis 2019-2026 The Ukiah Post Oct 03, 2019 06:05 UTC Cryptocurrency Market Stalls As Horizon Fades: Where Does Quantum Computing Fit In? Bitcoin Exchange Guide Sep 30, 2019 19:21 UTC How Will Blockchains Battle Quantum Computing? Crypto Daily Sep 30, 2019 11:31 UTC Quantum Computing Market 2025; Top Key Players: Nokia Bell Labs, Hewlett Packard, Booz Allen Hamilton Inc., Toshiba Optimization and management of information, harvested within an organization or from different parts of the world entails, entails the employment of efficient… Sep 25, 2019 13:17 UTC Latest Change : Quantum Computing Market Size, Status and Forecast To 2025 ResearchMoz presents professional and in-depth study of “Global Quantum Computing Market Size, Status and Forecast 2019-2025”. The report, titled “Global… Sep 24, 2019 12:35 UTC Quantum Computing Market Growing Demand Overview Volume and Value Forecast Report 2023 The Quantum Computing Market provide deep growth analysis of the Quantum Computing industry for identifying the growth opportunities, development trends… Sep 23, 2019 19:26 UTC NMR Quantum Computing Market is Going to Expand in Near Future at Tremendous CAGR. Worldwide Market Reports presents NMR Quantum Computing Market report for the forecast period 2019 – 2025. The report offers drivers, restraints,… Sep 23, 2019 15:05 UTC Global Quantum Computing Market 2019 Share and Forecast to 2024: D-Wave Systems Inc., Qxbranch, LLC, etc. The Quantum Computing market report offers a sorted out perspective by the simplified information connected to Quantum Computing Market. The Quantum… Sep 23, 2019 13:07 UTC Global Quantum Computing Market 2019 Detailed Overview of the Market with Current and Future Industry Challenges and Opportunities The Global Quantum Computing Market Research Report Forecast 2019-2028: The research study has been prepared with the use of in-depth qualitative and… Sep 23, 2019 09:38 UTC Global Topological Quantum Computing Market Insights 2019 Microsoft, IBM, Google, D-Wave Systems, Airbus – The Global Topological Quantum Computing Market report provide a meticulous evaluation of all of the segments included in the report. The segments are… Sep 23, 2019 09:17 UTC Global Quantum Computing Market 2019 Complete Research Study on the Market with Current and Future Market Trends Till 2028 Quantum Computing Market Report of MarketResearch.Biz is an exclusive assortment of Market Size, Share, Trends, Constraints, and drivers of Key business. Sep 23, 2019 08:07 UTC Trending Report On Quantum Computing 101 Market 2019 with Major Players: Intel, IBM Corporation, Google Inc., Microsoft Corporation, Qxbranch, LLC, Cambridge Quantum Computing Ltd., 1QB Information Technologies Inc., QC Ware Corp. IT Technology News24 provides latest industry trending news, hosted news *service*, IT & Technology news helps businesses connect with their target audiences… Sep 23, 2019 05:02 UTC Global Quantum Computing Technologies Market to reach USD 419.21 million by 2026 The latest research report on 'Quantum Computing Technologies Market' by Ricerca Alfa, presents a detailed analysis concerning market share, market… Sep 22, 2019 16:46 UTC Detail Insight about Quantum Computing in Aerospace & Defense Market 2019 To 2025 by Top Leading Player Contrive Datum Insights proclaims the addition of new analytical data on the global Quantum Computing in Aerospace & Defense market titled as Quantum… Sep 21, 2019 09:31 UTC Quantum Computing Market Analysis by Size, Share, Applications, Growth and Top Key Players 2024 This “Quantum Computing Market” research report provides a comprehensive overview of the markets between 2019-2024 and offers an in-depth summary of… Sep 21, 2019 08:12 UTC Quantum Computing: An Applied Approach (Springer) This book brings together the foundations of quantum computing with a hands-on coding approach. Author Jack D. Hidary is a research scientist in quantum… Sep 20, 2019 16:44 UTC Quantum Computing Market Growth Analysis, Share, Demand By Regions, Research Forecasts To 2025 The Global Quantum Computing Market Report 2019-2025 provides a comprehensive analysis, forecast and prospects, both region wise and global, of the… Sep 20, 2019 09:06 UTC Quantum Computing Is Revolutionizing: Scientists Build Miniaturized Chip-Based Superconducting Circuit Quantum computing promises to revolutionize the manners by which scientists can process and manipulate data. The physical and material underpinnings for… Sep 20, 2019 06:47 UTC The Quantum Computing Detects The States Of Electrons Quantum computing harnesses the enigmatic properties of small particles to process complicated info. But quantum systems are fragile and error-prone, and… Sep 19, 2019 17.
https://search-docs.net/quantum-computing-news:Xhn1Sj2wXpma-cVbRbBb1dRbcT
2019-10-14T03:21:44
CC-MAIN-2019-43
1570986649035.4
[]
search-docs.net
- Image Family: Oracle Linux 6.x - Operating System: Oracle Linux - Kernel Version: 4.1.12-112.16.4.el6uek.x86_64 - Release Date: April 24, 2018 Release Notes: This release includes the following changes: - Security configuration updated to remove access policies added to image released on December 18, 2017. - Includes dependencies for cloud-utils-growpart to enable partition extension. A reboot is required to enable. - Includes init script enabling yum mirror mappings to local regions to work across snapshots from different regions. - Includes fixes to improve iSCSI attachment stability. - Includes kernel update with retpoline mitigation for issue described in CVE-2017-5715.
https://docs.cloud.oracle.com/iaas/images/image/83c0229a-e2f3-4a86-8d13-89d21551b2a8/
2019-10-14T04:44:56
CC-MAIN-2019-43
1570986649035.4
[]
docs.cloud.oracle.com
When Performance Counters reporting and Service Control reporting is not enough, it's possible to consume raw metrics data by directly attaching to the public API provided by the package. First, the Metrics themselves need to be enabled. Then, a custom reporter can be attached to send data to any collector e.g. Service Control, Azure Application Insights, etc. Enabling NServiceBus.Metrics var metrics = endpointConfiguration.EnableMetrics(); Reporting metrics data Metrics can be reported to a number of different locations. Each location is updated on a separate interval.)); To Windows Performance Counters Some of the data captured by the NServiceBus.Metrics component can be forwarded to Windows Performance Counters. See Performance Counters for more information.
https://docs.particular.net/monitoring/metrics/raw?version=metrics_1
2019-10-14T03:36:48
CC-MAIN-2019-43
1570986649035.4
[]
docs.particular.net
OverviewOverview Welcome to Ujo's documentation. Here you'll learn more about the Ujo platform, including a number of resources to help get you started developing on it. You're welcome to help us improve our documentation through the links found at the bottom of each section or by going to our docs repository. If you have any general questions, email us at [email protected]. IntroductionIntroduction Ujo is a music platform that uses the ethereum blockchain as the substrate for innovation by empowering artists, digitizing their music rights and metadata, sharing this information in an open environment, thus enabling new applications, products, and services to license their catalogs and pay artists directly with minimal friction. Vision / MissionVision / Mission Our vision at Ujo is to empower music through a transparent and open ecosystem and our mission is to build resilient, sustainable and accessible infrastructure for artists supporters, and developers. Through building towards the creation of a fair, efficient, and decentralized music ecosystem, we hope to enable opportunity and creativity to flourish. Ujo PlatformUjo Platform To execute the vision and mission, Ujo is developing and designing the core technologies and protocols necessary to create decentralized music applications with a variety of services that further empower those applications and the users of them. The aim to provide the immediate benefits blockchain and decentralized technologies offer (self-sovereign identity, portability, provenance, payment channels, security), while balancing the use of infrastructure that provides world-class user experiences and legal compliance. The Ujo platform documentation is intended to provide a resource for developers to build on top of and integrate open source protocols into applications and services across the decentralized music ecosystem.
https://docs.ujomusic.com/
2019-10-14T03:14:28
CC-MAIN-2019-43
1570986649035.4
[]
docs.ujomusic.com
Getting started¶. Note According to your roles, not all features described in this guide are available to you. Check with your administrator to assure which features are for your account or ask for more appropriate rights. Glossary¶ This section provides a number of terms that you will encounter when implementing eXo Platform. Site¶ A web-based environment which is used for aggregating and personalizing information via specific applications with an interactive and consistent look and feel. Users and administrators are able to integrate information, people and processes via a web-based user interface. Portlet¶ An applicative component pluggable to a site through which users can access some specific information, including supports, updates, or mini-applications. The portlet produces fragments of a markup code that are aggregated into a page. Typically, a page is displayed as a non-overlapping portlet windows collection, where each portlet window displays a portlet. Content generated by a portlet can be customized, depending on the configuration set by each user. Portlets can be divided into two following types: - Functional portlets which support all functions of a site. They are built into the site and accessed via toolbar links when the site-related tasks are performed. - Interface portlets which constitute the eXo Platform interface as front-end components of the site. Super-user¶ A super-user is a special user who has full privileges and used for the administration. In eXo Platform, this account is configured with Root, Root, root@localhost and its memberships are member:/organization/management/executive-board, *:/platform/administrators, *:/platform/users, *:/platform/web-contributors, *:/organization/employees. A super-user has all permissions on all features of eXo Platform. Space¶ A collaboration workspace where you can share documents, tasks, events, wikis and more. A space can be open or closed, private or public and space administrators can manage members and applications that are available. Connection¶ A bond among people in a network. By connecting to other people, you will be able you to track their activities through the activity stream. Activity¶ An activity is published on the Activity Stream and allows you to follow what your connections are sharing, such as links to documents or just moods. An activity can be made out of different parts: - The author - The author’s avatar - The space - The type of the activity (for instance Documents, Wiki, Forums, Spaces or Connections) - The activity message - The featured content - The action bars including the buttons Comment and Like - The like section - The comment section Gadget¶ A mini web application which is run on a platform and can be integrated and customized in the website. You can add these gadgets to your dashboards by yourself. Modes¶ eXo Platform offers two access modes by default: - Public mode is for guest users (visitors) who are not registered. In this mode, you are not required to sign in, but limited to public pages in the site. After being registered successfully, you can use the private mode, but must contact the site administrators to get more rights or the group manager to become the member and gain the access to the group. - Private mode is for registered users who will apply their usernames and passwords to sign in. This mode supports users in taking many actions, such as creating private pages, editing or deleting them, “borrowing” pages from others by creating hyperlinks, changing languages to their individual needs, managing private information. Permission¶ Permission settings control actions of a user within the site and are set by the administrators. See Managing permissions <Administration.ManagingPermissions> for more details. Repository¶ A locus where content or digital data are maintained. Users can access without traveling across a network. Drive¶ A shortcut to a specific location in the content repository that enables administrators to limit visibility of each workspace for groups of users. It is also a simple way to hide the complexity of the content storage by showing only the structure that is helpful for business users. In details, a drive consists of: - A configured path where the user will start when browsing the drive. - A set of allowed views that will allow the user to limit the available actions, such as editing or creating content while being in the drive. - A set of permissions which limits the access and view of the drive to a specified number of people. - A set of options to describe the behavior of the drive when the users browse it. Node¶ An abstract unit used to build linked data structures, such as linked lists and trees, and computer-based representation of graphs. Nodes contain data and/or links to another nodes. Links between nodes are often implemented by pointers or references. Also, a node can be defined as a logical placeholder for data. It is a memory block which contains some data units, and optionally a reference to some other data. By linking one node with other interlinked nodes, very large and complex data structure can be formed. WebDAV¶ This term stands for Web-based Distributed Authoring and Versioning. In eXo Platform, it is used as a mean to access the content repository directly from the Sites Explorer. Welcome to eXo Platform¶ eXo Platform is a full-featured application for users to have many experiences in building and deploying transactional websites, authoring web and social content, creating gadgets and dashboards with reliable capabilities of collaboration and knowledge. When you initialize eXo Platform for the first time, the Terms and Conditions Agreement screen is displayed as follows: Note The Terms and Conditions Agreement screen appears in the Commercial editions only. In the Community edition, the Account Setup form appears for the first time. This agreement contains all terms and conditions that you need to read carefully before deciding to use eXo Platform. By ticking the checkbox at the screen bottom, you totally agree with the eXo Platform’s terms and conditions. Next, click Continue to move to the Account Setup form. The Account Setup window consists of 2 sub-forms: - Create your account: Create your primary account. - Admin Password: Change the default password of the “root” user. You can use this account to log in eXo Platform as a super-user who has the highest rights in the system. You can select Skip to ignore this step, then sign in as the root user with the default password (gtn). Setting up your account¶ - Enter your information in fields. - It is required to fill all fields, except the Username field of the Admin Password form, which is pre-filled with “root” and disabled. See Adding auser for more details. - Values entered in both Password and Confirm fields must be the same. - You can change these entered information after logging in eXo Platform. See Changing your account settings for more details. - Click Submit to finish setting up your account. Once your account has been created successfully, a Greetings! screen appears that illustrates how to add more users. 3. Click Start to be automatically logged in with your created account and redirected to the Social Intranet homepage. Now, you can start adding more users to collaborate, creating/joining spaces, or creating/following activities. Note - After your accounts have been submitted successfully, the following memberships will be granted to your primary account: - *:/platform/administrators - *:/platform/web-contributors - *:/platform/users - *:/developers - If the server stops before your account setup data is submitted, the Account Setup screen will appear at your next startup. Managing Account¶ To change your account information, click your display name on the top navigation bar of the site and click Settings from the drop-down menu. The account settings appears. Changing your profile information¶ 1- Select the Account Profiles tab. 2- Change your First Name, Last Name and Email. Your Username cannot be changed. 3- Click Save button to submit your changes. Note The email address changed must be in the valid format. See details about the Email Address format here. Changing your password¶ 1- Select the Change Password tab. 2- Input your current password to identify that you are the owner of this account. 3- Input your new password which must have at least 6 characters. 4- Re-enter your password in the Confirm New Password field. 5- Click Save button to accept your changes. Note The users who just did their login via the social networks will not have a password defined. They should be able to reset a password via their Account Settings or via the Forgot Password feature or ask the administrator to set it (in the Manage Community page). Once the password is set, the user can either log in via the login/password or via the social networks. When the reset password link is clicked: - An information message is displayed: Reset password guidelines have been sent to you. Please check your mailbox. - The Forgot Password function is executed, and the users receive an email to guide them to change their account password. If you forget your password, you can request the system to send you a link to reset it. The link will be sent to your email. It helps if you forget the username also, but it requires an email that is set in your account properly. - In Login screen, click Can’t access your account? link. - In next screen, input your username or email, then click Send. - Check your mailbox. The email looks like this: 4. Click the link in the email, then input your new password and click Save. If the password is saved successfully, a popup will notify you in seconds, then you are redirected to the Login screen. In case the link has been expired already, you will see a notification like this: The link expires as soon as you successfully reset the password, or after 1 day by default. The system administrators can configure the expiration time. Using the Activity Stream¶ - Sharing in the activity stream Steps to post status updates through the Activity Stream. - Sharing a news in the activity stream Steps to post a news in the Activity Stream. - The formatting toolbar in activity messages and comments This sections describes possible actions with the microblog toolbar. - Mentioning someone Steps to refer to someone in your activity composer or comment box. - Editing an activity Steps to refer to someone in your activity composer or comment box. - Liking activities Steps to show your reaction (like/unlike) towards an activity. - Deleting an activity Steps to remove activities from the Activity Stream. - Getting permalink of an activity Steps to get permanent link of an activity. - Commenting on activities Steps to comment on an activity that allows you to get ideas, answers, and any additional information. - Editing a comment Steps to edit a comment in the Activity Stream. - Liking comments Steps to express emotion (like or remove like) on a comment to an activity. - Replying to comments Steps to reply to a comment. - Deleting a comment Steps to remove a comment from the Activity Stream. - Getting permalink of a comment Steps to get permanent link of a comment. After logging in, you will be directed to the Intranet homepage as below. You can see activities of other users by clicking their display name to reach their profile page, then selecting Activity Stream. However, for people that are not in your connections, you only can view their activities but cannot post, comment or like on their activity streams. The homepage also aggregates activities from spaces, so you can keep track of their activities without visiting every space. For example, when there is a new post in a forum of a given space, it is displayed in Activity Stream of the space and of the Social Intranet homepage. You can filter what you want to see on the homepage: in, that you liked or where you left comments. To access your Activity Stream page, click your display name on the top navigation bar, then select My Activities. You will be then directed to your Activity Stream page. Note In Activity Stream, the order of activities is based on the last date when you create a publication action, or post a new comment. This means the last publication or comment will be auto-updated and pushed up to the top of the Activity Stream so that you will not miss any recent activities. Posting a news in the activity stream¶ From a space’s acivity stream, you are able to share a news with the space’s members. Publishing a news allows you to easily write, broadcast, pin and share communication content. Posting a simple news from short form¶ To post an article with other space’s members, click on News tab from the activity composer, the tab contains : - A Title field : Allows you to enter the news title. The title should not exceed 150 characters: Beyond that limit you will not be allowed to write. - A Content field : Allows you to enter the content of the News. No limitation for the number of characters. - A Pin article Check box. - A More icon with a tooltip “more options”: Permits to open the creation full form. - A Post button : Disabled by default until the two fields “Title” and “Content” are filled. Once all fields are filled, click on the Post button to post the News in the space’s activity stream. The article is shared into the space’s activity stream. Note The button is grey and unclickable until the mandatory fields title and content are filled. Posting a news from full form¶ The creation of a news since the composer is minimalist. More options are available from the full form. In order to access to this form, you have to display the small form as explained in the previous paragraph and click on “More options” button. The full form allows you in addition to the fields available in the simple form to add a summary to your article. This field is optional and will be displayed in the news preview if filled. If not it is the first three lines of article’s content that will be displayed in the news preview. The summary is limited to 1000 characters. It is also possible to upload optionally an image as an illustrative vignette of the article from the dedicated area. The image size must not exceed 10 Mo and the supported extensions are “.jpg”,”.jpeg”,”.png” and “.gif”. The plus icon of the simple form is replaced by a less icon with a tooltip “return to original post”. This icon allows you to return to the creation simple form without losing the changes made in the full form. As for the simple form, once all fields are filled, click on the Post button to post the News in the space’s activity stream. The article is shared into the space’s activity stream. You can access the content of the article either by clicking on its title or by clicking on “read more”. The details of the article are available on the current page and display all the information, including the publication date and the author. Editing a News¶ You can change the content of the article using the edit icon. The possible actions from the edit mode interface are Update, Update and post and Cancel. The buttons Update and Update and post are disabled until changes are made. The Update action allows you to make changes in the article without reposting it unlike the Update and post action that allows to apply the changes in the article details and rise up the article’s preview in the activity stream. When an article has been edited, the details view display the updated date and author besides initial information. Pinning a News in the home page¶ As a platform-wide publisher (publisher:/platform/users role required), you are allowed to pin any article to the home page. Pinning an article effectively publishes it from wherever it was originally posted to all users of the platform. Pin function is available at three locations described below: 1- Pinning a News from creation form : From simple and full creation forms, a “Pin article” checkbox is available. After filling in the article details, tick “Pin article” checkbox then click on “post” button. A confirmation message appears. After confirmation, the article is posted to the space’s activity stream and automatically published to the home page’s News block. 2- Pinning a News from the activity stream : the “pin article” function is also available from the three-dots menu of the article’s activity. When you choose this option, a confirmation popup appears: After confirmation, a success message appears: *3- Pinning a News from News details : To pin a News from details, you only need to click on the available pin icon. The action is successfully done after confirming it as for the two options above. You may display the home page to verify that the news is available in the appropriate block. Drafts management¶ When you start writing an article, a draft is automatically saved as long as you write or modify the information in the form. The information about the saving status is displayed in the simple and the complete form : You can access to your drafts from the complete creation form. A draft button is displayed indicating the number of drafts available for the space in which you are writing the article. To view all the drafts, click on the button. A drawer is displayed with the list of drafts and allows either to resume or delete each draft using the appropriate button. To delete a draft, you have to confirm this action: To continue writing in a draft, you have to click on the draft title or the resume icon. The content of the related draft will be displayed in the form and you can update the different information and post the article. The formatting toolbar in activity messages and comments¶ The formatting toolbar (or the microblog component) is present at every place where you can add text message. It allows you to: - format your text: bold, italic, numbered list, bullet list - quote a previous message. - insert a link in your status message/comment - insert an image in your status message/comment. Text formatting in the microblog¶ You can format your text to make it richer and more readable by using different effects. Select the text you want to format. Then click on one of the buttons from the formatting toolbar to apply its effect: The first button formats the text as bold. The second button formats the text as italic. The third button clears the existing format. Writing a text then clicking on the fourth button adds the text to a numbered list. Clicking on Enter button of the keyboard adds a new line with the following number. When the listing is finished, to exit from the numbered list, you should click twice on Enter button of the keyboard. Typing a text then clicking on that button adds a bullet list. When you finish your listing, you need to double click on Enter button of the keyboard. Quote text in the microblog¶ The formatting toolbar allows you to quote a previous text message. To do this, click on the Quote button and then copy and paste the text you want to quote. Double click on the Enter button on your keyboard to leave the quote area. Insert link in the microblog¶ To insert a link in your text message/comment, click on the link button to bring up a Link form . Type the text and link into this form. The text you type will appear in your message/comment and will redirect users to the inserted link. You can also link to text that has already been typed. Select the text, then click on the Link button . The Link form will appear with the Text field already completed. To finish, type the link. Note It is also possible to add a link by right-clicking in the text area then selecting Link. Insert image in the microblog¶ The last button of the formatting toolbar in the microblog is the Insert Image button allowing you to insert an image in your message/comment. To insert an image in your text message/comment, follow these steps: - Click on the Insert Image button to open the Select image form. You have four options: Drop an image: drag and drop an image from your computer. A progress bar will appear to indicate the upload progress. When the upload has ended, the image will appear in the dedicated area. Upload an image from your desktop: It allows you to select an image from your computer. Browse for the image and double-click on it to select. A progress bar will appear to indicate the upload progress. When the upload has ended, the image will appear in the dedicated area. Select on server: select an image already on the server from your drives. Clicking on the link opens the Select files form. Navigate through your drives and then select an image. This will be directly displayed in the dedicated area. Pick an image online: insert an image using its URL. Paste the image link into the Image URL field. An upload time will appear and the OK button will be greyed. When the upload has ended, the image will appear in the dedicated area and the OK button will become clickable. Note Click on the Cancel button to return to the screen showing the options. When picking an image online, click on the Back button. This button will disappear when the image is fully uploaded. - To choose the alignement you want, click on one of the three buttons. 3. Click on the OK button. The image will appear in the comment/message area. - To resize, hover over the image to bring up a black frame. Manipulate the frame to the size you want. - When you right click on the image, a contextual menu appears: Click on Copy followed by Paste to duplicate the image in the editor. lick on Cut followed by Paste to move the image to another location in the editor. Click on Change Image to open the Insert Image form prefilled with: - the image preview. - the image alignment as previously selected. - the Remove Image link allowing you to remove the image and start again. Click on Link to open the Link form allowing you to insert an image using its URL. Note After you’ve finished resizing the image and posted it in the activity stream, the image will appear with the exact size you defined. Otherwise it appears in its default size. Mentioning someone¶ Mention is a way to refer to people so that they are informed of who and what you are talking about. Mentioning someone is possible in activity stream composer, activities comments and also document comments. To mention someone, do as follows: 1. Type the “@” symbol into the activity/comment composer, then type the person name you want to mention. A suggestion list that contains matching characters will appear. Only one person can be selected at one time. Note When mentionning a user with “@”, it displays in first positions contacts in your connections, then other people 2. Go through the suggestion list with the “Up” and “Down” arrow keys or by moving your cursor over it, then click or hit the “Enter” key to validate your selected person. Note Only one person can be selected at one time. After being validated, “@” and following characters will be replaced with First name and Last name which are wrapped in a label. You can click [x] in the label to dismiss it. In the Activity Stream, the mention is displayed as a link to the mentioned user’s profile page. Note - You can do the same steps above to mention someone in your comments (document comments and activity comments). - The person you mention also sees the post in his/her Activity Stream. - Document comments appears also in the Activity Stream. Editing an activity¶ With eXo Platform you can edit an activity you posted. To edit an activity, proceed as follows: Click the pulldown menu on the top right of your activity : . Two entries appear: Edit and Delete. Click on Edit–> Your activity’s text appears in the editor area allowing you to make changes. Make the needed changes and then click on Updatebutton. Note The Update button remains disabled until at least one change is done. - If you click the Cancelbutton, your changes will be ignored. Note Edition is only possible on written text or inserted images added via the CKEditor toolbar. Attached images, files or link (added through the dedicated tab) can’t be edited. If the activity contains only attachments (link or files and/or images) the edit button opens the editor allowing you to type a text message. After saving the change you made on your activity, the activity creation timestamp will be updated by a new label under your name indicating the time of the last edit: If you mouseover the timestamp, a popover appears indicating the original time of activity post. Warning Activities automatically generated from other aplications such as: - activities¶ You can “Like” an activity to show your interest and support to that activity. Liking an activity¶ Click under the activity you like, a tooltip appears . When you like an activity, the “Like” button will be highlighted to show that you already click “Like” on that activity. The activity displays the information of like numbers or people who also like the activity right below it. If many people have liked the activity, you can click to expand the view to see other “likers”. Deleting an activity¶ You are allowed to delete your activities that you created, and those in your activity stream and in the space where your are the manager. Change the activity filter to All Activities or My spaces to view all of your activities. Click on the pulldown menu on the top right of your activity you want to delete. Two entries appear: Edit and Delete. Click on Deletebutton –> A confirmation pop up appears. Click Yesbutton in the confirmation message to accept your deletion. Note As an eXo Platform user, you can only delete your own activities. If you are manager of a space, you can delete any activity posted in your space. - If you click Cancelbutton, nothing happens. Getting permalink of an activity¶ You can easily get the link of any activity (either edited or not) from the activity stream to share with others. With this feature, you can bring the attention of other users to an activity/comment without the need to mention them. To get the permalink to an activity, just click on its timestamp. This permalink will then take you to the activity with all comments expanded. If the activity is edited, when you mouse over its timestamp, a tooltip appears displaying the original timestamp of the post. Commenting on activities¶ This action allows you to get ideas, answers, and any additional information when your collaborators respond to your status updates. Besides, you can comment by yourself about any activities as follows: - Click on the the activity you want to comment. 2. Enter your comment into the Comment box and press the Comment button. Your comment will be displayed right after the activity. Note - A formatting toolbar appears once you click in the comment composer. It allows you to change the formatting of your message, attaching images and links and preview how it will look once posted. (like what we have for the activity stream composer) - 2000 characters are allowed in the comment message. If you exceed them, the comment button turns to inactive status. When there are more than two comments on activity, 2 latest comments will be displayed below the activity. You can click “View all XX comments” (XX is the total number of comments) to view 10 more comments. If some comments left are not displayed yet, click View previous comments on the top of the comment part to view more. You can mention people in your comment by “@” symbol into your activity composer, then type the person name you want to mention. See Mentioning someone for more details. Editing a comment¶ Just like for activities, you can edit any comment you wrote. To edit one of your comments, proceed as follows: Click on the pulldown menu at the right of the comment box : . Just like for activities, two entries appear: Click on Edit–> Your comment’s text appears in the editor area allowing you to edit it. Edit your comment and then click on Updatebutton. Note The Update button remains disabled until you change the comment. - If you click the Cancelbutton, your changes will be ignored. Note You can change an inserted link/image to your comment. Like for activities, after saving the change you made to your comment, a text appears near your name indicating that an edit has been done: Warning - comments¶ With: - Remove connection to delete a user from your connections. - Cancel Request to cancel a user invitation. - Connect to send an invitation to a user or accept his invitation. Liking comments on documents preview¶. Replying to comments¶ In addition to Liking comments feature in eXo Platform, it is possible to reply to a comment. Under each comment, a Reply button appears allowing you to reply to that comment: When you click on the Reply link, a comment composer appears with your avatar just below the last reply if it exists: When you click on the comment composer to type your message, a rich text editor toolbar appears allowing you to format your text: When more than two replies are posted to a comment, the replies are collapsed and a link to View all X replies (X is the total number of replies) is displayed allowing to view the whole replies. Note Some other details about the reply to comment feature: - There is only one level of replies, it is the reply to comment. There is not a reply to a reply. - Deleting a comment with replies induces the replies deletion. - In addition to activity stream comments, the reply to comment feature is available for activities of these applications: Documents preview, forum and tasks. - Same as for comments, it is possible to like replies except in tasks application. Deleting a comment¶ You are allowed to delete your comments you wrote, and those in your activity stream and in the space where your are the manager. Click on the pulldown menu on the top right of your comment you want to delete. Two entries appear: Edit and Delete. Click on Deletebutton –> A confirmation pop up appears. Click Yesbutton in the confirmation message to accept your deletion. Note As an eXo Platform user, you can only delete your own comments. If you are manager of a space, you can delete any comment posted in your space. - If you click Cancelbutton, nothing happens. Getting permalink of a comment¶ Just like for activities, click on the timestamp of the comment to get its permalink. This permalink will then take you to the activity in which the comment is highlighted. Just like for edited activities, a tooltip appears when mousing over timestamp of edited comments to display the original timestamp of the comment. Social Intranet Homepage¶ This section introduces you to the Social Intranet homepage. Besides, you will further learn about the following topics:. Notification: Clicking will show all on-site notifications. See Managing your notifications for more details.. Left navigation : It is a hamburger menu which allows you to quickly jump to : Activity Composer & Activity Stream Applications Quickly perform key actions through the following applications: Creating content quickly¶ In eXo Platform, you easily create your preferred content without navigating to its relevant application. Simply click to open the drop-down Here, you can do the following actions quickly: Creating a task¶ After saving, a pop up link appears which points to the created task. Creating an event¶ A pop up appears indicating in which calendar the event was added. Creating a poll¶ If you select a space forum, you will be redirected to the Forums application of the selected space after clicking Next. If you select “intranet” which has more than 1 forum and then click Next, another new selection menu will be opened. The Next button now becomes disabled until you have selected one forum from the And Forum menu. Creating a topic¶ Click Topic from the drop-down menu. 2. Select the location where your topic is created from the In Location drop-down menu. The “intranet” is selected by default. If you select a space forum, you will be redirected to the Forums application of the selected space after clicking Next. After clicking Next, if you select “intranet” which has more than 1 forum, one new selection will be opened that requires you to select your desired forum as below. The Next button becomes disabled until you have selected one forum. Uploading a file¶ Simply select Upload a File from the drop-down menu. See Sharing a File for more details. Creating a Wiki page¶ Social Intranet applications¶ Intranet applications are ones which come with the Social Intranet homepage, including: Getting Started¶ The Getting Started application is displayed first in the list of the Intranet homepage applications on the top right. This application helps you start exploring the Social Intranet by suggesting you where to go and what you should do first via the following links: Clicking each link will direct you to the related page to do the action. After each action is performed, it will be remarked as completed with a strike-through even though it is not performed via this application. Also, the completion percentage is updated on the percentage bar. When all the actions are performed, the completion percentage will be 100%. You can remove this application from the homepage by clicking Close or by hovering your cursor over the application header, and click . Calendar¶ The Calendar application displays some calendars and all of their events and tasks scheduled in the Calendar applications of Intranet and spaces. When going to the homepage, you will see events with their start and end date and tasks of Today. You can also see the events and tasks of the previous/next day by clicking the previous/next arrow respectively. To view details of an event/task directly in the Calendar application, click your desired event/task. To configure and set which calendars to be displayed in the Calendar application, hover your cursor over the application, then click at the right bottom of the application. To remove a calendar from the list of Displayed Calendars, click . This removed calendar will appear in the list of Display Additional Calendar. To add one of removed calendars again to the list of Displayed Calendars, simply hover your cursor over the desired calendar, then click . You can use the Search box to filter calendars quickly. Click OK to accept your settings. Invitations¶ The Invitations application shows a list of spaces and users who have sent you connection requests. You can see the number of requests displayed next to the application name. You can accept/refuse an invitation by hovering your cursor over a user/space’s name, then clicking Accept or respectively. When the invitation is accepted or refused, it will permanently removed from the list. Suggestions¶ The Suggestions application suggests you to connect with other users or to join spaces. Usually, it suggests two people having the most common connections with you, and two spaces having the most members who are your connections. Otherwise, it will suggest the newest users or the latest created space in the portal. When the suggestion is accepted or refused, it will permanently removed from the list. Who’s Online?¶ The Who’s Online? application shows all users who are already logged in the portal. Hover your cursor over the avatar of an online user, a pop-up will show you some information about him, such as name, avatar, current position (if defined), and the last activity message of status activity, file or link sharing activity (if any). You can also see your connection status with an online user via the corresponding button at the pop-up bottom: If you are not connected with him yet, the Connect button is to send connection invitation to him. If you have sent a connection request, the Cancel Request button is to revoke your connection request. If you are invited to connect, the Confirm button is to accept his connection request. If you are already connected with him, the Remove Connection button is to delete connection between you and him. Changing the UI language¶ To change the language of eXo Platform, do as follows: 1. Click your display name on the top navigation bar, then select Change Language from the drop-down menu. 2. In the Interface Language Setting form, you will see 23 languages that eXo Platform supports. Select your preferred language to display, for instance English: Click Apply to commit your changes.
https://exo-documentation.readthedocs.io/en/latest/GettingStarted.html
2019-10-14T03:04:42
CC-MAIN-2019-43
1570986649035.4
[array(['_images/Unlock-termsentskin.jpg', 'image1'], dtype=object) array(['_images/Unlock-ACCOUNTSETUP.jpg', 'image2'], dtype=object) array(['_images/Unlock-greetings_entskin.jpg', 'image3'], dtype=object) array(['_images/account_settings.png', 'image21'], dtype=object) array(['_images/account_settings_form.png', 'image22'], dtype=object) array(['_images/change_password_form.png', 'image23'], dtype=object) array(['_images/social_networks_form.png', 'image24'], dtype=object) array(['_images/forgot_password_1.png', 'image25'], dtype=object) array(['_images/forgot_password_2.png', 'image26'], dtype=object) array(['_images/forgot_password_3.png', 'image27'], dtype=object) array(['_images/forgot_password_4.png', 'image28'], dtype=object) array(['_images/intranet_homepage.png', 'homepage'], dtype=object) array(['_images/activity_filter.png', 'filter'], dtype=object) array(['images/sharenews/composer_news.png', 'image156'], dtype=object) array(['_images/shared_news.png', 'image158'], dtype=object) array(['_images/More_options.png', 'image159'], dtype=object) array(['_images/complete_form.png', 'image160'], dtype=object) array(['_images/shared_news_cf.png', 'image161'], dtype=object) array(['_images/news_details2.png', 'image162'], dtype=object) array(['_images/edit_news.png', 'image163'], dtype=object) array(['_images/modified_news.png', 'image164'], dtype=object) array(['_images/pin_checkbox.png', 'image168'], dtype=object) array(['_images/confirm_pin.png', 'image169'], dtype=object) array(['_images/pin_activity.png', 'image170'], dtype=object) array(['_images/confirm_pin.png', 'image169'], dtype=object) array(['_images/success_pin.png', 'image171'], dtype=object) array(['_images/pin_details.png', 'image181'], dtype=object) array(['_images/bloc_news.png', 'image172'], dtype=object) array(['_images/saving_shortform.png', 'image173'], dtype=object) array(['_images/saved_shortform.png', 'image174'], dtype=object) array(['_images/saving_draft.png', 'image175'], dtype=object) array(['_images/saved_fullform.png', 'image176'], dtype=object) array(['_images/draft_button.png', 'image177'], dtype=object) array(['_images/drafts_drawer.png', 'image178'], dtype=object) array(['_images/confirm_delete.png', 'image179'], dtype=object) array(['_images/resume_article.png', 'image180'], dtype=object) array(['_images/formatting_toolbar_actions.png', 'toolbar'], dtype=object) array(['_images/quoted_message.png', 'image85'], dtype=object) array(['_images/Link_form.png', 'image87'], dtype=object) array(['_images/Link_form_text.png', 'image90'], dtype=object) array(['_images/select_image_form.png', 'image93'], dtype=object) array(['_images/alignement.png', 'image98'], dtype=object) array(['_images/image_added_in_area.png', 'image99'], dtype=object) array(['_images/resize_image.png', 'image100'], dtype=object) array(['_images/contextual_menu.png', 'image101'], dtype=object) array(['_images/people_suggestion_list.png', 'image102'], dtype=object) array(['_images/mention_people.png', 'image103'], dtype=object) array(['_images/click_mentioned_user.png', 'image104'], dtype=object) array(['_images/Edited_activity.png', 'image140'], dtype=object) array(['_images/original_time.png', 'image149'], dtype=object) array(['_images/like_info.png', 'image115'], dtype=object) array(['_images/timestamp.png', 'image131'], dtype=object) array(['_images/tooltip_edited-activity.png', 'image147'], dtype=object) array(['_images/comment_activity.png', 'image109'], dtype=object) array(['_images/more_comments.png', 'viewmore'], dtype=object) array(['_images/edit_comment_attachments.png', 'image144'], dtype=object) array(['_images/Edited_comment.png', 'image145'], dtype=object) array(['_images/people_who_liked_doc.png', 'image120'], dtype=object) array(['_images/reply_comment.png', 'image121'], dtype=object) array(['_images/reply_comment_area.png', 'image122'], dtype=object) array(['_images/reply_comment_area_CKeditor.png', 'image123'], dtype=object) array(['_images/more_replies.png', 'image124'], dtype=object) array(['_images/timestamp_comment.png', 'image134'], dtype=object) array(['_images/tooltip_edited-comment.png', 'image148'], dtype=object) array(['_images/4.png', 'image17'], dtype=object) array(['_images/5.png', 'image18'], dtype=object)]
exo-documentation.readthedocs.io
Contents Now Platform Administration Previous Topic Next Topic Task table modifications Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Task table modifications Modifications made to the Task table are applied to all child tables. Be sure that the changes being made are appropriate for all the child tables. Adding fields is a low-impact change, because the field can be hidden on tables that do not need it. However, if the field is being used across tables, deleting fields may cause unwanted data loss. Note: When adding choice list entries to a choice list on the Task table, make sure that the entry value is unique. You can use dictionary overrides to change some parts of a field definition in a way that does not to not apply to all child tables. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/london-platform-administration/page/administer/task-table/reference/r_TaskTableModifications.html
2019-10-14T03:51:03
CC-MAIN-2019-43
1570986649035.4
[]
docs.servicenow.com
New Line Charts We've added a new type of chart into Xara Cloud - Line charts! You can insert and use these charts immediately from the Insert > Charts menu. Chart Improvements We've made significant improvements to our charts by adding several more controls to change its appearance and movement. Here are some of the new handles will now show: For bar charts we've also added the height control, so you no longer have to resize the whole chart if you wish to change the height only. Direct PDF file open After much development and testing, we've now enabled opening PDF files directly from the file picker. You can now open, edit and export as PDF completely within Xara Cloud! This feature is still in beta so please feel free to give your feedback from the Intercom button at the bottom right corner. Shortcuts for SmartFields and Company Fonts. We've added buttons to the SmartFields and Fonts panel to easily jump into your Control Panel and add additional values or fonts: Bug fixes and improvements Fixed an issue where text was not being entered on the iPad. Fixed an issue where auto-correcting text caused an internal error. Fixed an issue where lines and arrows were not snapping to shapes. Fixed an issue when rotating a photo within a layout that's grouped. Fixed an issue when adding new rows to a table in Firefox. Fixed selection issue when dragging on a backpanel of a text panel Fixed an error when changing page selection. Fixed a styling issue within comments. Resetting UI settings will no longer remove recent documents. Improved error message when attempting to access a document you don't have permission to access. Improved error message when attempting to upload a non-supported font type. Improved notifications when a setting is changed in the control panel. Improved error message information when trying to access a document from a disconnected drive. Improved access to company settings from the company fonts and smartfields panel. Improved trial account "days left" tooltips. Improved coffee machine by descaling it. Various other small and server side fixes and improvements.
https://docs.xara.com/en/articles/3158315-product-update-for-18th-july-2019
2019-10-14T04:37:45
CC-MAIN-2019-43
1570986649035.4
[array(['https://downloads.intercomcdn.com/i/o/135210232/466dacaf8ef9df51f4391aae/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/135169626/3003dfd77c55dbbaae3a169b/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/135191562/682fd5030725f5f073f2cdd7/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/135219687/b4e5c087e6652333427cbcb1/image.png', None], dtype=object) ]
docs.xara.com
View Connection Server instances communicate in a Cloud Pod Architecture environment by using an interpod communication protocol called the View InterPod API (VIPA). View Connection Server instances use the VIPA interpod communication channel to launch new desktops, find existing desktops, and share health status data and other information. View configures the VIPA interpod communications channel when you initialize the Cloud Pod Architecture feature.
https://docs.vmware.com/en/VMware-Horizon-6/6.0/com.vmware.horizon-view.cloudpodarchitecture.doc/GUID-4EF6E676-9F9B-494A-8EE0-7A06D29885E1.html
2018-07-16T01:13:57
CC-MAIN-2018-30
1531676589029.26
[]
docs.vmware.com
Show Contents List JSON (JavaScript Object Notation) is a simple, text-based data exchange format. JSON is human readable and contains just the minimum amount of text to describe the data. RESTful web services are generally JSON-based which is why we need to be able to construct and parse JSON-formatted data. There are a couple of classes that you can choose from to work with JSON data. They are: See Working with JSON Data. Show Contents List
https://docs.lansa.com/14/en/lansa004/content/lansa/lansa004sp2_0045.htm
2018-07-16T00:46:05
CC-MAIN-2018-30
1531676589029.26
[]
docs.lansa.com