content (string, lengths 0–557k) | url (string, lengths 16–1.78k) | timestamp (timestamp[ms]) | dump (string, lengths 9–15) | segment (string, lengths 13–17) | image_urls (string, lengths 2–55.5k) | netloc (string, lengths 7–77)
---|---|---|---|---|---|---
Custom Classifier Metrics
Amazon Comprehend provides you with metrics to help you estimate how well a custom classifier should work for your job. They are based on training the classifier model, and so while they accurately represent the performance of the model during training, they are only an approximation of the API performance during classification.
Metrics are included any time metadata from a trained custom classifier is returned.
Please refer to Metrics: Precision, Recall, and FScore.
Amazon Comprehend creates a Confusion Matrix as part of the custom classifier model training. This is placed in the output file specified in the CreateDocumentClassifier operation and can be used to assess how well the model works.
We also support the following metrics:
Accuracy
Precision (Macro Precision)
Recall (Macro Recall)
F1 Score (Macro F1 Score)
Hamming Loss
Micro Precision
Micro Recall
Micro F1 Score
These can be seen on the Classifier Details page in the console.
Accuracy
Accuracy indicates the percentage of labels from the test data that are predicted exactly right by the model. In other words, this is the fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.
For example: the accuracy is the number of correctly predicted labels divided by the total number of test samples = 5/7 = 0.714, or 71.4%.
Precision (Macro Precision)
Precision is a measure of the usefulness of the classifier results in the test data. It's defined as the number of documents correctly classified, divided by the total number of classifications for the class. High precision means that the classifier returned substantially more relevant results than irrelevant ones.
The Precision metric is also known as Macro Precision.
This is demonstrated in the following test set:
The Precision (Macro Precision) metric for the model is therefore:
Macro Precision = (0.75 + 0.80 + 0.90 + 0.50 + 0.40)/5 = 0.67
Recall (Macro Recall)
This indicates the percentage of correct categories in your text that the model can predict. This metric comes from averaging the recall scores of all available labels. Recall is a measure of how complete the classifier results are for the test data.
High recall means that the classifier returned most of the relevant results.
The Recall metric is also known as Macro Recall.
This is demonstrated in the following test set:
The Recall (Macro Recall) metric for the model is therefore:
Macro Recall = (0.70 + 0.70 + 0.98 + 0.80 + 0.10)/5 = 0.656
F1 Score (Macro F1 Score)
The F1 score is the harmonic mean of the Precision and Recall metrics. Because it is based on the macro-averaged Precision and Recall values, it is also known as the Macro F1 score. It is a measure of how accurate the classifier results are for the test data. The best possible score is 1, and the worst is 0.
This is demonstrated in the following test set:
The F1 Score (Macro F1 Score) for the model is therefore as follows:
Macro F1 Score = (0.724 + 0.824 + 0.94 + 0.62 + 0.16)/5 = 0.6536
Hamming Loss
The fraction of labels that are incorrectly predicted. Also seen as the fraction of wrong labels compared to the total number of labels. Scores closer to zero are better.
Micro Precision
As Precision above, except that instead of averaging the precision scores of all available labels, Micro Precision is computed globally, from the totals of correct and incorrect predictions summed across all labels.
Micro Recall
As Recall above, except that instead of averaging the recall scores of all labels, Micro Recall is computed globally, from the totals summed across all labels.
Micro F1 Score
As F1 Score above, but instead a combination of the Micro Precision and Micro Recall metrics.
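As a rough cross-check of how these aggregate metrics relate to per-label scores, the sketch below computes them with scikit-learn on a small, invented set of predictions (the labels and predictions are made up for illustration; Amazon Comprehend computes the real values from your test documents during training):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, hamming_loss)

# Invented ground-truth labels and model predictions for 7 test documents.
y_true = ["A", "A", "B", "B", "C", "C", "C"]
y_pred = ["A", "B", "B", "B", "C", "C", "A"]

print("Accuracy:       ", accuracy_score(y_true, y_pred))            # 5/7 = 0.714
print("Macro precision:", precision_score(y_true, y_pred, average="macro"))
print("Macro recall:   ", recall_score(y_true, y_pred, average="macro"))
print("Macro F1:       ", f1_score(y_true, y_pred, average="macro"))
print("Micro precision:", precision_score(y_true, y_pred, average="micro"))
print("Micro recall:   ", recall_score(y_true, y_pred, average="micro"))
print("Micro F1:       ", f1_score(y_true, y_pred, average="micro"))
print("Hamming loss:   ", hamming_loss(y_true, y_pred))              # fraction of wrong labels
```

The macro variants average per-label scores, while the micro variants pool counts across labels before computing the score, which is why the two can differ when the label distribution is skewed.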
Improving Your Custom Classifier's Performance
The metrics provide an insight into how your custom classifier will perform during a classification job. If the metrics are low, it's very likely that the classification model might not be effective for your use case. If this happens, you have several options to improve your classifier performance.
In your training data, provide more concrete data that can easily separate the categories. For example, provide documents that can best represent the label in terms of unique words/sentences.
Add more data for under-represented labels in your training data.
Try to reduce skew in the categories. If the largest label in your data has more than 10 times the documents of the smallest label, try to increase the number of documents in the smallest label and get the skew ratio down to at most 10:1 between the most and least represented classes. You can also try removing a few documents from the highly represented classes. A quick way to estimate the skew ratio is sketched below.
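The sketch assumes a hypothetical CSV training file whose first column is the label (adjust the parsing to your actual training data format):

```python
from collections import Counter

# Hypothetical layout: one "LABEL,document text" row per line.
with open("train.csv", encoding="utf-8") as f:
    counts = Counter(line.split(",", 1)[0] for line in f if line.strip())

largest, smallest = max(counts.values()), min(counts.values())
print(counts)
print("skew ratio %.1f : 1" % (largest / smallest))
if largest / smallest > 10:
    print("Consider adding documents to the smallest classes (or trimming the largest).")
```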
Confusion Matrix
A confusion matrix can give a very good indication on the classes for which adding more data would help model performance. A higher fraction of samples for a label shown along the diagonal of the matrix shows that the classifier is able to classify that label more accurately. If this number is lower (if the label class has a higher fraction of its samples in the non-diagonal portion of the matrix), you can try to add more samples. For example, if 40 percent of label A samples are classified as label D, adding more samples for both label A and label D will enhance the performance of the classifier. For more information, see Confusion Matrix.
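To act on that advice programmatically, you can normalize a confusion matrix by its row totals and look for large off-diagonal fractions. A small sketch with invented labels (not actual Comprehend output, which is written to the output file as described above):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

labels = ["A", "B", "C", "D"]
y_true = ["A", "A", "A", "A", "A", "B", "C", "D", "D"]
y_pred = ["A", "A", "A", "D", "D", "B", "C", "D", "D"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
fractions = cm / cm.sum(axis=1, keepdims=True)  # row i shows how true label i was classified

print(np.round(fractions, 2))
# Row "A" here has 0.4 in the "D" column: 40 percent of label A samples were
# classified as label D, so adding samples for both A and D should help.
```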
| https://docs.aws.amazon.com/comprehend/latest/dg/cer-doc-class.html | 2020-08-03T12:45:13 | CC-MAIN-2020-34 | 1596439735810.18 | [images/classifierperformance.png: "Custom Classifier Metrics"] | docs.aws.amazon.com |
HTTPS Settings
After the initial startup, the Fiddler Everywhere application captures only non-secure (HTTP) traffic; SSL/TLS traffic is not captured. To enable capturing and decrypting HTTPS traffic, you need to explicitly install a root trust certificate via the HTTPS submenu in Settings.
Trust Root Certificate
The button installs and trusts the Fiddler root certificate (macOS and Windows only).
Capture HTTPS traffic
The option defines whether Fiddler captures HTTPS traffic or skips it. It is inactive by default. To activate it, the root certificate must be trusted first.
Export Root Certificate to Desktop and Trust Certificate
Expand the Advanced Settings drop-down to show the Export Root Certificate to Desktop and Trust Certificate button. Click the button to export the Fiddler root certificate to the Desktop folder for manual import and trusting of the certificate.
| https://docs.telerik.com/fiddler-everywhere/user-guide/settings/https | 2020-08-03T12:38:43 | CC-MAIN-2020-34 | 1596439735810.18 | [../../images/settings/settings-https.png: "default https settings"] | docs.telerik.com |
Sales Questions: General Sales Questions
| https://docs.brekeke.com/sales/ | 2021-07-24T01:53:45 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.brekeke.com |
Demonstration: Adding A Rule to the Rule Set
We will now begin to configure a routing rule for the new router's Rule Set. Our rules will forward only messages matching the following criteria to the TutorialFileOperation:
The message's document type is 2.3:ORM_O01.
The MSH:ReceivingApplication field (MSH:5) contains "PHARMACY"
Follow these steps to start configuring the routing rule set:
Click TutorialMsgRouter on the production diagram and then click the magnifying glass next to the Business Rule Name under Basic Settings.
The Rule Set already contains a rule. Add the following constraint to the rule. Note that double clicking the constraint box on the diagram launches the Rule Constraint Editor.
Schema Category: 2.3
Document Name: ORM_O01
Click OK
The updated rule looks like the following:
Refer to Creating a Business Operation if you have not completed the earlier tutorial.
| https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=THL7_MessageRouters_8 | 2021-07-24T02:17:21 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.intersystems.com |
To remove an existing DataKeeper Resource/Mirror from LifeKeeper/SPS:
- Without impacting the existing resources in LifeKeeper/SPS, i.e. File Shares, Oracle, SQL, DNS
- Note that with LifeKeeper/SPS, all administration takes place at the LifeKeeper/SPS level.
Steps to Remove:
In LifeKeeper/SPS:
In the LifeKeeper GUI console, remove the dependency that the Volume is associated with:
- Right click on your parent level resource and select Remove Dependency…
- Select the appropriate Source Server and click Next
- Under Child Resource, select your Volume from the drop down list and click Next
- Select Remove Dependency when the dialog pertaining to the Parent/Child dependency appears, then select Done
Now the Volume Resource is listed as a standalone hierarchy. To remove:
- Right click on Volume/Volume Hierarchy and select Delete Resource Hierarchy . . .
- Select the Target Server, then select Next
- Select Delete when the dialog pertaining to Volume Hierarchy and Target Server removal appears, then select Done
The DataKeeper Storage is no longer a resource in LifeKeeper/SPS. Next, remove the mirror in DataKeeper:
- In the DataKeeper UI > Action Panel, select Delete Job
- Select Yes when prompted “Are you sure you want to delete the ‘Volume (drive)’ and its mirror?”
If you have multiple mirrors/targets, select Delete Mirror and the jobs will be deleted also
| https://docs.us.sios.com/sps/8.7.1/en/topic/remove-datakeeper-storage-from-lifekeeper | 2021-07-24T01:12:29 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.us.sios.com |
A Receipt is created each time a deposit is taken or a credit card transaction is processed. If you are set up to use online processing of deposits, then receipts are automatically issued against the booking in the amount specified in the deposit. Clicking the Receipt link shows Receipt Details, including an authorization and transaction ID when using a credit card gateway with the POS Module.
If you wish to use the features for adding Receipts, both credits and debits, to bookings, make sure you upgrade to the Channel Management Tool Pro product. To see if this upgrade is right for you, contact BookingCenter to upgrade. In addition to offering the capability for managing credits and debits, the Channel Management Tool Pro provides a comprehensive Letters feature that allows unlimited customized communication, via printer, email, fax, or PDF, for sending correspondence to Guests associated with bookings, as well as a full suite of Reports to better manage the system.
| https://docs.bookingcenter.com/display/MTOOL/Receipt+Details | 2019-07-16T02:43:41 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.bookingcenter.com |
Xbox Live documentation
Xbox Live is a premier gaming network that connects millions of gamers across the world. You can add Xbox Live to your Windows 10 PC, Xbox One, or mobile game in order to take advantage of the Xbox Live features and services.
GDC Announcements ⬀
Visit us at GDC to find out the best ways to host your game in the cloud, operate your live titles, or reach the most passionate and engaged gamers on the planet.
Getting Started
Join a developer program, create a game app at Partner Center, add the Xbox Live SDK to your IDE, and write basic sign-in code.
Features
Add Xbox Live features to your game, such as Identity, Social features, Achievements, Cloud Storage, Multiplayer features, and External Services.
Testing and Releasing
Test, troubleshoot, and publish a game.
API Reference
Xbox Live API reference, including Xbox Services API (XSAPI), WinRT, Xbox Authentication Library (XAL), XAsync, and RESTful APIs.
| https://docs.microsoft.com/en-us/gaming/xbox-live/index | 2019-07-16T02:23:02 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
GlobalUnlock function
Decrements the lock count associated with a memory object that was allocated with GMEM_MOVEABLE. This function has no effect on memory objects allocated with GMEM_FIXED.
Syntax
BOOL GlobalUnlock( HGLOBAL hMem );
Parameters
hMem
A handle to the global memory object. This handle is returned by either the GlobalAlloc or GlobalReAlloc function.
Return Value
If the memory object is still locked after decrementing the lock count, the return value is a nonzero value. The GlobalLock function increments the count by one, and GlobalUnlock decrements the count by one. For each call that a process makes to GlobalLock for an object, it must eventually call GlobalUnlock. If the specified memory block is fixed memory, this function returns TRUE.
If the memory object is already unlocked, GlobalUnlock returns FALSE and GetLastError reports ERROR_NOT_LOCKED.
A process should not rely on the return value to determine the number of times it must subsequently call GlobalUnlock for a memory object.
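For illustration only, here is a minimal Python ctypes sketch (not part of this reference page) showing the GlobalLock/GlobalUnlock pairing described above. It assumes a Windows machine, and the 256-byte allocation size is arbitrary:

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

kernel32.GlobalAlloc.restype = wintypes.HGLOBAL
kernel32.GlobalAlloc.argtypes = [wintypes.UINT, ctypes.c_size_t]
kernel32.GlobalLock.restype = wintypes.LPVOID
kernel32.GlobalLock.argtypes = [wintypes.HGLOBAL]
kernel32.GlobalUnlock.restype = wintypes.BOOL
kernel32.GlobalUnlock.argtypes = [wintypes.HGLOBAL]
kernel32.GlobalFree.restype = wintypes.HGLOBAL
kernel32.GlobalFree.argtypes = [wintypes.HGLOBAL]

GMEM_MOVEABLE = 0x0002
ERROR_NOT_LOCKED = 158

hmem = kernel32.GlobalAlloc(GMEM_MOVEABLE, 256)  # moveable block, lock count 0
ptr = kernel32.GlobalLock(hmem)                  # lock count -> 1, returns pointer

# ... read or write the memory through ptr ...

# Pair every GlobalLock with a GlobalUnlock. A FALSE return is normal once
# the lock count reaches zero; ERROR_NOT_LOCKED means it was already unlocked.
if not kernel32.GlobalUnlock(hmem):
    err = ctypes.get_last_error()
    if err not in (0, ERROR_NOT_LOCKED):
        raise ctypes.WinError(err)

kernel32.GlobalFree(hmem)                        # release the object
```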
Requirements
See Also
Global and Local Functions
Memory Management Functions
| https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-globalunlock | 2019-07-16T02:13:49 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
Overview of creating services in ITSI
This topic provides an overview of how to create services for your Splunk IT Service Intelligence (ITSI) deployment. For instructions on configuring services, see Configure services in ITSI in this manual.
What is an ITSI service?
An ITSI service is a representation of a real-world IT service that lets you monitor the health of IT systems and business processes. For more information on ITSI services, see ITSI concepts and features in this manual.
After you create and configure a service, you can use ITSI to monitor service health, perform root-cause analysis, set up threshold-based alerts, and track compliance with organizational SLAs (service-level agreements).
Services must be assigned to a team. A service is only visible for roles that are assigned read permissions to the team. A service can only be edited by roles that are assigned read and write permissions to the team. If your organization has decided not to create private teams, all services will reside in the Global team. For information about teams, see ITSI service-level permissions in this manual.
Note: Before you create a service, it is a best practice to define the entities that you want the service to contain. You can then add entities to the service when you configure the service. For more information, see Define entities in ITSI in this manual.
How to create services
There are three ways to create services:
- Create a single service in ITSI
- Create new services one at a time in the UI. You can use service templates to quickly configure services.
- Import from CSV
- Import new services and link them to service templates from a CSV file. This method lets you import a hierarchy of dependent services with entities already associated. You can also create a modular input that runs automated recurring imports of the CSV file contents.
- Import from search
- Add services and link services to service templates from an ITSI module, saved search, or ad hoc search.
| https://docs.splunk.com/Documentation/ITSI/4.1.0/Configure/CreateService | 2019-07-16T02:43:45 | CC-MAIN-2019-30 | 1563195524475.48 | [/skins/OxfordComma/images/acrobat-logo.png] | docs.splunk.com |
A meta task that compiles source files.
It simply runs the compilers registered in your project and returns a tuple with the compilation status and a list of diagnostics.
:compilers - compilers to run, defaults to Mix.compilers/0, which are [:yecc, :leex, :erlang, :elixir, :xref, ...]
--list - lists all enabled compilers
--no-archives-check - skips checking of archives
--no-deps-check - skips checking of dependencies
--no-protocol-consolidation - skips protocol consolidation
--force - forces compilation
--return-errors - returns error status and diagnostics instead of exiting on error
--erl-config - path to an Erlang term file that will be loaded as mix config
Returns all compilers.
Returns manifests for all compilers.
Receives command-line arguments and performs compilation. If it produces errors, warnings, or any other diagnostic information, it should return a tuple with the status and a list of diagnostics.
Callback implementation for Mix.Task.Compiler.run/1.
© 2012 Plataformatec
Licensed under the Apache License, Version 2.0.
| https://docs.w3cub.com/elixir~1.7/mix.tasks.compile/ | 2019-07-16T01:53:52 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.w3cub.com |
We've covered a number of fundamental Subversion concepts in this chapter:
We've introduced the notions of the central repository, the client working copy, and the array of repository revision trees.
We've seen some simple examples of how two collaborators can use Subversion to publish and receive changes from one another, using the “copy-modify-merge” model.
We've talked a bit about the way Subversion tracks and manages information in a working copy.
At this point, you should have a good idea of how Subversion works in the most general sense. Armed with this knowledge, you should now be ready to move into the next chapter, which is a detailed tour of Subversion's commands and features.
| https://docs.huihoo.com/subversion/1.4/svn.basic.summary.html | 2019-07-16T02:12:31 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.huihoo.com |
UIPageControl
UIPageControl.AppearanceWhenContainedIn(Type[]) Method
Definition
Returns a strongly typed UIAppearance for instances of this class when the view is hosted in the specified hierarchy.
public static UIKit.UIPageControl.UIPageControlAppearance AppearanceWhenContainedIn (params Type[] containers);
static member AppearanceWhenContainedIn : Type[] -> UIKit.UIPageControl.UIPageControlAppearance
Returns
The appearance proxy object for instances of UIPageControl when those instances are contained in the hierarchy specified by the containers parameter.
Remarks
If developers want to control the appearance of subclasses of UIPageControl, they should use the generic GetAppearance<T>(params Type[]) method instead.
| https://docs.microsoft.com/en-us/dotnet/api/uikit.uipagecontrol.appearancewhencontainedin?view=xamarin-ios-sdk-12 | 2019-07-16T03:10:38 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
Updatable Subscriptions for Transactional Replication
Transactional replication supports updates at Subscribers through updatable subscriptions and peer-to-peer replication. The following are the two types of updatable subscriptions:
- Immediate updating. The Publisher and Subscriber must be connected to update data at the Subscriber.
- Queued updating. The Publisher and Subscriber do not have to be connected to update data at the Subscriber. Updates can be made while the Subscriber or Publisher is offline.
When data is updated at a Subscriber, it is first propagated to the Publisher and then propagated to other Subscribers. If immediate updating is used, the changes are propagated immediately using the two-phase commit protocol. If queued updating is used, the changes are stored in a queue and applied at the Publisher when connectivity is available.
To enable updating subscriptions for transactional publications
- SQL Server Management Studio: How to: Enable Updating Subscriptions for Transactional Publications (SQL Server Management Studio)
- Replication Transact-SQL programming: How to: Enable Updating Subscriptions for Transactional Publications (Replication Transact-SQL Programming)
To create updatable subscriptions for transactional publications
- SQL Server Management Studio: How to: Create an Updatable Subscription to a Transactional Publication (SQL Server Management Studio)
- Replication Transact-SQL programming: How to: Create an Updatable Subscription to a Transactional Publication (Replication Transact-SQL Programming)
Switching Between Update Modes
Note
Replication does not switch automatically between update modes. You must set the update mode through SQL Server Management Studio, or your application must call sp_setreplfailovermode (Transact-SQL) to switch between modes.
- SQL Server Management Studio: How to: Switch Between Update Modes for an Updatable Transactional Subscription (SQL Server Management Studio)
- Replication Transact-SQL programming: How to: Switch Between Updating Modes for an Updating Transactional Subscription (Replication Transact-SQL Programming)
Considerations for Using Updatable Subscriptions
General Considerations
- Updating subscriptions are supported for Subscribers running Microsoft SQL Server 2000 SP3 and later. If you used immediate updating subscriptions on a Subscriber running SQL Server version 7.0 and are upgrading to SQL Server 2005, you must drop and re-create the subscriptions.
- After a publication is enabled for updating subscriptions or queued updating subscriptions, the option cannot be disabled for the publication (although subscriptions do not need to use it). To disable the option, the publication must be deleted and a new one created.
- Republishing data is not supported.
- Replication adds the msrepl_tran_version column to published tables for tracking purposes. Because of this additional column, all INSERT statements should include a column list.
- To make schema changes on a table in a publication that supports updating subscriptions, all activity on the table must be stopped at the Publisher and Subscribers, and pending data changes must be propagated to all nodes before making any schema changes. This ensures that outstanding transactions do not conflict with the pending schema change. After the schema changes have propagated to all nodes, activity can resume on the published tables. For more information, see How to: Quiesce a Replication Topology (Replication Transact-SQL Programming).
- If you plan to switch between update modes, the Queue Reader Agent must run at least once after the subscription has been initialized (by default, the Queue Reader Agent runs continuously).
- If the Subscriber database is partitioned horizontally and there are rows in the partition that exist at the Subscriber, but not at the Publisher, the Subscriber cannot update the preexisting rows. Attempting to update these rows returns an error. The rows should be deleted from the table and then added at the Publisher.
Updates at the Subscriber
- Updates at the Subscriber are propagated to the Publisher even if a subscription is expired or is inactive. Ensure that any such subscriptions are either dropped or reinitialized.
- If TIMESTAMP or IDENTITY columns are used, and they are replicated as their base data types, values in these columns should not be updated at the Subscriber.
- Subscribers cannot update or insert text, ntext or image values because it is not possible to read from the inserted or deleted tables inside the replication change-tracking triggers. Similarly, Subscribers cannot update or insert text or image values using WRITETEXT or UPDATETEXT because the data is overwritten by the Publisher. Instead, you could partition the text and image columns into a separate table and modify the two tables within a transaction.
To update large objects at a Subscriber, use the data types varchar(max), nvarchar(max), varbinary(max) instead of text, ntext, and image data types, respectively.
- Updates to unique keys (including primary keys) that generate duplicates (for example, an update of the form UPDATE <table> SET <column> = <column> + 1) are not allowed and will be rejected because of a uniqueness violation. This is because set updates made at the Subscriber are propagated by replication as individual UPDATE statements for each row affected.
- If the Subscriber database is partitioned horizontally and there are rows in the partition that exist at the Subscriber but not at the Publisher, the Subscriber cannot update the pre-existing rows. Attempting to update these rows returns an error. The rows should be deleted from the table and inserted again.
User-Defined Triggers
- If the application requires triggers at the Subscriber, the triggers should be defined with the NOT FOR REPLICATION option at the Publisher and Subscriber. For more information about this option, see Controlling Constraints, Identities, and Triggers with NOT FOR REPLICATION. This ensures that triggers fire only for the original data change, but not when that change is replicated.
- Ensure that the user-defined trigger does not fire when the replication trigger updates the table. This is accomplished by calling the procedure sp_check_for_sync_trigger in the body of the user-defined trigger. For more information, see sp_check_for_sync_trigger (Transact-SQL).
Immediate Updating
- For immediate updating subscriptions, changes at the Subscriber are propagated to the Publisher and applied using Microsoft Distributed Transaction Coordinator (MS DTC). Ensure that MS DTC is installed and configured at the Publisher and Subscriber. For more information, see the Windows documentation.
- The triggers used by immediate updating subscriptions require a connection to the Publisher to replicate changes. For information about securing this connection, see Security Considerations for Updating Subscriptions.
- If the publication allows immediate updating subscriptions and an article in the publication has a column filter, you cannot filter out non-nullable columns without defaults.
Queued Updating
- Tables included in a merge publication cannot also be published as part of a transactional publication that allows queued updating subscriptions.
- Updates made to primary key columns are not recommended when using queued updating because the primary key is used as a record locator for all queries. When the conflict resolution policy is set to Subscriber Wins, updates to primary keys should be made with caution. If updates to the primary key are made at both the Publisher and at the Subscriber, the result will be two rows with different primary keys.
- For columns of data type SQL_VARIANT: when data is inserted or updated at the Subscriber, it is mapped in the following way by the Queue Reader Agent when it is copied from the Subscriber to the queue:
- BIGINT, DECIMAL, NUMERIC, MONEY, and SMALLMONEY are mapped to NUMERIC.
- BINARY and VARBINARY are mapped to VARBINARY data.
Conflict Detection and Resolution
- For the Subscriber Wins conflict policy: conflict resolution is not supported for updates to primary key columns.
- Conflicts due to foreign key constraint failures are not resolved by replication:
- If conflicts are not expected and data is well partitioned (Subscribers do not update the same rows), you can use foreign key constraints on the Publisher and Subscribers.
- If conflicts are expected: you should not use foreign key constraints at the Publisher or Subscriber if you use "Subscriber wins" conflict resolution; you should not use foreign key constraints at the Subscriber if you use "Publisher wins" conflict resolution.
See Also
Concepts
Queued Updating Conflict Detection and Resolution
Peer-to-Peer Transactional Replication
Publication Types for Transactional Replication
Publishing Data and Database Objects
Security Considerations for Updating Subscriptions
Subscribing to Publications
Help and Information
Getting SQL Server 2005 Assistance
| https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms151718(v=sql.90) | 2019-07-16T03:44:04 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
Visual Basic for Applications Reference
Input # Statement Example
This example uses the Input # statement to read data from a file into two variables. It assumes that TESTFILE is a text file containing a few lines of sample data.
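The VB code listing itself is not included in this extract. Purely as a loose analogue (Python rather than VBA, with an invented comma-delimited file layout), reading a string and a number per line from TESTFILE might look like this:

```python
# Loose analogue of the Input # example: read a string and a number per line.
with open("TESTFILE") as f:
    for line in f:
        text_value, number_value = line.rstrip("\n").split(",", 1)
        print(text_value.strip('"'), float(number_value))
```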
| https://docs.microsoft.com/en-us/previous-versions/visualstudio/aa243387(v%3Dvs.60) | 2019-07-16T03:40:11 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
Make an External List from a SQL Azure table with Business Connectivity Services and Secure Store
As a SharePoint or global admin in Office 365, you can use services in SharePoint Online to access data from a Microsoft SQL Azure database. Because SQL Azure is a cloud-based relational database technology, the connection works completely in the cloud. This article describes how to use SharePoint technologies to access data from a SQL Azure database without having to write code.
To use data from a SQL Azure database, you have to create an External List by using Business Connectivity Services (BCS) and Secure Store. BCS connects SharePoint solutions to external data, and Secure Store enables user authentication for the data. By using an External List, you can display the contents of a table from SQL Azure in SharePoint Online. Users can read, edit, and update the data, all in SharePoint Online.
For more information about how to use BCS to use external data, see Introduction to external data.
SQL Azure databases are cloud-based relational databases that are created by using SQL Server technology. To learn how to get started with these databases, see Getting Started with Microsoft Azure SQL Database Using the Microsoft Azure Platform Management Portal.
Overview of steps in the process
To create an External List that enables you to access data from SQL Azure, you have to complete a sequence of separate steps.
The following table lists the steps and the required software for that step.
How BCS and Secure Store work together
Business Connectivity Services (BCS) connects to data in an external data store. You can display the data in an External List, and maintain the data elsewhere. BCS enables you to connect SharePoint solutions to two kinds of resources:
A SQL Azure database
A WCF web service that acts as an end-point for some other kind of data store
In SharePoint Online, BCS enables you to access an external data source by using the Secure Store. Secure Store keeps encrypted copies of credentials. It enables a SharePoint admin to associate a SharePoint group that uses a single SQL Azure account that can access the target database. When a SharePoint user browses the data in the External List, Secure Store uses the associated SQL Azure account to request the data from SQL.
To make this possible, SharePoint queries the External Content Type for that list in the BDC metadata store. The query asks for the following information: how to access the external system, which operations are supported, and what credentials to use.
The BDC service runtime then sends the request to the external data source and returns the results to SharePoint.
Step 1: Set permissions on the BCS Metadata store
To do this step, follow the procedure in Set permissions on the BCS Metadata Store for a Business Connectivity Services on-premises solution in SharePoint 2013.
When you finish the steps in that procedure, return to this page and start Step 2: Create a Secure Store credentials mapping.
Step 2: Create a Secure Store credentials mapping
Typically, when you create a credentials mapping in Secure Store, you map multiple SharePoint users to a single SQL Azure account. You might use a SharePoint group, or just list all the user names. The SQL Azure account has appropriate permissions to access the target database table. The database that you target in SQL Azure is known as the Secure Store Target Application, or just the Target Application.
Tip
Make sure that you have SQL Azure credentials ready. You'll use these credentials when you create the mapping between SharePoint users and a SQL Azure account.
Create the Secure Store Target Application
To create a Secure Store Target Application, follow these steps on the Secure Store Service page in the SharePoint admin center.
On the ribbon, select New to open the page where you can specify settings for a Target Application.
In the Target Application Settings section, do the following:
Under Target Application ID, specify a value for a unique ID. This ID maps the External Content type to credentials that are required to authenticate the user. You cannot change the Target Application ID once you create the Target Application.
Under Display Name, specify a user-friendly name for referring to the Target Application.
Under Contact E-mail, specify the e-mail address that you want people to use when they have a question about the Target Application (external data system).
Under Target Application Type, verify that the value is set to Group Restricted. Group Restricted means that the Secure Store contains a mapping that connects a group of SharePoint users to a single, external data account that can act on their behalf. In addition, a Group Restricted application type is restricted to the specified external data system.
In Credential Fields section, enter the field names that you want to use for the user name and password of the external data system. By default, the Secure Store uses the Windows User Name and Windows Password. We recommend that you accept these values. You cannot edit these Field Types after you finish creating the application.
In the Target Application Administrators section, in the Target Application Administrators field, enter the name of a group or a list of users who can edit this Target Application. You can also search for the name of a group in Microsoft Online Directory Server. Typically, this section contains the name of the SharePoint or global admin.
In the Members section, in the Members field enter the name of the group that will use the Target Application. Generally, this is a group from the Microsoft Online Directory Service (MSODS).
If you are a global administrator, you can create groups in MSODS in the Microsoft 365 admin center.
Select OK to create the Target Application and return to the Secure Store Service page. Then select the Target Application, choose Set Credentials, and enter the credentials of the SQL Azure account; by default, the user name field is Windows User Name and the password field is Windows Password.
Important
Keep a secure record of this information. After you set these credentials, an administrator cannot retrieve them.
Step 3: Create the External Content Type
You can create an External Content Type (ECT) by using Microsoft Visual Studio, or by using Microsoft SharePoint Designer 2010. This procedure describes how to create an ECT in SharePoint Designer 2010. Microsoft SharePoint Designer 2010 is available as a free download from the Microsoft Download Center.
You must be a SharePoint or global admin in your organization to perform this task.
To create an ECT, follow these steps.
Start Microsoft SharePoint Designer.
Select Open Site and connect to your SharePoint Online team site. Typically, a SharePoint or global admin performs these steps.
If you want to change to a different user, select Add a new user, select Personal or Organization, sign in to the site as the SharePoint or global admin, and then select Sign In.
After the site opens, in the Site Objects tree on the left of the application window, select External Content Types.
Select the External Content Types tab and then, in the ribbon, select External Content Type.
Select the link to discover external data sources and define operations, then select Add Connection and choose SQL Server as the Data Source Type.
When you select SQL Server, specify the following:
Database Server name
Database Name
Name
Important
The URL you use to access the database contains the Fully Qualified Server Name. For example, if you access the database via aaapbj1mtc.database.windows.net, your Fully Qualified Server Name is aaapbj1mtc.database.windows.net. > If you log on at a higher level, such as the Management Portal for Microsoft Azure, you can discover the Fully Qualified Server Name on the portal page under Subscriptions. After you enter the connection settings, select OK.
If you see a prompt for credentials to access the external data source, enter the correct User name and Password credentials to access the external data system. Then, select OK to connect.
On the Data Source Explorer tab, you can view a list of tables that are available from the SQL Azure database. To see a list of possible operations for a table, open the shortcut menu for the table.
You can select specific options such as New Read Item Operation and New Update Operation for the table. Or, you can just select Create All Operations.
Select Create All Operations to open a wizard, and then select Next.
On the Operation Properties page of the wizard, in the Errors and Warnings pane, read about any issues. It is important to resolve reported issues that you see. For example, you may have to choose a field to show in an external item picker control. For a customer table, you could choose the customer name.
Important
The wizard may display a warning message if unique, required fields, such as 'CustomerID', exist in the target table. This is valid if the specified field is required and unique in the table, such as a primary key.
Note
For more information about how to define filters in external content types, see How to: Define filters for External Item Picker controls.
Select Finish to accept the operations properties that you configured. SharePoint Designer displays the operations as a list of ECT Operations.
When this step is complete, you are ready to create an External List to use the data from the external source.
Step 4: Create an External List
You can create an External List by using SharePoint Designer, or by adding an External List as an app on the SharePoint Online team site. This procedure describes how to create an External List from the team site in SharePoint Online.
Create an External List by using SharePoint Online
Go to the home page of the SharePoint Online team site.
Select the option to add an External List app. SharePoint might display a message that states, "Creating lists and forms requires the external content type to be saved".
Select OK and then Save to create the External List in the SharePoint Online site. Then use the Set Object Permissions command to manage who can use the External Content Type.
Important
You must manually assign permissions to manage the ECT to a global or SharePoint admin by using the Set Object Permissions command. If you do not assign these permissions explicitly, the admins won't have permission to manage the ECT.
In the Set Object Permissions dialog, select the check boxes for all the permissions (Edit, Execute, Selectable in Clients, and Set Permissions) that the SharePoint admin needs.
Note
Make sure that at least one user or group has Set Permissions rights. If you don't assign someone this right, you might create an unmanageable BCS connection.
Select Propagate permissions to all methods of this external content type. Doing this overwrites any existing permissions.
Note
If you want to add a group that can use the External Lists, you must also give the group Execute rights. That enables users in the group to run a query to the external source, and view the results in SharePoint.
| https://docs.microsoft.com/en-us/sharepoint/make-external-list?redirectSourcePath=%252fda-dk%252farticle%252foprette-en-ekstern-liste-fra-en-sql-azuretabel-med-business-connectivity-services-og-secure-store-466f3809-fde7-41f2-87f7-77d9fdadfc95 | 2019-07-16T02:32:57 | CC-MAIN-2019-30 | 1563195524475.48 | [sharepointonline/media/4201a500-2932-4e53-867c-c911df2c729a.png: "Diagram that shows the connectivity between a user, SharePoint Online, and an external data source in SQL Azure"] | docs.microsoft.com |
Bug Check 0x149: REFS_FILE_SYSTEM
The REFS_FILE_SYSTEM bug check has a value of 0x00000149. This indicates that a file system error has occurred.
Important
This topic is for programmers. If you are a customer who has received a blue screen error code while using your computer, see Troubleshoot blue screen errors.
REFS_FILE_SYSTEM Parameters
Resolution
If you see RefsExceptionFilter on the stack then the 2nd and 3rd parameters are the exception record and context record. Do a .exr on the 2nd parameter to view the exception information, then do a .cxr on the 3rd parameter and kb to obtain a more informative stack trace.
| https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0x149--refs-file-system | 2019-07-16T03:13:40 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
Boards and mcus¶
A board is the top level configuration entity in the build framework. It contains information about the MCU and the pin mapping.
In turn, the MCU contains information about available devices and clock frequencies in the microcontroller.
See src/boards/ and src/mcus for available configurations.
Only one MCU per board is supported. If there are two MCUs on one physical board, two board configurations have to be created, one for each MCU.
The porting guide Porting shows how to port Simba to a new board.
| https://simba-os.readthedocs.io/en/11.0.0/developer-guide/boards-and-mcus.html | 2019-07-16T02:01:40 | CC-MAIN-2019-30 | 1563195524475.48 | [] | simba-os.readthedocs.io |
BerkeleyGW¶
Background¶
BerkeleyGW is a materials-science package that implements the GW method for computing excited-state properties; its computational structure involves sums over a large number of empty states, similar to those found in Quantum Chemistry applications.
Our target science application for the Cori timeframe is to study realistic interfaces in organic photo-voltaics (for example, the P3HT system pictured below). Such systems require 1000+ atoms and a considerable amount of vacuum that contributes to the computational complexity. GW calculations generally scale as the number of atoms to the fourth power (the vacuum space roughly counting as having more atoms). This is a 2-5 times bigger problem than has been done in the past. Therefore, successfully completing these runs on Cori requires not only taking advantage of the compute capabilities of the Knights Landing architecture but also improving the scalability of the code in order to reach full-machine capability.
Starting Point¶
BerkeleyGW started off as an MPI-only application. This model is non-optimal for a number of reasons. The most pressing is that, like all MPI applications, BerkeleyGW duplicates some data structures on each MPI task in order to avoid a significant amount of communication. For large problem sets on many-core systems, this is problematic because the duplicated memory can become significant and prevent a pure MPI application from fitting in memory (particularly High Bandwidth Memory) when using one MPI task per core. For this reason we were motivated to start by adding OpenMP to the application, targeting performant scaling up to 100+ threads. BerkeleyGW has multiple computational bottlenecks; we discuss below the "Sigma" application, which is the bottleneck for large systems.
Discovery And Strategy¶
We started off with the following optimization strategy:
Step 1: Refactor loops to have a 3 level structure - a top level loop for MPI, a second level loop for OpenMP, a bottom level loop targeting vectorization. This process is expressed in the following figure where we compare the performance of the Sigma kernel on Xeon and Xeon-Phi Knights-Landing processors.
In the above figure, you can see a couple of themes. Firstly, optimizations targeting the Xeon-Phi improve the code performance on the Xeon architecture (Haswell in the above case) as well. Secondly, like many codes, we found that as soon as we added OpenMP parallelism, it didn't immediately perform well on the Xeon Phi. We needed to make sure the loops targeted at vectorization were actually being vectorized by the compiler (Step 3). The following figure shows the code schematic for this change.
In the above figure you can see two changes that affected vectorization. Firstly, we reordered the loops so that the innermost loop had a long trip count (the original inner loop had a trip count of 3; the new inner loop has a 1000-10000 iteration trip count). Secondly, we removed spurious logic and cycle statements that caused unnecessary execution forking. The original code was not auto-vectorized by the Intel compiler due to a low predicted vector gain.
Working with Cray and Intel¶
We next took the above code to a dungeon session attended by Cray and Intel engineers. We addressed the question of why Xeon-Phi was not outperforming Xeon by a larger margin.
We note that the above kernel can be essentially broken down as the following:
We note that on both the Xeon and Xeon-Phi, the memory required to hold a row of the three arrays is more than the available L2 (256-512 KB). However, the arrays do fit into L3 cache on the Xeon. There is no corresponding L3 on Xeon-Phi, so the data is streamed from the Xeon-Phi RAM. However, if we rewrite the code in the following way (optimization step 4), we can reuse data in L2 on the Xeon-Phi at least 3 times:
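The before/after code listings appear as images in the source page. Purely as a conceptual illustration (Python/NumPy rather than the code's actual Fortran, with invented array names and sizes), the blocking idea looks like this: chunks of the band-independent arrays are loaded once and reused across all bands while they are still in cache.

```python
import numpy as np

# Invented stand-ins: w and eps depend only on the inner (G-vector) index and
# are reused for every band n; m depends on both indices.
n_bands, n_g = 32, 100_000
w, eps = np.random.rand(n_g), np.random.rand(n_g)
m = np.random.rand(n_bands, n_g)

# Before: each band streams the full w/eps vectors from memory, because a
# whole vector is larger than a ~512 KB L2.
acc = 0.0
for n in range(n_bands):
    acc += np.sum(w * eps * m[n])

# After (cache blocking): process one chunk of the inner dimension for all
# bands before moving on, so each w/eps chunk is loaded once and then reused
# n_bands times from cache.
block = 16_384
acc_blocked = 0.0
for start in range(0, n_g, block):
    sl = slice(start, start + block)
    w_blk, eps_blk = w[sl], eps[sl]
    for n in range(n_bands):
        acc_blocked += np.sum(w_blk * eps_blk * m[n, sl])
```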
In this case, the code is near the cross-over point for being bandwidth vs compute bound. By reusing the data 3 times, we are able to make better use of the vector units in the Xeon-Phi processor. The change becomes evident if we look at the optimization path on a roofline performance curve. The curve shows measured performance of the above application revisions and measured arithmetic-intensity (FLOPs per byte transferred from DRAM to the cores on the node).
The cache-blocking step described is represented by optimization step 4. You can see that our vectorization optimization actually reduced our measured arithmetic intensity on KNL (due to the loss of factor of 3 reuse) and cache blocking restores it. When running out of DDR on the KNL (brown line), you can see the performance is memory bandwidth limited (the performance is pinned to the roofline curve). However when running out of HBM, room exists to improve the performance further.
In order to gain further performance, we utilize hyper-threading (beyond 64 threads on the 7210 KNL part) and improve the performance of the complex divides performed in the loop, which by default generate X87 instructions that serialize the code. The following figure shows the thread scaling of the optimized code (Kernel C represents the Sigma code discussed above) on various platforms:
You can see that code scales well to 64 threads (and beyond) on KNL when running out of MCDRAM, but fails to scale beyond 40 or so threads when running out of DDR. This is because memory bandwidth ultimately limits the performance when running out of DDR (also evident on the roofline curve). Secondly, you can see that KNL outperforms Haswell by about 60% and significantly outperforms Knights-Corner!
Lessons Learned¶
Optimal performance for this code required restructuring to enable optimal thread scaling, vectorization and improved data reuse.
Long loops are best for vectorization. In the limit of long loops, effects of loop peeling and remainders can be neglected.
There are many coding practices that prevent compiler auto-vectorization of code. The use of profilers and compiler reports can greatly aid in producing vectorizable code.
The absence of L3 cache on Xeon-Phi architectures makes data locality even more important than on traditional Xeon architectures.
Optimization is a continuous process. The limiting factor in code performance may change between IO/communication, memory bandwidth, latency and CPU clockspeed as you continue to optimize.
| https://docs.nersc.gov/performance/case-studies/berkeleygw/ | 2019-07-16T01:54:05 | CC-MAIN-2019-30 | 1563195524475.48 | [Interface.png, Sigma-Optimization-Process.png, VectorLoop.png, precode.png, postcode2.png, bgw-roofline.png, ThreadScaling.png] | docs.nersc.gov |
How to set up a development environment¶
This guide includes instructions for Linux / macOS and Windows.
Pre-requisites¶
You need the following programs installed before proceeding (on all operating systems).
How to use Docker¶
No matter what operating system you use, you need Docker. Docker is a tool to run applications inside what are known as “containers”. Containers are similar to lightweight virtual machines (VMs). They make it easier to develop, test, and deploy a web application. Fedora Happiness Packets uses Docker for local development and to deploy to the production website.
The project comes with a Dockerfile. A Dockerfile is the instructions for a container build tool (like Docker) to build a container. Look at the Fedora Happiness Packets Dockerfile for an example.
How to install Docker¶
Install Docker as described in the installation docs. You also need to install Docker Compose. Docker Compose is used to run multiple containers side-by-side. While developing Fedora Happiness Packets, there are a few different services that run in multiple containers, like the Django web app, the Postgres database, and more.
See below for platform specific installation guidelines:
- Docker Desktop for Mac (Docker Compose included)
- Docker Desktop for Windows (Docker Compose included)
- CentOS
- Debian
- Fedora
- Ubuntu
Run initial set-up¶
This section explains how to get started developing for the first time.
Fork and clone¶
First, you need to fork the Fedora Happiness Packets repo on Pagure.io to your Pagure account. Then, clone your fork to your workstation using git. For extra help, see the Pagure first steps help page. Once you clone your fork, you need to run a script to generate authentication tokens (more on this later).
git clone "ssh://[email protected]/forks/<username>/fedora-commops/fedora-happiness-packets.git"
cd fedora-happiness-packets
./generate_client_secrets.sh
Although Docker runs this script during container build-time, please generate a local copy first. This way, new client keys are not being generated each time the container is rebuilt. This avoids rate-limiting by the authentication service.
Add FAS account login info¶
Next, you need to configure Fedora Happiness Packets with your Fedora Account System (FAS) username and password.
This is used to authenticate with Fedora APIs for username search.
Copy the example file fas-admin-details.json.example as a new file named fas-admin-details.json. Add your username and password into the quotes.
Create a project config file¶
Next, create a configuration file to add admin users to Fedora Happiness Packets.
Like before, copy config.yml.example to a new file named config.yml. Add your name and @fedoraproject.org email address for ADMINS. For superuser privileges, add your FAS username to the list.
How to test sending emails¶
In the development environment, sending emails is tested in one of two ways:
- Using the console
- Using a third-party mail provider (e.g. Gmail)
Using console¶
The default setup is to send emails on the console. The full content of emails will appear in the docker-compose console output (explained below). To see this in action, no changes are needed.
Using Gmail¶
Sending real, actual emails can be tested with a third-party mail provider, like Gmail. There are other mail services you can use, but this guide explains using Gmail. To test this functionality:
- In settings/dev.py, un-comment the setting for Configurations to test sending emails using Gmail SMTP, and comment out the setting under Configurations for sending email on console. In docker-compose.yml, un-comment the ports setting in celeryservice. (A typical Gmail SMTP configuration is sketched after this list.)
- Enable less secure apps in the Gmail account which you want to use as the host email. (It is strongly recommended to not allow less secure apps in your primary Gmail account. A separate account for testing is recommended with this setting enabled.)
- Replace <[email protected]> and <HOST_EMAIL_PASSWORD> with the email address of the above account and its password.
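For orientation, a typical Django Gmail SMTP configuration looks roughly like the sketch below; the exact variable names and structure in settings/dev.py may differ, so treat this as an assumption rather than the project's actual settings:

```python
# Console backend (default in development): emails are printed to the
# docker-compose output instead of being sent.
# EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"

# Gmail SMTP backend for testing real delivery (values are placeholders).
EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = "smtp.gmail.com"
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = "<host email address>"       # the test Gmail account
EMAIL_HOST_PASSWORD = "<host email password>"  # its password
```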
Start Fedora Happiness Packets with Docker Compose¶
Now you are ready to start Fedora Happiness Packets! You will use Docker Compose to start all the containers used at the same time (like Redis, Celery, and others). Run this command to start up the project:
docker-compose up
Once it finishes starting up, open localhost:8000 in your browser. You should see the Fedora Happiness Packets home page.
Thanks to PR #235 from @ShraddhaAg, changes to Django code, HTML templates, and CSS/JavaScript are automatically reloaded while docker-compose is running. You should not need to rebuild the containers every time you make a change. However, sometimes you will need to rebuild the containers (e.g. after adding a new dependency). This can be done with the following command:
docker-compose up --build
Run integration tests¶
Integration tests are tests that ensure an application works fully from beginning to end. Fedora Happiness Packets is not fully tested, but there are some integration tests. To run integration tests, you need to enter the container while it is running and run the test suite. Open a new window and run this command to open a shell prompt inside the Django web app container:
docker-compose exec web bash
Once inside the container, run this command:
./manage.py test -v 2 -p integration_test*.py --settings=happinesspackets.settings.tsting
Test fedora-messaging integration¶
To test if messages are being sent to the RabbitMQ broker, open a new terminal while docker-compose is running. Run the following commands:
docker-compose exec web bash
fedora-messaging consume --callback=fedora_messaging.example:printer
The messages sent to the RabbitMQ broker, when a sender confirms sending a happiness packet, will be printed in this terminal.
Alternatives to Docker¶
There are other ways to run Fedora Happiness Packets without containers or Docker. However, this is discouraged as current maintainers use containers to test changes and deploy Fedora Happiness Packets to production. If you choose to not use Docker and set up everything manually, you may run into unexpected issues. Project maintainers cannot easily help you if you choose this route (and may not be able to help you)! Therefore, please consider very carefully if you wish to run everything locally without containers.
Troubleshooting¶
Windows: alpinelinux.org error ERROR: unsatisfiable constraints¶
On Windows, you might get the above error when running docker-compose. This can be resolved by following these steps:
- Open Docker settings.
- Click on Network.
- Look for “DNS Server” section.
- It is set to Automatic by default. Change it to Fixed.
- The IP address should now be editable. Try changing it to 1.1.1.1.
- Save settings.
- Restart Docker.
| https://fedora-happiness-packets.readthedocs.io/setup/development/ | 2019-07-16T03:00:54 | CC-MAIN-2019-30 | 1563195524475.48 | [] | fedora-happiness-packets.readthedocs.io |
Tests
A test is made up of conditions, variables, or steps that are used to determine whether a feature is working correctly. A test also includes an expected result, which is used to determine whether the test passes or fails. The test manager creates and updates test suites, test cases, and tests. To display a list of tests within a test case, navigate to Test Management > Test Repository > Test Cases and select the desired test case. The Tests related list displays all tests assigned to the test case. Click a test to display the Test form.
| https://docs.servicenow.com/bundle/london-it-business-management/page/product/test-management/concept/c_Tests.html | 2019-07-16T02:47:39 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.servicenow.com |
The HTTP Proxy must perform on-the-fly HTTP request and response header modification, because DC/OS is not aware of the custom hostname and port that is being used by user agents to address the HTTP proxy.
The following instructions provide a tested HAProxy configuration example that handles the named request/response rewriting. This example ensures that the communication between HAProxy and DC/OS Admin Router is TLS-encrypted.
-
Create an HAProxy configuration for DC/OS. This example is for a DC/OS cluster on AWS. For more information on HAProxy configuration parameters, see the documentation.
You can find your task IP by using the agent IP address DNS entry.
<taskname>.<framework_name>.agentip.dcos.thisdcos.directory
Where:
taskname: The name of the task.
framework_name: The name of the framework. If you are unsure, it is likely marathon.
global
  daemon
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  maxconn 20000
  pidfile /var/run/haproxy.pid

defaults
  log global
  option dontlog-normal
  mode http
  retries 3
  maxconn 20000
  timeout connect 5000
  timeout client 50000
  timeout server 50000

frontend http
  # Bind on port 9090. HAProxy will listen on port 9090 on each
  # available network for new HTTP connections.
  bind 0.0.0.0:9090
  # Specify your own server certificate chain and associated private key.
  # See
  # bind *:9091 ssl crt /path/to/browser-trusted.crt
  #
  # Name of backend configuration for DC/OS.
  default_backend dcos
  # Store request Host header temporarily in transaction scope
  # so that its value is accessible during response processing.
  # Note: RFC 7230 requires clients to send the Host header and
  # specifies it to contain both, host and port information.
  http-request set-var(txn.request_host_header) req.hdr(Host)
  # Overwrite Host header to 'dcoshost'. This makes the Location
  # header in DC/OS Admin Router upstream responses contain a
  # predictable hostname (NGINX uses this header value when
  # constructing absolute redirect URLs). That value is used
  # in the response Location header rewrite logic (see regular
  # expression-based rewrite in the backend section below).
  http-request set-header Host dcoshost

backend dcos
  # Option 1: use TLS-encrypted communication with DC/OS Admin Router and
  # perform server certificate verification (including hostname verification).
  # If you are using the community-supported version of DC/OS, you must
  # configure Admin Router with a custom TLS server certificate, see
  # /1.11/administering-clusters/. This step
  # is not required for DC/OS Enterprise.
  #
  # Explanation for the parameters in the following `server` definition line:
  #
  # 1.2.3.4:443
  #
  #   IP address and port that HAProxy uses to connect to DC/OS Admin
  #   Router. This needs to be adjusted to your setup.
  #
  # ssl verify required
  #
  #   Instruct HAProxy to use TLS, and to error out if server certificate
  #   verification fails.
  #
  # ca-file dcos-ca.crt
  #
  #   The local file `dcos-ca.crt` is expected to contain the CA certificate
  #   that Admin Router's certificate will be verified against. It must be
  #   retrieved out-of-band (on Mesosphere DC/OS Enterprise this can be
  #   obtained via)
  #
  # verifyhost frontend-xxx.eu-central-1.elb.amazonaws.com
  #
  #   When verifying the TLS certificate presented by DC/OS Admin Router,
  #   perform hostname verification using the hostname specified here
  #   (expect the server certificate to contain a DNSName SAN that is
  #   equivalent to the hostname defined here). The hostname shown here is
  #   just an example and needs to be adjusted to your setup.
  server dcos-1 1.2.3.4:443 ssl verify required ca-file dcos-ca.crt verifyhost frontend-xxx.eu-central-1.elb.amazonaws.com

  # Option 2: use TLS-encrypted communication with DC/OS Admin Router, but do
  # not perform server certificate verification (warning: this is insecure, and
  # we hope that you know what you are doing).
  # server dcos-1 1.2.3.4:443 ssl verify none
  #
  # Rewrite response Location header if it contains an absolute URL
  # pointing to the 'dcoshost' host: replace 'dcoshost' with original
  # request Host header (containing hostname and port).
  http-response replace-header Location https?://dcoshost((/.*)?) "[var(txn.request_host_header)]\1"
Start HAProxy with these settings.
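For example, on a typical systemd-based host (file paths assumed, not taken from this guide) you might validate the file and then restart the service:
# check the configuration for syntax errors first
haproxy -c -f /etc/haproxy/haproxy.cfg
# then restart HAProxy so the new settings take effect
sudo systemctl restart haproxy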
|
http://docs-staging.mesosphere.com/1.11/security/ent/tls-ssl/haproxy-adminrouter/
| 2019-07-16T02:06:36 |
CC-MAIN-2019-30
|
1563195524475.48
|
[]
|
docs-staging.mesosphere.com
|
SWFDisplayItem->skewY()
(No version information available, might be only in CVS)
SWFDisplayItem->skewY() — Sets the Y-skew
Description
SWFDisplayItem
void skewY ( float $ddegrees )
Warning
This function is EXPERIMENTAL. The behaviour of this function, its name, and surrounding documentation may change without notice in a future release of PHP. This function should be used at your own risk.
Return Values
No value is returned.
|
http://docs.php.net/manual/en/function.swfdisplayitem.skewy.php
| 2008-05-16T22:34:03 |
crawl-001
|
crawl-001-011
|
[array(['/images/notes-add.gif', 'add a note'], dtype=object)]
|
docs.php.net
|
Wagtail 0.8.6 release notes¶
What’s new¶
Minor features¶
- Translations updated, including new translations for Czech, Italian and Japanese
- The “fixtree” command can now delete orphaned pages
Bug fixes¶
- django-taggit library updated to 0.12.3, to fix a bug with migrations on SQLite on Django 1.7.2 and above
- Fixed a bug that caused children of a deleted page to not be deleted if they had a different type
Upgrade considerations¶
Orphaned pages may need deleting¶
This release fixes a bug with page deletion introduced in 0.8, where deleting a page with child pages will result in those child pages being left behind in the database (unless the child pages are of the same type as the parent). This may cause errors later on when creating new pages in the same position. To identify and delete these orphaned pages, it is recommended that you run the following command (from the project root) after upgrading to 0.8.6:
$ ./manage.py fixtree
This will output a list of any orphaned pages found, and request confirmation before deleting them.
Since this now makes fixtree an interactive command, a ./manage.py fixtree --noinput option has been added to restore the previous non-interactive behaviour. With this option enabled, deleting orphaned pages is always skipped.
|
http://docs.wagtail.io/en/v2.0.2/releases/0.8.6.html
| 2019-08-17T11:10:25 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.wagtail.io
|
Adding, Changing and Deleting Musical Works¶
The view for adding and changing works is shown in the image above. It is the most complex view in Django Music Publisher. It has several parts, so let us cover them one by one.
General¶
This part contains the fields
Title and
ISWC, as well as read-only field
Work ID, which is set automatically upon first save. Please note that the label
Title is bold, representing that this field is required. So, let's put a title in.
If
allow_modifications is set, two more fields are shown,
original title and
version type, with only the former being editable. By filling out this field, the
version type will be set to
modification and a more complex set of validation rules will apply.
Alternate Titles¶
Press on
Add another Alternate Title and put the title in the field. Please note the icon for deleting the row.
Setting
admin_show_alt_suffix adds two more columns to this section. If you choose to mark the alt title as
suffix, then it will be appended to the
Work title, and the result will be displayed in the last column. Please note that the limit of 60 characters applies to the whole alternate title.
Writers in Work¶
This is where you put in the information on composers and lyricists who created this musical work. As information on at least one controlled writer is required, let us look at all the columns:
- Writer is where you can select a writer. The field is conditionally required for controlled writers, and at least one writer must be controlled, so you need to select at least one. But, as there are no writers, press the green plus + sign next to it. A pop-up window appears. Fill out IPI Name # and Performing Rights Society, and press Save. The newly added writer will appear in this field. There is another way to add writers, which will be covered later. Please also note that for shares you do not control, this field is not required. If left empty, it means that the writer is unknown.
- Capacity is where you select how this writer contributed to the work; the options are: Composer, Lyricist and Composer and Lyricist. This field is required for controlled writers. Please note that at least one of the writers should be a Composer or a Composer and Lyricist. If modifications are allowed, further roles are present and a far more complex set of validation rules applies. At least two rows are required, one being (original) Composer or a Composer and Lyricist, and one being Arranger, Adaptor or Translator.
- Relative share is where the relative share is put in. The sum of relative shares for each work must be 100%. This is how the writers split the shares prior to publishing. For controlled writers, 50% for performing rights, 100% for mechanical and 100% for sync of the relative share is transferred to the publisher (you). Please note that Django Music Publisher does not support different splits.
- Controlled is where you select whether you control the writer or not. Select it for at least one Writer in Work row.
- Original publisher is a read-only field showing which entity is the original publisher. This field only makes sense for US publishers with multiple entities. It can be disabled in the settings. DMP Guru instances show this field only if the publisher has entities in multiple US PROs.
- Society-assigned agreement number is a field where society-assigned agreement numbers for specific agreements are entered. For general agreements, they are set when defining the Writers. If both exist, the specific one is used. This field can also be disabled in settings, as it is only used in some societies. It may also be set as required for controlled writers. It should not be filled for other writers. DMP Guru does not show this field for affiliates of US PROs and HFA, but shows it for all other societies. For affiliates of societies that require this field, it is automatically set as required.
- Publisher fee is the fee kept by the publisher, while the rest is forwarded to the writer. This field is not used in registrations; it is used only for royalty statement processing. This field can also be disabled in the settings. It may also be set as required for controlled writers. It should not be filled for other writers. DMP Guru sets this field as required for controlled writers. If it is set as part of a general agreement in Writers, it does not have to be set in Writer in Work. If it is set in both places, the one from Writer in Work has precedence.
Setting
allow_multiple_ops enables the option to cover the case of multiple original publishers per writer. As stated in many places, the data on other publishers cannot be entered. So, in case of multiple original publishers, one of which is you, enter two
Writer in Work rows with the same
Writer and
Capacity, one controlled (with your share) and one for the other publisher(s).
First Recording¶
Django Music Publisher can only hold data on the first recording/release of a musical work, not all of them. This is because not all societies and well-known sub-publishers have removed a long-obsolete CWR limit of one recording per work. This will change in future releases.
All fields are self-explanatory. Please note that fields
Album / Library CD and
Recording Artist behave in the same way as the described field
Writer does. Let us presume that our first work has not been recorded yet and remove this form.
Please read the part on ``Albums and/or Library CDs`` for details on albums and music libraries, as this is often a source of confusion for production music publishers.
Artists Performing Works¶
Here you list the artists who are performing the work; there is no need to repeat the
Artist set as the
Recording Artist in the previous section.
Registration Acknowledgements¶
This is where the work registration acknowledgements are recorded. Please note that only superusers (in the default configuration) can modify this section, as it is automatically filled out from uploaded acknowledgement files. This will be covered later in this document.
Once you press
Save, you are taken to the
Work list view.
|
https://django-music-publisher.readthedocs.io/en/stable/manual_works.html
| 2019-08-17T12:25:52 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
django-music-publisher.readthedocs.io
|
$WAS_DISTRIBUTION/modules/camunda-ibm-websphere-ear-$PLATFORM_VERSION.ear. During the installation, the EAR will try to reference the
Camunda shared library.
|
https://docs.camunda.org/manual/latest/update/minor/73-to-74/was/
| 2019-08-17T11:03:22 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.camunda.org
|
Exporting Licenses to Excel
To export the list of licenses to Excel:
- Go to the Manage Licenses tab and generate a list of licenses by using a search query or the filters on the left.
- Click Export to Excel above the list of licenses.
- Select which columns of the list you want to export:
- Current columns. The Excel file will contain the columns currently displayed in the list and columns that are added automatically based on the specified search parameters.
- All columns. The Excel file will contain all possible columns that can be displayed for a license.
- If the export takes time, it will run in the background and you will be able to download the file when it is ready. Otherwise, the file will be downloaded immediately.
|
https://docs.plesk.com/en-US/onyx/partner-central-user-guide/exporting-licenses-to-excel.78244/
| 2019-08-17T11:25:50 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.plesk.com
|
{{ post.published_at|date("m/d/Y") }}
The current date can be displayed by passing "now" as the input:
{{ "now"|date("m/d/Y") }}
To escape words and characters in the date format use
\\ in front of each character:
{{ post.published_at|date("F jS \\a\\t g:ia") }}:
|
https://docs.w3cub.com/twig~1/filters/date/
| 2019-08-17T11:04:31 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.w3cub.com
|
, click the priority that you want to assign to inSync on your laptop.
- Click Ok on the confirmation message.
Note: The first backup is a full backup and requires the most processing power. All subsequent backups are incremental and require low processing power. We recommend that for a full backup, you select high CPU priority. Thereafter, change the CPU priority to normal.
Automatically pause backups according to the laptop battery percentage level
You can configure inSync to automatically pause backups when the battery of your laptop reaches the percentage level that you define.
To configure inSync to automatically pause backups according to your laptop battery percentage level
- Start inSync.
- On the left navigation pane, click Preferences.
- Under the Backup Preferences tab, in the Pause if battery is below list, click the battery percentage level which when reached, inSync must pause the backup operation. A confirmation message appears.
- Click Ok.
|
https://docs.druva.com/005_inSync_Client/5.4c/Administration/Configure_inSync/030_Update_backup_interval_and_system_resources
| 2019-08-17T10:57:27 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.druva.com
|
[DE] Revolution slider (Obsoleted)
Kindly note that we no longer support Revolution Slider, which means that exported RS slider files are not compatible with the latest version.
This plugin is only supported in the previous version of the Directory theme in our demo site.
- Activating Revolution Slider plugin.
- Choose Plugins → Activate the plugin. Then you’ll find the plugin in your admin panel.
Creating new slider
You can also create a new slider based on your need. To create a new slider, hover your mouse over the “New Slider” box.
- Content Source
There are several new content sources that can be used for Slider Revolution, including Instagram, Facebook, Woocommerce, etc. You should select the suitable one for your site.
Slider Title & ShortCode
All the blanks in this section MUST be completed, while other sections such as Content Source, Select a Slider Type, and Slider Layout are optional.
- Select a Slider Type
- Slider Layout
There are 3 layouts that you can use for your site: Auto, Full-Widget, and Full-Screen. Select the most suitable one and complete its settings.
- General Settings
The general settings block also gives you a quick visual of the specific settings for your slider, such as “Layout & Visual”, “Navigation”, “Parallax & 3D”, “Problem Handling”, and “Google Font.”
Don’t forget to click “Save Settings” at the left sidebar or on the top-right. The created sliders are then listed on
the main page. You have different options to control your sliders:
- Change the settings.
- Edit slides.
- Export slider.
- Delete.
- Duplicate the slider.
- See the preview.
Slider Edition
The source of the #1 slide in this section is transparent, so you can add a new slide and delete it. Otherwise, you have to change its source to Main/Background Image and add a layer.
Hover your mouse over the “Add Slide” box. You can add a blank slide (a new slide with default settings) or bulk slides (multiple slides based on a selection of media gallery images of your choice). In FreelanceEngine, we recommend adding bulk slides in this section.
Add Bulk Slides
- Select Add Bulk Sliders to add multiple slides at once.
- After that, select images from your existing images in the media library or upload new ones. Don’t forget to delete the #1 slide.
- Individual Slide Settings
This section gives you in-depth settings for each slide such as main background, general settings, thumbnail, slide animation, link & SEO, slide info, and nav. overwrite.
- Individual Slide Content
This section allows you to style an individual slide, add new layers, change animation & loop, etc.
Click “ Save Slide” after each slide to complete your settings.
Add Slider to Web page
Before adding the slider to your site, you should create a new page (Pages → Add New) or select an existing page (Pages → All Pages).
Here are typical ways to add a slider to your site:
Shortcode method
- From the slider's main settings page, copy the slider's shortcode auto-generated in the “Slider Title & ShortCode” section.
- Paste this shortcode into your page's main content area.
Quick Shortcode Creator
- Click on the “Slider Revolution Shortcode Creator” icon from the page's main content editor
- Then, select a slider and choose “Add Selected Slider.”
|
https://docs.enginethemes.com/article/194-revolution-slider
| 2019-08-17T10:46:08 |
CC-MAIN-2019-35
|
1566027312128.3
|
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/5bcb00632c7d3a04dd5bec96/file-vjGV2ZH49b.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/5bcaff612c7d3a04dd5bec91/file-ma7dI9vW0U.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/5bcb01722c7d3a04dd5bec98/file-e60AMIEOJY.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/5bcb022d042863158cc7ab42/file-02ckEHvAie.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/5bcb1a382c7d3a04dd5becb6/file-LQkk9TEAaK.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/5bcb1b322c7d3a04dd5becb7/file-oBLUEcReej.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/5bcb28c12c7d3a04dd5becc4/file-g0EVhZsBro.png',
None], dtype=object) ]
|
docs.enginethemes.com
|
Do you have a business continuity plan?
Yes. ISMS 05 can be provided upon the Customer's request.
Do you have a disaster recovery plan?
The Business Continuity Plan can be supplied to the Customer if requested.
Do you have a copy of your latest SOC audit?
Maytech do not have a SOC 2 report. Our information security management systems are instead ISO 27001 certified, and audited twice a year by Lloyd's Register Quality Assurance, one of the leading global business assurance providers.
The criteria / controls required by the two standards were developed to mitigate similar risks and there is considerable overlap in the criteria defined in the Trust Service Principles of SOC 2 and the controls defined in Annex A of ISO 27001.
Both standards provide independent assurance that the necessary controls are in place and whereas ISO 27001 is an international standard with its origin in a British standard, SOC 2 is created and governed by the American Institute of Certified Public Accountants, AICPA.
Are you able to share the results of any such penetration tests with us? (If so, please confirm any format restrictions. Such as a provision to provide abridged summaries only.)
Maytech can share the management summary and residual risk statement upon Customer request.
With prior agreement, can we arrange for our own third-party penetration test to be carried out?
Customers may perform penetration testing on Maytech's systems subject to advance written agreement.
|
https://docs.maytech.net/plugins/viewsource/viewpagesrc.action?pageId=4456807
| 2019-08-17T11:06:39 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.maytech.net
|
Batch Processing
Enterprise, CloudHub
Mule possesses the ability to process messages in batches. Within an application, you can initiate a batch job, which splits large or streaming messages into individual records and processes them asynchronously.
For example, batch processing is particularly useful when working with the following scenarios:
Integrating data sets, small or large, streaming or not, to process records in parallel.
Synchronizing data sets between business applications. A batch job contains one or more batch steps which, as the label implies, process items step-by-step in sequential order. Batch steps all fall within the Process Phase of batch processing (more on Batch Processing Phases below).
<batch:job <batch:process-records> <batch:step <batch:step <batch:step </batch:process-records> </batch:job> <flow name="flowOne"> ... </flow>
Batch jobs process records, which are individual pieces into which Mule splits a large or streaming message. Where a Mule flow processes messages, a Mule batch job processes records.
<batch:job <batch:process-records> <batch:step <message processor/> <message processor/> </batch:step> <batch:step <message processor/> </batch:step> <batch:step <message processor/> <message processor/> </batch:step> </batch:process-records> </batch:job> <flow name="flowOne"> ... </flow>
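The name attributes in the snippet above were lost in extraction. As a rough, hypothetical sketch (element names follow the Mule 3.x batch namespace; job and step names are placeholders), a batch job skeleton looks like this:
<batch:job name="myBatchJob">
    <batch:process-records>
        <batch:step name="stepOne">
            <!-- message processors that act on each record -->
        </batch:step>
        <batch:step name="stepTwo">
            <!-- more per-record message processors -->
        </batch:step>
    </batch:process-records>
    <batch:on-complete>
        <!-- e.g. a logger that reports the result summary -->
    </batch:on-complete>
</batch:job>
<flow name="flowOne">
    <!-- a regular flow can trigger the job with <batch:execute name="myBatchJob"/> -->
</flow>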
Studio Visual Editor
XML Editor
Note that details in code snippet are abbreviated so as to highlight batch phases, jobs and steps. See Complete Code Example for more detail.
<batch:job <batch:input> <poll> <sfdc:authorize/> </poll> <set-variable/> </batch:input> <batch:process-records> <batch:step/> </batch:process-records> </batch:job>
Mule splits the incoming message into records and places them in a queue, which it associates to the new batch job instance. A batch job instance is an occurrence in a Mule application resulting from the execution of a batch job in a Mule flow; it exists for as long as it takes to process each record in a batch. (What's the difference between a batch job and a batch job instance?) Note that a batch job instance does not wait for all its queued records to finish processing in one batch step before pushing any of them to the next batch step. Queues are persistent.
Mule persists a list of all records as they succeed or fail to process through each batch step. If a record should fail to be processed by a message processor in a batch step, Mule can simply continue processing the batch, skipping over the failed record in each subsequent batch step.
Studio Visual Editor
XML Editor
Note that details in code snippet are abbreviated so as to highlight batch phases, jobs and steps. See Complete Code Example for more detail.
<batch:job <batch:process-records> <batch:step <batch:record-variable-transformer/> <data-mapper:transform/> </batch:step> <batch:step <logger level="INFO" doc: <http:request/> </batch:step> </batch:process-records> <batch:on-complete> <logger level="INFO" doc: </batch:on-complete> </batch:job> <flow name="batchtest1Flow1"> <http:listener/> <data-mapper:transform/> <batch:execute </flow>.
Studio Visual Editor
XML Editor
Note that details in code snippet are abbreviated so as to highlight batch phases, jobs and steps. See Complete Code Example for more detail.
<batch:job <batch:input> <poll> <sfdc:authorize/> </poll> </batch:input> <batch:process-records> <batch:step <batch:record-variable-transformer/> <data-mapper:transform/> </batch:step> <batch:step <logger/> <http:request/> </batch:step> </batch:process-records> <batch:on-complete> <logger/> </batch:on-complete> </batch:job>:
Stop processing the entire batch as soon as Mule encounters a single record-level error.
Example
This example uses batch processing to address a use case in which the contents of a comma-separated value file (CSV) of leads – comprised of names, birthdays and email addresses – must be uploaded to Salesforce. To avoid duplicating any leads, the batch job checks to see if a lead exists before uploading data to Salesforce. The description below outlines the steps the batch job takes in each phase of processing.
Studio Visual Editor
XML Editor
<?xml version="1.0" encoding="UTF-8"?> <mule xmlns: <sfdc:config <sfdc:connection-pooling-profile </sfdc:config> <data-mapper:config <data-mapper:config <data-mapper:config <data-mapper:config <batch:job <batch:threading-profile <batch:input> <file:inbound-endpoint <data-mapper:transform </batch:input> <batch:process-records> <batch:step <enricher source="#[payload.size() > 0]" target="#[recordVars['exists']]" doc: <sfdc:query </enricher> </batch:step> <batch:step <logger message="Got Record #[payload], it exists #[recordVars['exists']]" level="INFO" doc: <batch:commit <sfdc:create <sfdc:objects </sfdc:create> </batch:commit> </batch:step> <batch:step <logger message="Got Failure #[payload]" level="INFO" doc: </batch:step> </batch:process-records> <batch:on-complete> <logger message="#[payload.loadedRecords] Loaded Records #[payload.failedRecords] Failed Records" level="INFO" doc: </batch:on-complete> </batch:job> </mule>
INPUT PHASE
The application first uses a File connector.
Studio Visual Editor
XML Editor.
Studio Visual Editor
XML Editor
Batch commit accumulates records as they trickle through the queue into the batch commit "bucket". When it has accumulated 200 – as specified with the
size attribute of the batch commit element – batch commit inserts all 200 records at once into Salesforce as new leads.
Studio Visual Editor
XML Editor
The final batch step,
log-failures, logs any records that failed to process. When the batch job instance completes, the on-complete phase logs a summary such as:
INFO 2013-11-19 11:10:00,947 [[training-example-1].connector.file.mule.default.receiver.01] org.mule.api.processor.LoggerMessageProcessor: 2 Loaded Records 1 Failed Records
Learn more about CloudHub support for batch processing.
Learn more about Anypoint Connectors.
Learn more about Polling and Watermarks.
Learn more about DataMapper.
|
https://docs.mulesoft.com/mule-runtime/3.7/batch-processing
| 2019-08-17T11:44:13 |
CC-MAIN-2019-35
|
1566027312128.3
|
[array(['_images/batch-main1.png', 'batch_main1'], dtype=object)
array(['_images/batch-main3.png', 'batch_main3'], dtype=object)
array(['_images/batch-phases.png', 'batch_phases'], dtype=object)
array(['_images/input-phas.png', 'input_phas'], dtype=object)
array(['_images/batch-diagram.jpg', 'batch+diagram'], dtype=object)
array(['_images/process-phase.png', 'process phase'], dtype=object)
array(['_images/on-complete-phase.png', 'on-complete_phase'], dtype=object)
array(['_images/trigger-ref1.png', 'trigger_ref1'], dtype=object)
array(['_images/trigger-source.png', 'trigger_source'], dtype=object)
array(['_images/example-batch.png', 'example_batch'], dtype=object)
array(['_images/example-query3.png', 'example_query3'], dtype=object)
array(['_images/query4.png', 'query4'], dtype=object)
array(['_images/example-filter3.png', 'example_filter3'], dtype=object)
array(['_images/batch-example-filter.png', 'batch example filter'],
dtype=object)
array(['_images/example-insert1.png', 'example_insert1'], dtype=object)
array(['_images/example-insert2.png', 'example_insert2'], dtype=object)]
|
docs.mulesoft.com
|
Welcome to MyCapytains’s documentation!¶
MyCapytain is a python library which provides a large set of methods to interact with Text Services API such as the Canonical Text Services, the Distributed Text Services. It also provides a programming interface to exploit local textual resources developed according to the Capitains Guidelines.
Simple Example of what it does¶
The following code and example are badly displayed at the moment on GitHub. We recommend viewing this documentation on Read the Docs instead.
On the Leipzig DH Chair's Canonical Text Services API, we can find the Epigrammata of Martial. This text is identified by the identifier "urn:cts:latinLit:phi1294.phi002.perseus-lat2". We want to have some information about this text, so we are going to ask the API to give us its metadata:
This query will return the following information :
<class 'MyCapytain.resources.collections.cts.Text'> ['book', 'poem', 'line']
And we will get
Hesterna factum narratur, Postume, cena
If you want to play more with this, like having a list of what can be found in book three, you could go and do
Which would be equal to :
['3.1', '3.2', '3.3', '3.4', '3.5', '3.6', '3.7', '3.8', '3.9', '3.10', '3.11', '3.12', '3.13', ...]
Now, it’s your time to work with the resource ! See the CapiTainS Classes page on ReadTheDocs to have a general introduction to MyCapytain objects !
Installation and Requirements¶
The best way to install MyCapytain is to use pip. MyCapytain supports Python 3.4 and above.
The work needed for supporting Python 2.7 is mostly done; however, since 2.0.0, we no longer guarantee that MyCapytain is compatible with Python < 3, although we accept PRs that help with this.
pip install MyCapytain
If you prefer to use setup.py, you should clone and use the following
git clone
cd MyCapytain
python setup.py install
Contents¶
- MyCapytain’s Main Objects Explained
- Project using MyCapytain
- Working with Local CapiTainS XML File
- Known issues and recommendations
- MyCapytain API Documentation
- Benchmarks
|
https://mycapytain.readthedocs.io/en/2.0.9/
| 2019-08-17T10:37:36 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
mycapytain.readthedocs.io
|
Install:
- ccxt.js in JavaScript
- ./python/ in Python (generated from JS)
- ccxt.php in PHP (generated from JS)
You can also clone it into your project directory from ccxt GitHub repository and copy files manually into your working directory with language extension appropriate for your environment.
git clone
An alternative way of installing this library is to build a custom bundle from source. Choose exchanges you need in
exchanges.cfg.
JavaScript (NPM)¶
JavaScript version of ccxt works both in Node and web browsers. Requires ES6 and
async/await syntax support (Node 7.6.0+). When compiling with Webpack and Babel, make sure it is not excluded in your
babel-loader config.
ccxt crypto trading library in npm
npm install ccxt
var ccxt = require ('ccxt') console.log (ccxt.exchanges) // print all available exchanges
Node.js + Windows¶
Windows users having difficulties installing the web3, scrypt or node-gyp dependencies for the ccxt library should try installing web3 globally first:
npm install -g web3 --unsafe-perm=true --allow-root
or
sudo npm install -g web3 --unsafe-perm=true --allow-root
Then install ccxt as usual with
npm install ccxt.
If that does not help, please, follow here:
JavaScript (for use with the
<script> tag):¶
All-in-one browser bundle (dependencies included), served from a CDN of your choice:
- jsDelivr:
- unpkg:
You can obtain a live-updated version of the bundle by removing the version number from the URL (the
@a.b.c thing) — however, we do not recommend doing that, as it may break your app eventually. Also, please keep in mind that we are not responsible for the correct operation of those CDN servers.
<script type="text/javascript" src=""></script>
Creates a global
ccxt object:
console.log (ccxt.exchanges) // print all available exchanges
Python¶
ccxt algotrading library in PyPI
The autoloadable version of ccxt can be installed with Packagist/Composer (PHP 5.4+).
It can also be installed from the source code:
ccxt.php
Alternatively:
docker build . --tag ccxt docker run -it ccxt
Proxy¶
In some specific cases you may want a proxy, if you experience issues with DDoS protection by Cloudflare or your network / country / IP is rejected by their filters.
Bear in mind that each added intermediary contributes to the overall latency and roundtrip time. Longer delays can result in price slippage.
JavaScript Proxies¶
In order to use proxies with JavaScript, one needs to pass the proxying
agent option to the exchange class instance constructor (or set the
exchange.agent property later after instantiation in runtime):
const ccxt = require ('ccxt') , HttpsProxyAgent = require ('https-proxy-agent') const proxy = process.env.http_proxy || '' // HTTP/HTTPS proxy to connect to const agent = new HttpsProxyAgent (proxy) const kraken = new ccxt.kraken ({ agent })
Python Proxies¶
The python version of the library uses the python-requests package for underlying HTTP and supports all means of customization available in the
requests package, including proxies.
You can configure proxies by setting the environment variables HTTP_PROXY and HTTPS_PROXY.
$ export HTTP_PROXY="" $ export HTTPS_PROXY=""
After exporting the above variables with your proxy settings, all requests from within ccxt will be routed through those proxies.
You can also set them programmatically:
import ccxt exchange = ccxt.poloniex({ 'proxies': { 'http': '', # these proxies won't work for you, they are here for example 'https': '', }, })
Or
import ccxt exchange = ccxt.poloniex() exchange.proxies = { 'http': '', # these proxies won't work for you, they are here for example 'https': '', }
Python 2 and 3 sync proxies¶
# -*- coding: utf-8 -*-
import os
import sys
import ccxt
from pprint import pprint

exchange = ccxt.poloniex({
    # On the other hand, the "proxies" setting is for HTTP(S)-proxying (SOCKS, etc...)
    # It is a standard method of sending your requests through your proxies
    # This gets passed to the `python-requests` implementation directly
    # You can also enable this with environment variables, as described here:
    #
    # This is the setting you should be using with the synchronous version of ccxt in Python 2 and 3
    'proxies': {
        'http': '',
        'https': '',
    },
})

# your code goes here...
pprint(exchange.fetch_ticker('ETH/BTC'))
Python 3.5+ asyncio/aiohttp proxy¶
# -*- coding: utf-8 -*-
import asyncio
import os
import sys
import ccxt.async_support as ccxt
from pprint import pprint

async def test_gdax():
    exchange = ccxt.gdax({
        # The "aiohttp_proxy" setting is for HTTP(S)-proxying (SOCKS, etc...)
        # It is a standard method of sending your requests through your proxies
        # This gets passed to the `asyncio` and `aiohttp` implementation directly
        # You can use this setting as documented here:
        #
        # This is the setting you should be using with the async version of ccxt in Python 3.5+
        'aiohttp_proxy': '',
        # 'aiohttp_proxy': '',
        # 'aiohttp_proxy': '',
    })
    # your code goes here...
    ticker = await exchange.fetch_ticker('ETH/BTC')
    # don't forget to free the used resources, when you don't need them anymore
    await exchange.close()
    return ticker

if __name__ == '__main__':
    pprint(asyncio.get_event_loop().run_until_complete(test_gdax()))
A more detailed documentation on using proxies with the sync python version of the ccxt library can be found here:
Python aiohttp SOCKS proxy¶
pip install aiohttp_socks
import ccxt.async_support as ccxt import aiohttp import aiohttp_socks async def test(): connector = aiohttp_socks.SocksConnector.from_url('socks5://user:[email protected]:1080') session = aiohttp.ClientSession(connector=connector) exchange = ccxt.binance({ 'session': session, 'enableRateLimit': True, # ... }) # ... await session.close() # don't forget to close the session # ...
CORS (Access-Control-Allow-Origin)¶
If you need a CORS proxy, use the
proxy property (a string literal) containing base URL of http(s) proxy. It is for use with web browsers and from blocked locations.
CORS is Cross-Origin Resource Sharing. When accessing the HTTP REST API of an exchange from browser with ccxt library you may get a warning or an exception, saying
No 'Access-Control-Allow-Origin' header is present on the requested resource. That means that the exchange admins haven’t enabled access to their API from arbitrary web browser pages.
You can still use the ccxt library from your browser via a CORS-proxy, which is very easy to set up or install. There are also public CORS proxies on the internet.
The absolute exchange endpoint URL is appended to
proxy string before HTTP request is sent to exchange. The
proxy setting is an empty string
'' by default. Below are examples of a non-empty
proxy string (last slash is mandatory!):
kraken.proxy = ''
gdax.proxy = ''
To run your own CORS proxy locally you can either set up one of the existing ones or make a quick script of your own, like shown below.
Node.js CORS Proxy¶
// JavaScript CORS Proxy // Save this in a file like cors.js and run with `node cors [port]` // It will listen for your requests on the port you pass in command line or port 8080 by default let port = (process.argv.length > 2) ? parseInt (process.argv[2]) : 8080; // default require ('cors-anywhere').createServer ().listen (port, 'localhost')
Python CORS Proxy¶
#!/usr/bin/env python
# Python CORS Proxy
# Save this in a file like cors.py and run with `python cors.py [port]` or `cors [port]`
try:
    # Python 3
    from http.server import HTTPServer, SimpleHTTPRequestHandler, test as test_orig
    import sys
    def test (*args):
        test_orig (*args, port = int (sys.argv[1]) if len (sys.argv) > 1 else 8080)
except ImportError:
    # Python 2 fallback
    from BaseHTTPServer import HTTPServer, test
    from SimpleHTTPServer import SimpleHTTPRequestHandler

# NOTE: the Python 2 fallback and the handler below reconstruct a part of this
# script that was lost in extraction; adjust as needed.
class CORSRequestHandler (SimpleHTTPRequestHandler):
    def end_headers (self):
        self.send_header ('Access-Control-Allow-Origin', '*')
        SimpleHTTPRequestHandler.end_headers (self)

if __name__ == '__main__':
    test (CORSRequestHandler, HTTPServer)
Testing CORS¶
After you set it up and run it, you can test it by querying the target URL of exchange endpoint through the proxy (like).
To test the CORS you can do either of the following:
- set up proxy somewhere in your browser settings, then go to endpoint URL
- type that URL directly in the address bar as
- cURL it from the command line, like:
curl
To let ccxt know of the proxy, you can set the
proxy property on your exchange instance.
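For instance (the proxy URL below is a placeholder, not a working proxy):
const ccxt = require ('ccxt')
const kraken = new ccxt.kraken ({
    // base URL of your CORS proxy; ccxt appends the exchange endpoint URL to it
    'proxy': 'https://my-cors-proxy.example.com/',
})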
|
https://ccxt.readthedocs.io/en/latest/install.html
| 2019-08-17T10:34:49 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
ccxt.readthedocs.io
|
If you need help in your account that we can’t resolve over chat or email, the Logz.io Support team may request access to your account.
You can control support access from > Settings > General in the top menu.
To enable support access, set the switch to Enabled, choose an expiration period, and click Update.
At the end of this time, support access will disable itself automatically. If you set this time period to forever, support access will remain enabled until you disable it or set a new expiration period.
What access am I granting?
Great question! We don’t blame you for asking.
When you enable support access, you’re granting full administrator permissions to our Support team. This allows us to troubleshoot issues as quickly as possible.
You can disable this access at any time.
|
https://docs.logz.io/user-guide/accounts/support-access.html
| 2019-08-17T11:07:11 |
CC-MAIN-2019-35
|
1566027312128.3
|
[array(['/images/accounts/general--account-settings.png',
'Account settings'], dtype=object) ]
|
docs.logz.io
|
Automated backups
SQL Database automatically creates database backups that are kept between 7 and 35 days, and uses Azure read-access geo-redundant storage (RA-GRS) to ensure that they are preserved even if the data center is unavailable. These backups are created automatically. Database backups are an essential part of any business continuity and disaster recovery strategy because they protect your data from accidental corruption or deletion. If your security rules require that your backups are available for an extended period of time (up to 10 years), you can configure long-term retention on single databases and elastic pools.
What is a SQL Database backup
SQL Database uses SQL Server technology to create full backups every week, differential backups every 12 hours, and transaction log backups every 5-10 minutes. The backups are stored in RA-GRS storage blobs that are replicated to a paired data center for protection against a data center outage. When you restore a database, the service figures out which full, differential, and transaction log backups need to be restored.
You can use these backups to:
- Restore an existing database to a point-in-time in the past within the retention period using the Azure portal, Azure PowerShell, Azure CLI, or REST API. In Single database and Elastic pools, this operation creates a new database on the same server as the original database. In Managed Instance, this operation can create a copy of the database on the same or a different Managed Instance under the same subscription.
- Change Backup Retention Period between 7 and 35 days to configure your backup policy.
- Change long-term retention policy up to 10 years on Single Database and Elastic Pools using the Azure portal or Azure PowerShell.
- Restore a deleted database to the time it was deleted or anytime within the retention period. The deleted database can only be restored in the same logical server or Managed Instance where the original database was created.
- Restore a database to another geographical region. Geo-restore allows you to recover from a geographic disaster when you cannot access your server and database. It creates a new database in any existing server anywhere in the world.
- Restore a database from a specific long-term backup on Single Database or Elastic Pool if the database has been configured with a long-term retention policy (LTR). LTR allows you to restore an old version of the database files from one location to another. SQL's database replication refers to keeping multiple secondary databases synchronized with a primary database.
You can try some of these operations using the following examples:
How long are backups kept
All Azure SQL databases (single, pooled, and managed instance databases) have a default backup retention period of seven days. You can change backup retention period up to 35 days.
If you delete a database, SQL Database will keep the backups in the same way it would for an online database. For example, if you delete a Basic database that has a retention period of seven days, a backup that is four days old is saved for three more days.
If you need to keep the backups for longer than the maximum retention period, you can modify the backup properties to add one or more long-term retention periods to your database. For more information, see Long-term retention.
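For illustration, a long-term retention policy could be set with Azure PowerShell roughly as follows (server and database names reuse the placeholders from the examples below; the retention values are arbitrary and only show the ISO 8601 duration format):
# keep weekly backups for 12 weeks, monthly for 12 months, and the week-1 backup of each year for 5 years
Set-AzSqlDatabaseBackupLongTermRetentionPolicy -ResourceGroupName resourceGroup -ServerName testserver -DatabaseName testDatabase -WeeklyRetention P12W -MonthlyRetention P12M -YearlyRetention P5Y -WeekOfYear 1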
Important
If you delete the Azure SQL server that hosts SQL databases, all elastic pools and databases that belong to the server are also deleted and cannot be recovered. You cannot restore a deleted server. But if you configured long-term retention, the backups for the databases with LTR will not be deleted and these databases can be restored.
How often do backups happen
Backups for point-in-time restore
SQL Database takes the first full backup immediately after a database is created; it usually completes within 30 minutes, but it can take longer when the database is of a significant size. For example, the initial backup can take longer on a restored database or a database copy. After the first full backup, all further backups are scheduled automatically and managed silently in the background. The exact timing of all database backups is determined by the SQL Database service as it balances the overall system workload. You cannot change or disable the backup jobs.
The PITR backups are geo-redundant and protected by Azure Storage cross-regional replication
For more information, see Point-in-time restore
Backups for long-term retention
Single and pooled databases offer the option of configuring long-term retention (LTR) of full backups for up to 10 years in Azure Blob storage. If LTR policy is enabled, the weekly full backups are automatically copied to a different RA-GRS storage container. To meet different compliance requirements, you can select different retention periods for weekly, monthly and/or yearly backups. The storage consumption depends on the selected frequency of backups and the retention period(s). You can use the LTR pricing calculator to estimate the cost of LTR storage.
Like PITR, the LTR backups are geo-redundant and protected by Azure Storage cross-regional replication.
For more information, see Long-term backup retention.
Storage costs
Seven days of automated backups of your databases are copied to RA-GRS Standard blob storage by default. The storage is used by weekly full backups, daily differential backups, and transaction log backups copied every 5 minutes. The size of the transaction log depends on the rate of change of the database. A minimum storage amount equal to 100% of database size is provided at no extra charge. Additional consumption of backup storage will be charged in GB/month.
For more information about storage prices, see the pricing page.
Are backups encrypted
If your database is encrypted with TDE, the backups are automatically encrypted at rest, including LTR backups. When TDE is enabled for an Azure SQL database, backups are also encrypted. All new Azure SQL databases are configured with TDE enabled by default. For more information on TDE, see Transparent Data Encryption with Azure SQL Database.
How does Microsoft ensure backup integrity
On an ongoing basis, the Azure SQL Database engineering team automatically tests the restore of automated database backups of databases placed in Logical servers and Elastic pools (this is not available in Managed Instance). Upon point-in-time restore, databases also receive integrity checks using DBCC CHECKDB.
Managed Instance takes automatic initial backup with
CHECKSUM of the databases restored using native
RESTORE command or Data Migration Service once the migration is completed.
Any issues found during the integrity check will result in an alert to the engineering team. For more information about data integrity in Azure SQL Database, see Data Integrity in Azure SQL Database.
How do automated backups impact compliance
When you migrate your database from a DTU-based service tier with the default PITR retention of 35 days, to a vCore-based service tier, the PITR retention is preserved to ensure that your application's data recovery policy is not compromised. If the default retention doesn't meet your compliance requirements, you can change the PITR retention period using PowerShell or REST API. For more information, see Change Backup Retention Period.
How to change the PITR backup retention period
You can change the default PITR backup retention period using the Azure portal, PowerShell, or the REST API. The supported values are: 7, 14, 21, 28 or 35 days. The following examples illustrate how to change PITR retention to 28 days.
Warning
If you reduce the current retention period, all existing backups older than the new retention period are no longer available. If you increase the current retention period, SQL Database will keep the existing backups until the longer retention period is reached.
Note
These APIs will only impact the PITR retention period. If you configured LTR for your database, it will not be impacted. For more information about how to change the LTR retention period(s), see Long-term retention.
Change PITR backup retention period using the Azure portal
To change the PITR backup retention period using the Azure portal, navigate to the server object whose retention period you wish to change within the portal and then select the appropriate option based on which server object you're modifying.
Change PITR for a SQL Database server
Change PITR for a Managed Instance
Change PITR backup retention period using PowerShell
Set-AzSqlDatabaseBackupShortTermRetentionPolicy -ResourceGroupName resourceGroup -ServerName testserver -DatabaseName testDatabase -RetentionDays 28
Change PITR retention period using REST API
Sample Request
PUT
Request Body
{ "properties":{ "retentionDays":28 } }
Sample Response
Status code: 200
{ "id": "/subscriptions/00000000-1111-2222-3333-444444444444/providers/Microsoft.Sql/resourceGroups/resourceGroup/servers/testserver/databases/testDatabase/backupShortTermRetentionPolicies/default", "name": "default", "type": "Microsoft.Sql/resourceGroups/servers/databases/backupShortTermRetentionPolicies", "properties": { "retentionDays": 28 } }
For more information, see Backup Retention REST API.
Next steps
- Database backups are an essential part of any business continuity and disaster recovery strategy because they protect your data from accidental corruption or deletion. To learn about the other Azure SQL Database business continuity solutions, see Business continuity overview.
- To restore to a point in time using the Azure portal, see restore database to a point in time using the Azure portal.
- To restore to a point in time using PowerShell, see restore database to a point in time using PowerShell.
- To configure, manage, and restore from long-term retention of automated backups in Azure Blob storage using the Azure portal, see Manage long-term backup retention using the Azure portal.
- To configure, manage, and restore from long-term retention of automated backups in Azure Blob storage using PowerShell, see Manage long-term backup retention using PowerShell.
Feedback
|
https://docs.microsoft.com/en-in/azure/sql-database/sql-database-automated-backups
| 2019-08-17T10:39:09 |
CC-MAIN-2019-35
|
1566027312128.3
|
[array(['media/sql-database-automated-backup/configure-backup-retention-sqldb.png',
'Change PITR Azure portal'], dtype=object)
array(['media/sql-database-automated-backup/configure-backup-retention-sqlmi.png',
'Change PITR Azure portal'], dtype=object) ]
|
docs.microsoft.com
|
Signaling Type
Admin
Fixed Length
Feature Group D
Fixed Length
Specifies the type of address register. The Fixed Length value selects a fixed length register, with the length defined by the Register Length system parameter. Feature Group D selects the unique protocol Exchange Access North American Signaling described in the Bellcore publication TR-NPL-000258. That is, Blueworx Voice Response sends the information field (ANI) first then sends the address field (DNIS). Blueworx Voice Response does not support any other sending sequence.
|
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.config.doc/i897967.html
| 2019-08-17T10:41:17 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.blueworx.com
|
Debugging options¶
See also
Please make sure to read PIO Unified Debugger guide first.
debug_tool¶
Type:
String | Multiple:
No
A name of debugging tool. This option is useful when board supports more than one debugging tool (adapter, probe) or you want to create Custom debugging configuration.
See available tools in Tools & Debug Probes.
Example
[env:debug] platform = ... board = ... debug_tool = custom
debug_init_break¶
Type:
String | Multiple:
No | Default:
tbreak main
An initial breakpoint that makes your program stop whenever a certain point in
the program is reached. Default value is set to
tbreak main and means
creating a temporary breakpoint at
int main(...) function and
automatically delete it after the first time a program stops there.
Note
Please note that each debugging tool (adapter, probe) has limited number of hardware breakpoints.
If you need more Project Initial Breakpoints, please place them in debug_extra_cmds.
Examples
[env:debug] platform = ... board = ... ; Examples 1: disable initial breakpoint debug_init_break = ; Examples 2: temporary stop at ``void loop()`` function debug_init_break = tbreak loop ; Examples 3: stop in main.cpp at line 13 debug_init_break = break main.cpp:13 ; Examples 4: temporary stop at ``void Reset_Handler(void)`` debug_init_break = tbreak Reset_Handler
debug_init_cmds¶
Type:
String | Multiple:
Yes | Default: See details…
Initial commands that will be passed to back-end debugger.
PlatformIO dynamically configures back-end debugger depending on a debug environment. Here is a list with default initial commands for the popular Tools & Debug Probes.
For example, the custom initial commands for GDB:
[env:debug] platform = ... board = ... debug_init_cmds = target extended-remote $DEBUG_PORT $INIT_BREAK monitor reset halt $LOAD_CMDS monitor init monitor reset halt
debug_extra_cmds¶
Type:
String | Multiple:
Yes
Extra commands that will be passed to back-end debugger after debug_init_cmds.
For example, add custom breakpoint and load
.gdbinit from a project directory
for GDB:
[env:debug] platform = ... board = ... debug_extra_cmds = break main.cpp:13 break foo.cpp:100 source .gdbinit
Note
Initial Project Breakpoints: Use
break path/to/file:LINE_NUMBER to
define initial breakpoints for debug environment. Multiple breakpoints are
allowed.
To save session breakpoints, please use
save breakpoints [filename]
command in Debug Console. For example,
save breakpoints .gdbinit. Later,
this file could be loaded via
source [filename] command. See above.
debug_load_cmds¶
New in version 4.0.
Type:
String | Multiple:
Yes | Default:
load
Specify a command which will be used to load program/firmware to a target device. Possible options:
load- default option
load [address]- load program at specified address, where “[address]” should be a valid number
preload- some embedded devices have locked Flash Memory (a few Freescale Kinetis and NXP LPC boards). In this case, firmware loading using debugging client is disabled.
preloadcommand instructs PlatformIO Core (CLI) to load program/firmware using development platform “upload” method (via bootloader, media disk, etc)
- An empty value (debug_load_cmds =) disables program loading entirely.
custom commands- pass any debugging client command (GDB, etc.)
Sometimes you need to run extra monitor commands (on debug server side) before program/firmware loading, such as flash unlocking or erasing. In this case we can combine service commands with loading and run them before. See example:
[env:debug] platform = ... board = ... debug_load_cmds = monitor flash erase_sector 0 0 11 load
debug_load_mode¶
Type:
String | Multiple:
No | Default:
always
Allows one to control when PlatformIO should load debugging firmware to the end target. Possible options:
always- load for each debugging session (default)
modified- load only when firmware was modified
manual- do not load firmware automatically. You are responsible for pre-flashing the target with debugging firmware in this case.
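Example (a minimal sketch; platform and board values are placeholders as in the other examples):
[env:debug] platform = ... board = ... debug_load_mode = modified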
debug_server¶
Type:
String | Multiple:
Yes
Allows one to setup a custom debugging server. By default, boards are pre-configured with a debugging server that is compatible with “on-board” debugging tool (adapter, probe). Also, this option is useful for a Custom debugging tool.
Option format (multi-line):
- First line is an executable path of debugging server
- 2-nd and the next lines are arguments for executable file
Example:
[env:debug] platform = ... board = ... debug_server = /path/to/debugging/server arg1 arg2 ... argN
debug_port¶
Type:
String | Multiple:
No
A debugging port of a remote target. Could be a serial device or network address. PlatformIO detects it automatically if it is not specified.
For example:
/dev/ttyUSB0- Unix-based OS
COM3- Windows OS
localhost:3333
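Example (the port value is just an illustration; use the device or address that matches your setup):
[env:debug] platform = ... board = ... debug_port = /dev/ttyUSB0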
debug_svd_path¶
Type:
FilePath | Multiple:
No
A custom path to SVD file which contains information about device peripherals.
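Example (the SVD file path is hypothetical):
[env:debug] platform = ... board = ... debug_svd_path = /path/to/custom_device.svd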
|
http://docs.platformio.org/en/stable/projectconf/section_env_debug.html
| 2019-08-17T11:35:11 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.platformio.org
|
This error message returned by the SQL Server Advanced Monitor indicates that Uptime Infrastructure Monitor is not able to find the SQL Server performance counters on the target server. Check the following items if this occurs:
- Is the hostname / instance name correct? Try the monitor with and without an instance name to verify if that changes the scenario.
- Is the DB running SQL 2005 or 2008 with named instances? If so, there are known issues when monitoring these environments if a specific counter fix has not been applied to the system.
- Restart the Uptime agent, as it may have started when SQL Server was not yet online.
- Manually verify that the SQL 2000 performance counter objects exist.
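One way to perform the last check is to list the installed performance counter objects on the target Windows host and filter for the SQL Server set, for example (counter names vary by SQL Server version and instance):
typeperf -q | findstr /i "SQLServer"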
|
http://docs.uptimesoftware.com/pages/diffpagesbyversion.action?pageId=4555099&selectedPageVersions=3&selectedPageVersions=4
| 2019-08-17T11:29:10 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.uptimesoftware.com
|
Each ULP assembly source file is first run through the C preprocessor; this step generates the preprocessed assembly files (foo.ulp.pS). For example, a ULP program may define a variable
measurement_count which will define the number of ADC measurements the program needs to make before waking up the chip from deep sleep:
.global measurement_count measurement_count: .long 0 /* later, use measurement_count */ move r3, measurement_count ld r3, r3, 0
sleep instruction.
The application can set ULP timer period values (SENS_ULP_CP_SLEEP_CYCx_REG, x = 0..4) by writing those registers, which can be done both from ULP code and from the main program; the main program can also use the ulp_set_wakeup_period helper function.
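As a rough sketch of the main-program side (the generated header name ulp_main.h and the values are assumptions, not part of this page), the counter defined above can be set and the wake-up period configured like this:
#include "esp32/ulp.h"   /* ulp_set_wakeup_period() */
#include "ulp_main.h"    /* auto-generated header exposing ULP globals; name assumed */

/* every .global symbol in the ULP program is exported as uint32_t ulp_<symbol>;
   only the lower 16 bits hold the value */
extern uint32_t ulp_measurement_count;

static void configure_ulp(void)
{
    ulp_measurement_count = 100;      /* take 100 ADC measurements before waking the chip */
    ulp_set_wakeup_period(0, 20000);  /* run the ULP program every 20 ms (period index 0, in microseconds) */
}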
|
https://docs.espressif.com/projects/esp-idf/en/latest/api-guides/ulp-legacy.html
| 2019-08-17T10:33:03 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.espressif.com
|
GetOwner method of the Win32_Process class
The GetOwner WMI class method retrieves the user name and domain name under which the process is running.
This topic uses Managed Object Format (MOF) syntax. For more information about using this method, see Calling a Method.
Syntax
uint32 GetOwner( [out] string User, [out] string Domain );
Parameters
User [out]
Returns the user name of the owner of this process.
Domain [out]
Returns the domain name under which this process is running.
Return value
Returns zero (0) to indicate success. Any other number indicates an error.
Examples
The Monitor Process CPU Pct by Name with Owner VBScript sample collects the CPU or Processor utilization percent and looks up the process owner.
The Get all servers that a list of users is logged onto PowerShell sample queries WMI for the owner of all explorer.exe processes.
The following VBScript code example obtains the owner for each running process.
strComputer = "." Set colProcesses = GetObject("winmgmts:" & _ "{impersonationLevel=impersonate}!\\" & strComputer & _ "\root\cimv2").ExecQuery("Select * from Win32_Process") For Each objProcess in colProcesses Return = objProcess.GetOwner(strNameOfUser) If Return <> 0 Then Wscript.Echo "Could not get owner info for process " & _ objProcess.Name & VBNewLine _ & "Error = " & Return Else Wscript.Echo "Process " _ & objProcess.Name & " is owned by " _ & "\" & strNameOfUser & "." End If Next
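The same method can also be invoked from PowerShell through CIM; the sketch below (the process name filter is chosen only as an example) mirrors the VBScript logic:
Get-CimInstance -ClassName Win32_Process -Filter "Name = 'explorer.exe'" | ForEach-Object {
    $owner = Invoke-CimMethod -InputObject $_ -MethodName GetOwner
    if ($owner.ReturnValue -eq 0) {
        '{0} is owned by {1}\{2}' -f $_.Name, $owner.Domain, $owner.User
    } else {
        'Could not get owner info for process {0} (error {1})' -f $_.Name, $owner.ReturnValue
    }
}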
|
https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/getowner-method-in-class-win32-process
| 2019-08-17T12:33:58 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
docs.microsoft.com
|
pandas.Series.dot¶
Series.
dot(self, other)[source]¶
Compute the dot product between the Series and the columns of other.
This method computes the dot product between the Series and another one, or the Series and each column of a DataFrame, or the Series and each column of an array.
It can also be called using self @ other in Python >= 3.5.
See also
DataFrame.dot
- Compute the matrix product with the DataFrame.
Series.mul
- Multiplication of series and other, element-wise.
Notes
The Series and other have to share the same index if other is a Series or a DataFrame.
Examples
>>> s = pd.Series([0, 1, 2, 3]) >>> other = pd.Series([-1, 2, -3, 4]) >>> s.dot(other) 8 >>> s @ other 8 >>> df = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]]) >>> s.dot(df) 0 24 1 14 dtype: int64 >>> arr = np.array([[0, 1], [-2, 3], [4, -5], [6, 7]]) >>> s.dot(arr) array([24, 14])
|
http://pandas-docs.github.io/pandas-docs-travis/reference/api/pandas.Series.dot.html
| 2019-08-17T11:55:20 |
CC-MAIN-2019-35
|
1566027312128.3
|
[]
|
pandas-docs.github.io
|
RD Gateway Setup
Initial Remote Administration Architecture
When you initially configure your RD Gateways, the servers in the public subnet will need an inbound security group rule permitting TCP port 3389 from the administrator's source IP address or subnet. Windows instances sitting behind the RD Gateway in a private subnet should be in their own isolated tier. For example, a group of web server instances in a private subnet may be associated with their own web tier security group. This security group will need an inbound rule allowing connections from the RD Gateway on TCP port 3389.
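As an illustration only (the security group IDs are placeholders, not values from this guide), the web tier rule could be added with the AWS CLI:
# allow RDP (TCP 3389) into the web tier only from the RD Gateway's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3389 --source-group sg-0fedcba9876543210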
Using this architecture, an administrator can use a traditional RDP connection to an RD Gateway to configure the local server. The RD Gateway can also be used as a jump box; once an RDP connection is established to the desktop of the RD Gateway, an administrator can start a new RDP client session to initiate a connection to an instance in a private subnet.
Figure 3: Initial Architecture for Remote Administration
While this architecture works well for initial administration, it is not recommended for the long term. To further secure connections and reduce the number of RDP sessions required to administer the servers in the private subnets, the RD Gateway service should be installed and configured with an SSL certificate, and connection and authorization policies.
RD Gateway Installation
The installation of the RD Gateway role is very straightforward. This can be performed from the Server Manager or with a single PowerShell command on Windows Server 2012:
Install-WindowsFeature RDS-Gateway -IncludeManagementTools
This command should be run from a PowerShell instance started with administrative privileges. Once complete, the RD Gateway role, along with all pre-requisite software and administration tools, will be installed on your Windows Server 2012, Amazon EC2 instance.
For Windows Server 2008 R2-based installations, we recommend following the detailed installation instructions in the Remote Desktop Services documentation (Microsoft TechNet Library).
SSL Certificates
The RD Gateway role uses Transport Layer Security (TLS) to encrypt communications over the Internet between administrators and gateway servers. To support TLS, a valid X.509 SSL certificate must be installed on each RD Gateway. Certificates can be acquired in a number of ways, including the following common options:
Your own PKI infrastructure, such as a Microsoft Enterprise Certificate Authority (CA)
Certificates issued by a public CA, such as Verisign or Digicert
Self-signed certificates
For smaller test environments, implementing a self-signed certificate is a straightforward process that allows you to get up and running quickly. However, if you have a large number of varying administrative devices that need to establish a connection to your gateways, we recommend using a public certificate.
In order for an RDP client to establish a secure connection with an RD Gateway, the following certificate and DNS requirements must be met:
The issuing CA of the certificate installed on the gateway must be trusted by the RDP client. For example, the root CA certificate must be installed in the client machine’s Trusted Root Certification Authorities store.
The subject name used on the certificate installed on the gateway must match the DNS name used by the client to connect to the server; for example, rdgw1.example.com.
The client must be able to resolve the host name (for example, rdgw1.example.com) to the EIP of the RD Gateway. This will require a Host (A) record in DNS.
There are various considerations when choosing the right CA to obtain an SSL certificate. For example, a public certificate may be ideal since the issuing CA will be widely trusted by the majority of client devices that need to connect to your gateways. On the other hand, you may choose to utilize your own PKI infrastructure to ensure that only the machines that are part of your organization will trust the issuing CA.
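As a quick sanity check of the certificate and DNS requirements above, the following Python sketch can be run from an administrator's workstation. It is an illustration, not part of the deployment guide: it assumes the gateway already listens for HTTPS on TCP port 443 (as in the final architecture below) and uses the example name rdgw1.example.com.

import socket
import ssl

hostname = "rdgw1.example.com"   # example FQDN from the requirements above
port = 443                       # assumes the gateway is already listening on 443

# Requirement 3: the name must resolve (to the gateway's EIP).
print("Resolves to:", socket.gethostbyname(hostname))

# Requirements 1 and 2: the issuing CA must be trusted by this machine and the
# certificate subject must match the DNS name; create_default_context()
# enforces both checks during the TLS handshake.
context = ssl.create_default_context()
with socket.create_connection((hostname, port), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("TLS version:", tls.version())
        print("Certificate subject:", tls.getpeercert().get("subject"))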
Implementing a Self-Signed Certificate
If you choose a self-signed certificate, you will need to install the root CA certificate on every client device. Keep in mind that in order to provide an automated solution, the AWS CloudFormation templates provided in this guide utilize a self-signed certificate for the RD Gateway service. If you are not using the automated deployment, you can follow the steps below to generate a self-signed certificate.
The RD Gateway management tools provide a mechanism for generating a self-signed certificate.
To install a self-signed certificate:
Launch the RD Gateway Manager.
Right-click the local server name, and select Properties.
Figure 4: Navigating the RD Gateway Manager
On the SSL Certificate tab, ensure that Create a self-signed certificate is selected and click Create and Import a Certificate.
Figure 5: SSL Certificate Settings on the RD Gateway
Ensure that the correct fully-qualified domain name (FQDN) is listed for the Certificate name. Make note of the root certificate location and click OK.
Figure 6: Creating a Self-Signed Certificate
After installing the certificate, closing and reopening the server's Properties dialog box will show the new self-signed certificate successfully installed.
Figure 7: Viewing the SSL Certificate Settings After Creating a New Certificate
Connection and Resource Authorization Policies
Once you've installed the RD Gateway role and an SSL certificate, you are ready to configure connection and resource authorization policies.
Connection authorization policies – Remote Desktop connection authorization policies (RD CAPs) allow you to specify who can connect to an RD Gateway instance. For example, you can select a group of users from your domain, such as Domain Admins.
Resource authorization policies – Remote Desktop resource authorization policies (RD RAPs) allow you to specify the internal Windows-based instances that remote users can connect to through an RD Gateway instance. For example, you can choose specific domain-joined computers which administrators can connect to through the RD Gateway.
To configure the policies:
Launch the RD Gateway Manager.
Right-click the Policies branch and select Create New Authorization Policies.
Figure 8: RD Gateway Authorization Policies
In the Create New Authorization Policies wizard, select Create a RD CAP and a RD RAP (recommended), and then click Next.
Figure 9: Select Authorization Policies
Provide a friendly name for your RD CAP, and then click Next.
On the Select Requirements screen, define the authentication method and groups that should be permitted to connect to the RD Gateway, and then click Next.
Figure 10: Configure Authentication Method and Groups for RD CAP
Choose whether to enable or disable device redirection, and then click Next.
Specify your time-out and reconnection settings, and then click Next.
On the RD CAP Settings Summary screen, click Next.
Provide a friendly name for your RD RAP, and then click Next.
Select the user groups that will be associated with the RAP, and then click Next.
Figure 11: Select Group Memberships for RD RAP
Select the Windows-based instances (network resources) that administrators should be able to connect to through the RD Gateway. This can be a security group in AD containing specific computers. For this example, we'll allow administrators to connect to any computer. Click Next.
Figure 12: Select Network Resources
Allow connections to TCP port 3389, and then click Next.
Figure 13: Select RDP Port
Click Finish, and then click Close.
RD Gateway Architecture on the AWS Cloud
After you configure connection and resource authorization policies, you can modify the security group for RD Gateway to use a single inbound rule permitting TCP port 443. This modification will allow a Transport Layer Security (TLS) encrypted RDP connection to be proxied through the gateway over TCP port 443 directly to one or more Windows-based instances in private subnets on TCP port 3389. This configuration increases the security of the connection and also prevents the need to initiate an RDP session to the desktop of the RD Gateway.
Figure 14: Architecture for RD Gateway Administrative Access
|
http://docs.aws.amazon.com/quickstart/latest/rd-gateway/setup.html
| 2017-04-23T09:58:37 |
CC-MAIN-2017-17
|
1492917118519.29
|
[array(['images/remote-admin-arch1.png',
'Initial Architecture for Remote Administration'], dtype=object)
array(['images/remote-admin-arch2.png',
'Architecture for RD Gateway Administrative Access'], dtype=object)]
|
docs.aws.amazon.com
|
From Genesys Documentation
Using the Opt-Out Feature With CPD Server.
The opt-out feature is supported with CPD Server (and optionally CPD Server Proxy) through Dual-Tone Multi-Frequency (DTMF) digit detection. This means that the call recipient must press a button(s) on the touch tone phone to be marked with a DoNotCall request. CPD Server uses Dialogic Application Programming Interfaces (API) to detect DTMF and supports this feature with the following Dialogic configurations:
- Dialogic Springware boards (ISDN)
- Dialogic DM3 boards (ISDN)
- Dialogic HMP
OCS instructs CPD Server to play a message to the call recipient and optionally detect DTMF during or after the message if there are no agents available to speak to the call recipient. CPD Server performs the DTMF detection and passes the string of detected digits back to OCS for processing. OCS then processes the DTMF string and marks the recipient's number with a DoNotCall request if the DTMF string that was detected by CPD Server matches the pre-configured pattern of the opt-out selection, for example, when the call recipient presses the '9' button.
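Conceptually, the matching step OCS performs can be pictured with a small Python sketch. This is purely illustrative; it is not Genesys code, and the pattern value is an assumption.

OPT_OUT_PATTERN = "9"  # assumed pre-configured opt-out selection

def is_opt_out(detected_dtmf: str) -> bool:
    # The number is marked with a DoNotCall request only when the detected
    # digit string matches the configured pattern.
    return detected_dtmf == OPT_OUT_PATTERN

print(is_opt_out("9"))    # True  -> record a DoNotCall request
print(is_opt_out("91"))   # False -> no opt-out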
To enable this functionality, see the descriptions of the related configuration options.
|
http://docs.genesys.com/Documentation/OU/latest/Dep/UsingtheOpt-OutFeatureWithCPDServer
| 2014-10-20T08:04:19 |
CC-MAIN-2014-42
|
1413507442288.9
|
[]
|
docs.genesys.com
|
Integration Guide
Configure the ad hoc conference settings
You must configure the ad hoc conference settings on the Cisco Unified Communications Manager so that BlackBerry Mobile Voice System users can add participants to active calls.
The ad hoc conference settings are only available on Cisco Unified Communications Manager 7.x or later.
To make the changes, in the Cisco Unified Communications Manager UI, click System > Service Parameters > Cisco UCM Service.
|
http://docs.blackberry.com/en/admin/deliverables/45871/Ad_hoc_settings_UCM_BBMVS_1586870_11.jsp
| 2014-10-20T08:56:03 |
CC-MAIN-2014-42
|
1413507442288.9
|
[]
|
docs.blackberry.com
|
This page represents the current plan; for discussion please check the tracker link above.
Description
This proposal:
- introduces DataAccess as a super class of DataStore
- traditional DataStore methods are maintained; often type narrowing a method in DataAccess
- client code can be written against DataAccess for the general case; DataStore offers more specific methods that can make use of the SimpleFeature assumption
- this proposal covers a naming convention / design strategy that can be used for GridAccess as well
Additional information:
Status
Voting took place at today's IRC meeting over approach #1 (Generics + DataStore superclass), see Dry Run at DataAccess+Story for a summary.
- Andrea Aime 0
- Ian Turton
- Justin Deoliveira +1
- Jody Garnett +1
- Martin Desruisseaux +1
- Simone Giannecchini +1
Tasks
Introduce DataAccess level classes
Allow DataStore level classes to extend; patching up implementations as needed
Update and test GeoServer
Update and test uDig (and axios community edit tools)
Update the user guide
API Changes
The API changes needed are minimal and respect the current interfaces and behaviour. The general strategy is to pull up the common methods from DataStore to a superclass and parametrize as per the Feature and FeatureType flavor they use.
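As a language-neutral illustration of the parametrization idea (GeoTools itself is Java; the Python below is only a sketch of the pattern, with invented method and class names), the superclass is generic over the Feature/FeatureType flavour and DataStore narrows it:

from typing import Generic, List, TypeVar

T = TypeVar("T")  # FeatureType flavour
F = TypeVar("F")  # Feature flavour

class DataAccess(Generic[T, F]):
    # general-case API that client code can be written against
    def get_schema(self, name: str) -> T: ...
    def get_features(self, name: str) -> List[F]: ...

class SimpleFeatureType: ...
class SimpleFeature: ...

class DataStore(DataAccess[SimpleFeatureType, SimpleFeature]):
    # existing DataStore methods keep their narrowed, SimpleFeature-specific
    # signatures, so current client code continues to work unchanged
    def get_schema(self, name: str) -> SimpleFeatureType: ...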
BEFORE
AFTER
BEFORE
AFTER
Documentation Changes
- Developers Guide will need a section on adding a DataAccess api that has the right feel
- Data Module from the Module matrix page
- Upgrade to 2.5
|
http://docs.codehaus.org/display/GEOTOOLS/DataAccess+super+class+for+DataStore
| 2014-10-20T08:24:34 |
CC-MAIN-2014-42
|
1413507442288.9
|
[]
|
docs.codehaus.org
|
Click on Manage dashboards and fill in the form to create a new dashboard.
Dashboard: Default Project Dashboard Shipped with SonarQube™
The Default Dashboard gives an overview of your project (with widgets like Size, etc.) and its quality (with widgets like Rules compliance, Comments & Issues and Technical Debt, Duplications, etc.).
The metrics in each widget click through to a drilldown, from which you can hunt for the different kinds of quality flaws. SonarQube also picks up events automatically and records them; this is the case for changes in quality profile and quality gate status changes.
Treemap
Global Dashboard Shipped with SonarQube™: Home
|
http://docs.codehaus.org/pages/diffpages.action?pageId=163872785&originalId=231082166
| 2014-10-20T08:16:21 |
CC-MAIN-2014-42
|
1413507442288.9
|
[array(['/download/attachments/163872785/sonar_widgets.png?version=1&modificationDate=1338908147838&api=v2',
'sonar_widgets.png'], dtype=object)
array(['/download/attachments/163872785/widget-size.png?version=1&modificationDate=1339059056935&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/widget-complexity-1.png?version=1&modificationDate=1339059114123&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/design-widgets.png?version=1&modificationDate=1350318517485&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/163872785/widget-code-coverage-new-code.png?version=3&modificationDate=1339059906621&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/widget-comments.png?version=3&modificationDate=1339762503276&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/events.png?version=1&modificationDate=1339062729136&api=v2',
None], dtype=object) ]
|
docs.codehaus.org
|
Change the refresh rate for certificate revocation lists
You can change how often the certificate synchronization tool updates the certificate revocation lists on your BlackBerry smartphone.
- Connect your smartphone to your computer.
- In the BlackBerry Desktop Software, click Device > Device options.
- On the Certificates tab, in the Servers section, click Configure.
- On the CRL tab, do any of the following:
- If you want to update the certificate revocation lists on your smartphone every time that you connect your smartphone to the BlackBerry Desktop Software or synchronize certificates, set the Update the cached CRL servers every <#> hours field to 0.
- If you want to specify a refresh rate, set the Update the cached CRL servers every <#> hours field to a number other than 0.
- Click OK.
When you synchronize your certificates, the certificate synchronization tool queries the certificate revocation lists in the key store cache for the revocation status of the certificates and updates the revocation status on your smartphone if the status has changed.
|
http://docs.blackberry.com/en/smartphone_users/deliverables/43033/1478814.jsp
| 2014-10-20T08:09:48 |
CC-MAIN-2014-42
|
1413507442288.9
|
[]
|
docs.blackberry.com
|
The component viewer is the heart of SonarQube: it displays the source code of a file (both source and test files), and all relevant information about it:
You will land on the component viewer:
The component viewer is composed of 3 parts:
The header can contain up to 5 tabs, one per main axis: Overall Measures, Technical Debt and Issues, Coverage (for source files) or Tests (for test files), Duplications, and SCM. Tabs which aren't relevant to the current file won't be shown. For instance, if the project has no tests, the coverage tab will be omitted. Similarly, the duplications tab will be omitted if there are no duplications, and the SCM tab will be missing if the relevant plugin is not installed.
You can click on each tab to show its detailed metrics in a row below the tabs. Click the same tab again to toggle display of the metrics row.
Each tab consists of two parts: a thin blue line at the top, which controls decoration, and the tab itself, which controls filtering.
On top of each tab (except the first one), a light blue bar can be toggled to activate decoration of the source code with information relevant to the tab.
When a tab is expanded, it gives access to filtering actions.
Decoration and filtering work independently of each other. For instance, it's possible to filter the source code to see only the parts where there are info issues, while keeping the coverage information displayed on those lines - like in the example below:
If you click on one of the filters available on the Issue, Coverage, or Duplication tabs, the component viewer automatically toggles the appropriate decoration for you if it's not already active.
The workspace keeps track of your navigation history when you use the features of the component viewer itself to navigate between files. It can help you:
It is populated as soon as you initiate navigation using either Duplicated blocks or Coverage per tests.
Note that the workspace is automatically cleaned up once you stop navigating through these 2 features.
The main purpose of the component viewer is to show source code:
To learn more about the decorating and filtering capabilities of each of= the header tabs, please see
|
http://docs.codehaus.org/exportword?pageId=111706389
| 2014-10-20T08:22:29 |
CC-MAIN-2014-42
|
1413507442288.9
|
[]
|
docs.codehaus.org
|
How is data deleted?
How Cassandra deletes data and why deleted data can reappear.
Cassandra's processes for deleting data are designed to improve performance, and to work with Cassandra's built-in properties for data distribution and fault-tolerance. Cassandra treats a delete as an insert or upsert: the data added by a DELETE command is a deletion marker called a tombstone. The tombstone has a built-in expiration date/time; at the end of its expiration period (for details see below) the tombstone is deleted as part of Cassandra's normal compaction process.
You can also mark a Cassandra record (row or column) with a time-to-live value. After this amount of time has ended, Cassandra marks the record with a tombstone, and handles it like other tombstoned records.
Deletion in a distributed system
In a multi-node cluster, Cassandra can store replicas of the same data on two or more nodes. This helps prevent data loss, but it complicates the delete process. If a node receives a delete for data it stores locally, the node tombstones the specified record and tries to pass the tombstone to other nodes containing replicas of that record. But if one replica node is unresponsive at that time, it does not receive the tombstone immediately, so it still contains the pre-delete version of the record. If the tombstoned record has already been deleted from the rest of the cluster before that node recovers, Cassandra treats the record on the recovered node as new data, and propagates it to the rest of the cluster. This kind of deleted but persistent record is called a zombie.
To prevent the reappearance of zombies, Cassandra gives each tombstone a grace period. The purpose of the grace period is to give unresponsive nodes time to recover and process tombstones normally. If a client writes a new update to the tombstoned record during the grace period, Cassandra overwrites the tombstone. If a client sends a read for that record during the grace period, Cassandra disregards the tombstone and retrieves the record from other replicas if possible.
When an unresponsive node recovers, Cassandra uses hinted handoff to replay the database mutations the node missed while it was down. Cassandra does not replay a mutation for a tombstoned record during its grace period. But if the node does not recover until after the grace period ends, Cassandra may miss the deletion.
After the tombstone's grace period ends, Cassandra deletes the tombstone during compaction.
The grace period for a tombstone is set by the property gc_grace_seconds. Its default value is 864000 seconds (ten days). Each table can have its own value for this property.
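For illustration, the per-table property can also be adjusted from application code. The sketch below uses the DataStax Python driver; the contact point, keyspace, and table names are assumptions, not values from this article.

from cassandra.cluster import Cluster  # DataStax Python driver, assumed installed

cluster = Cluster(["127.0.0.1"])        # assumed contact point
session = cluster.connect("cycling")    # assumed keyspace

# Shorten the tombstone grace period for one table to one day (in seconds).
session.execute("ALTER TABLE comments WITH gc_grace_seconds = 86400")

# Insert a record with a TTL; when the TTL ends the record is tombstoned and
# later removed by compaction, as described above.
session.execute(
    "INSERT INTO comments (id, body) VALUES (%s, %s) USING TTL 3600",
    (1, "expires in one hour"),
)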
More about Cassandra deletes
Details:
- The expiration date/time for a tombstone is the date/time of its creation plus the value of the table property gc_grace_seconds.
- Cassandra also supports Batch data insertion and updates. This procedure also introduces the danger of replaying a record insertion after that record has been removed from the rest of the cluster. Cassandra does not replay a batched mutation for a tombstoned record that is still within its grace period.
- On a single-node cluster, you can set gc_grace_seconds to 0 (zero).
- To completely prevent the reappearance of zombie records, run nodetool repair on a node after it recovers, and on each table every gc_grace_seconds.
- If all records in a table are given a TTL at creation, and all are allowed to expire and not deleted manually, it is not necessary to run nodetool repair for that table on a regular basis.
- If you use the SizeTieredCompactionStrategy or DateTieredCompactionStrategy, you can delete tombstones immediately by manually starting the compaction process. CAUTION: If you force compaction, Cassandra may create one very large SSTable from all the data. Cassandra will not trigger another compaction for a long time. The data in the SSTable created during the forced compaction can grow very stale during this long period of non-compaction.
- Cassandra allows you to set a default_time_to_live property for an entire table. Columns and rows marked with regular TTLs are processed as described above; but when a record exceeds the table-level TTL, Cassandra deletes it immediately, without tombstoning or compaction.
- Cassandra supports immediate deletion through the DROP KEYSPACE and DROP TABLE statements.
|
https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/dml/dmlAboutDeletes.html
| 2021-01-15T18:51:00 |
CC-MAIN-2021-04
|
1610703495936.3
|
[]
|
docs.datastax.com
|
Read the Exporting for Android tutorial before attempting to build a custom export template.
요구사항¶
For compiling under Windows, Linux or macOS, the following is required:
- Python 3.5+.
- SCons 3.0+ build system.
See also: You also might need to set the variable ANDROID_NDK_HOME to the same path, especially if you are using custom Android modules, since some Gradle plugins rely on the NDK and use this variable to determine its location.
Building the export templates¶
Godot needs two export templates for Android: the optimized "release" template (android_release.apk) and the debug template (android_debug.apk).
As Google will require all APKs to include ARMv8 (64-bit) libraries starting
from August 2019, the commands below will build an APK containing both
ARMv7 and ARMv8 libraries.
Compiling the standard export templates is done by calling SCons from the Godot root directory with the following arguments:
- Release template (used when exporting with Debugging Enabled unchecked)
scons platform=android target=release android_arch=armv7
scons platform=android target=release android_arch=arm64v8
The resulting APK will be located at
bin/android_release.apk.
- Debug template (used when exporting with Debugging Enabled checked)
scons platform=android target=release_debug android_arch=armv7
scons platform=android target=release_debug android_arch=arm64v8
cd platform/android/java
# On Windows
.\gradlew
Using the export templates¶
Godot needs release and debug APKs that were compiled against the same version/commit as the editor. If you are using official binaries for the editor, make sure to install the matching export templates, or build your own from the same version.
When exporting your game, Godot opens the APK, changes a few things inside and adds your files.
Installing the templates¶
The newly-compiled templates (android_debug.apk and android_release.apk) must be copied to Godot's templates folder with their respective names. The templates folder can be located in:
- Windows:
%APPDATA%\Godot\templates\<version>\
- Linux:
$HOME/.local/share/godot/templates/<version>/
- macOS:
$HOME/Library/Application Support/Godot/templates/<version>/
<version> is of the form major.minor[.patch].status, using values from version.py in your Godot source repository (e.g. 3.0.5.stable or 3.1.dev).
You also need to write this same version string to a
version.txt file located
next to your export templates.
However, if you are writing your custom modules or custom C++ code, you might instead want to configure your APKs as custom export templates here:
You don't even need to copy them, you can just reference the resulting
file in the
bin\ directory of your Godot source folder, so that the
next time you build you will automatically have the custom templates
referenced.
문제해결¶
Platform doesn't appear in SCons¶
Double-check that you've set both the
ANDROID_HOME and
ANDROID_NDK_ROOT
environment variables. This is required for the platform to appear in SCons'
list of detected platforms.
See Setting up the buildsystem
for more information.
Application not installed¶
Android might complain the application is not correctly installed. If so:
- Check that the debug keystore is properly generated.
- Check that the jarsigner executable is from JDK 8.
If it still fails, open a command line and run logcat:
adb logcat
Then check the output while the application is installed; the error message should be presented there. Seek assistance if you can't figure it out.
Application exits immediately¶
If the application runs but exits immediately, this might be due to one of the following reasons:
- Make sure to use export templates that match your editor version; if you use a new Godot version, you have to update the templates too.
- libgodot_android.so is not in libs/<android_arch>/ where <android_arch> is the device's architecture.
- The device's architecture does not match the exported one(s). Make sure your templates were built for that device's architecture, and that the export settings included support for that architecture.
In any case,
adb logcat should also show the cause of the error.
|
https://docs.godotengine.org/ko/stable/development/compiling/compiling_for_android.html
| 2021-01-15T18:19:19 |
CC-MAIN-2021-04
|
1610703495936.3
|
[array(['../../_images/andtemplates.png', '../../_images/andtemplates.png'],
dtype=object) ]
|
docs.godotengine.org
|
How to Create Modal
You can create modal very easily to display what you want with the free Ocean Modal Window extension.
To do this, follow these simple steps:
1. Create your modal
Go to Modal > Add New, add a title for your modal and configure the options like the screenshot below:
Note: All modals are automatically added to the wp_footer.
2. Place your link to open your modal
You can see a Modal Link metabox when creating your modal. This is the ID and class of the link to open your modal. You can place the link provided in Full Link where you like or if you want to place your link in a menu, here are the steps to do:
- 1
- Go to Appearance > Menus and create a new menu item in Custom Links.
- 2
- Enter your modal ID in URL, e.g. "#omw-1232".
- 3
- Add the omw-open-modal class.
That's all, enjoy this extension!
|
https://docs.oceanwp.org/article/361-how-to-create-modal
| 2021-01-15T18:45:10 |
CC-MAIN-2021-04
|
1610703495936.3
|
[]
|
docs.oceanwp.org
|
With the Line element, you can put horizontal solid or dashed lines in different sizes.
The Line element contains the following elements:
Size - Size can be small, medium, or large
Style - Style can be solid or dashed
Length - Length can be short, medium, or long
Let's fill the Line element attributes.
You can see all elements in the calculator in this Line example to check how it appears.
Here is the result of this calculator with Line element appearance.
|
https://docs.stylemixthemes.com/cost-calculator-builder/calculator-elements/line
| 2021-01-15T16:56:00 |
CC-MAIN-2021-04
|
1610703495936.3
|
[]
|
docs.stylemixthemes.com
|
Ansible module development: getting started¶
A module is a reusable, standalone script that provides a defined interface, accepting arguments and returning information to Ansible by printing a JSON string to stdout before exiting. Ansible ships with thousands of modules, and you can easily write your own. If you're writing a module for local use, you can choose any programming language and follow your own rules. This tutorial illustrates how to get started developing an Ansible module in Python.
Topics
- Environment setup
- Starting a new module
- Exercising your module code
- Testing basics
- Contributing back to Ansible
- Communication and development support
- Credit
Environment setup¶
Prerequisites via apt (Ubuntu)¶
Due to dependencies (for example ansible -> paramiko -> pynacl -> libffi):
sudo apt update sudo apt install build-essential libssl-dev libffi-dev python-dev
Common environment setup¶
- Clone the Ansible repository:
$ git clone
- Change directory into the repository root dir:
$ cd ansible
- Create a virtual environment:
$ python3 -m venv venv (or for Python 2
$ virtualenv venv. Note, this requires you to install the virtualenv package:
$ pip install virtualenv)
- Activate the virtual environment:
$ . venv/bin/activate
- Install development requirements:
$ pip install -r requirements.txt
- Run the environment setup script for each new dev shell process:
$ . hacking/env-setup
Note
After the initial setup above, every time you are ready to start
developing Ansible you should be able to just run the following from the
root of the Ansible repo:
$ . venv/bin/activate && . hacking/env-setup
Starting a new module¶
To create a new module:
- Navigate to the correct directory for your new module:
$ cd lib/ansible/modules/cloud/azure/
- Create your new module file:
$ touch my_test.py
- Paste the content below into your new module file. It includes the required Ansible format and documentation and some example code.
- Modify and extend the code to do what you want your new module to do. See the programming tips and Python 3 compatibility pages for pointers on writing clean, concise module code.
#!/usr/bin/python

# Copyright: (c) 2018, Terry Jones <[email protected]>
# GNU General Public License v3.0+ (see COPYING or)

ANSIBLE_METADATA = {
    'metadata_version': '1.1',
    'status': ['preview'],
    'supported_by': 'community'
}

DOCUMENTATION = '''
---
module: my_test

short_description: This is my test module

version_added: "2.4"

description:
    - "This is my longer description explaining my test module"

options:
    name:
        description:
            - This is the message to send to the test module
        required: true
    new:
        description:
            - Control to demo if the result of this module is changed or not
        required: false

extends_documentation_fragment:
    - azure

author:
    - Your Name (@yourhandle)
'''

EXAMPLES = '''
# Pass in a message
- name: Test with a message
  my_test:
    name: hello world

# pass in a message and have changed true
- name: Test with a message and changed output
  my_test:
    name: hello world
    new: true

# fail the module
- name: Test failure of the module
  my_test:
    name: fail me
'''

RETURN = '''
original_message:
    description: The original name param that was passed in
    type: str
    returned: always
message:
    description: The output message that the test module generates
    type: str
    returned: always
'''

from ansible.module_utils.basic import AnsibleModule


def run_module():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True),
        new=dict(type='bool', required=False, default=False)
    )

    # seed the result dict in the object; changed stays False unless this
    # module actually modifies something on the target
    result = dict(
        changed=False,
        original_message='',
        message=''
    )

    # the AnsibleModule object is our abstraction for working with Ansible;
    # it parses the arguments and handles check mode for us
    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )

    # in check mode, return the current state without making any changes
    if module.check_mode:
        module.exit_json(**result)

    # manipulate or modify the state as needed (this is where your module
    # does what it needs to do)
    result['original_message'] = module.params['name']
    result['message'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    if module.params['new']:
        result['changed'] = True

    # if there is an exception or a conditional state that effectively
    # causes a failure, call fail_json() to pass in the message and result
    if module.params['name'] == 'fail me':
        module.fail_json(msg='You requested this to fail', **result)

    # on success, exit and pass the key/value results back to Ansible
    module.exit_json(**result)


def main():
    run_module()


if __name__ == '__main__':
    main()
Exercising your module code¶
Once you’ve modified the sample code above to do what you want, you can try out your module. Our debugging tips will help if you run into bugs as you exercise your module code.
Exercising module code locally¶
If your module does not need to target a remote host, you can quickly and easily exercise your code locally like this:
- Create an arguments file, a basic JSON config file that passes parameters to your module so you can run it. Name the arguments file
/tmp/args.json and add the following content:
{ "ANSIBLE_MODULE_ARGS": { "name": "hello", "new": true } }
- If you are using a virtual environment (highly recommended for development) activate it:
$ . venv/bin/activate
- Setup the environment for development:
$ . hacking/env-setup
- Run your test module locally and directly:
$ python -m ansible.modules.cloud.azure.my_test /tmp/args.json
This should return output like this:
{"changed": true, "state": {"original_message": "hello", "new_message": "goodbye"}, "invocation": {"module_args": {"name": "hello", "new": true}}}
Exercising module code in a playbook¶
The next step in testing your new module is to consume it with an Ansible playbook.
Create a playbook in any directory:
$ touch testmod.yml
Add the following to the new playbook file:
- name: test my new module
  hosts: localhost
  tasks:
  - name: run the new module
    my_test:
      name: 'hello'
      new: true
    register: testout
  - name: dump test output
    debug:
      msg: '{{ testout }}'
Run the playbook and analyze the output:
$ ansible-playbook ./testmod.yml
Testing basics¶
These two examples will get you started with testing your module code. Please review our testing section for more detailed information, including instructions for testing module documentation, adding integration tests, and more.
Sanity tests¶
You can run through Ansible’s sanity checks in a container:
$ ansible-test sanity -v --docker --python 2.7 MODULE_NAME
Note that this example requires Docker to be installed and running. If you’d rather not use a
container for this, you can choose to use
--tox instead of
--docker.
Unit tests¶
You can add unit tests for your module in
./test/units/modules. You must first set up your testing environment. In this example, we're using Python 3.5.
- Install the requirements (outside of your virtual environment):
$ pip3 install -r ./test/runner/requirements/units.txt
- To run all tests do the following:
$ ansible-test units --python 3.5 (you must run
. hacking/env-setup prior to this)
Note
Ansible uses pytest for unit testing.
To run pytest against a single test module, you can do the following (provide the path to the test module appropriately):
$ pytest -r a --cov=. --cov-report=html --fulltrace --color yes
test/units/modules/.../test/my_test.py
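For the my_test module above, a first unit test might look roughly like the sketch below. The file path and the set_module_args helper are assumptions for illustration; the test utilities shipped with Ansible may differ.

# test/units/modules/cloud/azure/test_my_test.py (illustrative path)
import json

import pytest
from ansible.module_utils import basic
from ansible.module_utils._text import to_bytes


def set_module_args(args):
    # Make the module read its parameters from here instead of stdin.
    basic._ANSIBLE_ARGS = to_bytes(json.dumps({'ANSIBLE_MODULE_ARGS': args}))


def test_message_is_echoed_and_changed(capsys):
    set_module_args({'name': 'hello', 'new': True})
    from ansible.modules.cloud.azure import my_test
    with pytest.raises(SystemExit):      # exit_json() ends with sys.exit()
        my_test.main()
    result = json.loads(capsys.readouterr().out)
    assert result['original_message'] == 'hello'
    assert result['changed'] is True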
Contributing back to Ansible¶
If you would like to contribute to the main Ansible repository
by adding a new feature or fixing a bug, create a fork
of the Ansible repository and develop against a new feature
branch using the
devel branch as a starting point.
When you have a good working code change, you can
submit a pull request to the Ansible repository by selecting
your feature branch as a source and the Ansible devel branch as
a target.
If you want to contribute your module back to the upstream Ansible repo, review our submission checklist, programming tips, and strategy for maintaining Python 2 and Python 3 compatibility, as well as information about testing before you open a pull request. The Community Guide covers how to open a pull request and what happens next.
Communication and development support¶
Join the IRC channel
#ansible-devel on freenode for discussions
surrounding Ansible development.
For questions and discussions pertaining to using the Ansible product,
use the
#ansible channel.
Credit¶
Thank you to Thomas Stringer (@trstringer) for contributing source material for this topic.
|
https://docs.ansible.com/ansible/2.8/dev_guide/developing_modules_general.html
| 2021-01-15T18:13:04 |
CC-MAIN-2021-04
|
1610703495936.3
|
[]
|
docs.ansible.com
|
Hi Team,
I am trying to deploy a Java function in Azure using a DevOps CI/CD pipeline. I have used a Maven task to build my pipeline and then in releases I have used an Azure Function task to deploy that function into the portal. The deployment is successful, but the deployed functions don't work in the portal. There are around 6-7 functions and, except for a ping function, none of the others work. The ping function just gives a response and nothing else. The other non-working functions are returning 500 Internal Server Error. The function gets deployed successfully via Visual Studio Code and works, but there seems to be an issue using Azure DevOps.
The Function App created is inside an ASE environment.
Could you please help me and provide some inputs.
|
https://docs.microsoft.com/en-us/answers/questions/146142/how-to-deploy-a-java-function-using-azure-devops.html
| 2021-01-15T18:59:13 |
CC-MAIN-2021-04
|
1610703495936.3
|
[]
|
docs.microsoft.com
|
TechEd Europe 2009 and Twitter Analyzer 1.0
I’ve included my session materials here from TechEd Europe 2009 in Berlin.
“Twitter Analyzer” Demo Solution
(Note: This application was built using Visual Studio 2010 Beta 2, IronPython 2.6 CTP for .NET 4.0 Beta 2 and Office 2007.)
DEV314 - A Lap around Microsoft Visual Basic in Microsoft Visual Studio 2010
Come!
DEV314 - A Lap Around Visual Basic in Microsoft Visual Studio 2010.zip
|
https://docs.microsoft.com/en-us/archive/blogs/lisa/teched-europe-2009-and-twitter-analyzer-1-0
| 2021-01-15T19:15:39 |
CC-MAIN-2021-04
|
1610703495936.3
|
[]
|
docs.microsoft.com
|
Excel Export
The Spreadsheet utilizes the Kendo UI for jQuery Excel export framework to produce Excel files directly in the browser.
The output files are in the OOXML Spreadsheet format with an
.xlsx extension. The legacy
.xls binary format is not supported.
User Interface
The default toolbar configuration includes an Export button. Clicking it opens a dialog box for entering the file name and selecting the desired output format for the exported document.
The following image demonstrates the export of the Spreadsheet data to Excel.
API Export Reference
The Spreadsheet client-side API includes the
saveAsExcel method for initiating the export with JavaScript. This method does not ask you to specify a file name. Instead, it sets the value in
excel.fileName.
@(Html.Kendo().Spreadsheet()
    .Name("spreadsheet")
    .Excel(ex => ex.FileName("Order.xlsx"))
)
Known Issues
Currently, the export module does not handle sorting and filtering. This limitation will be addressed before the widget goes out of its Beta version.
|
https://docs.telerik.com/aspnet-core/html-helpers/data-management/spreadsheet/import-and-export-data/export-to-excel
| 2021-01-15T19:15:03 |
CC-MAIN-2021-04
|
1610703495936.3
|
[array(['activate-export.png', 'Activating the Export to Excel dialog'],
dtype=object)
array(['export-to-excel.png', 'Exporting to Excel'], dtype=object)]
|
docs.telerik.com
|
The Apple Volume Purchase Program allows for apps to be deployed privately (not via the public App Store), whereby your user will request a unique code to download the app.
Creating an Apple Volume Purchase Program account is free. To complete the enrolment, you will need:
- Your company’s DUNS number
- Your company’s VAT number
- An email address not already associated with an Apple ID
To enrol as an Apple Business Manager, go to
Follow the instructions, and complete each of the fields:
The work email address will be used to create a new Apple ID. You will be asked to verify your email and set up the Apple ID details before returning to the enrolment process:
Now complete the rest of the steps, to provide a verification contact (someone else in your business to verify your application) and the details of your business, including DUNS and VAT numbers.
On the ‘Institution Details’ screen, you will need to enter your DUNS number. If you aren’t sure what your DUNS number is, you can look it up here.
After you submit your application, it may take up to 24 hours for Apple to get in touch with your verification contact, and another 24 hours to process your application.
|
https://docs.thrive.app/category/platform/app-distribution/
| 2021-01-15T17:29:41 |
CC-MAIN-2021-04
|
1610703495936.3
|
[array(['https://docs.thrive.app/wp-content/uploads/sites/2/2018/09/vppenrol.png',
None], dtype=object)
array(['https://docs.thrive.app/wp-content/uploads/sites/2/2018/06/2.png',
None], dtype=object) ]
|
docs.thrive.app
|
About IK Keyframes
When you animate a walking character and lock the feet down, the locked position will be perfect on the key poses. However, when you use motion keyframes to auto in-between the animation, you'll notice a movement of the feet. To fix the feet, you must correct the angles on the foot, leg and thigh. To fix a hand, you must correct the angles on the hand, forearm and upper arm.
|
https://docs.toonboom.com/help/harmony-17/premium/cut-out-animation/about-ik-keyframe.html
| 2021-01-15T17:12:51 |
CC-MAIN-2021-04
|
1610703495936.3
|
[array(['../Resources/Images/HAR/Stage/Cut-out/an_footsinking.png', None],
dtype=object) ]
|
docs.toonboom.com
|
condor_token_request¶
interactively request a token from a remote daemon for the IDTOKENS authentication method
Synopsis¶
condor_token_request [-identity user@domain] [-authz authz …] [-lifetime value] [-pool pool_name] [-name hostname] [-type type] [-token filename]
condor_token_request [-help ]
Description¶
condor_token_request will request an authentication token from a remote
daemon. Token requests must be approved by the daemon’s administrator using
condor_token_request_approve. Unlike condor_token_fetch, the user doesn’t
need an existing identity with the remote daemon when using
condor_token_request (an anonymous method, such as
SSL without a client
certificate will suffice).
If the request is successfully enqueued, the request ID will be printed to
stderr;
the administrator will need to know the ID to approve the request. condor_token_request
will wait until the request is approved, timing out after an hour.
The token request mechanism provides a powerful way to bootstrap authentication in an HTCondor pool - a remote user can request an identity, verify the authenticity of the request out-of-band with the remote daemon's administrator, and then securely receive their authentication token.
By default, condor_token_request will query the local condor_collector; the -pool, -name, and -type options may be used to request a token from a different daemon.
-identity user@domain
Request a specific identity from the daemon; a client using the resulting token will authenticate as this identity with a remote server. If not specified, the token will be issued for the condor identity.
-pool pool_name
Request the token from a daemon in the specified pool; if not given, the local condor_collector is used.
Examples¶
To obtain a token with a lifetime of 10 minutes from the default condor_collector (the token is not returned until the daemon’s administrator takes action):
$ condor_token_request -lifetime 600
Token request enqueued. Ask an administrator to please approve request 6108900.

$ condor_token_request -name bird.cs.wisc.edu \
    -identity [email protected] -authz READ -authz WRITE
Token request enqueued. Ask an administrator to please approve request 2578154.

$ condor_token_request -pool htcondor.cs.wisc.edu \
    -identity [email protected] \
    -lifetime 600 -token friend
Token request enqueued. Ask an administrator to please approve request 2720841.
Exit Status¶
condor_token_request will exit with a non-zero status value if it fails to request or receive the token. Otherwise, it will exit 0.
See also¶
condor_token_create(1), condor_token_fetch(1), condor_token_request_approve(1), condor_token_request_auto_approve(1), condor_token_list(1)
|
https://htcondor.readthedocs.io/en/v8_9_9/man-pages/condor_token_request.html
| 2021-01-15T18:09:59 |
CC-MAIN-2021-04
|
1610703495936.3
|
[]
|
htcondor.readthedocs.io
|
This Log Analytics tutorial gets you started with some basic queries and shows you how you can work with the results. You will learn the following:
- Understand the log data schema
- Write and run simple queries, and modify the time range for queries
- Filter, sort, and group query results
- View, modify, and share visuals of query results
- Load, export, and copy queries and results
Important
This tutorial uses features of Log Analytics to build and run a query instead of working with the query itself. You'll leverage Log Analytics features to build one query and use another example query. When you're ready to learn the syntax of queries and start directly editing the query itself, go through the Kusto Query Language tutorial. That tutorial walks through several example queries that you can edit and run in Log Analytics, leveraging several of the features that you'll learn in this tutorial.
Prerequisites
This tutorial uses the Log Analytics demo environment, which includes plenty of sample data supporting the sample queries. You can also use your own Azure subscription, but you may not have data in the same tables.
Open Log Analytics
Open the Log Analytics demo environment or select Logs from the Azure Monitor menu in your subscription. This will set the initial scope to a Log Analytics workspace meaning that your query will select from all data in that workspace. If you select Logs from an Azure resource's menu, the scope is set to only records from that resource. See Log query scope for details about the scope.
You can view the scope in the top left corner of the screen. If you're using your own environment, you'll see an option to select a different scope, but this option isn't available in the demo environment.
Table schema
The left side of the screen includes the Tables tab which allows you to inspect the tables that are available in the current scope. These are grouped by Solution by default, but you can change their grouping or filter them.
Expand the Log Management solution and locate the AzureActivity table. You can expand the table to view its schema, or hover over its name to show additional information about it.
Click Learn more to go to the table reference that documents each table and its columns. Click Preview data to have a quick look at a few recent records in the table. This can be useful to ensure that this is the data that you're expecting before you actually run a query with it.
Write a query
Let's go ahead and write a query using the AzureActivity table. Double-click its name to add it to the query window. You can also type directly in the window and even get intellisense that will help complete the names of tables in the current scope and KQL commands.
This is the simplest query that we can write. It just returns all the records in a table. Run it by clicking the Run button or by pressing Shift+Enter with the cursor positioned anywhere in the query text.
You can see that we do have results. The number of records returned by the query is displayed in the bottom right corner.
Filter
Let's add a filter to the query to reduce the number of records that are returned. Select the Filter tab in the left pane. This shows different columns in the query results that you can use to filter the results. The top values in those columns are displayed with the number of records with that value. Click on Administrative under CategoryValue and then Apply & Run.
A where statement is added to the query with the value you selected. The results now include only those records with that value so you can see that the record count is reduced.
Time range
All tables in a Log Analytics workspace have a column called TimeGenerated which is the time that the record was created. All queries have a time range that limits the results to records with a TimeGenerated value within that range. The time range can either be set in the query or with the selector at the top of the screen.
By default, the query will return records from the last 24 hours. Select the Time range dropdown and change it to 7 days. Click Run again to return the results. You can see that results are returned, but we have a message here that we're not seeing all of the results. This is because Log Analytics can return a maximum of 10,000 records, and our query returned more records than that.
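The same query and time range can also be run outside the portal. The sketch below uses the azure-monitor-query Python SDK, which is not part of this tutorial; the workspace ID is a placeholder you would replace with your own.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Same query the tutorial built, limited to the last 7 days.
response = client.query_workspace(
    workspace_id="<workspace-guid>",   # placeholder: your Log Analytics workspace ID
    query='AzureActivity | where CategoryValue == "Administrative"',
    timespan=timedelta(days=7),
)

for table in response.tables:
    print(f"{len(table.rows)} rows returned")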
Multiple query conditions
Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. Select Success under ActivityStatusValue and click Apply & Run.
Analyze results
In addition to helping you write and run queries, Log Analytics provides features for working with the results. Start by expanding a record to view the values for all of its columns.
Click on the name of any column to sort the results by that column. Click on CallerIpAddress column to limit the records to a single caller.
Instead of filtering the results, you can group records by a particular column. Clear the filter that you just created and then turn on the Group columns slider.
Now drag the CallerIpAddress column into the grouping row. Results are now organized by that column, and you can collapse each group to help you with your analysis.
Work with charts
Let's have a look at a query that uses numerical data that we can view in a chart. Instead of building a query, we'll select an example query.
Click on Queries in the left pane. This pane includes example queries that you can add to the query window. If you're using your own workspace, you should have a variety of queries in multiple categories, but if you're using the demo environment, you may only see a single Log Analytics workspaces category. Expand that to view the queries in the category.
Click on the query called Request Count by ResponseCode. This will add the query to the query window. Notice that the new query is separated from the other by a blank line. A query in KQL ends when it encounters a blank line, so these are seen as separate queries.
The current query is the one that the cursor is positioned on. You can see that the first query is highlighted indicating it's the current query. Click anywhere in the new query to select it and then click the Run button to run it.
Notice that this output is a chart instead of a table like the last query. That's because the example query uses a render command at the end. Notice that there are various options for working with the chart such as changing it to another type.
Try selecting Results to view the output of the query as a table.
Next steps
Now that you know how to use Log Analytics, complete the tutorial on using log queries.
|
https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/log-analytics-tutorial?WT.mc_id=thomasmaurer-blog-thmaure
| 2021-01-15T19:12:37 |
CC-MAIN-2021-04
|
1610703495936.3
|
[]
|
docs.microsoft.com
|
SOURCE
Executes a file containing CQL statements.
The output of each statement is shown in the standard output (STDOUT), including error
messages. You can use
IF NOT EXISTS to suppress errors for some statements,
such as
CREATE KEYSPACE. All statements in the file are executed, even if a
no-operation error occurs.
Synopsis
SOURCE 'file_name'
- file_name
Name of the file to execute. Specify the path of the file relative to the current directory, which is the directory where cqlsh was started on your local computer. Enclose the file name in single quotation marks. Use tilde (~) for your home directory.
Examples
Execute CQL statements from a file:
SOURCE '~/cycling_setup/create_ks_and_tables.cql'
You can also execute a CQL file when starting cqlsh: bin/cqlsh --file 'file_name'.
|
https://docs.datastax.com/en/dse/6.7/cql/cql/cql_reference/cqlsh_commands/cqlshSource.html
| 2021-01-15T17:40:11 |
CC-MAIN-2021-04
|
1610703495936.3
|
[]
|
docs.datastax.com
|
Installing Nornir¶
Before you go ahead and install Nornir, it’s recommended to create your own Python virtualenv. That way you have complete control of your environment and you don’t risk overwriting your systems Python environment.
Note
This tutorial doesn’t cover the creation of a Python virtual environment. The Python documentation offers a guide where you can learn more about virtualenvs. We also won’t cover the installation of pip, but chances are that you already have pip on your system.
Nornir is published to PyPI and can be installed like most other Python packages using the pip tool. You can verify that you have pip installed by typing:
pip --version
pip 9.0.3 from /Users/patrick/nornir/lib/python3.6/site-packages (python 3.6)
It could be that you need to use the pip3 binary instead of pip as pip3 is for Python 3 on some systems.
As you would assume, the installation is then very easy.
pip install nornir
Collecting nornir
Collecting colorama (from nornir)
[...]
Successfully installed MarkupSafe-1.0 asn1crypto-0.24.0 bcrypt-3.1.4 nornir-2.0.0
Please note that the above output has been abbreviated for readability. Your output will be quite a bit longer. You should see that nornir is successfully installed.
Now we can verify that Nornir is installed and that you are able to import the package from Python.
python
>>> import nornir.core
>>>
Great, now you’re ready to create an inventory.
|
https://nornir.readthedocs.io/en/latest/tutorials/intro/install.html
| 2020-03-28T21:05:35 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
nornir.readthedocs.io
|
Understand what field groups are and how you can use them.
Groups are used to organise your TYPE-fields. You can use the group-feature in form and display templates to group fields by tabs, slider etc.
To create groups for your fields, go to your TYPE and click on “Manage Groups”.
Then click New and proceed with the creation of as many groups/categories as you want. Drag and drop the handle to reorder groups as you want.
Later, when you create fields, you can assign them to the groups you have created.
Here is an example of where to use the Group feature:
Submission form template: use the parameter under TYPE to determine how the groups will be displayed.
|
http://docs.mintjoomla.com/en/cobalt/understanding-fileds-groups
| 2020-03-28T20:36:09 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.mintjoomla.com
|
When you use AI applications such as Amazon Rekognition, Amazon Textract, or your custom machine learning (ML) models, you can use Amazon Augmented AI to get human review of low-confidence predictions or a random sample of predictions.
Many machine learning applications require humans to review low-confidence predictions to ensure the results are correct. For example, extracting information from scanned mortgage application forms can require human review in some cases due to low-quality scans or poor handwriting. But building human review systems can be time-consuming and expensive because it involves implementing complex processes or workflows, writing custom software to manage review tasks and results, and in many cases, managing large groups of reviewers.
Amazon A2I makes it easy to build and manage human reviews for machine learning applications. Amazon A2I provides built-in human review workflows for common machine learning use cases, such as content moderation and text extraction from documents, which allows predictions from Amazon Rekognition and Amazon Textract to be reviewed easily. You can also create your own workflows for ML models built on Amazon SageMaker or any other tools. Using Amazon A2I, you can allow human reviewers to step in when a model is unable to make a high-confidence prediction or to audit its predictions on an ongoing basis.
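For a rough idea of what triggering a review looks like in code, here is a hedged boto3 sketch that starts a human loop for a custom workflow; the flow definition ARN, loop name, threshold, and input payload are placeholders, not values from this page.

import json

import boto3

a2i = boto3.client("sagemaker-a2i-runtime")

# Start a human review when the model's confidence is below a threshold.
model_confidence = 0.42
if model_confidence < 0.70:                      # assumed threshold
    a2i.start_human_loop(
        HumanLoopName="review-example-001",      # placeholder
        FlowDefinitionArn="arn:aws:sagemaker:us-east-1:123456789012:flow-definition/my-flow",  # placeholder
        HumanLoopInput={
            "InputContent": json.dumps({"taskObject": "s3://my-bucket/doc-1.png"})  # placeholder
        },
    )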
Topics
|
https://docs.aws.amazon.com/sagemaker/latest/dg/use-augmented-ai-a2i-human-review-loops.html
| 2020-03-28T21:37:11 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.aws.amazon.com
|
A public space is a place where you can write down the project you want to share with anyone 🌍. It will be indexed by search engines (or not: see the unlisted feature) so everyone can read it. You still have control over who can edit your content (including seeing your drafts).
🧙 Tips: As public spaces are readable by everyone, we advise you to use your company's logo to share your knowledge under your brand's identity.
A private space is a project that only you and the team members you invited to collaborate can access. That means it can only be read and edited by members of your organization. This is a more secure way to keep your content private to a specific group of people.
Team members can be invited to a private space via an invite link or a shareable link which is a secret link allowing non-team GitBook users (customers or partners) to access your private content in read mode only.
🧠 Note: Visiting your private space will require you to be logged in and to have the right permissions to access it, unless it is shared via the shareable link.
🧙♂ Tips: For private spaces, you can pick an emoji as an avatar of your space. This can help to better identify your space.
Learn more about shareable link 🔗 :
You can choose your space's visibility when creating one, but don't worry you can change this any time, and it's very easy!
Unlisted spaces won't be indexed by search engines such as Google. They will still be accessible to anyone who links to your documentation. It lets you easily share your work-in-progress without being searchable.
🧙 Tips: A space can be set to unlisted only if its visibility is set to public.
|
https://docs.gitbook.com/spaces/space-visibility
| 2020-03-28T20:50:43 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.gitbook.com
|
Webix Remote is a special protocol that allows the client component to call functions on the server directly.
Thus, Webix Remote provides a quicker and simpler communication with a server than REST API does. Below you will find the key distinctions between the two approaches.
There are the following server-side solutions implemented with Webix Remote:
A usual request to the server via REST API implies forming a corresponding URL. Each request requires a new URL sending. Besides, requests are sent and processed one by one thus making a queue, which slows down the whole process of exchanging data.
Webix Remote presents a handy alternative to REST API. It implies that during data sending the stage of URL formation is dropped and a request goes directly to the server (via the webix.remote parameter).
General advantages of this protocol are enumerated below:
|
https://docs.webix.com/desktop__webix_remote.html
| 2020-03-28T21:27:58 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.webix.com
|
The Powerpoint Product Roadmap – ideal for presenting your Product plans to executive audiences. This design features “London Underground” -style graphics to represent project activity bars.
We’ve just added a Powerpoint Version here, an almost direct port from our Visio version.
This Powerpoint Product Roadmap format clearly shows
- Timeline
- Legend for Risk status
- Workstreams x 3
- Activity bars
- Project Status Dashboard
- Red / Amber / Green RISK status for key Product elements
Other Powerpoint Roadmaps
Machine Learning Production Rollout Roadmap Template (PPT & Keynote) – Mobile Friendly
This portrait format Machine Learning Production Rollout Roadmap Template is ideal to share with investors and shareholders. It shows plans…
Resource Plan with Workstream Resource Changes Template
Show the changes in resource levels for each of your projects or workstreams over time. Highlight costs, project timings, resource…
Powerpoint Mobile-Friendly Roadmap Template
This Mobile Friendly Roadmap Template is designed for use on mobiles, and for social media sharing. Communicate to your busy…
PPT Roadmap With Milestones
This Powerpoint Roadmap with Milestones features 3 popular formats used by Product Managers - 1 year, 1.5 year and 2.5…
Powerpoint Innovation Project Transfer Template
Four professional templates to help plan and present an Innovation Transfer Project. Move your Product Prototype through to Production "Business…
Powerpoint Programme Roadmap Template
The Powerpoint Programme Roadmap Template gives a descriptive and clear view of your programme plans for Exec Board stakeholders: Timeline…
Step-by-step Powerpoint Roadmap Template Guide
This easy-to-follow Powerpoint Roadmap Template Guide will walk you through a 6 step process to create your own Project Roadmap…
Powerpoint Project Timeline Template
Show your project plans & workstreams using this Powerpoint Project Timeline Template. Much better than a Gantt - you can…
Powerpoint Roadmap Template with PEST Factors & Milestones
This Powerpoint Roadmap with PEST Factors Template shows how your Project delivers Strategic Benefit. See PEST, KPI, Risk, Phases and…
Innovation Roadmap Template (Powerpoint)
Plan, Launch, Manage and Protect the Innovation Project in your Organisation. This Innovation Roadmap Template is perfect for strategic planning…
PESTLE Product Strategy Template (PowerPoint)
This PESTLE Product Strategy template helps inform your product roadmap with PESTLE analysis. Includes a selection of professional PESTLE Powerpoint…
All Templates in 1 Bundle – The CEO Premium Package
Get all of our professional templates in ONE PACKAGE. 80+ templates with over 60% discount, and a multi-user license.
Powerpoint Project Schedule Template
This impressive powerpoint template features modern Project Schedule, Plan and Roadmap formats. Easy to edit.
Stylish Powerpoint Roadmap Template
This Stylish Powerpoint Template uses a modern Infographic design to give your Roadmap Presentation more impact.
|
https://business-docs.co.uk/document-templates/powerpoint-product-roadmap/
| 2020-03-28T22:20:58 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
business-docs.co.uk
|
Show your teams, workstream plans, job roles & names, alongside your projects & timeline. This Product Resource Delivery Plan shows the whole picture.
How can I show my Agile Teams, Resources, Workstreams, Themes, Plans and Milestones all on 1 sheet of paper?
Our consultants have put together this template to achieve exactly that!
Show your Product Resource Delivery Plans all on 1 side of paper!
The Teams area on the Product Resource & Delivery Plan shows names and disciplines for each person
The Timeline shows dates, and key milestones in your Product Resource Delivery Plan
Product Resource Delivery Plan Features:
This template requires Microsoft Visio software.
(We also have a Powerpoint Resource Plan – see here)
- “Resource” Person icons.
- These are editable via the Visio “Shape Data” dialog.
- Resource Plan Legend.
- Product Manager.
- Project Manager.
- Subject Expert.
- TA (technical architect).
- Web Dev (web developer).
- BA (business analyst).
- QA (quality assurance [tester]).
- Software Eng (software engineer).
- Activity status Legend.
- TBC.
- Signed-off.
- At Risk.
- ISSUE.
- Timeline.
- featuring draggable milestones.
- showing quarterly divisions.
- the date formats and division intervals can be changed.
- One “Leader” area, showing management resources.
- Three workstream areas, each with:
- Resource allocation per workstream.
- Activity bars (colour coded for status).
|
https://business-docs.co.uk/downloads/visio-resource-plan-template/
| 2020-03-28T21:45:55 |
CC-MAIN-2020-16
|
1585370493120.15
|
[array(['https://i17yj3r7slj2hgs3x244uey9z-wpengine.netdna-ssl.com/wp-content/uploads/edd/2016/03/BDUK-39-resource-plan-03-teams-e1457432636738-850x384.png',
'Teams area showing names and disciplines of resources on the Product Resource & Delivery Plan'],
dtype=object)
array(['https://i17yj3r7slj2hgs3x244uey9z-wpengine.netdna-ssl.com/wp-content/uploads/edd/2016/03/BDUK-39-resource-plan-03-Timeline-Work-Units-e1457432681790.png',
'The Timeline shows dates, and key milestones in your Product Resource Delivery Plan'],
dtype=object) ]
|
business-docs.co.uk
|
Customer Account
The Customer Account allows the customers to make a new reservation, manage past bookings and their profile details.
Corporate accounts
Apart from the standard Private accounts, a customer can also create a Company Account, which adds these two helpful functionalities:
- It allows the customer, an employee of the company running the account, to make a booking without the need for immediate payment. An additional “Reserve” option is available in the third step of booking. Please note that this option will only appear if the “Account” payment method is activated and the customer is logged in when making the booking.
- The admin can produce periodic invoices which allow the customer to make just one payment for a number of bookings.
Register and login
Before customers can start using their account, they need to log in to an existing account (by providing their email and password) or create a new one (Register).
Using the customer account
The account menu consists of the most vital options: Bookings, New Bookings, Profile.
- Profile tab – in this tab the customer can enter or edit all their personal and contact details.
Language
This tab allows the customer to change the language version to their preferred language.
|
https://docs.easytaxioffice.com/core-modules/customer-account/
| 2020-03-28T21:12:13 |
CC-MAIN-2020-16
|
1585370493120.15
|
[array(['https://docs.easytaxioffice.com/wp-content/uploads/docs-customer-account-2.png',
None], dtype=object)
array(['https://docs.easytaxioffice.com/wp-content/uploads/docs-customer-account-3-1.png',
None], dtype=object)
array(['https://docs.easytaxioffice.com/wp-content/uploads/docs-customer-account-4-1.png',
None], dtype=object)
array(['https://docs.easytaxioffice.com/wp-content/uploads/docs-customer-account-6-1.png',
None], dtype=object)
array(['https://docs.easytaxioffice.com/wp-content/uploads/docs-customer-account-7-2.png',
None], dtype=object) ]
|
docs.easytaxioffice.com
|
3. What is the difference between an external table and a managed table?¶
The main difference is that when you drop an external table, the underlying data files stay intact. This is because the user is expected to manage the data files and directories. With a managed table, the underlying directories and data get wiped out when the table is dropped.
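As a small illustrative sketch (not from the Qubole docs), the snippet below creates one table of each kind so the difference is visible when they are dropped; the PyHive client, connection details, and storage location are assumptions used only to make the example runnable.

from pyhive import hive  # assumed client library

conn = hive.connect(host="localhost", port=10000, username="hadoop")  # placeholder connection details
cur = conn.cursor()

# Managed table: dropping it removes both the metadata and the underlying data files.
cur.execute("CREATE TABLE managed_events (id INT, payload STRING)")

# External table: dropping it removes only the metadata; files under LOCATION stay intact.
cur.execute(
    "CREATE EXTERNAL TABLE external_events (id INT, payload STRING) "
    "LOCATION 's3://example-bucket/events/'"  # placeholder location
)

cur.execute("DROP TABLE managed_events")    # data directory is wiped
cur.execute("DROP TABLE external_events")   # data files remain in place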
|
https://docs.qubole.com/en/latest/faqs/hive/difference-external-table-managed-table.html
| 2020-03-28T21:39:16 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.qubole.com
|
All public logs
Combined display of all available logs of UABgrid Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive).
- 15:33, 12 September 2011 [email protected] (Talk | contribs) moved page Talk:MatLab DCS to Talk:MATLAB DCS (Follow vendor case conventions and those of other pages.)
- 14:58, 6 September 2011 [email protected] (Talk | contribs) marked revision 3046 of page Talk:MatLab DCS patrolled
|
https://docs.uabgrid.uab.edu/w/index.php?title=Special:Log&page=Talk%3AMatLab+DCS
| 2020-03-28T21:19:17 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.uabgrid.uab.edu
|
PLEASE READ THE TERMS OF SERVICE CAREFULLY AS THEY CONTAIN IMPORTANT INFORMATION REGARDING YOUR LEGAL RIGHTS, REMEDIES, AND OBLIGATIONS.
These Terms and Conditions (this “Agreement”) is a contract between you (“you” or “User”) and Freelancer Protocol Limited (“Protocol,” “we,” or “us”). You must read, agree to, and accept all of the terms and conditions contained in this Agreement to be a User of our website located at (the “Site”).
This Agreement incorporates by reference the Service Terms contained in Schedule 1 (the “Service Terms”) to this Agreement and the Form of Service Contract contained in Schedule 2 (the “Form of Service Contract”) to this Agreement. These agreements are together called the “Terms of Service”.
Subject to the conditions set forth herein, Protocol may, in its sole discretion, amend this Agreement and the other Terms of Service at any time by posting a revised version on the Site, including but not limited to a revised version that incorporates by reference further agreements. Protocol will provide reasonable advance notice of any amendment that includes a Substantial Change (as defined below), by posting the updated Terms of Service on the Site, providing notice on the Site, and/or sending you notice by email. Any revisions to the Terms of Service will take effect on the noted effective date (each, as applicable, the “Effective Date”).
1 Accounts. Protocol reserves the right to decline a registration to join Protocol or to grant a User an Account type as a Client or Freelancer, for any lawful reason as it sees fit.
If you create an Account as an employee or agent on behalf of a company, you represent and warrant that you are authorized to enter into binding contracts, including the Terms of Service, on behalf of yourself and the company.
1.2 ACCOUNT ELIGIBILITY
You agree to provide true, accurate, and complete information on your Profile and all registration and other forms you access on the Site or provide to us, and to update your information to maintain its truthfulness, accuracy, and completeness.
1.4.1 CLIENT ACCOUNT
You can register for an Account as a Client (a “Client Account”).
1.4.2 FREELANCER ACCOUNT
You can register for an Account as a Freelancer (a “Freelancer Account”).
2. PURPOSE OF PROTOCOL
Section 2 discusses the purpose of Protocol. Protocol provides the Site Services to Users, including hosting and maintaining the Site and facilitating the formation of Service Contracts. When a User enters a Service Contract, the User uses the Site to invoice and pay any amounts owed under the Service Contract.
2.1 RELATIONSHIP WITH PROTOCOL
Protocol merely makes the Site and Site Services available to enable Freelancers and Clients to find and transact directly with each other. Protocol is not a party to any Service Contract between Users.
You acknowledge, agree, and understand that Protocol does not, in any way, supervise, direct, control, or evaluate Freelancers or their work and is not responsible for any Project, Project terms or Work Product. Protocol makes no representations about, and does not guarantee, the work of any Freelancer, and you agree not to hold Protocol responsible for it.
2.2 TAXES AND BENEFITS
Freelancer acknowledges and agrees that Freelancer is solely responsible (a) for all tax liability associated with payments received from Freelancer’s Clients and through Protocol, and that Protocol will not withhold any taxes from payments to Freelancer; (b) to obtain any liability, health, workers’ compensation, disability, unemployment, or other insurance needed, desired, or required by law, and that Freelancer is not covered by or eligible for any insurance from Protocol; and (c) for determining whether Protocol is required by applicable law to withhold any amount of the Freelancer Fees, and for notifying Protocol of any such requirement and indemnifying Protocol for any requirement to pay any withholding amount to the appropriate authorities (including penalties and interest). In the event of an audit of Protocol, Freelancer agrees to promptly cooperate with Protocol and provide copies of Freelancer’s tax returns and other documents as may be reasonably requested for purposes of such audit, including but not limited to records showing Freelancer is engaging in an independent business as represented to Protocol.
3 You acknowledge and agree that Protocol is not a party to any Service Contract, and that the formation of a Service Contract between Users will not, under any circumstance, create an employment or other service relationship between Protocol and any Freelancer or a partnership or joint venture between Protocol and any User.
With respect to any Service Contract, Clients and Freelancers may enter into any written agreements that they deem appropriate provided that any such agreements do not conflict with, narrow, or expand Protocol’s rights and obligations under the Terms of Service. The parties to a Service Contract agree to incorporate the Service Terms contained in Schedule 1 to this Agreement.
4 WORKER CLASSIFICATION
Nothing in this Agreement is intended to or should be construed to create a partnership, joint venture, franchisor/franchisee or employer-employee relationship between Protocol and a User.
5 DEMO
Freelancer shall, in good faith, produce a video of the Work Product created through the performance of the Freelancer Services specified in a Service Contract between Users (the “Demo”). Freelancer shall send such Demo to Client on the date specified in the Service Contract (the “Demo Date”).
Freelancer agrees and acknowledges that the Demo shall be a true and accurate reflection of the Work Product. Freelancer further agrees and acknowledges that any mistruth, misrepresentation or material inaccuracy in any Demo pursuant to any Service Contract between Users on Protocol shall constitute grounds for Protocol, in its sole discretion, to permanently close the Freelancer Account.
5.1 DEMO REVIEW
Client agrees to review any Demo sent to Client by Freelancer pursuant to any Service Contract between Users in good faith and with reference to the Project Description specified in such Service Contract (the “Demo Review”). Client agrees and acknowledges that in relation to any Demo, it is obliged to complete the Demo Review within three days of and including the Demo Date.
5.2 DEMO ACCEPTANCE EVENT
If Client, in good faith, is satisfied with the Demo following any Demo Review and indicates as such through the messaging service on Protocol’s Site, a “Demo Acceptance Event” occurs.
5.3 DEMO REJECTION EVENT
If Client, in good faith, is unsatisfied with the Demo following any Demo Review and indicates as such through the messaging service on Protocol’s Site, a “Demo Rejection Event” occurs.
6 PAYMENT TERMS
You acknowledge and agree that any payments due under a Service Contract will be paid by Users through Protocol. Client shall remunerate Freelancer for Freelancer Services by the sum specified in a Service Contract agreed between Users through Protocol (the “Total Payment”).
6.1 Total Payment
On any Demo Acceptance Event, the Total Payment becomes payable by Client to Freelancer. Client shall pay to Freelancer the Total Payment within 14 days of, and including, the Demo Acceptance Event.
6.2 Part Payment
On any Demo Rejection Event, 33% of the Total Payment (the “Part Payment”) becomes payable by Client to Freelancer. Client shall pay to Freelancer the Part Payment within 14 days of, and including, the Demo Rejection Event.
7. Disputes
If a dispute arises between you and Protocol, you and Protocol agree to first attempt to resolve any dispute or claim that arises out of this Agreement, the other Terms of Service, your relationship with Protocol, or the termination of your relationship with Protocol through mediation.
You and Protocol agree.
8. NON-CIRCUMVENTION
8.1 PAYMENT THROUGH THE PROTOCOL
Users acknowledge and agree to use the Site as their exclusive method for requesting, making and receiving all payments for a Project that is subject to a Service Contract. Users further agree to notify Protocol if another User suggests making or receiving payments outside of the Site in violation of this Section 8.1.
8.2 COMMUNICATION AND FILE TRANSFER THROUGH THE PROTOCOL
Users acknowledge and agree to use the Site as their exclusive method for all communications and transmission of files that relate to a Project, including the transfer to a Client by a Freelancer of a Demo and Work Product.
9 TERMINATION
Unless both you and Protocol expressly agree otherwise in writing, either of us may terminate this Agreement in our sole discretion, at any time, without explanation, upon written notice to the other, which will result in the termination of the other Terms of Service as well, except as otherwise provided herein. Protocol is not a party to any Service Contract between Users. Consequently, User understands and acknowledges that termination of this Agreement (or attempt to terminate this Agreement) does not terminate or otherwise impact any Service Contract or Project entered into between Users.
10 GENERAL
Section 10 discusses additional terms of the agreement between you and Protocol, including that the Terms of Service contain our full agreement, how the agreement will be interpreted and applied, and your agreement not to access the Site from certain locations, as detailed below.
10.1 ENTIRE AGREEMENT
This Agreement, together with the Service Contract Terms and any other Terms of Service that the Protocol may, from time to time, incorporate by reference into this Agreement, sets forth the entire agreement and understanding between you and Protocol.
10.2 MODIFICATIONS
No modification or amendment to the Terms of Service will be binding upon Protocol unless in a written instrument signed by a duly authorised representative of Protocol or posted on the Site by Protocol. Our failure to act with respect to a breach by you or others does not waive our right to act with respect to subsequent or similar breaches. We do not guarantee we will take action against all breaches of this User Agreement.
10.3 SEVERABILITY
If and to the extent any provision of this Agreement or the other Terms of Service is held to be invalid or unenforceable in whole or in part, all other provisions will nevertheless continue to be valid and enforceable with the invalid or unenforceable parts severed from the remainder of this Agreement and the other Terms of Service.
10.4 FORCE MAJEURE
The parties to this Agreement will not be responsible for the failure to perform or any delay in performance of any obligation hereunder for a reasonable period due to accidents, fires, floods, telecommunications or Internet failures, strikes, wars, riots, rebellions, blockades, acts of government, governmental requirements and regulations or restrictions imposed by law or any other similar conditions beyond the reasonable control of such party.
Schedule 1 - Service Terms
Users who enter into a Service Contract on the Site with another User are bound by these Service Terms in whole. The Protocol is not a party to any Service Contract between Users that incorporates these Service Terms. These Service Terms, together with the User Agreement, the Milestone Agreement and any further agreements that Protocol may, from time to time, incorporate by reference into the Terms and Conditions form the Terms of Service.
By agreeing to these Service Terms, Users do not limit their ability to negotiate and determine the specific terms of the Project, except as expressly stated otherwise in these Service Terms.
1. PARTIES
Client and Freelancer identified on the Site in relation to the Project are the parties to the Service Contract. The address of each party is the address entered under the tax information on the Site. Protocol is not a party to the Service Contract.
2. SERVICES
Client and Freelancer agree that the Freelancer is performing services as an independent contractor and that Freelancer is not an employee or agent of Client. Freelancer will perform the Freelancer Services in a professional manner.
3. RESPONSIBILITY FOR EMPLOYEES AND SUBCONTRACTORS
If a User subcontracts with or employs third parties to perform Freelancer Services on behalf of the User for any Engagement, the User represents and warrants that it does so as a legally recognized entity or person and in compliance with all applicable laws and regulations. A User that agreed to perform services under a Service Contract remains responsible for the quality of the services.
4. DEMO
The Freelancer agrees, in good faith, to accurately report and reflect the Freelancer Services by way of the Demo described in Section 5 of the Terms and Conditions. The Freelancer agrees and acknowledges that failing to accurately report and reflect such Freelancer Services in any Demo may result in the termination of the Freelancer’s Account, in accordance with [7.5] of the Terms and Conditions.
6.
7 RETURN OF PROPERTY
Upon the expiry or termination of this Agreement, Freelancer shall return to the Client any property, documentation, records, or confidential information which is the property of the Client.
8 Waiver
The waiver by either of the Parties of a breach, default, delay or omission of any of the provisions of this Service Contract
|
https://docs.freelancerprotocol.com/terms
| 2020-03-28T22:10:15 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.freelancerprotocol.com
|
Setting Up AD Authentication and Data Authorization for Azure Gen 2 Storage¶
Qubole on Azure supports Azure Active Directory (AD) for both user access control and data authorization. With these in place, you can on-board users to Qubole on Azure and have these users retain their existing Azure data-access policies directly in QDS. This is possible because when a user authenticates with QDS via AD, QDS retains the OAuth token returned by Active Directory for the duration of the QDS user’s session, and uses this when executing any command via the API.
AD integration with QDS can be deployed in two ways:
- AD Authentication only. In this case, AD is only used for Single Sign-On (SSO).
- AD Authentication and data authorization. In this case the default storage location for data output from QDS must be set to ADLS Gen2.
Setting Up AD Authentication Only¶
Create a Qubole Support ticket asking Qubole to enable AD authentication for your QDS account and add a new control for the email domain users who will be accessing the platform via AD.
Once this is done, AD authentication is in effect for your QDS account, and SSO is enabled. Users should choose the Sign in with Azure Active Directory option when they log in to QDS:
Setting Up AD Authentication and Data Authorization¶
Create a Qubole Support ticket asking Qubole to enable AD authentication for your QDS account and add a new control for the email domain users who will be accessing the platform via AD.
In the Azure portal, navigate to the Qubole App registration and ensure that it has the following API permissions set:
- User.Read on Azure Active Directory Graph
- User_impersonation on Azure Storage
Click on the Grant consent… button in the API Permissions window to make sure that admin permissions have been granted to the Qubole App:
Under Authentication, set the reply URL for the app registration:
Note
The type is Web and the URL should be set to
Authorized users should now be able to sign in to QDS using Azure AD authentication.
Log in to the QDS UI using the Sign in with Azure Active Directory option:
Navigate to the Account Settings page and scroll down to Storage Settings:
You should now see Data Lake Gen2 with Azure AD as a Storage Service option in the drop-down list. When you select it, you also need to select either:
- AD Service Principal - select this for shared data access policies; OR
- Per-user AD tokens - select this for per-user data access policies.
After making your selection, click Save. QDS should now be ready to start up a cluster. All jobs run via the API will now use the user’s or the Service Principal’s token, depending on which option you selected.
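As a rough sketch of what a job submitted through the API looks like once this is in place, the snippet below uses the qds-sdk Python client (an assumption about the client library) with a placeholder API token; data access for the command is then governed by the user's or the Service Principal's AD token, as described above.

from qds_sdk.qubole import Qubole
from qds_sdk.commands import HiveCommand

# Placeholder QDS API token; AD-based data authorization is applied on the QDS side
# when the command runs against ADLS Gen2 storage.
Qubole.configure(api_token="YOUR-QDS-API-TOKEN")

cmd = HiveCommand.run(query="SHOW TABLES;")  # blocks until the command finishes
print(cmd.status)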
|
https://docs.qubole.com/en/latest/admin-guide/AD-Auth.html
| 2020-03-28T21:43:01 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.qubole.com
|
Crate pwbox
Password-based encryption and decryption for Rust.
Overview
This crate provides the container for password-based encryption, PwBox, which can be composed of key derivation and authenticated symmetric Cipher cryptographic primitives. In turn, authenticated symmetric ciphers can be composed from an UnauthenticatedCipher and a message authentication code (Mac).
The crate provides several pluggable cryptographic Suites with these primitives:
- Sodium
- RustCrypto (provides compatibility with Ethereum keystore; see its docs for more details)
- PureCrypto (pure Rust implementation; good for compiling into WASM or for other constrained environments)
There is also Eraser, which allows you to (de)serialize PwBoxes from any serde-compatible format, such as JSON or TOML.
Naming
The PwBox name was produced by combining two libsodium names: pwhash for password-based KDFs and *box for ciphers.
Crate Features
- std (enabled by default): Enables types from the Rust standard library. Switching this feature off can be used for constrained environments, such as WASM. Note that the crate still requires an allocator (that is, the alloc crate) even if the std feature is disabled.
- exonum_sodiumoxide (enabled by default), rust-crypto, pure (both disabled by default): Provide the cryptographic backends described above.
Examples
Using the Sodium cryptosuite:
use rand::thread_rng;
use pwbox::{Eraser, ErasedPwBox, Suite, sodium::Sodium};

// Create a new box.
let pwbox = Sodium::build_box(&mut thread_rng())
    .seal(b"correct horse", b"battery staple")?;

// Serialize box.
let mut eraser = Eraser::new();
eraser.add_suite::<Sodium>();
let erased: ErasedPwBox = eraser.erase(&pwbox)?;
println!("{}", serde_json::to_string_pretty(&erased)?);

// Deserialize box back.
let plaintext = eraser.restore(&erased)?.open(b"correct horse")?;
assert_eq!(&*plaintext, b"battery staple");
|
https://docs.rs/pwbox/0.3.0/pwbox/
| 2020-03-28T19:52:59 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.rs
|
Spring Boot has no mandatory logging dependency, except for the Commons Logging API, which is typically provided by Spring Framework’s spring-jcl module. To use Logback, you need to include it and spring-jcl on the classpath. The simplest way to do that is through the starters, which all depend on spring-boot-starter-logging.
You can also set the location of a file to which to write the log (in addition to the console) by using "logging.file".
If you want to disable console logging and write output only to a file, you need a custom logback-spring.xml that imports file-appender.xml but not console-appender.xml. You also need to add logging.file to your application.properties, as shown in the following example:
logging.file=myapplication.log
To use Log4j 2 for logging, the simplest path is probably through the starters, even though it requires some jiggling with excludes. The following example shows one way to set up the starters in Gradle:
dependencies {
    compile 'org.springframework.boot:spring-boot-starter-web'
    compile 'org.springframework.boot:spring-boot-starter-log4j2'
}
configurations {
    all {
        exclude group: 'org.springframework.boot', module: 'spring-boot-starter-logging'
    }
}
|
https://docs.spring.io/spring-boot/docs/2.0.9.RELEASE/reference/html/howto-logging.html
| 2020-03-28T21:16:26 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.spring.io
|
See the demo
Auction is a theme with some extra functionality compared to the other themes and is specialized in auctions. Sellers can list an item and choose the listing duration. The bidding opens at a price the seller specifies; when it ends, the bidder with the highest bid wins.
Activate the Auction theme on your site
- Login to your Admin panel.
- Go to Appearance on the left hand menu.
- Choose Themes.
- Search for Auction and click on Activate.
- Done!
Configuration
Once the Auction theme is activated, you will be asked to auto-configure it. It’s highly recommended to press “Yes” in order to achieve the desired functionality. In case you press “No” or you think the theme is not properly configured, try one of the following:
- Re-activate the theme by activating a different theme first and then Auction.
- Run this in your browser: yourdomain.com?theme-config=1
- Make the following changes in your panel:
  - Create a custom field: Name: auction_days, Type: Number, Required: Yes
  - Settings -> Plugins -> enable “Messaging System”
  - Settings -> Payment -> PayPal -> disable “Buy Now button”
  - Settings -> Advertisement -> Display Options -> enable “Price on contact form”
  - Settings -> Advertisement -> Advertisement Fields -> enable “Price”
How it works
The following explains all the steps from the time a seller lists an item until the auction ends. The seller visits your site and fills in the “Publish New” form. The required fields are: Title, Category, Location (if enabled by the admin), Description, Price and Auction Days. The Auction Days field needs to be filled with a number (integer). For example, if it’s filled with the number 7, the auction ends 7 days from the time the ad was published. Note that ads with the Price or Auction Days fields empty will not be displayed on your site. The ad is now published and potential buyers can place their bids by visiting the ad page. The seller will get notified of every bid. When the auction ends, the ad automatically gets marked as deactivated. This way bidders cannot bid on this ad anymore and the seller can contact the bidder who won the auction.
Theme Options
Color
Choose the color scheme that will style your website: Default, Orange, Red, Green, Blue, Orange-Blue.
Layout
- Display breadcrumb: Enable to show the breadcrumb at the top of each page.
- Header tool bar gets fixed in the top: If enabled, the top menu will always be visible at the top of the page while scrolling down.
- Search bar on header: Enable to have a search bar in the header of the website.
- Where you want the sidebar to appear: Select where to place the Sidebar, Left or Right. Choose None if you don’t want to have a Sidebar.
- Hide header and footer on single ad and user profile page: Removes the header, breadcrumbs and footer on the single ad and user profile pages. Enable to show the user profile page like “your store”.
Homepage
- Numbers of ads to display on homepage (recommended 30): Select the number of ads that will be displayed in the homepage slider. Since each slide includes 6 ads, we recommend entering 30 or a number that is divisible by 6. On mobile each slide shows 2 ads.
- Homepage site slogan: Enter the homepage slogan. It’s displayed above the header Search Bar.
- Homepage site description: Enter the site description. It’s displayed above the header Search Bar and below the site slogan.
- Show latest closed auctions on homepage: Enable to show the Closed Auctions section on the homepage.
- Display highest bidder username in the homepage: If enabled, the name of the highest bidder will be displayed at the bottom of each ad in the Latest Auctions and Closed Auctions sections.
Listing
- Infinite scroll: Auctions will load automatically whenever the user scrolls down on the listings page.
- Default state for list/grid in listing: Choose the default way you want ads to be viewed by users in the listing view.
- Display slider in listing: Enable to activate a slider on the listing pages with the latest ads of the given category.
- Display listing slider on mobile devices: Enable if you want the listing slider displayed on mobile screens. If enabled, each slide shows 2 ads.
- Display highest bidder username in the listing page: If enabled, the name of the highest bidder will be displayed at the bottom of each ad in the listing slider and listing page ads.
Ad details page
- Show best bidder and number of bids: Choose what information you want to show on each listing page next to the price.
- Show Bids history in pop-up: Choose how you want to show the Bid History, or choose None to disable the option. If enabled, a Bid History button will be available on each listing page. Pressing the button shows a pop-up with all the bids and bidders.
- Show/hide phone number in the ad page: If enabled, users will need to press on the seller’s phone number to reveal it.
|
https://docs.yclas.com/overview-auction-theme/
| 2020-03-28T21:33:47 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.yclas.com
|
Connecting Ring contacts to LDAP¶
It is possible for Ring to search contacts in an LDAP directory. At the moment, this is only possible with the Gnome client. This is done by configuring gnome-contacts and evolution to search LDAP for contacts.
Connecting gnome-contacts to LDAP¶
- Open Evolution
- From the contacts tab, right click and select New Address Book
- Fill the requested information depending on your LDAP configuration. Here is an example:
|
https://ring.readthedocs.io/en/latest/users/connecting_to_ldap.html
| 2020-03-28T21:09:06 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
ring.readthedocs.io
|
Having a logo for your project is the first step to make your reader feel at home. You can add one from the
settings of your organization and update your organization's name.
You can change and personalize your organization’s URL (web address).
🧠 Note: If you change your organization’s URL, GitBook will automatically redirect from the old to the new one.
You can configure SSO with any SAML solution from your settings. You can read more about how to configure SAML single sign-on to give your members access to GitBook through an identity provider (IdP) of your choice.
🧠 Note: You need to upgrade ✨ to the Enterprise plan to have access to the SSO feature.
From your
organization's settings > Danger zone, you can export data related to your organization. This data includes:
🏢 Organization information
👨💻👩💻 Memberships (user ID, role in the organization)
🤫 Space information (name, ID, visibility settings)
You can delete your organization but make sure that:
💸 Your organization has no active subscription. It should be on the "Free" plan in the Billing settings.
You have removed every member of the organization.
After that, you can delete your organization by clicking the "Delete" button in the "Danger Zone" panel of the
organization's settings.
Deleting your organization is not reversible; once deleted, there is no going back.
|
https://docs.gitbook.com/organizations/organization-management
| 2020-03-28T19:56:02 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.gitbook.com
|
Structured Negative Keywords & Negative Keyword Lists, now Live.
In May we announced the availability of a refreshed set of features across our suite of APIs in Sandbox. Today we are very excited to announce the general availability of the Structured Negative Keywords and Negative Keyword Lists features in our V9 Campaign Management Service, which means all our developers can code against these new capabilities for their API clients. Thanks to our developers, these features are the direct result of your feedback. We have made Negative Keywords a first-class entity, complete with its own set of create, read, and
delete operations to help manage them. We've already heard that many developers would welcome this improvement, and as a result we plan to deprecate the old methods in the next version of our API. API clients should now use these new methods to implement negative keywords, as they'll be more efficient and performant.
We are really excited about this API refresh and the new capabilities in our developer platform. As always, we would love for developers to provide their feedback on our APIs.
If you have any questions or comments, feel free to post them in the comments below.
|
https://docs.microsoft.com/en-us/archive/blogs/bing_ads_api/structured-negative-keywords-negative-keyword-lists-now-live
| 2020-03-28T22:34:44 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.microsoft.com
|
What is Desktop Analytics?
The following video is a session from Ignite 2019, which includes more information on Desktop Analytics:
Note
Desktop Analytics is a successor of Windows Analytics, which retired on January 31, 2020.
The capabilities of Windows Analytics are combined in the Desktop Analytics service. Desktop Analytics is also more tightly integrated with Configuration Manager. For more information, see the FAQ for Windows Analytics customers.
Prerequisites
To use Desktop Analytics, make sure your environment meets the following prerequisites.
Technical
An active global Azure subscription, with Global Admin permissions. Microsoft Accounts aren't supported.
Important
Desktop Analytics currently requires that you deploy an Office 365 service in your Azure AD tenant. This won't be a requirement in the future.
Workspace owner.
To access the portal after onboarding, you need:
- Desktop Analytics Administrator role and Owner, or Contributor permissions on the resource group where the workspace was created.
Configuration Manager, version 1902 with update rollup (4500571) or later. For more information, see Update Configuration Manager.
- Full Administrator role in Configuration Manager
Note
Desktop Analytics supports multiple Configuration Manager hierarchies reporting to a single Azure AD tenant. If you have multiple hierarchies in your environment, you have the following options:
- Use different Commercial IDs and Azure AD tenants.
- Configure both hierarchies to use the same Commercial ID to share the Azure AD tenant and Desktop Analytics instance.
Important for Windows 7
Licensing and costs
An active global Azure subscription.
Note
Most of the equivalent subscriptions for Configuration Manager also include Azure AD. For example, see Microsoft 365 plans and Enterprise Mobility + Security licensing.
Devices enrolled in Desktop Analytics need a valid Configuration Manager license. For more information, see Configuration Manager licensing.
Users of the device need one
Note
Beyond the cost of these license subscriptions, there's no additional cost for using Desktop Analytics within Azure Log Analytics. The data types ingested by Desktop Analytics are free from any Log Analytics data ingestion and retention charges. As non-billable data types, this data is also not subject to any Log Analytics daily data ingestion cap. For more information, see Log Analytics usage and costs.
Next steps
The following tutorial provides a step-by-step guide to getting started with Desktop Analytics and Configuration Manager:
Feedback
|
https://docs.microsoft.com/en-us/configmgr/desktop-analytics/overview
| 2020-03-28T21:43:25 |
CC-MAIN-2020-16
|
1585370493120.15
|
[array(['media/portal-home.png',
'Screenshot of the Desktop Analytics home page in the Azure portal'],
dtype=object) ]
|
docs.microsoft.com
|
MinIO Quickstart Guide
MinIO is High Performance Object Storage released under Apache License v2.0. It is API compatible with the Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.
Docker Container
Stable
docker pull minio/minio docker run -p 9000:9000 minio/minio server /data
Edge
docker pull minio/minio:edge
docker run -p 9000:9000 minio/minio:edge server /data
chmod 755 minio
./minio server /data
GNU/Linux
Binary Download
wget chmod +x minio ./minio server /data
wget chmod +x minio ./minio server /data
Microsoft Windows
Binary Download
minio.exe server D:\Photos
FreeBSD
Port
Install minio packages using pkg. MinIO doesn't officially build FreeBSD binaries, but they are maintained by FreeBSD upstream here. To install from source instead, the minimum version of Go required is go1.13:
GO111MODULE=on go get github.com/minio/minio
Allow port access for Firewalls
By default MinIO uses the port 9000 to listen for incoming connections. If your platform blocks the port by default, you may need to enable access to the port.
For example, on Linux hosts you may need to open port 9000 in iptables or ufw.
Upgrading MinIO
We recommend upgrading via mc admin update, which updates and restarts the MinIO servers, as shown in the following command from the MinIO client (mc):
mc admin update <minio alias, e.g., myminio>
Important things to remember during upgrades:
mc admin updatewill only work if the user running MinIO has write access to the parent directory where the binary is located, for example if the current binary is at
/usr/local/bin/minio, you would need write access to
/usr/local/bin.
- In the case of federated setups
mc admin updateshould be run against each cluster individually. Avoid updating
mcuntil all clusters have been updated.
- If you are updating the server it is always recommended (unless explicitly mentioned in MinIO server release notes), to update
mconce all the servers have been upgraded using
mc update.
mc admin updateis disabled in docker/container environments, container environments provide their own mechanisms for updating running containers.
- If you are using Vault as KMS with MinIO, ensure you have followed the Vault upgrade procedure outlined here:
- If you are using etcd with MinIO for the federation, ensure you have followed the etcd upgrade procedure outlined here:
Explore
Please follow MinIO Contributor's Guide
Caveats
MinIO in its default mode doesn't use MD5Sum checksums of incoming streams unless requested by the client in the Content-Md5 header for validation. This may lead to incompatibility with rare S3 clients like s3ql, which unfortunately do not set Content-Md5 but depend on a hex MD5Sum for the stream to be calculated by the server. MinIO considers this a bug in s3ql that should be fixed on the client side, because MD5Sum is a poor way to checksum and validate the authenticity of objects. MinIO provides a workaround until client applications are fixed: use the --compat option when starting the server.
./minio --compat server /data
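To sanity-check a freshly started server from application code, here is a minimal sketch using the Python minio SDK; the endpoint, credentials, and bucket name are placeholders for whatever your server prints or is configured with.

from minio import Minio

# Connect to a local MinIO server (plain HTTP on the default port 9000).
client = Minio(
    "localhost:9000",
    access_key="YOUR-ACCESSKEY",   # placeholder credentials
    secret_key="YOUR-SECRETKEY",
    secure=False,
)

if not client.bucket_exists("test-bucket"):
    client.make_bucket("test-bucket")

client.fput_object("test-bucket", "hello.txt", "hello.txt")  # upload a local file
print([obj.object_name for obj in client.list_objects("test-bucket")])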
|
https://docs.min.io/
| 2020-03-28T20:13:00 |
CC-MAIN-2020-16
|
1585370493120.15
|
[array(['https://github.com/minio/minio/blob/master/docs/screenshots/minio-browser.png?raw=true',
'Screenshot'], dtype=object) ]
|
docs.min.io
|
Follow the next steps to set up the Agama Slider:
Step 1:
- Go to the page where you want it to appear (Dashboard -> Pages).
- Select the desired page.
- At the bottom of the page, click on the Agama Options menu.
- Enable the slider in the “Enable slider” drop-down menu.
- In the “Select Slider” drop-down menu, choose Agama slider.
- Click on the “Update” button to save changes.
Step 2 – Go to the admin Dashboard -> Appearance -> Customize.
Step 3 – Navigate to the “Slider” tab, and open the “General” tab to set the general slider options.
Set the general slider options.
Step 4 – Select the desired slide.
Insert an image file into the slide.
Add the slide title and configure the font style, font variant, font size, font color and title animation.
If you want an action button on the slide, set the button title, button animation, button URL and button color.
Repeat step 4 for each slide, depending on the number of slides that you want to appear.
Click on the “Save & Publish” button to save and apply changes.
|
https://docs.theme-vision.com/article/setup-agama-slider/
| 2020-03-28T21:53:31 |
CC-MAIN-2020-16
|
1585370493120.15
|
[array(['http://docs.theme-vision.com/wp-content/uploads/2017/02/10-262x300.png',
None], dtype=object)
array(['http://docs.theme-vision.com/wp-content/uploads/2017/02/11-300x282.png',
None], dtype=object)
array(['http://docs.theme-vision.com/wp-content/uploads/2017/02/16-300x277.png',
None], dtype=object)
array(['http://docs.theme-vision.com/wp-content/uploads/2017/02/17-300x186.png',
None], dtype=object) ]
|
docs.theme-vision.com
|
Some Tools of a PFE
Updated 10/16/2017
Hi all,
I hope you are all well! Today I will give you a brief overview of the tools I need to use on a regular base.
Chrissy LeMaire, one of the best SQL MVPs in the world, asked me directly via Twitter and also publicly via a tweet to write down some of the tools a PFE uses, and I surely couldn't refuse:
David Peter Hansen started with a fantastic list of tools regarding SQL, which can be found as follows:
My technological specialties are a little different though, because I am mainly focused on Windows Client, PowerShell and Security.
I hope that this list will be of help for some of you and I wish you all a lot of fun testing and using the tools!
Client & Debugging:
First of all, I start with the typical troubleshooting tools, in no particular order. This is only a small subset of all the tools I sometimes need to use, but you really should be aware of these ones!
DefragTools and Lightsaber
Some of the best material regarding debugging is the DefragTools - Channel 9 video sessions by Andrew Richards, Chad Beeder and Larry Larsen, showing some deep-dive troubleshooting tools and techniques.
In these sessions a so-called Lightsaber is explained, which is a dedicated USB stick / OneNote folder containing the most important debugging tools (the holy grail for every troubleshooter):
Session 131 Lightsabre Windows 10
WinDBG
WinDBG is one of the most important tools for debugging memory dumps and much more:
A good way to start here is taking a look at the videos from the DefragTools and using cheat sheets such as the following one: here
WinDBG Preview
This year the new WinDBG Preview was announced.
You can see the videos in the DefragTools: here and here
WinDBG - Time Travel Debugging
A cool feature inside the new Preview WinDBG is Time Travel Debugging.
"Time Travel Debugging (TTD) is a reverse debugging solution that allows you to record the execution of an app or process, replay it both forwards and backwards and use queries to search through the entire trace. Today’s debuggers typically allow you to start at a specific point in time and only go forward. TTD improves debugging since you can go back in time to better understand the conditions that lead up to the bug. You can also replay it multiple times to learn how best to fix the problem."
Find further information here:
Wireshark
"Wireshark is the world’s foremost and widely-used network protocol analyzer. It lets you see what’s happening on your network at a microscopic level and is one of the standard across many commercial and non-profit enterprises, government agencies, and educational institutions."
Windows Message Analyzer
"It is the successor to Microsoft Network Monitor 3.4 and is a key component in the Protocol Engineering Framework (PEF) that was created by Microsoft to improve protocol design, development, implementation testing and verification, documentation, and support. With Message Analyzer, you can choose to capture local and remote traffic live or load archived message collections from multiple data sources simultaneously."
Sysmon
"." Defrag Tools #108 - Sysinternals SysMon - Mark Russinovich great blog article - Sysinternals Sysmon unleashed WannaCry Detection with Sysmon
WMI Troubleshooting
- The WMI Diagnosis Utility -- Version 2.2
- WMI Troubleshooting - Logs
- Querying and Viewing the WMI Repository
You should also consider to buy some dedicated books regarding WMI, if you are working very often with it.
Especially also the Windows Internals Book is a good consideration!
WMI Explorer
WMI Explorer and here the download
WBEMTest
WBEMTest is a graphical utility that you can use to test connectivity to remote systems, validate your WMI queries and explore WMI.
Win.
Sometimes the self-repair helps: here
winmgmt /verifyrepository
winmgmt /salvagerepository
winmgmt /resetrepository
DISM /Online /Cleanup-Image /CheckHealth
DISM /Online /Cleanup-Image /ScanHealth
DISM /Online /Cleanup-Image /RestoreHealth
SFC.
sfc /scannow
findstr /c:"[SR]" %windir%\logs\cbs\cbs.log > c:\windows\logs\cbs\sfcdetails.log
Event Viewer.
Must-read documents (!): Spotting_the_Adversary_with_Windows_Event_Log_Monitoring Detecting-Lateral-Movement-through-Tracking-Event-Logs
Windows Event Forwarding
Windows Event Forwarding (WEF) reads any operational or administrative event log on a device in your organization and forwards the events you choose to a Windows Event Collector (WEC) server.
Windows Event Forwarding to a workgroup Collector Server
Introducing Project Sauron – Centralised Storage of Windows Events – Domain Controller Edition
Telerik Fiddler
"The free web debugging proxy for any browser, system or platform" - Fiddler is great for website performance analysis and troubleshooting of encrypted traffic.
CMTrace
CMTrace is a real time log file viewer for System Center Configuration Manager.
Important features:
- Real-time logging
- Merging multiple log files together at once.
- Highlighting - error messages in red; warning messages in yellow.
- Error Lookups
- Standard format for many log files
Error lookup:
Log Parser
"Log."
Further information here.
Windows System Control Center - WSCC
WSCC lets you install, update and launch the utilities from the following suites:
- Sysinternals Suite
- NirSoft Utilities
Sysinternals
."
You really should know about the Sysinternals tools! Most of the tools are discussed and explained in the mentioned DefragTools. Start here.
Procmon
Procexp
"TheProcess Explorerdisplay_20<<."
Autoruns
"Autorunsreports Explorer shell extensions, toolbars, browser helper objects, Winlogon notifications, auto-start services, and much more. Autoruns goes way beyond other autostart utilities."
PSExec
"PsExec is a light-weight telnet-replacement that lets you execute processes on other systems, complete with full interactivity for console applications, without having to manually install client software."
Nirsoft Tools
"Unique collection of freeware desktop utilities, system utilities, password recovery tools, components, and free source code examples." The NirSoft Tools include some really nice tools as the following: RegistryChangesView
"NirLauncher is a package of more than 200 portable freeware utilities for Windows, all of them developed for NirSoft Web site during the last few years."
PPing
"PPing is designed to give you the easiest possible solution for discovering ports from a windows console. The design was heavily oriented towards the terminology and behavior of the classic ping tool under windows."
Alternatively you can do it with PowerShell:
Test-NetConnection
Further examples can be found here.
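If neither tool is at hand, the same basic TCP port check can be sketched in a few lines of Python, shown purely as an illustration of what PPing and Test-NetConnection do at their core; the host and port are placeholders.

import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    # Attempt a TCP connection; connect_ex returns 0 when the port accepts connections.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

print(port_open("example.com", 443))  # placeholder target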
PuTTY
"PuTTY is an SSH and telnet client, developed originally by Simon Tatham for the Windows platform. PuTTY is open source software that is available with source code and is developed and supported by a group of volunteers."
Posh-SSH
Windows Powershell module that leverages a custom version of the SSH.NET Library to provide basic SSH functionality in Powershell. The main purpose of the module is to facilitate automating actions against one or multiple SSH enabled servers
LogLauncher
The LogLauncher gathers all important logs from one or many machines and is really awesome! It can be downloaded here.
IE / Edge - F12 Developer Tools
The Microsoft Edge F12 DevTools are built with TypeScript, powered by open source, and optimized for modern front-end workflows.
Use the Debugger to step through code, set watches and breakpoints, live edit your code and inspect your caches. Test and troubleshoot your code
The Performance panel offers tools for profiling and analyzing the responsiveness of your UI during the course of user interaction.
Take a look through the docs and additionally here:
Microsoft Security Compliance Toolkit
"This set of tools allows enterprise security administrators to download, analyze, test, edit and store Microsoft-recommended security configuration baselines for Windows and other Microsoft products, while comparing them against other security configurations.. "
Here you will find important announcements:
And this will give you further guidance: Defrag Tools #174 - Security Baseline, Policy Analyzer and LGPO
PerfView
"PerfView is a performance-analysis tool that helps isolate CPU- and memory-related performance issues."
PerfView Defrag Tools videos: Part8, Part7, Part6, Part5, Part4, Part3, Part2, Part1
Windows."
This tool is one of the most important ones for a Client PFE.
Windows Performance Recorder
"Included in the Windows Assessment and Deployment Kit (Windows ADK), Windows Performance Recorder (WPR) is a performance recording tool that is based on Event Tracing for Windows (ETW). It records system events that you can then analyze by using Windows Performance Analyzer (WPA)."
This tool is necessary to create the traces for the Windows Performance Analyzer.
Xperf and scripts
I also got some (old but gold) xperf-scripts:
Notepad++
Last but not least comes the well-known Notepad++. If you don't know this tool, you have definitely missed something! It is especially good when working with very big log files (>50 MB) and/or with XML files.
It includes the
Visual Studio 2017
Yes - I use it a lot.
PowerShell:
One of my main specialties is also one of my biggest tools. You can actually achieve everything with PowerShell: gather information, automate, and even use techniques that are completely missing in the UI. You can even automate most of the tools described above - for example, the new Project Honolulu for Windows Server is completely based on PowerShell and uses PowerShell WMI cmdlets in its backend. But for using PowerShell in daily work there are also some tools you really need to know.
ISE with ISESteroids
PowerShell.exe and PowerShell_ISE.exe are the best-known tools for using PowerShell in Windows. The ISE is not the best toolset if you are coming from Visual Studio, for example. I am a former .NET software architect, and when I started working with PowerShell this was my first little downside. But there is an add-on called ISESteroids from Tobias Weltner, which brings a bunch of additional functions to the ISE and turns it into a really great toolset - here are some of the added capabilities:
- Essential Editor Settings - Secondary Toolbar
- Code Refactoring
- Advanced Search&Replace
- Ensuring Code Compatibility
- Creating Modern User Interfaces
- Security and Protection
- Community Tools
Visual Studio Code (VSCode)
"Begin your journey with VS Code with these introductory videos."
VSCode will replace the most used tool - the ISE - in the near future, and therefore you really should take a look at it. I gathered the most important articles around this topic, which you really should go through:
How to install Visual Studio Code and configure it as a replacement for the PowerShell ISE
Why I use Visual Studio Code to write PowerShell
Transitioning from PowerShell ISE to VS Code
Here you will find all default keybindings, which will help you a lot.
VSTS / Git / Release Pipeline
Visual Studio Team Services allows you to easily create your complete Release Pipeline. I will not spend too much time here, because it is a dedicated topic, but if you are moving towards more professional and sophisticated PowerShell or dev work, you really should take a closer look at it.
PSGUI
When working with XAML-created PowerShell GUIs, I very often reuse my own projects PSGUI and PSGUIManager:
Knowledge Management:
The fact is, as a PFE you are always working hard and always lacking time. Also, no one in the world can know everything, but you should know where to find the information. Knowledge management is very often totally underestimated, but it is one of the most important areas where you can improve your work quality and performance. I will show you some of my most-used tools to manage all the information and my time.
A good email structure is the most important thing nowadays. As a PFE you easily get hundreds or thousands of emails per day. Most of them contain at least some information which may be usable at some point in the future. There are dozens of books out there to assist you with this kind of task. I want to show you one of my favorite books:
How to be a Productivity Ninja: Worry Less, Achieve More and Love What You Do Kindle Edition
OneNote
I capture every piece of information in my OneNote and sort it. The biggest benefit of OneNote is the performant search capability.
It looks like this:
And as you probably would expect, I have dozens of notebooks:
If I find some interesting blog posts, I normally just copy them and add them to my OneNote. I always remember some phrases or keywords for the topics I am searching for, and this helps a lot!
Teams
Teams is our new communication tool, which allows you to add all other services directly into it, as well as meetings similar to Skype.
To-Do
"Microsoft To-Do helps you manage, prioritize, and complete the most important things you need to achieve every day, powered by Intelligent Suggestions and Office 365 integration. Download the To-Do Preview today."
It is important to manage my tasks and time. For a long time I used Wunderlist, then To-Do, and now the tool below: Office Tasks, the so-called Microsoft Planner from O365. I would say that Microsoft To-Do is the consumer app and Microsoft Planner is the enterprise app.
Office Tasks
"Take the chaos out of teamwork and get more done! Planner makes it easy for your team to create new plans, organize and assign tasks, share files, chat about what you’re working on, and get updates on progress."
Office Tasks is my new tool, which I use with my personal O365 account to manage all upcoming work and personal tasks. The good thing about this specific one is that you can assign tasks to dedicated users in your O365 account and combine everything with documents from your OneDrive / OneDrive for Business.
Social Media:
Social media is important. Networking is important. You really should not ignore this.
Most of the news, such as blog posts, announcements, official discussions and much more, can be caught by being involved in social media. This is one of the most important ways today to stay up to date in IT. In addition, I use some more tools which bring a huge benefit to my daily work. These aren't all of my tools, but probably the most important ones.
Twitter is necessary to stay up to date and gather all new blog articles from officials or well-known people such as MVPs.
On LinkedIn you very often find great high-level articles specifically targeting CXOs, which contain good information.
It is also the most important platform for networking. I frequently get asked via LinkedIn about small technical topics (and I am totally fine with this!), and in return I also try to get some feedback from people regarding our newest technologies.
One more topic is jobs - LinkedIn is, from my experience, the most used platform for sharing jobs and the place where recruiters try to fill their sophisticated positions. If you want to take this chance, you really should ensure that your profile is completely and correctly filled out. A feature has also been added to let headhunters know whether you are searching for a job and what direction it should go in.
Blogs
I really need to write this down. We are in a time where blogs are important.
As you are reading my blog post, you know that blogs may contain useful information, but even more - sometimes official announcements are made via blogs. You need to have a dedicated list of blogs which you look into at regular intervals.
Michael Niehaus's blog, for example, is one of the most important ones for me and probably also for you:
Hootsuite
" Hootsuiteis a social media management platform, created by Ryan Holmes in 2008. The system’s user interface takes the form of a dashboard, and supports social network integrations for Twitter, Facebook, Instagram, LinkedIn, Google+, YouTube, and many more."
I am using Hootsuite a lot - it is very useful for me, because I can now plan postings to all my social media accounts in advance.
As you can see it is also combinable with Right Relevance:
Right Relevance
"Discover fresh relevant content to your interests, save interesting articles, follow influential experts, be the first to share soon-to-be viral content and much more."
I really love Right Relevance, because it gives me the most important blog articles and news regarding specific topics. Integrated into Hootsuite, I can share the most important information just in time and add it to my "read-line".
The Old Reader
The Old Reader is an RSS reader which I like a lot! I have added my favorite blogs here and can easily check which articles I missed.
Conferences & UserGroups
As an IT pro you really should visit conferences and user groups from time to time. As mentioned before, networking is one of the most important things in the life of an IT pro, and conferences and user groups are the best places to do it!
MeetUp
This one is my main tool to find user groups in my area, and I manage the German PowerShell UserGroup - and more specifically the Munich one - via MeetUp. We have around 30-50 attendees every time, and you really should use it to network!
PaperCall
If you speak a lot at conferences, you will have seen that many conferences are moving their CFP to PaperCall. Take a look - there may be a conference you want to speak at.
The End
Thank you all for reading the whole list - I hope that some of the mentioned ideas, tools and techniques will help you in the future. If you find any important things missing or want to discuss any of the parts, you are always free to comment. I am happy to hear your feedback and opinions!
All the best,
David das Neves
Premier Field Engineer, EMEA, Germany
Windows Client, PowerShell, Security
|
https://docs.microsoft.com/en-us/archive/blogs/daviddasneves/some-tools-of-a-pfe
| 2020-03-28T22:08:32 |
CC-MAIN-2020-16
|
1585370493120.15
|
[array(['https://msdnshared.blob.core.windows.net/media/2017/10/DefragTools.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/windbgcdold.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/WinDBGPreview.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/WinDBGPreviewTTD.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/MessageAnalyzer.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/MessageAnalyzer.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/WindowsInternals-242x300.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/WMIExplorer-1024x520.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/WBEMTest-300x253.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/EventViewer-1024x464.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/WEF.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/Fiddler.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/CMTrace.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/CMTraceEL1.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/CMTraceEL2.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/LogParser.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/LPSStudio.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/WSCC.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/Sysinternals-300x52.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/ProcMon.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/ProcExp.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/ProcDump.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/Autoruns.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/PSExec.gif',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/NirLauncher.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/Test-NetConnection.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/PuTTY-300x278.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/LogLauncher.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/F12.png',
None], dtype=object)
array(['https://docs.microsoft.com/en-us/microsoft-edge/f12-devtools-guide/media/debugger.png',
'The Microsoft Edge F12 DevTools Debugger'], dtype=object)
array(['https://docs.microsoft.com/en-us/microsoft-edge/f12-devtools-guide/media/performance.png',
'F12 DevTools Performance panel'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/PolicyViewer.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/Perfview.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/WPA.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/WPR.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/xperf.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/NotePad-300x227.gif',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/ISESteroids.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/VSCode.gif',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/VSTS.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/VSTS_PS_Build-1024x564.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/psgui-manager.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/ProdNinja-194x300.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/OneNote1-560x1024.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/OneNote21.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/Teams1.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/To-Do-1024x664.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/Planner.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/Hootsuite-1024x439.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/rightrelevance-1024x448.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/10/theoldreader-1024x472.png',
None], dtype=object) ]
|
docs.microsoft.com
|
Overview
Thank you for choosing RadGridView - Telerik's Silverlight DataGrid!
RadGridView for Silverlight is the ultimate grid control that provides outstanding performance and a remarkably flexible hierarchy model. RadGridView enables you to create fully customizable and highly interactive interfaces for display and management of large data.
RadGridView key features list:
WPF/Silverlight code compatibility
Powerful data binding to objects, collections, XML and WCF services
Grouping
Sorting
Filtering
Totals row with aggregate functions
Frozen columns
Row details and details presenter for better user experience
In-place data editing with validation
Enable/disable grid elements
Completely stylable control with a variety of themes and properties
Templates for advanced customizations of the look and feel
Custom layout
Flexible hierarchy model, support of self-referencing and custom hierarchy models
Selecting and navigating
Localization support
Flexible API
Enhanced Routed Events Framework will help your code become even more elegant and concise
Merged Cells
Column Groups
You can find a list of all key features and additional explanations of the features here.
You can find more examples of how to implement various scenarios available for download from our online SDK repository here. Look for examples listed under GridView. For better and easier review of our examples, you can download the SDK Samples Browser.
|
https://docs.telerik.com/devtools/silverlight/controls/radgridview/overview2
| 2020-03-28T20:32:07 |
CC-MAIN-2020-16
|
1585370493120.15
|
[array(['images/RadGridView_SL.png', 'Telerik SL DataGrid'], dtype=object)
array(['images/RadGridView_Overview_2.png', None], dtype=object)]
|
docs.telerik.com
|
The db.py script can be used to dump and restore a PostgreSQL database.
This script can be used only for the PostgreSQL service. Don’t run this script on a remote database. Execute the script only when the database is up and running.
Usage
Run this script as follows:
./db.py --action=<action> --path=<destination directory>
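For example, assuming the action keywords follow the dump/restore wording above (the action names and the destination path are placeholders to adapt to your setup):
./db.py --action=dump --path=/var/backups/teamforge
./db.py --action=restore --path=/var/backups/teamforge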
Options
Mandatory options:
Other options:
|
http://docs.collab.net/teamforge182/db_py.html
| 2020-03-28T21:36:25 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.collab.net
|
Genie and Profound.js
Genie is a Profound UI module that transforms 5250 interfaces to a browser-based HTML5 format on-the-fly. Genie allows users to start an interactive 5250 session directly within a browser. But beyond presenting 5250 information, Genie can also integrate directly with RPG OA Rich Display programs and Node.js modules.
Many features within the Profound.js Connector rely on Genie in order to facilitate a seamless transition between IBM i / 5250 functionality and Node.js. Genie provides a simple way to connect your Profound.js modules to IBM i. More information about Genie is available here.
Navigating to Genie
To start a Genie session, users have to navigate to the following URL:
Where puihost is the server or host name where Profound UI is installed and port is the port number used by Profound UI. Please note, this is not the same as the Profound.js server host name and port number.
Calling Node.js from Genie
Genie authenticates users by prompting for an IBM i user id and password by presenting the standard IBM i Sign On screen. Once a user is signed in and an interactive session is established, calls can be made to Profound.js modules either by using the PJSCALL command or by using a Proxy Program. In order for this to happen, Genie must be able to make a connection to the Profound.js server. Because it is possible to configure multiple Profound.js instances, Genie uses environment variables to control which server to connect to.
Environment Variables in Genie
The environment variable PROFOUNDJS_COMM_HOST specifies the Profound.js host name or IP address to connect to. The environment variable PROFOUNDJS_COMM_PORT specifies the port number to connect to.
Both environment variables are automatically set at the system level when you first install Profound.js. If you only installed one copy of Profound.js, there is no need to work with environment variables as they should already point to the correct Profound.js instance.
Each interactive Genie session inherits the system environment variable values, which then become the initial job environment variable values. You can view your current environment variable settings by using the interactive command WRKENVVAR.
You can select option 2 (Change) or use the CHGENVVAR command on PROFOUNDJS_COMM_HOST and PROFOUNDJS_COMM_PORT to change which Profound.js instance to connect to. CHGENVVAR can be used from a command line or in a CL program. This enables you to switch between instances, such as Production, Development, and Test.
The environment variables are changed at the *JOB level by default. In that regard, it is similar to changing a job's library list, meaning that the change affects the current user's session only.
Connecting to a development instance on your PC
It is often convenient to set up a development instance of Profound.js on your personal computer. You can then use PC-based development and debugging tools, such as VS Code. The PJSMYIP command located within your Profound.js install library can automatically detect your PC's IP address and change the environment variable PROFOUNDJS_COMM_HOST for you. However, you may have to manually specify the port number of the Profound.js server running on your PC.
The following example shows how to point your Genie session to a Profound.js instance running on port 8081 on your PC.
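A sketch of what this might look like from a 5250 command line; calling PJSMYIP without parameters is an assumption (your version may accept a port parameter), and CHGENVVAR is the generic way to set the port by hand:
PJSMYIP
CHGENVVAR ENVVAR(PROFOUNDJS_COMM_PORT) VALUE('8081')
The first command updates PROFOUNDJS_COMM_HOST with your PC's IP address, and the second points the current job at port 8081. You can run WRKENVVAR afterwards to confirm that both values reference your PC instance.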
Note: If your PC is on a different network than IBM i and not connected to VPN, the PJSMYIP command will not be able to determine your PC's IP address. The PC's network's edge router/firewall IP address will be returned instead. In this case the PC network must be configured to route the connections to the PC using port forwarding or NAT.
|
https://docs.profoundlogic.com/display/PUI/Setting+Up+Genie+on+IBM+i
| 2020-03-28T20:31:42 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.profoundlogic.com
|
This was a highly requested post that I am excited to respond to with the necessary information. In this guide I will talk about how Yclas optimizes your website for search engines automatically and what you can do to make it perform even better. Practicing SEO for your website is not a new thing - it has been around since 1997 - but only recently has SEO become a trend, and as you can see in this Google Trends experiment it is even becoming more popular than the term web design. Of course, SEO is a variety of practices that help your website climb the search engine results page, and not all of those practices can be controlled and aided by Yclas; you will need to make some effort with off-page optimization, content creation and so on. In this guide I will focus on the tools Yclas offers to help your classifieds website rank higher in search engine results.
Search Engine Optimization for Classifieds Websites
On your website Yclas automatically sets these elements: H1 tags, meta tags and image alt texts. If this is the first time you are hearing these terms, I suggest that you find some resources online to read about their importance. I can tell you that they have a significant impact on your site's performance, and I will briefly explain each one.
Image alt text
Image alt text is the text that shows up when an image is not loading or when an image is viewed with devices designed for people with visual impairments. What search engines care about in your website is how user friendly it is and whether it can be viewed by everyone regardless of their internet speed or physical condition. Yclas automatically sets the alt text for category icons to be the same as the category name. Whenever a user uploads a picture on an ad, the alt text for that picture is set automatically to the image file name. A user profile picture in the forums has the alt text set to the user name. Missing alt texts can harm your website's performance, so when creating content through your blog or FAQ, take care to add image alt texts when including images.
H1 tags and meta tags
It is standard practice to have an H1 tag on each page. On a category page Yclas uses the selected category, and on /category/ad it uses the title of the advertisement. The tags for the other pages are set as follows:
- Homepage: "Site name" official homepage, get your post listed now.
- /all: List of all postings in "site name"
- /all/location: If the location description is empty, then "List of all postings in 'Location'", else the location description
- /oc-panel/auth/login: Login to "site name"
- /oc-panel/auth/register: Create a new profile at "site name"
- /oc-panel/auth/forgot: Here you can reset your password if you forgot it
- /category: If the category description is empty, then "All 'name of category' in 'name of website'", else the category description
- /category/ad: "Ad name" in "category name" on "site name"
- /Blog: "site name" blog section
- /Forum: "site name" community forums
- FAQ: "site name" frequently asked questions
- Contact: Contact "site name"
- Search: Search in "site name"
- /forum/topic: "Topic name" in "site name" forums
- /pagename: This is free for the user to add with the HTML editor
- /blog/post: This is free for the user to add with the HTML editor
Recommendations for your site SEO
From what you read in the earlier section, you can see that the names and descriptions of categories and locations play a big role in your site SEO, and that is something you can control when doing on-page optimization.
So my first piece of advice is to write well-designed category and location descriptions, not exceeding 155 characters and with good language (no typos or grammatical mistakes). Second, remember that Yclas lets you have a blog, an FAQ section and a forum section, so you can also use those to your advantage to create content that is designed for your targeted users and filled with relevant keywords. Yclas doesn't do all the work for you, but it gives you a good push in the right direction; make sure you keep on link building, creating content, maintaining good language on the site, and avoiding link or keyword cluttering.
|
https://docs.yclas.com/seo-classifieds-website/
| 2020-03-28T21:49:05 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
docs.yclas.com
|
SMI-S profiles
This chapter lists SMI-S profiles implemented by OpenLMI-Storage. The implementation does not follow SMI-S strictly and deviates from it where the SMI-S model cannot be used. Each such deviation is appropriately marked.
OpenLMI-Storage implements the following profiles:
- SMI-S Disk Partition Subprofile
- SMI-S Block Services Package
- SMI-S Extent Composition Subprofile
- SMI-S File Storage Profile
- SMI-S Filesystem Profile
- SMI-S Filesystem Manipulation Profile
- SMI-S Job Control Subprofile
- SMI-S Block Server Performance Subprofile
The OpenLMI-Storage CIM API follows these principles:
- Each block device is represented by exactly one CIM_StorageExtent.
- For example RAID devices are created using LMI_StorageConfigurationService. CreateOrModifyElementFromElements, without any pool being involved.
- No CIM_LogicalDisk is created for devices consumed by the OS, i.e. when there is a filesystem on them.
- Actually, all block devices can be used by the OS and it might be useful to have LMI_StorageExtent as subclass of CIM_LogicalDisk.
Warning
This violates SMI-S, each block device should have both a StorageExtent + LogicalDisk associated from it to be usable by the OS.
- CIM_StoragePool is used only for real pool objects - volume groups.
- PrimordialPool is not present. It might be added in future to track unused disk drives and partitions.
The implementation is not complete; for example, the mandatory Server Profile is not implemented at all. The list will be updated.
|
https://openlmi.readthedocs.io/en/latest/openlmi-storage/smis-profiles.html
| 2020-03-28T21:22:02 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
openlmi.readthedocs.io
|
LMI Scripts common library reference
This library builds on top of LMIShell's functionality. It provides various utilities and wrappers for building command-line interfaces to OpenLMI Providers.
Generated from version: 0.10.4, git: openlmi-tools-0.10.4
Exported members:
Package with client-side python modules and command line utilities.
- lmi.scripts.common.get_computer_system(ns)
Obtain an instance of CIM_ComputerSystem or its subclass. The preferred class name can be configured in the configuration file. If such a class does not exist, the base class (CIM_ComputerSystem) is enumerated instead. The first feasible instance is cached and returned.
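A small sketch (not taken from the reference itself) of how this helper might be used inside a script; ns is assumed to be an LMIShell namespace object, such as connection.root.cimv2, supplied by the script framework:
from lmi.scripts.common import get_computer_system

def system_summary(ns):
    # Returns a short description of the managed system; the cached
    # CIM_ComputerSystem (or preferred subclass) instance exposes its
    # CIM properties as attributes.
    cs = get_computer_system(ns)
    return "%s (%s)" % (cs.Name, cs.CreationClassName)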
Submodules:
|
https://openlmi.readthedocs.io/en/latest/openlmi-tools/api/scripts/common.html
| 2020-03-28T21:08:46 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
openlmi.readthedocs.io
|
Networking command line reference
lmi net is a command for the LMI metacommand, which allows you to manage networking devices and their configuration on a remote host with the OpenLMI networking provider installed.
net
Networking service management.
Usage:
lmi net device (--help | show [<device_name> ...] | list [<device_name> ...])
lmi net setting (--help | <operation> [<args>...])
lmi net activate <caption> [<device_name>]
lmi net deactivate <caption> [<device_name>]
lmi net enslave <master_caption> <device_name>
lmi net address (--help | <operation> [<args>...])
lmi net route (--help | <operation> [<args>...])
lmi net dns (--help | <operation> [<args>...])
Commands:
- device
- Display information about network devices.
- setting
- Manage the network settings.
- activate
- Activate setting on given network device.
- deactivate
- Deactivate the setting.
- enslave
- Create new slave setting.
- address
- Manipulate the list of IP addresses on given setting.
- route
- Manipulate the list of static routes on given setting.
- dns
- Manipulate the list of DNS servers on given setting.
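A few illustrative invocations that follow the usage patterns above; the device name eth0 and the setting caption "Wired connection 1" are placeholders for whatever your host reports:
lmi net device show eth0
lmi net activate "Wired connection 1" eth0
lmi net dns --help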
|
https://openlmi.readthedocs.io/en/latest/openlmi-tools/scripts/commands/networking/cmdline.html
| 2020-03-28T20:36:31 |
CC-MAIN-2020-16
|
1585370493120.15
|
[]
|
openlmi.readthedocs.io
|
Add the custom command to your Splunk deployment
You must add the custom command to the appropriate
commands.conf configuration file.
Prerequisites
Review the following topics.
If you use Splunk Cloud, you do not have filesystem access to your Splunk Cloud deployment. You must file a Support ticket to add a custom search command to your deployment.
The tasks to add a custom command to your deployment are:
- Create or edit the commands.conf file in a local directory.
- Add a new stanza to the commands.conf file that describes the command.
- Restart Splunk Enterprise.
Add a new stanza to the local commands.conf file
Edit the local
commands.conf file, to add a stanza for the command.
Each stanza in the
commands.conf file represents the configuration for a specific search command. The following example shows a stanza that enables your custom command script:
[<stanza_name>]
chunked=true
filename = <string>
The
stanza_name is the keyword that is used in searches to invoke the command. The
stanza_name is also the name of the search command. Search command names must be lowercase and consist only of alphanumeric (a-z and 0-9) characters. Command names must be unique. The
stanza_name cannot be the same as any other custom or built-in commands.
The
chunked=true attribute specifies that the command uses the Version 2 protocol.
The
filename attribute specifies the name of your custom command script.
The
filename attribute also specifies the location of the custom command script.
For example, to create the custom command "fizbin", you create a stanza in the
commands.conf file.
[fizbin]
chunked = true
filename = fizbin.py
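As a rough illustration, Version 2 (chunked) command scripts are usually written with the Splunk Python SDK (splunklib), which the app bundles alongside the script. The following sketch shows what fizbin.py could look like under that assumption; it is not a reference implementation, and the field name and value it adds are made up:
#!/usr/bin/env python
import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration()
class FizbinCommand(StreamingCommand):
    """Adds an illustrative 'fizbin' field to every event that streams through."""
    def stream(self, records):
        for record in records:
            record['fizbin'] = 'example value'
            yield record

if __name__ == '__main__':
    dispatch(FizbinCommand, sys.argv, sys.stdin, sys.stdout, __name__)
After saving the script to the app's bin directory and adding the stanza above, the command can be invoked in a search as ... | fizbin.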
Other attributes that you can use to describe the custom command are explained later in this topic.
Describe the command (Version 2 protocol)
Version 2 of the Custom Search Command protocol dynamically determines if the command is a generating command, a streaming command, or a command that generates events.
Additionally, an authentication token is always sent to search commands that use the protocol.
The attributes that you can specify with the protocol are described in the following table.
Read more about these configuration attributes in the commands.conf.spec topic in the Admin Manual.
Describe the command (Version 1 protocol)
Some of the attributes you can use to describe your custom command using the Version 1 protocol specify the type of command.
- You need to understand the differences between the types of commands. There are four broad categorizations for all the search commands:
- Distributable streaming
- Centralized streaming
- Generating
- Transforming
- For a comprehensive explanation about the command types, see Types of commands in this manual. For a complete list of the built-in commands that are in each of these types, see Command types in the Search Reference.
- Describe the type of custom search command in the commands.conf file.
- Specify either the streaming or generating parameter in the commands.conf file. Use these attributes to specify whether it is a generating command, a streaming command, or a command that generates events.
- You can also specify whether your custom command retains or transforms events with the retainsevents parameter. Specify 'true' if the command retains events and 'false' if the command transforms events, similar to the stats command. The default is false.
For a list of configurable settings, see the commands.conf reference in the Admin Manual.
These are only a few of the attributes that you can specify in the stanza for your custom search command.
You can see the full list of configuration attributes in the commands.conf.spec topic in the Admin Manual.
Restart Splunk Enterprise
After you add the custom command to the appropriate
commands.conf file, you must restart Splunk Enterprise.
Changes to your custom command script, or to the parameters of an existing command in the
commands.conf file, do not require a restart.
See also
Control access to the custom command and script!
|
https://docs.splunk.com/Documentation/Splunk/7.1.8/Search/AddthecustomcommandtoSplunk
| 2020-03-28T22:16:28 |
CC-MAIN-2020-16
|
1585370493120.15
|
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
|
docs.splunk.com
|
How mitmproxy works
Mitmproxy's proxy mechanism covers everything from simple explicit proxying of unencrypted traffic up to transparent proxying of TLS-protected traffic [1] in the presence of Server Name Indication.
Explicit HTTP
- The client connects to the proxy and makes a request.
- Mitmproxy connects to the upstream server and simply forwards the request on.
Explicit HTTPS
The MITM in mitmproxy
Complication 1: What's the remote hostname?
Complication 2: Subject Alternative Name
When we extract the CN from the upstream cert, we also extract the SANs and add them to the generated dummy certificate.
Complication 3: Server Name Indication
Transparent proxying
- The client makes a connection to the server.
- The router redirects the connection to mitmproxy, which is typically listening on a local port of the same host. Mitmproxy then consults the routing mechanism to establish what the original destination was.
Footnotes
|
https://mitmproxy.readthedocs.io/en/readthedocs/howmitmproxy.html
| 2020-03-28T21:40:48 |
CC-MAIN-2020-16
|
1585370493120.15
|
[array(['_images/how-mitmproxy-works-explicit.png',
'_images/how-mitmproxy-works-explicit.png'], dtype=object)
array(['_images/how-mitmproxy-works-explicit-https.png',
'_images/how-mitmproxy-works-explicit-https.png'], dtype=object)
array(['_images/how-mitmproxy-works-transparent.png',
'_images/how-mitmproxy-works-transparent.png'], dtype=object)
array(['_images/how-mitmproxy-works-transparent-https.png',
'_images/how-mitmproxy-works-transparent-https.png'], dtype=object)]
|
mitmproxy.readthedocs.io
|
Save current debugger settings
SAVE filename
Arguments
filename
The name of the file that will contain the debugger settings.
Discussion
The SAVE command enables you to save the current debugger state to a file. The debugger state includes breakpoints, watchpoints, option settings, and ELB names. The name of each ELB that is opened via the OPENELB debugger command is written to the file before any other debugger commands.
Once you’ve saved your debugger settings to a file, you can specify the name of this file as the initialization file for the debugger, which enables you to associate a set of debugger commands with a project and invoke those commands every time you restart the debugging session.
If you don’t specify a filename extension, the default extension is .cmd on Windows and UNIX or .com on OpenVMS. The saved file contains all debugger commands for the current setting state in the debugger, including the WATCH, BREAK, and SET commands.
You can restore the saved debugger commands by executing the @filename debugger command or setting the DBG_INIT environment variable to the name of the file before invoking the debugger.
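For example, to capture the current breakpoints, watchpoints, and option settings in a file named myproject.cmd (the filename is just an illustration) and reload them in a later session:
SAVE myproject
@myproject.cmd
Alternatively, set DBG_INIT to myproject.cmd before starting the debugger so the saved settings are applied automatically.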
|
http://docs.synergyde.com/tools/toolsChap2SAVE.htm
| 2018-01-16T17:06:00 |
CC-MAIN-2018-05
|
1516084886476.31
|
[]
|
docs.synergyde.com
|
Saving a Snapshot (Backup Copy) of Your Project
As you continue to make changes to your Stationery design project, you may want to revert to a previous state, before you implemented a specific feature. You can save a backup copy of your project and then use this backup copy at a later time, if needed.
As you develop your Stationery design project, periodically save a snapshot of your project. You can use this snapshot to revert to your Stationery design project at a specific point in time. This snapshot can also help you if your design project or Stationery become corrupted in the future. For more information about the specific files and folders, see “Backing Up Your Stationery Design Project, Stationery, and Projects” on page 184.
To save a snapshot (backup copy) of your project
Save your project and close ePublisher.
Copy your project folder and all its subfolders and files, and your source documents folder, maintaining the same structure, to another location. For example, create a folder with today's date as the name. Then, copy your project folder and your source documents folder into the folder you created. This copy is your snapshot.
|
http://docs.webworks.com/ePublisher/2008.3/Help/Designing_Templates_and_Stationery/Designing_Stationery.3.88
| 2018-01-16T18:00:59 |
CC-MAIN-2018-05
|
1516084886476.31
|
[]
|
docs.webworks.com
|
Create security categories in Project Server
Summary: Add custom security categories by using the Manage Categories page in Project Web App Settings.
Applies to: Project Server 2016, Project Server 2013
In Project Web App, you can add custom security categories as necessary to create a security model that meets the specific needs of users and groups in your organization.
Note
Categories are only available in Project Server permission mode. If you are using SharePoint permission mode, see Plan SharePoint groups in Project Server for information about managing users in Project Web App.
Note
Because SharePoint Server runs as websites in Internet Information Services (IIS), administrators and users depend on the accessibility features that browsers provide. SharePoint Server supports the accessibility features of supported browsers. For more information, see the following resources: Plan browser support, Accessibility for SharePoint Products, Accessibility features in SharePoint 2013 Products, Keyboard shortcuts, and Touch.
Before you begin this operation, review the following information about prerequisites:
Read Manage categories in Project Server.
You must have access to Project Web App.
Important
The Manage users and groups global permission in Project Web App is required to complete this procedure.
Name and Description
Use the Name and Description section to specify a name and description for the category.
The following table describes the name and description options for a category.
Projects
Resources
Views
Use the Views section to specify views that users associated with this category can see.
To add a view to the category, select the Add check box for that view. To remove a view, clear the Add check box for that view.
Permissions
See also
Manage categories in Project Server
Plan groups, categories, and RBS in Project Server
Global permissions in Project Server 2013
Modify categories in Project Server
Delete a category (Project Server permission mode)
|
https://docs.microsoft.com/en-us/project/create-security-categories-in-project-server
| 2018-01-16T18:14:17 |
CC-MAIN-2018-05
|
1516084886476.31
|
[]
|
docs.microsoft.com
|
MBStyle Cookbook
The MBStyle Cookbook is a collection of MBStyle “recipes” for creating various types of map styles. Wherever possible, each example is designed to show off a single MBStyle layer so that code can be copied from the examples and adapted when creating MBStyles of your own. While not an exhaustive reference like the MBStyle reference the MBStyle cookbook is designed to be a practical reference, showing common style templates that are easy to understand.
The MBStyle Cookbook is divided into four sections: the first three for each of the vector types (points, lines, and polygons) and the fourth section for rasters to come. Each example in every section contains a screenshot showing actual GeoServer WMS output, a snippet of the MBStyle code for reference, and a link to download the full MBStyle.
Each section uses data created especially for the MBStyle Cookbook, with shapefiles for vector data and GeoTIFFs for raster data.
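To give a flavor of the recipes that follow (this snippet is not taken from the cookbook itself; the layer name and colors are invented), a minimal MBStyle that draws point features as small red circles might look like this:
{
  "version": 8,
  "name": "simple-point",
  "layers": [
    {
      "id": "simple-point",
      "type": "circle",
      "paint": {
        "circle-radius": 3,
        "circle-color": "#FF0000"
      }
    }
  ]
}
Each cookbook recipe follows this same shape, varying the layer type and paint properties for points, lines, and polygons.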
|
http://docs.geoserver.org/latest/en/user/styling/mbstyle/cookbook/index.html
| 2018-02-18T05:12:12 |
CC-MAIN-2018-09
|
1518891811655.65
|
[]
|
docs.geoserver.org
|
Welcome to Summit User Guides
The Summit User Guide is here as a training tool, Help reference, and FAQ list.
The tabs at the top will take you to the specific topics within the User Guide (or use the layout below with links to those sections):
- Navigation
- Data
- Toolbar
- Approvals
- Management
- Videos
- Release Notes
For questions about a specific proposal, contact the Pre-Award Associate listed in the proposal support staff section.
To report an error message, contact 4Help with a copy of the error message and the proposal ID.
Video Tutorials
Access short videos, as well as longer overviews and demonstrations, showing how to use specific functions in Summit on the Summit Help YouTube channel, or see a list of all videos here.
Several key videos:
- Summit Overview
Initiating a Proposal in Summit
Developing a Budget using Summit
How to Submit a Proposal for Routing and Approval
How to Approve a Proposal
Known Issues
- Proposal creation fails when a PI does not have a home org in Banner or default home org in Summit.
- Workaround: The PI should work with their department to update Banner to add their home org or add a default home org in their manage preferences (see Manage Preferences).
- Notifications on Cost Share and Subcontractors
- Due to the Cost Share and Subcontractor sections being tabulated, it is currently not possible for the notifications to scroll to a particular comment within these two sections. It will only scroll if the comment happens to be on the tab that is currently open. If the comment is on a tab that is not open, it will open the comment thread at the top right of the proposal screen.
- CAS Forbidden Login Message
- If you log in to Summit while CAS is set to the 7-day reminder, then log out of Summit/CAS and try to log back into Summit again, the Forbidden Access message will appear.
- Workaround: Refresh the page
FAQs
How do I add a support staff or approver on an org?
- The Department Head or Business Manager of the org in question should email [email protected] with the name of who they would like added and as what role type(s) (ex. support staff, support staff lead, approver, approver delegate).
|
https://docs.summit.cloud.vt.edu/
| 2018-02-18T05:00:01 |
CC-MAIN-2018-09
|
1518891811655.65
|
[array(['./images/Ind_PreAward.jpg',
'Pre-Award Associate Listed in Proposal'], dtype=object)]
|
docs.summit.cloud.vt.edu
|
CVE-2012-2135: UTF-16 decoder
Vulnerability in the UTF-16 decoder after error handling.
- Disclosure date: 2012-04-14
Fixed In
- Python 2.7.4 (2013-04-06) fixed by commit 715a63b (2012-07-20)
- Python 3.2.4 (2013-04-07) fixed by commit 715a63b (2012-07-20)
- Python 3.3.0 (2012-09-29) fixed by commit b4bbee2 (2012-07-20)
Python issue
CVE-2012-2135: Vulnerability in the utf-16 decoder after error handling.
- Python issue: issue #14579
- Creation date: 2012-04-14
- Reporter: Serhiy Storchaka
CVE-2012-2135
- CVE ID: CVE-2012-2135
- Published: 2012-08-14
- CVSS Score: 6.4
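For context only (this snippet is illustrative and not a reproduction of the flaw), the affected code path is ordinary UTF-16 decoding with a non-strict error handler, where the decoder has to resume after reporting an error:
# BOM, "hi", then an unpaired high surrogate that triggers error handling
data = b'\xff\xfeh\x00i\x00\x00\xd8'
print(data.decode('utf-16', errors='replace'))  # the bad code unit is replaced with U+FFFD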
Timeline
Timeline using the disclosure date 2012-04-14 as reference:
- 2012-04-14: Disclosure date
- 2012-04-14 (+0 days): Python issue #14579 reported by Serhiy Storchaka
- 2012-07-20 (+97 days): commit 715a63b
- 2012-07-20 (+97 days): commit b4bbee2
- 2012-08-14 (+122 days): CVE-2012-2135 published
- 2012-09-29: Python 3.3.0 released
- 2013-04-06 (+357 days): Python 2.7.4 released
- 2013-04-07 (+358 days): Python 3.2.4 released
|
http://python-security.readthedocs.io/vuln/cve-2012-2135_utf-16_decoder.html
| 2018-02-18T05:15:31 |
CC-MAIN-2018-09
|
1518891811655.65
|
[]
|
python-security.readthedocs.io
|
SQLite Database Access
Max contains an implementation of the SQLite database engine (more information is available on the SQLite website). This database engine is accessible through both the C-API and directly in Max through the JavaScript interface. A tutorial on the JavaScript interface written by Andrew Benson has been published on the Cycling '74 website.
The Javascript interface is composed of two objects: SQLite and SQLResult. A SQLResult object is used to represent the data returned by queries to the SQLite object. These objects are created in the same manner as any object in Javascript, as shown below.
var sqlite = new SQLite; var result = new SQLResult;
SQLite Object Methods
The SQLite object responds to the following methods:
open -- 2 args: name and 'ram-based' boolean, no return val
close -- no args, no return val
exec -- 2 args: query string and SQLResult object, no return val
lastinsertid -- no args, returns int
starttransaction -- no args, no return val
endtransaction -- no args, no return val
SQLResult Object Methods
The SQLResult object responds to the following methods:
numrecords -- no args, returns int
numfields -- no args, returns int
fieldname -- column index (int) arg, returns string
value -- column index(int) and record index (int) args, returns string
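Putting the two objects together, a simple query cycle might look like the sketch below; the database file name, table, and column names are invented for illustration, and the second argument to open follows the 'ram-based' boolean listed above (0 here meaning a file-based database):
var sqlite = new SQLite;
var result = new SQLResult;

sqlite.open("notes.db", 0);
sqlite.exec("CREATE TABLE IF NOT EXISTS notes (pitch INTEGER, velocity INTEGER)", result);
sqlite.exec("INSERT INTO notes VALUES (60, 100)", result);
sqlite.exec("SELECT * FROM notes", result);

for (var i = 0; i < result.numrecords(); i++) {
    // every value comes back as a string, per the note below
    post(result.fieldname(0) + ": " + result.value(0, i) + "\n");
}
sqlite.close();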
All records returned by the SQLResult object are returned as strings. Thus, a numeric value such as 1 is actually returned as the string "1". SQLite only uses datatypes as recommendations. It does not enforce data types and in fact always returns a string. More information about this and other SQLite-specific topics can be found in a Google Talk given by Richard Hipp (the author of the SQLite library) at video.google.com.
|
https://docs.cycling74.com/max5/vignettes/js/jssqlite.html
| 2018-02-18T05:02:33 |
CC-MAIN-2018-09
|
1518891811655.65
|
[]
|
docs.cycling74.com
|
Best Practice Rules Reference
Reference of available rules in the Best Practice Service.
Reference of available rules in the Best Practice Service organized in alphabetical order by each Advisor section.
Backup Advisor
Config Advisor
Network Advisor
OpsCenter Config Advisor
OS Advisor
Performance Advisor
Rules for read and write performance of nodes (the Performance Advisor is not to be confused with the Performance Service).
Tip: Use LCM Config Profiles to adjust request timeout settings in cassandra.yaml settings and run a configuration job.
Performance Service - Slow Queries Advisor
For more information, see Identifying and tuning slow queries in the Performance Service.
Performance Service - Table Metrics Advisor
For more information, see Identifying poorly performing tables in the Performance Service.
Performance Service - Thread Pools Advisor
For more information, see Monitoring node thread pool statistics in the Performance Service.
Replication Advisor
Search Advisor
Advice for Solr search nodes. For more information, see DSE Search.
|
https://docs.datastax.com/en/opscenter/6.0/opsc/online_help/services/BPRreference.html
| 2018-02-18T04:36:03 |
CC-MAIN-2018-09
|
1518891811655.65
|
[]
|
docs.datastax.com
|