ACDSee Photo Studio Professional 2020 Crack: A Comprehensive Review
-
If you are looking for a fast, powerful and easy-to-use image management system that can handle all your photography needs, you might have heard of ACDSee Photo Studio Professional 2020. This software is a popular choice among photographers, graphic designers and digital artists who want to acquire, organize, view, enhance and share their digital photos and other media files. But what exactly is ACDSee Photo Studio Professional 2020 and what can it do for you? And more importantly, how can you get it without breaking the law or risking your computer's security? In this article, we will answer these questions and give you a comprehensive review of ACDSee Photo Studio Professional 2020 Crack.
-
What is ACDSee Photo Studio Professional 2020?
-
ACDSee Photo Studio Professional 2020 is a software application that combines the functions of a RAW editor and a digital asset management solution. It allows you to perform a wide range of adjustments on your photos, such as exposure, color, contrast, sharpness, noise reduction, red eye removal, watermarking, cropping, resizing and more. You can also apply various effects and filters to enhance your images and give them a professional edge. Moreover, you can manage your digital assets with powerful searching and cataloging tools that let you find, sort, tag, rate and organize your photos and media files. You can also create password-protected albums and folders on the developer's website and share them with your friends.
ACDSee Photo Studio Professional 2020 has many features and benefits that make it stand out from other similar software. Here are some of them:
-
Acquire, Organize, View, Enhance and Share Your Photos and Media Files
-
With ACDSee Photo Studio Professional 2020, you can connect directly to the folders on your computer without importing them into a separate library. This saves you time and disk space and allows you to enjoy increased speed and performance. You can also run slide shows, play embedded audio and display multiple-page images in any of the more than 50 image and multimedia file formats supported by the software.
-
Process Digital Photos with a Variety of Editing Tools
-
Streamline Your Workflow with Performance Improvements and GPU-Enriched Software
-
ACDSee Photo Studio Professional 2020 is designed to streamline your workflow and give your image development a competitive, professional edge. It features performance improvements, such as faster loading and processing of RAW files, and GPU-enriched software that enhances the speed and quality of your adjustments. You can also use ACDSee Actions to record and batch-apply any of the 200+ adjustments available in Edit mode. Additionally, you can take snapshots of your work at any time and return to them later if you want to undo or redo any changes.
-
Expand Your Creative Scope with Photoshop Plugin Support
-
ACDSee Photo Studio Professional 2020 allows you to expand your creative scope with the ability to import and apply Photoshop plugins to your Edit mode adjustment workflow. You can draw your own effects and use them in combination with best-in-class photo editing tools. You can also use the built-in LUTs (Look-Up Tables) to apply color grading to your images, or create your own custom LUTs for a unique look.
-
Manage Your Digital Assets with Powerful Searching and Cataloging Tools
-
ACDSee Photo Studio Professional 2020 helps you manage your digital assets with powerful searching and cataloging tools that let you find, sort, tag, rate and organize your photos and media files. You can find images based on metadata, file properties, date, event, keyword, rating, color label and GPS location. You can also build and save detailed searches, enter single words or phrases, search only specific folders or find that one particular image with the Quick Search bar. Moreover, you can add ratings, hierarchical keywords, categories and location data to your images and quickly identify photos for further processing with visual tags or customizable color labels.
-
-
Correct Lens Distortion and Achieve HDR Results
-
ACDSee Photo Studio Professional 2020 makes it easy to correct lens distortion and achieve HDR results. You can fix barrel and pincushion distortion by applying a correction calibrated to the specific lens used, and you can remove chromatic aberration and map that correction to your lens for future use. Furthermore, you can achieve HDR results by intelligently adjusting areas of your images that are too light or too dark.
-
How to Download and Install ACDSee Photo Studio Professional 2020 Crack?
-
If you are tempted to download and install ACDSee Photo Studio Professional 2020 Crack from unofficial sources, you should think twice before doing so. There are many risks and disadvantages of using cracked software that you should be aware of.
-
The Risks of Using Cracked Software
-
Using cracked software is illegal and unethical. You are violating the intellectual property rights of the software developers and depriving them of their rightful income. You are also exposing yourself to potential legal consequences if you are caught using pirated software.
-
Moreover, using cracked software is unsafe and unreliable. You are risking your computer's security by downloading files from untrusted sources that may contain viruses, malware or spyware. These malicious programs can damage your system, steal your personal information or compromise your online privacy. You are also risking your data's integrity by using software that may not work properly or crash unexpectedly.
-
The Legal Way to Get ACDSee Photo Studio Professional 2020
-
The legal way to get ACDSee Photo Studio Professional 2020 is to buy it from the official website of ACDSee. By doing so, you will get a genuine license key that will activate the software and grant you access to all its features and updates. You will also get technical support and customer service from the developers in case you encounter any problems or have any questions.
-
The price of ACDSee Photo Studio Professional 2020 is $99.99 for a single user license. However, you can also get a free trial version for 30 days that will let you test the software before buying it. You can download the free trial version from here: https://www.acdsee.com/en/products/photo-studio-professional/
-
Conclusion
-
ACDSee Photo Studio Professional 2020 is a powerful image management system that can help you acquire, organize, view, enhance and share your digital photos and other media files. It has many features and benefits that make it a great choice for photographers, graphic designers and digital artists who want to streamline their workflow and expand their creative scope. However, downloading and installing ACDSee Photo Studio Professional 2020 Crack is not a good idea as it is illegal, unethical, unsafe and unreliable. The best way to get ACDSee Photo Studio Professional 2020 is to buy it from the official website of ACDSee or try it for free for 30 days.
-
FAQs
-
Here are some frequently asked questions about ACDSee Photo Studio Professional 2020 Crack:
-
-
What are the system requirements for ACDSee Photo Studio Professional 2020?
-
The system requirements for ACDSee Photo Studio Professional 2020 are as follows:
-
- Intel® or AMD processor with 64-bit support
-
- Windows® 10 (64-bit editions only)
-
- Microsoft® Internet Explorer® 11 or higher
-
- Microsoft® DirectX® 10 or higher
-
- Windows Media® Player 9.0
-
- Microsoft® Office 2010 or above
-
- Ghostscript 8.0 - for PDF support
-
- Windows Media® Player 9.0 - for M3U support
-
- 2 GB RAM (6 GB RAM recommended)
-
- 2 GB of available hard disk space
-
- High Color display adapter at 1024 x 768 resolution (1920 x 1080 recommended)
-
- CD/DVD Burner - for creating CDs or DVDs
-
-
-
How do I uninstall ACDSee Photo Studio Professional 2020?
-
To uninstall ACDSee Photo Studio Professional 2020 from your computer, follow these steps:
-
- Close ACDSee Photo Studio Professional 2020 if it is running.
-
- Click Start > Control Panel > Programs > Programs and Features.
-
- Select ACDSee Photo Studio Professional 2020 from the list of programs.
-
- Click Uninstall/Change and follow the instructions on the screen.
-
-
-
How do I update ACDSee Photo Studio Professional 2020?
-
-
To update ACDSee Photo Studio Professional 2020, follow these steps:
-
- Open ACDSee Photo Studio Professional 2020 and click Help > Check for Updates.
-
- If there is an update available, click Download and Install.
-
- Follow the instructions on the screen to complete the update process.
-
-
-
How do I contact ACDSee customer support?
-
To contact ACDSee customer support, you can use one of the following methods:
-
- Visit the ACDSee support page: https://www.acdsee.com/en/support/
-
- Submit a ticket online: https://acdsystems.desk.com/customer/portal/emails/new
-
- Call the toll-free number: 1-888-767-9888 (Monday to Friday, 8:30 AM to 5:00 PM PST)
-
-
-
How do I get a refund for ACDSee Photo Studio Professional 2020?
-
To get a refund for ACDSee Photo Studio Professional 2020, you need to contact ACDSee customer support within 30 days of your purchase and provide your order number and reason for requesting a refund. You can use any of the methods mentioned above to contact ACDSee customer support. Please note that refunds are subject to ACDSee's refund policy and terms and conditions.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download DLL-Files Fixer Full Crack 2023 - The Ultimate Solution for DLL Problems.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download DLL-Files Fixer Full Crack 2023 - The Ultimate Solution for DLL Problems.md
deleted file mode 100644
index 3dea7c379a415a6355e68354fb3c6e57eaa0463e..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download DLL-Files Fixer Full Crack 2023 - The Ultimate Solution for DLL Problems.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Download DLL-Files Fixer Full Crack for Free
-
DLL-Files Fixer is a software that can help you fix DLL errors, optimize your PC performance, and prevent system crashes. But how can you download DLL-Files Fixer full crack for free? Here are some steps you can follow:
Go to a reliable website that offers DLL-Files Fixer full crack, such as PeskTop , NovaHax , or VSTMenia . Make sure you have a good antivirus program to scan the downloaded files for malware.
-
Download the DLL-Files Fixer setup file and the crack file from the website. You may need to enter a password to extract the files, which is usually 123 or the name of the website.
-
Disable your Windows Defender or firewall temporarily to avoid any interference with the crack files.
-
Install the DLL-Files Fixer program by running the setup file. Do not launch the program after installation.
-
Copy the crack file and paste it into the installation folder of DLL-Files Fixer. Replace the original file if prompted.
-
Run the DLL-Files Fixer program and enjoy its full features for free.
-
-
Note: Downloading and using DLL-Files Fixer full crack may be illegal and risky. You may violate the software license agreement and expose your PC to potential threats. We do not recommend or endorse this method. Use it at your own risk.
What is DLL-Files Fixer and why do you need it?
-
DLL-Files Fixer is a software that can help you fix DLL errors, optimize your PC performance, and prevent system crashes. DLL stands for Dynamic Link Library, which is a file that contains code and data that can be used by multiple programs at the same time. DLL files are essential for the proper functioning of many applications and games on Windows.
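To make the idea concrete, here is a minimal Python sketch of what happens when a program asks Windows for a DLL that cannot be found; it is Windows-only, and "example.dll" is just a placeholder name, not a real library:

import ctypes

try:
    lib = ctypes.WinDLL("example.dll")  # placeholder DLL name
except OSError as err:
    # A missing or broken DLL surfaces as an OSError; the familiar
    # "xxx.dll was not found" dialogs report this same kind of failure.
    print(f"Could not load the DLL: {err}")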
-
However, DLL files can sometimes become corrupted, go missing, or fall out of date, causing problems such as error messages, slow performance, or even a blue screen of death. This is where DLL-Files Fixer comes in: it can scan your system for DLL issues and fix them automatically, download and install compatible DLL files from its online library of over 2.5 million files, and defragment and clean your registry to improve your PC's speed and stability.
-
With DLL-Files Fixer, you can easily solve your DLL problems and enjoy a smooth and error-free PC experience.
-
How much does DLL-Files Fixer cost and how can you get it?
-
DLL-Files Fixer is a paid software that offers a free trial version for 15 days. After that, you need to purchase a license key to continue using it. The license key costs $29.95 for one year and $39.95 for three years. You can buy it from the official website of DLL-Files Fixer or from other authorized resellers.
-
To get DLL-Files Fixer, you need to download and install it from the official website or from a trusted source. You can also use the DLL-Files Fixer full crack method that we explained earlier, but we do not recommend it for the reasons we mentioned. Once you have installed the software, you can run it and scan your PC for any DLL errors. You can then fix them with one click or download and install the missing or updated DLL files from the online library.
-
What are the alternatives to DLL-Files Fixer?
-
If you are looking for other software that can help you fix DLL errors, you may want to check out some of these alternatives:
-
-
Glary Utilities: This is a free and powerful system optimizer that can fix registry errors, clean junk files, boost PC speed, and more. It also has a module called "Fix Missing Software Files" that can scan and repair DLL issues.
-
Wise Care 365: This is another free and comprehensive PC cleaner and optimizer that can fix registry problems, optimize system settings, protect privacy, and more. It also has a feature called "PC Checkup" that can detect and fix DLL errors.
-
DLL Suite: This is a dedicated DLL fixer that can scan and repair all kinds of DLL errors on Windows. It can also download and install the latest DLL files from its online database of over 25 million files. It offers a free trial version and a paid version that costs $19.99 for one year.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Big Boobs Sexy Video Com.md b/spaces/1gistliPinn/ChatGPT4/Examples/Big Boobs Sexy Video Com.md
deleted file mode 100644
index 0cd8e7a5d07175d62563a373ad3a5bac548cf74f..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Big Boobs Sexy Video Com.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Face To Face Mat Book Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Face To Face Mat Book Free Download.md
deleted file mode 100644
index 8355839e461ddb311ecf692d9476e58a0b2cf06c..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Face To Face Mat Book Free Download.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
This site is great for kids, students, and teachers. It offers a large variety of books, videos, music, and games, along with a wide selection of children's books and educational materials, which you can find in the books, toys, and games section.
This site is a great resource both for those looking for inspiration to get started in the digital arts and for those already active in them. You will find tutorials, tips, and plenty of free projects and resources to use in your own work, making it a helpful starting point for beginners.
-
This book addresses these weaknesses and gaps. It provides evidence of the capability shortfalls that currently exist in many countries, analyses this evidence, and identifies the capability traps that hold many governments back, particularly those related to isomorphic mimicry and premature load-bearing. The book then describes a process that governments can use to escape these capability traps. Called PDIA (Problem Driven Iterative Adaptation), this process empowers people working in governments to find and fit solutions to the problems they face. The process is explained in a practical manner so that readers can apply its tools and ideas to the capability challenges in their own contexts, helping them implement policies and reforms with more impact than those of the past.
-
The book walks through the PDIA process, outlining the steps governments can take to find and address problems, escape capability traps, and turn their capabilities into their greatest strengths. The problems that governments face need to be identified, addressed through policy reform, and then actually implemented by governments.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android 12 Emojis Whats New and How to Get Them on Your Phone.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android 12 Emojis Whats New and How to Get Them on Your Phone.md
deleted file mode 100644
index 5270cef662c5331bdfeb164eecc6f284e7c1f572..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android 12 Emojis Whats New and How to Get Them on Your Phone.md
+++ /dev/null
@@ -1,153 +0,0 @@
-
-
How to Download and Install Android 12 Emojis on Any Android Phone
-
Do you want to spice up your text messages, social media posts, and online chats with the latest and coolest emojis? If you are an Android user, you might be interested in getting the new Android 12 emojis on your phone. Android 12 is Google's newest operating system, which introduces many improvements and changes over Android 11. One of the new changes is the introduction of new emojis and the redesign of current ones. The new emojis are part of Unicode's Emoji 13.1 set and will be included on Android 12.
However, if you are not running Android 12 on your phone, you might be wondering how to get the new emojis. Don't worry, we have got you covered. In this article, we will show you how to download and install Android 12 emojis on any Android phone using a simple method. You don't need to wait for the official update or buy a new phone to enjoy the new emojis. All you need is a rooted phone and a Magisk module. Let's get started!
-
What are Android 12 Emojis and Why You Should Get Them
-
Android 12 Emojis Overview
-
Emojis are small icons that represent various emotions, objects, animals, activities, and symbols. They are widely used in digital communication to express feelings, convey messages, and add fun and personality. Emojis are standardized by Unicode, an organization that sets the rules and guidelines for text encoding and representation. Unicode releases new emoji sets every year, which are then adopted by different platforms and devices.
-
Android 12 emojis are the latest emoji set released by Google for its operating system. According to Google, these emoji updates are coming to Android 12 later in the year but will be made available on Gmail, Google Chat, YouTube Live Chat, and Chrome OS later this month. The update includes nearly 1,000 redesigned emojis that are clearer at small sizes, more accurate, or more cross-platform compatible. The update also adds support for new emojis introduced by Unicode in the Emoji 13.1 set. Some of the new emojis include melting face, face with open eyes and hand over mouth, face with peeking eye, saluting face, dotted line face, heart on fire, mending heart, woman with beard, man with beard, woman feeding baby, man feeding baby, bubble tea, tamale, potted plant, rock, wood, hut, worm, beaver, seal, fly, cockroach, ninja, military helmet, accordion, long drum, coin, circular saw, screwdriver, mirror, and window, among others.
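Under the hood, each of these emojis is just a Unicode code point or sequence of code points that the system's emoji font knows how to draw. As a quick illustrative Python sketch (the glyphs only render if your terminal font supports them, and unicodedata needs a recent enough Python build for the newest characters):

import unicodedata

bubble_tea = "\U0001F9CB"                        # U+1F9CB BUBBLE TEA
heart_on_fire = "\u2764\ufe0f\u200d\U0001F525"   # heart + variation selector + zero-width joiner + fire
print(bubble_tea, heart_on_fire)
print(unicodedata.name(bubble_tea))              # prints 'BUBBLE TEA'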
-
Benefits of Android 12 Emojis
-
There are many reasons why you might want to get the new Android 12 emojis on your phone. Here are some of them:
-
-
You can express yourself better with the new emojis that capture a wider range of emotions, situations, and identities.
-
You can stay updated with the latest trends and culture with the new emojis that reflect the current world and society.
-
You can have more fun and creativity with the new emojis that offer more options and variations for your messages and posts.
-
You can communicate more effectively with the new emojis that are clearer, more consistent, and more compatible across different platforms and devices.
-
-
How to Download Android 12 Emojis APK File
-
Requirements for Installing Android 12 Emojis
-
Before you can install the new Android 12 emojis on your phone, you need to make sure that you meet some requirements. These are:
-
-
-
Your phone must be running on Android 8.0 Oreo or higher.
-
Your phone must be rooted with Magisk installed. Rooting is the process of gaining full access and control over your phone's system. Magisk is a tool that allows you to modify your phone's system without affecting its safety and stability. If you don't know how to root your phone or install Magisk, you can search for online guides or tutorials that are specific to your phone model.
-
You must have a file manager app that can access the root directory of your phone. You can use any file manager app that has this feature, such as Solid Explorer, FX File Explorer, or MiXplorer.
-
You must have a backup of your phone's data in case something goes wrong during the installation process. You can use any backup app that you prefer, such as Titanium Backup, Swift Backup, or Google Drive.
-
-
Steps to Download Android 12 Emojis APK File
-
Once you have met the requirements, you can proceed to download the Android 12 emojis APK file. This is a file that contains the new emoji fonts that will replace the existing ones on your phone. Here are the steps to download the APK file:
-
-
Go to this link on your phone's browser. This is the official download page for the Android 12 emojis APK file.
-
Tap on the green button that says "Download APK". This will start downloading the file to your phone's storage.
-
Wait for the download to finish. You can check the progress on your notification bar or your browser's download manager.
-
Once the download is complete, locate the file on your file manager app. It should be in the Downloads folder by default. The file name should be something like "Android-12-Emojis.apk".
-
How to Install Android 12 Emojis Using Magisk Module
-
What is Magisk and How to Use It
-
Magisk is a powerful tool that allows you to modify your phone's system without affecting its safety and stability. It works by creating a virtual partition on your phone that overlays the original system partition. This way, you can make changes to the system without actually modifying it. Magisk also has a feature called Magisk Modules, which are add-ons that can enhance your phone's functionality and performance. You can install various Magisk Modules from the Magisk Manager app, which is the main interface for managing Magisk on your phone.
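For context, an emoji module of this kind is typically just a small zip that Magisk mounts over the system: a descriptor file plus the replacement emoji font laid out under a system/ folder, so the stock font is overridden without being touched. The layout below is a generic illustration with made-up values, not the exact contents of the module used in this guide:

module.prop
system/fonts/NotoColorEmoji.ttf

with a module.prop along these lines:

id=android12emoji
name=Android 12 Emoji
version=1.0
versionCode=1
author=example
description=Systemless replacement of the stock emoji font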
-
Steps to Install Android 12 Emojis Using Magisk Module
-
Now that you have downloaded the Android 12 emojis APK file, you need to install it using a Magisk Module. This will ensure that the new emojis are applied to your phone's system and apps. Here are the steps to install Android 12 emojis using a Magisk Module:
-
-
Open the Magisk Manager app on your phone. If you don't have it, you can download it from here.
-
Tap on the menu icon on the top left corner and select "Modules". This will open the list of installed and available Magisk Modules on your phone.
-
Tap on the plus icon on the bottom right corner and navigate to the folder where you saved the Android 12 emojis APK file. Tap on the file and select "Open". This will start installing the Magisk Module for Android 12 emojis.
-
Wait for the installation to finish. You will see a message that says "Module installed" when it is done.
-
Tap on the "Reboot" button at the bottom of the screen. This will restart your phone and apply the changes.
-
-
How to Enjoy Android 12 Emojis on Your Phone
-
How to Access and Use Android 12 Emojis
-
Congratulations, you have successfully installed Android 12 emojis on your phone. Now you can enjoy using them in your text messages, social media posts, and online chats. Here is how to access and use Android 12 emojis on your phone:
-
-
To access Android 12 emojis, you need to use a keyboard app that supports them. You can use any keyboard app that you prefer, such as Gboard, SwiftKey, or Samsung Keyboard.
-
To use Android 12 emojis, you need to tap on the emoji icon on your keyboard app. This will open the emoji panel where you can browse and select the emojis that you want to use.
-
You can also search for specific emojis by typing their names or keywords in the search bar at the bottom of the emoji panel. For example, if you want to use the melting face emoji, you can type "melting" or "face" in the search bar and find it easily.
-
You can also customize some of the emojis by tapping and holding on them. This will open a pop-up menu where you can choose different skin tones, hair styles, or genders for some of the emojis.
-
-
Tips and Tricks for Android 12 Emojis
-
To make the most out of Android 12 emojis, here are some tips and tricks that you can try:
-
-
You can create shortcuts for your favorite or frequently used emojis by adding them to your keyboard app's clipboard or dictionary. This way, you can access them quickly without browsing through the emoji panel.
-
You can combine different emojis to create new meanings or expressions. For example, you can combine heart on fire and mending heart to show that you are recovering from a heartbreak or combine ninja and military helmet to show that you are ready for action.
-
You can use emojis to spice up your contacts' names or labels. For example, you can add potted plant or worm to your friend's name who loves gardening or add bubble tea or tamale to your favorite food delivery service.
-
-
Conclusion and FAQs
-
Conclusion
-
In this article, we have shown you how to download and install Android 12 emojis on any Android phone using a simple method. You don't need to wait for the official update or buy a new phone to enjoy the new emojis. All you need is a rooted phone and a Magisk module. We have also explained what are Android 12 emojis and why you should get them. We have also given you some tips and tricks on how to access and use Android 12 emojis on your phone. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy texting!
-
FAQs
-
Here are some of the frequently asked questions about Android 12 emojis:
-
-
-
-
-
-
Can I get Android 12 emojis without rooting my phone?
-
No, you cannot get Android 12 emojis without rooting your phone. Rooting is necessary to install the Magisk module that replaces the existing emoji fonts on your phone. If you don't want to root your phone, you will have to wait for the official update from Google or your phone manufacturer.
-
-
-
Will Android 12 emojis work on all apps and platforms?
-
Yes, Android 12 emojis will work on all apps and platforms that support Unicode's Emoji 13.1 set. However, some apps and platforms may display the emojis differently depending on their own design and style. For example, WhatsApp and Facebook Messenger have their own emoji sets that may not match the Android 12 emojis exactly.
-
-
-
How can I revert back to the old emojis if I don't like the new ones?
-
If you want to revert back to the old emojis, you can uninstall the Magisk module that you installed for Android 12 emojis. To do this, open the Magisk Manager app, go to Modules, find the Android 12 Emojis module, and tap on the trash icon. Then reboot your phone and the old emojis will be restored.
-
-
-
Are there any risks or drawbacks of installing Android 12 emojis on my phone?
-
There are no major risks or drawbacks of installing Android 12 emojis on your phone, as long as you follow the instructions carefully and backup your data before proceeding. However, some minor issues that you may encounter are:
-
-
You may lose some of the original emoji fonts that came with your phone.
-
You may experience some compatibility issues with some apps or platforms that do not support the new emojis.
-
You may void your phone's warranty or lose access to some features or services that require an unrooted phone.
-
-
-
-
Where can I find more information or help about Android 12 emojis?
-
If you want to find more information or help about Android 12 emojis, you can visit these sources:
-
-
The official Google blog post that announces the new emoji updates.
-
The official Unicode website that lists all the new emojis in Emoji 13.1 set.
-
The XDA Developers forum thread that provides the download link and instructions for the Android 12 Emojis Magisk module.
-
The Reddit community that discusses and shares everything related to Android 12.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash Mini APK and Play the Beta in Any Country.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash Mini APK and Play the Beta in Any Country.md
deleted file mode 100644
index 1e2c9a92ff96c3035719e5dfb385ce80eb341934..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash Mini APK and Play the Beta in Any Country.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
Where to Download Clash Mini
-
Clash Mini is a brand new mobile game launched by Supercell, the same developer behind popular titles such as Clash Royale and Clash of Clans. Clash Mini takes characters from the ‘Clash Universe’ and uses them to build a unique deck for battling other players in a board-game-like setup. If you are a fan of the Clash games or you are simply looking for a fun and strategic board game, you might be wondering where to download Clash Mini and how to play it. In this article, we will answer these questions and give you some tips and tricks to help you master the game.
Clash Mini is a game of choices, where you have to duel and rumble with other players in a fun, strategy-packed board game. You have to collect, summon and upgrade your army of Minis, which are adorable versions of the iconic Clash characters. You have to predict your opponent’s moves and then assemble your winning strategy and formation. Watch your Minis come to life and clash to be the last one standing!
-
A new member of the Clash family
-
Clash Mini is one of the three new games in the Clash of Clans universe unveiled by Supercell in April 2021, alongside Clash Quest and Clash Heroes. The developer said that they wanted to offer a new Clash experience to current players and broaden Clash to new audiences who haven’t experienced Clash before. Clash Mini is designed to be easy to learn but challenging to master, making it suitable for casual and hardcore gamers alike.
-
Which platforms can play Clash Mini?
-
Mobile exclusive game
-
Like all the previous Clash titles, Clash Mini is exclusive to mobile devices. You won’t be able to play it on any non-mobile platforms such as PC or console. However, you can use an emulator to play it on your computer if you really want to, but this might affect the performance and compatibility of the game.
-
Available for iOS and Android devices
-
Clash Mini is available for both iOS and Android devices. You can download it from the App Store or Google Play Store depending on your device. However, the game is not yet globally released, so you might not be able to find it in your region. The game is currently in beta testing phase, which means that it is only available in certain countries for a limited number of players.
-
When is the Clash Mini release date?
-
Beta version launched in November 2021
-
The beta version of Clash Mini was launched on November 8, 2021 for players in Finland, Sweden, Norway, Denmark, Iceland, Canada, Singapore, Chile, Hong Kong, Sri Lanka and The Philippines. The beta version allows players to test the game before its official release and provide feedback to the developer. The beta version also helps the developer to fix any bugs or issues that might occur in the game.
-
Global release date not confirmed yet
-
The global release date of Clash Mini has not been confirmed yet by Supercell. The developer has not announced when they plan to launch the game worldwide or which countries will be added next to the beta version. However, based on the previous Clash games, we can expect that the game will be released globally sometime in 2023. Until then, you can follow the official Clash Mini social media accounts and website to get the latest news and updates about the game.
-
How to download Clash Mini beta?
-
Sign up on the official website
-
If you want to play Clash Mini beta, you have to sign up on the official website. You have to enter your email address and choose your preferred platform (iOS or Android). You will also have to agree to the terms and conditions and privacy policy of the game. After you sign up, you will receive a confirmation email with a link to download the game.
-
Download from the App Store or Google Play Store
-
After you receive the confirmation email, you can download Clash Mini beta from the App Store or Google Play Store. You have to search for Clash Mini in the store and tap on the download button. You might have to enter your Apple ID or Google account credentials to verify your identity. Once the download is complete, you can open the game and start playing.
-
How to play Clash Mini?
-
Collect, summon and upgrade your Minis
-
The core gameplay of Clash Mini is to collect, summon and upgrade your Minis. Minis are cute and powerful versions of the Clash characters that you can use to fight against other players. There are different types of Minis, such as tanks, damage dealers, healers, support and more. Each Mini has its own stats, abilities and synergies with other Minis. You can collect Minis by opening chests or buying them from the shop. You can summon Minis by placing them on the board before each battle. You can upgrade Minis by spending gold and cards to increase their level and power.
-
Predict, position and clash with your opponent
-
The other aspect of Clash Mini is to predict, position and clash with your opponent. Each battle consists of three rounds, where you have to place your Minis on a 4x4 grid board. You have to predict what your opponent will do and try to counter their strategy. You have to position your Minis wisely on the board, taking into account their range, movement, direction and abilities. You have to clash with your opponent by watching your Minis fight automatically based on their stats and skills. The player who wins two out of three rounds wins the battle.
-
Tips and tricks for Clash Mini
-
Choose the right characters for your army
-
One of the most important tips for Clash Mini is to choose the right characters for your army. You have to consider the strengths and weaknesses of each Mini and how they work together as a team. You have to balance your army with different roles, such as tanks, damage dealers, healers and support. You have to adapt your army according to the game mode, the map and the opponent you are facing. You have to experiment with different combinations of Minis and find out what works best for you.
-
Position your Minis wisely on the battlefield
-
Another crucial tip for Clash Mini is to position your Minis wisely on the battlefield. You have to think strategically about where you place your Minis on the board before each round. You have to consider factors such as range, movement, direction and abilities of your Minis and how they interact with each other and with the enemy Minis. You have to avoid placing your Minis in vulnerable spots where they can be easily attacked or countered by the opponent. You have to use the terrain features such as walls, bridges and obstacles to your advantage.
-
Utilize special abilities and upgrades during battle
-
The final tip for Clash Mini is to utilize special abilities and upgrades during battle. Each Mini has a unique ability that can be activated once per round by tapping on it. These abilities can be offensive, defensive or supportive in nature and can change the outcome of a battle if used at the right time. You also have access to upgrades that can boost your Minis’ stats or skills during a battle. These upgrades are randomly generated from a pool of options and can be applied by dragging them onto a Mini. You have to use these abilities and upgrades wisely and strategically to gain an edge over your opponent.
-
Conclusion
-
Clash Mini is a fun and strategic board game that features adorable versions of the Clash characters in a fast-paced duel against other players. The game is currently in beta testing phase and is only available in certain countries for iOS and Android devices. The global release date of the game is not confirmed yet but is expected sometime in 2023. If you want to play Clash Mini beta , you have to sign up on the official website and download it from the App Store or Google Play Store. To play Clash Mini, you have to collect, summon and upgrade your Minis, predict, position and clash with your opponent, and utilize special abilities and upgrades during battle. We hope this article has helped you learn more about Clash Mini and how to download and play it. If you have any questions, you can check out the FAQs below or visit the official Clash Mini website for more information.
-
FAQs
-
-
Q: How much does Clash Mini cost?
-
A: Clash Mini is free to download and play, but it offers in-app purchases for some items and features.
-
Q: How can I contact Supercell for feedback or support?
-
A: You can contact Supercell through the in-game settings menu or by visiting their website or social media accounts.
-
Q: How can I join a clan or create my own clan in Clash Mini?
-
A: You can join a clan or create your own clan by tapping on the clan icon on the main screen. You can invite your friends or other players to join your clan or search for an existing clan to join.
-
Q: How can I earn rewards and chests in Clash Mini?
-
A: You can earn rewards and chests by winning battles, completing quests, participating in events, ranking up in leagues, and opening the free chest every four hours.
-
Q: How can I watch replays or share my battles in Clash Mini?
-
A: You can watch replays or share your battles by tapping on the battle log icon on the main screen. You can also watch live battles of other players or top players by tapping on the TV icon on the main screen.
-
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/commands/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/commands/__init__.py
deleted file mode 100644
index 6a9e6f456198f63505db022b021fe92c19d5f236..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/commands/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from abc import ABC, abstractmethod
-from argparse import ArgumentParser
-
-
-class BasePPDiffusersCLICommand(ABC):
- @staticmethod
- @abstractmethod
- def register_subcommand(parser: ArgumentParser):
- raise NotImplementedError()
-
- @abstractmethod
- def run(self):
- raise NotImplementedError()
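To illustrate how this abstract base class is meant to be used, here is a hypothetical subcommand; the HelloCommand class, its --name flag, and the driver wiring below are invented for the sketch and are not part of ppdiffusers:

from argparse import ArgumentParser

class HelloCommand(BasePPDiffusersCLICommand):  # base class defined in the file above
    @staticmethod
    def register_subcommand(parser: ArgumentParser):
        # in practice `parser` is the sub-parsers action created by the main CLI parser
        sub = parser.add_parser("hello", help="print a greeting")
        sub.add_argument("--name", default="world")
        sub.set_defaults(func=lambda args: HelloCommand(args.name))

    def __init__(self, name):
        self.name = name

    def run(self):
        print(f"hello, {self.name}")

# minimal driver that registers the command and dispatches to it
main_parser = ArgumentParser("ppdiffusers-cli")
HelloCommand.register_subcommand(main_parser.add_subparsers())
args = main_parser.parse_args(["hello", "--name", "ppdiffusers"])
args.func(args).run()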
diff --git a/spaces/52Hz/CMFNet_deraindrop/model/CMFNet.py b/spaces/52Hz/CMFNet_deraindrop/model/CMFNet.py
deleted file mode 100644
index 290a108ac388fe34e0524201e75ae4760d91654c..0000000000000000000000000000000000000000
--- a/spaces/52Hz/CMFNet_deraindrop/model/CMFNet.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-import torch.nn as nn
-from model.block import SAB, CAB, PAB, conv, SAM, conv3x3, conv_down
-
-##########################################################################
-## U-Net
-bn = 2  # number of attention blocks per encoder/decoder level
-
-class Encoder(nn.Module):
- def __init__(self, n_feat, kernel_size, reduction, act, bias, scale_unetfeats, block):
- super(Encoder, self).__init__()
- if block == 'CAB':
- self.encoder_level1 = [CAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.encoder_level2 = [CAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.encoder_level3 = [CAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- elif block == 'PAB':
- self.encoder_level1 = [PAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.encoder_level2 = [PAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.encoder_level3 = [PAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- elif block == 'SAB':
- self.encoder_level1 = [SAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.encoder_level2 = [SAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.encoder_level3 = [SAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.encoder_level1 = nn.Sequential(*self.encoder_level1)
- self.encoder_level2 = nn.Sequential(*self.encoder_level2)
- self.encoder_level3 = nn.Sequential(*self.encoder_level3)
- self.down12 = DownSample(n_feat, scale_unetfeats)
- self.down23 = DownSample(n_feat + scale_unetfeats, scale_unetfeats)
-
- def forward(self, x):
- enc1 = self.encoder_level1(x)
- x = self.down12(enc1)
- enc2 = self.encoder_level2(x)
- x = self.down23(enc2)
- enc3 = self.encoder_level3(x)
- return [enc1, enc2, enc3]
-
-class Decoder(nn.Module):
- def __init__(self, n_feat, kernel_size, reduction, act, bias, scale_unetfeats, block):
- super(Decoder, self).__init__()
- if block == 'CAB':
- self.decoder_level1 = [CAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.decoder_level2 = [CAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.decoder_level3 = [CAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- elif block == 'PAB':
- self.decoder_level1 = [PAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.decoder_level2 = [PAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.decoder_level3 = [PAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- elif block == 'SAB':
- self.decoder_level1 = [SAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.decoder_level2 = [SAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.decoder_level3 = [SAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
- self.decoder_level1 = nn.Sequential(*self.decoder_level1)
- self.decoder_level2 = nn.Sequential(*self.decoder_level2)
- self.decoder_level3 = nn.Sequential(*self.decoder_level3)
- if block == 'CAB':
- self.skip_attn1 = CAB(n_feat, kernel_size, reduction, bias=bias, act=act)
- self.skip_attn2 = CAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act)
- if block == 'PAB':
- self.skip_attn1 = PAB(n_feat, kernel_size, reduction, bias=bias, act=act)
- self.skip_attn2 = PAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act)
- if block == 'SAB':
- self.skip_attn1 = SAB(n_feat, kernel_size, reduction, bias=bias, act=act)
- self.skip_attn2 = SAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act)
- self.up21 = SkipUpSample(n_feat, scale_unetfeats)
- self.up32 = SkipUpSample(n_feat + scale_unetfeats, scale_unetfeats)
-
- def forward(self, outs):
- enc1, enc2, enc3 = outs
- dec3 = self.decoder_level3(enc3)
- x = self.up32(dec3, self.skip_attn2(enc2))
- dec2 = self.decoder_level2(x)
- x = self.up21(dec2, self.skip_attn1(enc1))
- dec1 = self.decoder_level1(x)
- return [dec1, dec2, dec3]
-
-##########################################################################
-##---------- Resizing Modules ----------
-class DownSample(nn.Module):
- def __init__(self, in_channels, s_factor):
- super(DownSample, self).__init__()
- self.down = nn.Sequential(nn.Upsample(scale_factor=0.5, mode='bilinear', align_corners=False),
- nn.Conv2d(in_channels, in_channels + s_factor, 1, stride=1, padding=0, bias=False))
-
- def forward(self, x):
- x = self.down(x)
- return x
-
-class UpSample(nn.Module):
- def __init__(self, in_channels, s_factor):
- super(UpSample, self).__init__()
- self.up = nn.Sequential(nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
- nn.Conv2d(in_channels + s_factor, in_channels, 1, stride=1, padding=0, bias=False))
-
- def forward(self, x):
- x = self.up(x)
- return x
-
-class SkipUpSample(nn.Module):
- def __init__(self, in_channels, s_factor):
- super(SkipUpSample, self).__init__()
- self.up = nn.Sequential(nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
- nn.Conv2d(in_channels + s_factor, in_channels, 1, stride=1, padding=0, bias=False))
-
- def forward(self, x, y):
- x = self.up(x)
- x = x + y
- return x
-
-##########################################################################
-# Mixed Residual Module
-class Mix(nn.Module):
- def __init__(self, m=1):
- super(Mix, self).__init__()
-        self.w = nn.Parameter(torch.FloatTensor([m]), requires_grad=True)
- self.mix_block = nn.Sigmoid()
-
-    def forward(self, fea1, fea2, fea3):
-        factor = self.mix_block(self.w)
-        other = (1 - factor) / 2
-        output = fea1 * other.expand_as(fea1) + fea2 * factor.expand_as(fea2) + fea3 * other.expand_as(fea3)
- return output, factor
-
-##########################################################################
-# Architecture
-class CMFNet(nn.Module):
- def __init__(self, in_c=3, out_c=3, n_feat=96, scale_unetfeats=48, kernel_size=3, reduction=4, bias=False):
- super(CMFNet, self).__init__()
-
- p_act = nn.PReLU()
- self.shallow_feat1 = nn.Sequential(conv(in_c, n_feat // 2, kernel_size, bias=bias), p_act,
- conv(n_feat // 2, n_feat, kernel_size, bias=bias))
- self.shallow_feat2 = nn.Sequential(conv(in_c, n_feat // 2, kernel_size, bias=bias), p_act,
- conv(n_feat // 2, n_feat, kernel_size, bias=bias))
- self.shallow_feat3 = nn.Sequential(conv(in_c, n_feat // 2, kernel_size, bias=bias), p_act,
- conv(n_feat // 2, n_feat, kernel_size, bias=bias))
-
- self.stage1_encoder = Encoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'CAB')
- self.stage1_decoder = Decoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'CAB')
-
- self.stage2_encoder = Encoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'PAB')
- self.stage2_decoder = Decoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'PAB')
-
- self.stage3_encoder = Encoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'SAB')
- self.stage3_decoder = Decoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'SAB')
-
- self.sam1o = SAM(n_feat, kernel_size=3, bias=bias)
- self.sam2o = SAM(n_feat, kernel_size=3, bias=bias)
- self.sam3o = SAM(n_feat, kernel_size=3, bias=bias)
-
- self.mix = Mix(1)
- self.add123 = conv(out_c, out_c, kernel_size, bias=bias)
- self.concat123 = conv(n_feat*3, n_feat, kernel_size, bias=bias)
- self.tail = conv(n_feat, out_c, kernel_size, bias=bias)
-
-
- def forward(self, x):
- ## Compute Shallow Features
- shallow1 = self.shallow_feat1(x)
- shallow2 = self.shallow_feat2(x)
- shallow3 = self.shallow_feat3(x)
-
- ## Enter the UNet-CAB
- x1 = self.stage1_encoder(shallow1)
- x1_D = self.stage1_decoder(x1)
- ## Apply SAM
- x1_out, x1_img = self.sam1o(x1_D[0], x)
-
- ## Enter the UNet-PAB
- x2 = self.stage2_encoder(shallow2)
- x2_D = self.stage2_decoder(x2)
- ## Apply SAM
- x2_out, x2_img = self.sam2o(x2_D[0], x)
-
- ## Enter the UNet-SAB
- x3 = self.stage3_encoder(shallow3)
- x3_D = self.stage3_decoder(x3)
- ## Apply SAM
- x3_out, x3_img = self.sam3o(x3_D[0], x)
-
-        ## Aggregate the SAM image outputs of Stage 1, Stage 2 and Stage 3
- mix_r = self.mix(x1_img, x2_img, x3_img)
- mixed_img = self.add123(mix_r[0])
-
- ## Concat SAM features of Stage 1, Stage 2 and Stage 3
- concat_feat = self.concat123(torch.cat([x1_out, x2_out, x3_out], 1))
- x_final = self.tail(concat_feat)
-
- return x_final + mixed_img
-
-
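As a quick sanity check on the CMFNet removed above, here is a minimal forward-pass sketch. It assumes this module is importable (so `conv`, the CAB/PAB/SAB blocks, `Encoder`, `Decoder` and `SAM` defined earlier in the file are in scope) and that the input height and width are divisible by 4 so the two down/up-sampling steps round-trip cleanly; the shapes are illustrative only.

```python
import torch

# Hypothetical smoke test for the deleted CMFNet; defaults match the
# constructor above (n_feat=96, scale_unetfeats=48, reduction=4).
net = CMFNet(in_c=3, out_c=3)
net.eval()

with torch.no_grad():
    x = torch.randn(1, 3, 256, 256)  # dummy degraded RGB input, H and W divisible by 4
    y = net(x)                       # restored image estimate: x_final + mixed_img

print(y.shape)  # expected: torch.Size([1, 3, 256, 256])
```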
diff --git a/spaces/52Hz/SUNet_AWGN_denoising/README.md b/spaces/52Hz/SUNet_AWGN_denoising/README.md
deleted file mode 100644
index 5cda9959189f51495f751fd8d64acd9558f9ef0d..0000000000000000000000000000000000000000
--- a/spaces/52Hz/SUNet_AWGN_denoising/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: SUNet_AWGN_denoising
-emoji: 🌪
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/AI-ZTH-03-23/8.Datasets-NER-Biomed-ClinicalTerms/backup.app.py b/spaces/AI-ZTH-03-23/8.Datasets-NER-Biomed-ClinicalTerms/backup.app.py
deleted file mode 100644
index fd97bf2a8592b219ba1c2d4c94187d984e63d114..0000000000000000000000000000000000000000
--- a/spaces/AI-ZTH-03-23/8.Datasets-NER-Biomed-ClinicalTerms/backup.app.py
+++ /dev/null
@@ -1,268 +0,0 @@
-import gradio as gr
-import pandas as pd
-import json
-from collections import defaultdict
-from traceback import format_tb  # needed for the error re-raising below
-
-# Create tokenizer for biomed model
-from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
-tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all") # https://huggingface.co/d4data/biomedical-ner-all?text=asthma
-model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all")
-pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
-
-# Matplotlib for entity graph
-import matplotlib.pyplot as plt
-plt.switch_backend("Agg")
-
-# Load examples from JSON
-import os
-
-# Load terminology datasets:
-basedir = os.path.dirname(__file__)
-#dataLOINC = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv')
-#dataPanels = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv')
-#dataSNOMED = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
-#dataOMS = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv')
-#dataICD10 = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv')
-
-dataLOINC = pd.read_csv(f'LoincTableCore.csv')
-dataPanels = pd.read_csv(f'PanelsAndForms-ACW1208Labeled.csv')
-dataSNOMED = pd.read_csv(f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
-dataOMS = pd.read_csv(f'SnomedOMS.csv')
-dataICD10 = pd.read_csv(f'ICD10Diagnosis.csv')
-
-dir_path = os.path.dirname(os.path.realpath(__file__))
-EXAMPLES = {}
-#with open(dir_path + "\\" + "examples.json", "r") as f:
-with open("examples.json", "r") as f:
- example_json = json.load(f)
- EXAMPLES = {x["text"]: x["label"] for x in example_json}
-
-def MatchLOINC(name):
- #basedir = os.path.dirname(__file__)
- pd.set_option("display.max_rows", None)
- #data = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv')
- data = dataLOINC
- swith=data.loc[data['COMPONENT'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchLOINCPanelsandForms(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv')
- data = dataPanels
- # Assessment Name:
- #swith=data.loc[data['ParentName'].str.contains(name, case=False, na=False)]
- # Assessment Question:
- swith=data.loc[data['LoincName'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchSNOMED(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
- data = dataSNOMED
- swith=data.loc[data['term'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchOMS(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv')
- data = dataOMS
- swith=data.loc[data['SNOMED CT'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchICD10(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv')
- data = dataICD10
- swith=data.loc[data['Description'].str.contains(name, case=False, na=False)]
- return swith
-
-def SaveResult(text, outputfileName):
- #try:
- basedir = os.path.dirname(__file__)
- savePath = outputfileName
- print("Saving: " + text + " to " + savePath)
- from os.path import exists
- file_exists = exists(savePath)
- if file_exists:
- with open(outputfileName, "a") as f: #append
- #for line in text:
- f.write(str(text.replace("\n"," ")))
- f.write('\n')
- else:
- with open(outputfileName, "w") as f: #write
- #for line in text:
- f.write(str(text.replace("\n"," ")))
- f.write('\n')
- #except ValueError as err:
- # raise ValueError("File Save Error in SaveResult \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None
-
- return
-
-def loadFile(filename):
- try:
- basedir = os.path.dirname(__file__)
- loadPath = basedir + "\\" + filename
-
- print("Loading: " + loadPath)
-
- from os.path import exists
- file_exists = exists(loadPath)
-
- if file_exists:
- with open(loadPath, "r") as f: #read
- contents = f.read()
- print(contents)
- return contents
-
- except ValueError as err:
-        raise ValueError("File Load Error in loadFile \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None
-
- return ""
-
-def get_today_filename():
- from datetime import datetime
- date = datetime.now().strftime("%Y_%m_%d-%I.%M.%S.%p")
- #print(f"filename_{date}") 'filename_2023_01_12-03-29-22_AM'
- return f"MedNER_{date}.csv"
-
-def get_base(filename):
- basedir = os.path.dirname(__file__)
- loadPath = basedir + "\\" + filename
- #print("Loading: " + loadPath)
- return loadPath
-
-def group_by_entity(raw):
- outputFile = get_base(get_today_filename())
- out = defaultdict(int)
-
- for ent in raw:
- out[ent["entity_group"]] += 1
- myEntityGroup = ent["entity_group"]
- print("Found entity group type: " + myEntityGroup)
-
- if (myEntityGroup in ['Sign_symptom', 'Detailed_description', 'History', 'Activity', 'Medication' ]):
- eterm = ent["word"].replace('#','')
- minlength = 3
- if len(eterm) > minlength:
- print("Found eterm: " + eterm)
- eterm.replace("#","")
- g1=MatchLOINC(eterm)
- g2=MatchLOINCPanelsandForms(eterm)
- g3=MatchSNOMED(eterm)
- g4=MatchOMS(eterm)
- g5=MatchICD10(eterm)
- sAll = ""
-
- print("Saving to output file " + outputFile)
- # Create harmonisation output format of input to output code, name, Text
-
- try: # 18 fields, output to labeled CSV dataset for results teaching on scored regret changes to action plan with data inputs
- col = " 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19"
-
- #LOINC
- g11 = g1['LOINC_NUM'].to_string().replace(","," ").replace("\n"," ")
- g12 = g1['COMPONENT'].to_string().replace(","," ").replace("\n"," ")
- s1 = ("LOINC," + myEntityGroup + "," + eterm + ",questions of ," + g12 + "," + g11 + ", Label,Value, Label,Value, Label,Value ")
- if g11 != 'Series([] )': SaveResult(s1, outputFile)
-
- #LOINC Panels
- g21 = g2['Loinc'].to_string().replace(","," ").replace("\n"," ")
- g22 = g2['LoincName'].to_string().replace(","," ").replace("\n"," ")
- g23 = g2['ParentLoinc'].to_string().replace(","," ").replace("\n"," ")
- g24 = g2['ParentName'].to_string().replace(","," ").replace("\n"," ")
- # s2 = ("LOINC Panel," + myEntityGroup + "," + eterm + ",name of ," + g22 + "," + g21 + ", and Parent codes of ," + g23 + ", with Parent names of ," + g24 + ", Label,Value ")
- s2 = ("LOINC Panel," + myEntityGroup + "," + eterm + ",name of ," + g22 + "," + g21 + "," + g24 + ", and Parent codes of ," + g23 + "," + ", Label,Value ")
- if g21 != 'Series([] )': SaveResult(s2, outputFile)
-
- #SNOMED
- g31 = g3['conceptId'].to_string().replace(","," ").replace("\n"," ").replace("\l"," ").replace("\r"," ")
- g32 = g3['term'].to_string().replace(","," ").replace("\n"," ").replace("\l"," ").replace("\r"," ")
- s3 = ("SNOMED Concept," + myEntityGroup + "," + eterm + ",terms of ," + g32 + "," + g31 + ", Label,Value, Label,Value, Label,Value ")
- if g31 != 'Series([] )': SaveResult(s3, outputFile)
-
- #OMS
- g41 = g4['Omaha Code'].to_string().replace(","," ").replace("\n"," ")
- g42 = g4['SNOMED CT concept ID'].to_string().replace(","," ").replace("\n"," ")
- g43 = g4['SNOMED CT'].to_string().replace(","," ").replace("\n"," ")
- g44 = g4['PR'].to_string().replace(","," ").replace("\n"," ")
- g45 = g4['S&S'].to_string().replace(","," ").replace("\n"," ")
- s4 = ("OMS," + myEntityGroup + "," + eterm + ",concepts of ," + g44 + "," + g45 + ", and SNOMED codes of ," + g43 + ", and OMS problem of ," + g42 + ", and OMS Sign Symptom of ," + g41)
- if g41 != 'Series([] )': SaveResult(s4, outputFile)
-
- #ICD10
- g51 = g5['Code'].to_string().replace(","," ").replace("\n"," ")
- g52 = g5['Description'].to_string().replace(","," ").replace("\n"," ")
- s5 = ("ICD10," + myEntityGroup + "," + eterm + ",descriptions of ," + g52 + "," + g51 + ", Label,Value, Label,Value, Label,Value ")
- if g51 != 'Series([] )': SaveResult(s5, outputFile)
-
- except ValueError as err:
- raise ValueError("Error in group by entity \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None
-
- return outputFile
-
-
-def plot_to_figure(grouped):
- fig = plt.figure()
- plt.bar(x=list(grouped.keys()), height=list(grouped.values()))
- plt.margins(0.2)
- plt.subplots_adjust(bottom=0.4)
- plt.xticks(rotation=90)
- return fig
-
-
-def ner(text):
- raw = pipe(text)
- ner_content = {
- "text": text,
- "entities": [
- {
- "entity": x["entity_group"],
- "word": x["word"],
- "score": x["score"],
- "start": x["start"],
- "end": x["end"],
- }
- for x in raw
- ],
- }
-
- outputFile = group_by_entity(raw)
- label = EXAMPLES.get(text, "Unknown")
- outputDataframe = pd.read_csv(outputFile)
- return (ner_content, outputDataframe, outputFile)
-
-demo = gr.Blocks()
-with demo:
- gr.Markdown(
- """
- # 🩺⚕️NLP Clinical Ontology Biomedical NER
- """
- )
- input = gr.Textbox(label="Note text", value="")
-
- with gr.Tab("Biomedical Entity Recognition"):
- output=[
- gr.HighlightedText(label="NER", combine_adjacent=True),
- #gr.JSON(label="Entity Counts"),
- #gr.Label(label="Rating"),
- #gr.Plot(label="Bar"),
- gr.Dataframe(label="Dataframe"),
- gr.File(label="File"),
- ]
- examples=list(EXAMPLES.keys())
- gr.Examples(examples, inputs=input)
- input.change(fn=ner, inputs=input, outputs=output)
-
- with gr.Tab("Clinical Terminology Resolution"):
- with gr.Row(variant="compact"):
- btnLOINC = gr.Button("LOINC")
- btnPanels = gr.Button("Panels")
- btnSNOMED = gr.Button("SNOMED")
- btnOMS = gr.Button("OMS")
- btnICD10 = gr.Button("ICD10")
-
- examples=list(EXAMPLES.keys())
- gr.Examples(examples, inputs=input)
- input.change(fn=ner, inputs=input, outputs=output)
-#layout="vertical"
-demo.launch(debug=True)
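The five Match* helpers in the deleted app above all rely on the same pandas pattern: a case-insensitive, NaN-safe substring filter over one column of a terminology table. Below is a self-contained sketch of that pattern using a toy DataFrame; the real LOINC/SNOMED/ICD-10 CSVs are not part of this snippet and the rows are made up for illustration.

```python
import pandas as pd

# Toy stand-in for the ICD10Diagnosis.csv used by MatchICD10 above.
toy_icd10 = pd.DataFrame({
    "Code": ["J45.909", "I10", "E11.9"],
    "Description": ["Unspecified asthma", "Essential hypertension", "Type 2 diabetes"],
})

def match_description(df: pd.DataFrame, term: str) -> pd.DataFrame:
    # Same filter the Match* helpers use: case-insensitive substring match,
    # with na=False so missing descriptions never raise.
    return df.loc[df["Description"].str.contains(term, case=False, na=False)]

print(match_description(toy_icd10, "asthma"))  # -> the J45.909 row only
```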
diff --git a/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/filter.py b/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/filter.py
deleted file mode 100644
index 7ad6ea87c1f10ddd94c544037791d7a4634d5ae1..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/filter.py
+++ /dev/null
@@ -1,95 +0,0 @@
-# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
-# LICENSE is in incl_licenses directory.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import math
-
-if 'sinc' in dir(torch):
- sinc = torch.sinc
-else:
- # This code is adopted from adefossez's julius.core.sinc under the MIT License
- # https://adefossez.github.io/julius/julius/core.html
- # LICENSE is in incl_licenses directory.
- def sinc(x: torch.Tensor):
- """
- Implementation of sinc, i.e. sin(pi * x) / (pi * x)
- __Warning__: Different to julius.sinc, the input is multiplied by `pi`!
- """
- return torch.where(x == 0,
- torch.tensor(1., device=x.device, dtype=x.dtype),
- torch.sin(math.pi * x) / math.pi / x)
-
-
-# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License
-# https://adefossez.github.io/julius/julius/lowpass.html
-# LICENSE is in incl_licenses directory.
-def kaiser_sinc_filter1d(cutoff, half_width, kernel_size): # return filter [1,1,kernel_size]
- even = (kernel_size % 2 == 0)
- half_size = kernel_size // 2
-
- #For kaiser window
- delta_f = 4 * half_width
- A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95
- if A > 50.:
- beta = 0.1102 * (A - 8.7)
- elif A >= 21.:
- beta = 0.5842 * (A - 21)**0.4 + 0.07886 * (A - 21.)
- else:
- beta = 0.
- window = torch.kaiser_window(kernel_size, beta=beta, periodic=False)
-
- # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio
- if even:
- time = (torch.arange(-half_size, half_size) + 0.5)
- else:
- time = torch.arange(kernel_size) - half_size
- if cutoff == 0:
- filter_ = torch.zeros_like(time)
- else:
- filter_ = 2 * cutoff * window * sinc(2 * cutoff * time)
- # Normalize filter to have sum = 1, otherwise we will have a small leakage
- # of the constant component in the input signal.
- filter_ /= filter_.sum()
- filter = filter_.view(1, 1, kernel_size)
-
- return filter
-
-
-class LowPassFilter1d(nn.Module):
- def __init__(self,
- cutoff=0.5,
- half_width=0.6,
- stride: int = 1,
- padding: bool = True,
- padding_mode: str = 'replicate',
- kernel_size: int = 12):
- # kernel_size should be even number for stylegan3 setup,
- # in this implementation, odd number is also possible.
- super().__init__()
- if cutoff < -0.:
- raise ValueError("Minimum cutoff must be larger than zero.")
- if cutoff > 0.5:
- raise ValueError("A cutoff above 0.5 does not make sense.")
- self.kernel_size = kernel_size
- self.even = (kernel_size % 2 == 0)
- self.pad_left = kernel_size // 2 - int(self.even)
- self.pad_right = kernel_size // 2
- self.stride = stride
- self.padding = padding
- self.padding_mode = padding_mode
- filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size)
- self.register_buffer("filter", filter)
-
- #input [B, C, T]
- def forward(self, x):
- _, C, _ = x.shape
-
- if self.padding:
- x = F.pad(x, (self.pad_left, self.pad_right),
- mode=self.padding_mode)
- out = F.conv1d(x, self.filter.expand(C, -1, -1),
- stride=self.stride, groups=C)
-
- return out
\ No newline at end of file
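For the alias-free filter module deleted above, a short sanity check (assuming the module is importable as-is) is that the normalized Kaiser-windowed sinc kernel sums to roughly one and that, with the default padding and stride, the output keeps the input length:

```python
import torch

# Build the filter exactly as LowPassFilter1d does internally.
lpf = LowPassFilter1d(cutoff=0.25, half_width=0.6, kernel_size=12)
print(float(lpf.filter.sum()))   # ~1.0, since kaiser_sinc_filter1d normalizes the kernel

x = torch.randn(2, 4, 128)       # [B, C, T] as expected by forward()
y = lpf(x)
print(y.shape)                   # torch.Size([2, 4, 128]): pad_left=5, pad_right=6, kernel=12
```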
diff --git a/spaces/Aaaaaaaabdualh/poetry2023/app.py b/spaces/Aaaaaaaabdualh/poetry2023/app.py
deleted file mode 100644
index 5b6654d5a405778ddbc9ca5fa5d041aff535f3b5..0000000000000000000000000000000000000000
--- a/spaces/Aaaaaaaabdualh/poetry2023/app.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import gc
-import gradio as gr
-from transformers import pipeline, set_seed
-
-pipe = pipeline('text-generation', framework='pt', model='akhooli/ap2023', tokenizer='akhooli/ap2023')
-#gc.collect()
-samples = [['أنت'
- ,1.0, 50, 1.0, 1.0, 114],['هل غادر'
- ,1.0, 50, 1.0, 1.0, 114 ],['ألا ليت'
- ,1.0, 50, 1.0, 1.0, 114 ],['يا قدس'
- ,1.0, 50, 1.0, 1.0, 114],['عيد بأية حال'
- ,1.0, 50, 1.0, 1.0, 114],['لكل شيء إذا ما'
- ,1.0, 50, 1.0, 1.0, 114 ],['.'
- ,1.0, 50, 1.0, 1.0, 114]]
-
-notes = """
-- Enter a short prompt or select (click) one of the examples and click SEND
-- Adjust parameters (temperature, top k, top p and penalty) through the sliders (keep them close to the default values).
-- For the same seed (randomness), the same output is regenerated if other parameters are fixed
-- Clear and enter new prompt or select another example and SEND to regenerate
-- The '.' means start a new line from no prompt (your prompt need not be long)
-- Be patient: this runs on CPU (free tier)
-- Feedback (Twitter): @akhooli (https://twitter.com/akhooli/status/1611025232201977859)
-- Note/Disclaimer: may generate unaccepted or inappropriate content. Use at your own risk.
-"""
-def sayPoetry(prompt, temp=1.0, topk = 50, topp = 1.0, penalty=1.0, seed=114):
- if not int(seed) >= 0: seed=114
- set_seed(seed)
- gen = pipe(prompt, max_length=96, do_sample=True, temperature=temp, top_k=topk, top_p=topp, repetition_penalty=penalty,
- min_length = 64, no_repeat_ngram_size = 3, return_full_text=True,
- num_beams=5, num_return_sequences=1)[0]["generated_text"]
- poetry =""
- for line in gen.split('.')[:-1]:
- poetry += line #+ "\n"
- return poetry
-poetry = gr.Interface(fn=sayPoetry,
- inputs=[
- gr.Textbox(label="Enter short prompt or select from examples:"),
- gr.Slider(0.70, 1.2, step=0.01,value=1.0, label='control temperature'),
- gr.Slider(25, 100, step=1,value=50, label='control top k'),
- gr.Slider(0.80, 1.0, step=0.01,value=1.0, label='control top p'),
- gr.Slider(0.90, 1.50, step=0.01,value=1.0, label='control penalty'),
- gr.Number(value=139750, precision=0, label='Seed'),
- ],
- outputs=[gr.Textbox(label="Generated Poetry:")],
-
- allow_flagging='never',
- title='Arabic Poetry Generation Demo (updated Jan. 2023)',
- description = "A simple demo of AI generated poetry based on 1M poems fine-tuned using AraGPT2 (be patient, runs on cpu)",
- examples=samples,
- cache_examples=False,
- article = notes)
-poetry.launch() # show_error = True, debug=True
\ No newline at end of file
diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/quantization/__init__.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/quantization/__init__.py
deleted file mode 100644
index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000
--- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/quantization/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .vq import ResidualVectorQuantizer
-from .base import BaseQuantizer, DummyQuantizer, QuantizedResult
diff --git a/spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/conversation.py b/spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/conversation.py
deleted file mode 100644
index 3c356cf2f9772e8ab438628848e5ed2c1414118d..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/conversation.py
+++ /dev/null
@@ -1,107 +0,0 @@
-from __future__ import annotations
-from colorama import Fore
-
-# import logging
-from agentverse.logging import get_logger
-import bdb
-from string import Template
-from typing import TYPE_CHECKING, List
-
-from agentverse.message import Message
-
-#from . import agent_registry
-#from .base import BaseAgent
-from agentverse.agents import agent_registry
-from agentverse.agents.base import BaseAgent
-
-logger = get_logger()
-
-
-@agent_registry.register("conversation")
-class ConversationAgent(BaseAgent):
- def step(self, env_description: str = "") -> Message:
- prompt = self._fill_prompt_template(env_description)
-
- parsed_response = None
- for i in range(self.max_retry):
- try:
- response = self.llm.generate_response(prompt)
- parsed_response = self.output_parser.parse(response)
- break
- except KeyboardInterrupt:
- raise
- except Exception as e:
- logger.error(e)
-                logger.warning("Retrying...")
- continue
-
- if parsed_response is None:
- logger.error(f"{self.name} failed to generate valid response.")
-
- message = Message(
- content=""
- if parsed_response is None
- else parsed_response.return_values["output"],
- sender=self.name,
- receiver=self.get_receiver(),
- )
- return message
-
- async def astep(self, env_description: str = "") -> Message:
- """Asynchronous version of step"""
- prompt = self._fill_prompt_template(env_description)
-
- parsed_response = None
- for i in range(self.max_retry):
- try:
- # if self.name == "Code Reviewer":
- logger.debug(prompt, "Prompt", Fore.CYAN)
- response = await self.llm.agenerate_response(prompt)
-
- # logging.info(f"{self.name}'s request result:"
- # f" {response.content}")
- parsed_response = self.output_parser.parse(response)
- break
- except (KeyboardInterrupt, bdb.BdbQuit):
- raise
- except Exception as e:
- logger.error(e)
- logger.warning("Retrying...")
- continue
-
- if parsed_response is None:
- logger.error(f"{self.name} failed to generate valid response.")
-
- message = Message(
- content=""
- if parsed_response is None
- else parsed_response.return_values["output"],
- sender=self.name,
- receiver=self.get_receiver(),
- )
- return message
-
- def _fill_prompt_template(self, env_description: str = "") -> str:
- """Fill the placeholders in the prompt template
-
-        In the conversation agent, four placeholders are supported:
- - ${agent_name}: the name of the agent
- - ${env_description}: the description of the environment
- - ${role_description}: the description of the role of the agent
- - ${chat_history}: the chat history of the agent
- """
- input_arguments = {
- "agent_name": self.name,
- "env_description": env_description,
- "role_description": self.role_description,
- "chat_history": self.memory.to_string(add_sender_prefix=True),
- }
- return Template(self.prompt_template).safe_substitute(input_arguments)
-
- def add_message_to_memory(self, messages: List[Message]) -> None:
- self.memory.add_message(messages)
-
- def reset(self) -> None:
- """Reset the agent"""
- self.memory.reset()
- # TODO: reset receiver
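A standalone illustration of the placeholder filling performed by `_fill_prompt_template` above: `string.Template.safe_substitute` is what makes missing keys harmless, since any unknown `${...}` placeholder is left in place instead of raising. The template text and values here are made up for the example.

```python
from string import Template

prompt_template = (
    "You are ${agent_name}. ${role_description}\n"
    "Environment: ${env_description}\n"
    "Chat history:\n${chat_history}"
)
filled = Template(prompt_template).safe_substitute({
    "agent_name": "Reviewer",                      # illustrative values only
    "role_description": "You comment on code changes.",
    "env_description": "A simulated code-review session.",
    "chat_history": "(empty)",
})
print(filled)
```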
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/evaluator/__init__.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/evaluator/__init__.py
deleted file mode 100644
index 016e72c67bc445fe2fef58f1cca31aa6b7840831..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/evaluator/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from agentverse.registry import Registry
-
-evaluator_registry = Registry(name="EvaluatorRegistry")
-
-from .base import BaseEvaluator, NoneEvaluator
-from .basic import BasicEvaluator
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/ColorPicker.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/ColorPicker.d.ts
deleted file mode 100644
index 5e8ea508dc550d7f47e6eb46b6b23d848be5b39e..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/ColorPicker.d.ts
+++ /dev/null
@@ -1,38 +0,0 @@
-import Sizer from '../../sizer/Sizer';
-
-export default ColorPicker;
-
-declare namespace ColorPicker {
- interface IConfig extends Sizer.IConfig {
- background?: Phaser.GameObjects.GameObject,
-
- hPalette?: {
- position?: 0 | 1 | 2 | 3 | 'bottom' | 'left' | 'top' | 'right',
- size?: number, width?: number, height?: number,
- },
-
- svPalette?: {
- width?: number, height?: number,
- },
-
- valuechangeCallback: (newValue: number, oldValue: number, colorPicker: ColorPicker) => void,
- valuechangeCallbackScope?: Object,
-
- value?: number,
- }
-}
-
-declare class ColorPicker extends Sizer {
- constructor(
- scene: Phaser.Scene,
- config?: ColorPicker.IConfig
- );
-
- setValue(value: number): this;
- value: number;
-
- setColor(color: number): this;
- color: number;
-
-
-}
\ No newline at end of file
diff --git a/spaces/AlexZou/Deploy_Restoration/model/IAT_main.py b/spaces/AlexZou/Deploy_Restoration/model/IAT_main.py
deleted file mode 100644
index 0c050dbd72bfa1a28a2878c15eaf42ff8b4bb89a..0000000000000000000000000000000000000000
--- a/spaces/AlexZou/Deploy_Restoration/model/IAT_main.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import torch
-import numpy as np
-from torch import nn
-import torch.nn.functional as F
-import os
-import math
-
-from timm.models.layers import trunc_normal_
-from model.blocks import CBlock_ln, SwinTransformerBlock
-from model.global_net import Global_pred
-
-class Local_pred(nn.Module):
- def __init__(self, dim=16, number=4, type='ccc'):
- super(Local_pred, self).__init__()
- # initial convolution
- self.conv1 = nn.Conv2d(3, dim, 3, padding=1, groups=1)
- self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- # main blocks
- block = CBlock_ln(dim)
- block_t = SwinTransformerBlock(dim) # head number
- if type =='ccc':
- #blocks1, blocks2 = [block for _ in range(number)], [block for _ in range(number)]
- blocks1 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
- blocks2 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
- elif type =='ttt':
- blocks1, blocks2 = [block_t for _ in range(number)], [block_t for _ in range(number)]
- elif type =='cct':
- blocks1, blocks2 = [block, block, block_t], [block, block, block_t]
- # block1 = [CBlock_ln(16), nn.Conv2d(16,24,3,1,1)]
- self.mul_blocks = nn.Sequential(*blocks1, nn.Conv2d(dim, 3, 3, 1, 1), nn.ReLU())
- self.add_blocks = nn.Sequential(*blocks2, nn.Conv2d(dim, 3, 3, 1, 1), nn.Tanh())
-
-
- def forward(self, img):
- img1 = self.relu(self.conv1(img))
- mul = self.mul_blocks(img1)
- add = self.add_blocks(img1)
-
- return mul, add
-
-# Short Cut Connection on Final Layer
-class Local_pred_S(nn.Module):
- def __init__(self, in_dim=3, dim=16, number=4, type='ccc'):
- super(Local_pred_S, self).__init__()
- # initial convolution
- self.conv1 = nn.Conv2d(in_dim, dim, 3, padding=1, groups=1)
- self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- # main blocks
- block = CBlock_ln(dim)
- block_t = SwinTransformerBlock(dim) # head number
- if type =='ccc':
- blocks1 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
- blocks2 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
- elif type =='ttt':
- blocks1, blocks2 = [block_t for _ in range(number)], [block_t for _ in range(number)]
- elif type =='cct':
- blocks1, blocks2 = [block, block, block_t], [block, block, block_t]
- # block1 = [CBlock_ln(16), nn.Conv2d(16,24,3,1,1)]
- self.mul_blocks = nn.Sequential(*blocks1)
- self.add_blocks = nn.Sequential(*blocks2)
-
- self.mul_end = nn.Sequential(nn.Conv2d(dim, 3, 3, 1, 1), nn.ReLU())
- self.add_end = nn.Sequential(nn.Conv2d(dim, 3, 3, 1, 1), nn.Tanh())
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
- elif isinstance(m, nn.Conv2d):
- fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- fan_out //= m.groups
- m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
- if m.bias is not None:
- m.bias.data.zero_()
-
-
-
- def forward(self, img):
- img1 = self.relu(self.conv1(img))
- # short cut connection
- mul = self.mul_blocks(img1) + img1
- add = self.add_blocks(img1) + img1
- mul = self.mul_end(mul)
- add = self.add_end(add)
-
- return mul, add
-
-class IAT(nn.Module):
- def __init__(self, in_dim=3, with_global=True, type='lol'):
- super(IAT, self).__init__()
- #self.local_net = Local_pred()
-
- self.local_net = Local_pred_S(in_dim=in_dim)
-
- self.with_global = with_global
- if self.with_global:
- self.global_net = Global_pred(in_channels=in_dim, type=type)
-
- def apply_color(self, image, ccm):
- shape = image.shape
- image = image.view(-1, 3)
- image = torch.tensordot(image, ccm, dims=[[-1], [-1]])
- image = image.view(shape)
- return torch.clamp(image, 1e-8, 1.0)
-
- def forward(self, img_low):
- #print(self.with_global)
- mul, add = self.local_net(img_low)
- img_high = (img_low.mul(mul)).add(add)
-
- if not self.with_global:
- return img_high
-
- else:
- gamma, color = self.global_net(img_low)
- b = img_high.shape[0]
- img_high = img_high.permute(0, 2, 3, 1) # (B,C,H,W) -- (B,H,W,C)
- img_high = torch.stack([self.apply_color(img_high[i,:,:,:], color[i,:,:])**gamma[i,:] for i in range(b)], dim=0)
- img_high = img_high.permute(0, 3, 1, 2) # (B,H,W,C) -- (B,C,H,W)
- return img_high
-
-
-if __name__ == "__main__":
- os.environ['CUDA_VISIBLE_DEVICES']='3'
- img = torch.Tensor(1, 3, 400, 600)
- net = IAT()
- print('total parameters:', sum(param.numel() for param in net.parameters()))
-    high = net(img)  # this IAT variant returns a single enhanced image tensor
\ No newline at end of file
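A minimal, self-contained sketch of the per-pixel colour transform that `IAT.apply_color` above performs; the 3x3 matrix used here is a made-up stand-in for the matrix predicted by `Global_pred`.

```python
import torch

img = torch.rand(1, 8, 8, 3)                       # (B, H, W, C), as inside IAT.forward
ccm = torch.eye(3) * 0.9                           # hypothetical colour correction matrix

flat = img.view(-1, 3)                             # one RGB vector per pixel
out = torch.tensordot(flat, ccm, dims=[[-1], [-1]]).view(img.shape)
out = torch.clamp(out, 1e-8, 1.0)                  # same clamp range as apply_color
print(out.shape)                                   # torch.Size([1, 8, 8, 3])
```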
diff --git a/spaces/Altinas/vits-uma-genshin-honkais/attentions.py b/spaces/Altinas/vits-uma-genshin-honkais/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/Altinas/vits-uma-genshin-honkais/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
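The proximal bias used by `MultiHeadAttention._attention_bias_proximal` above is easy to inspect in isolation: positions further apart receive a larger negative bias, nudging self-attention toward nearby frames.

```python
import torch

length = 4
r = torch.arange(length, dtype=torch.float32)
diff = r.unsqueeze(0) - r.unsqueeze(1)             # pairwise position differences
bias = -torch.log1p(torch.abs(diff))               # 0 on the diagonal, more negative further away
print(bias)
# tensor([[ 0.0000, -0.6931, -1.0986, -1.3863],
#         [-0.6931,  0.0000, -0.6931, -1.0986],
#         [-1.0986, -0.6931,  0.0000, -0.6931],
#         [-1.3863, -1.0986, -0.6931,  0.0000]])
```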
diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/prod_cons.h b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/prod_cons.h
deleted file mode 100644
index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000
--- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/prod_cons.h
+++ /dev/null
@@ -1,433 +0,0 @@
-#pragma once
-
-#include <atomic>
-#include <utility>
-#include <cstring>
-#include <type_traits>
-#include <cstdint>
-
-#include "libipc/def.h"
-
-#include "libipc/platform/detail.h"
-#include "libipc/circ/elem_def.h"
-#include "libipc/utility/log.h"
-#include "libipc/utility/utility.h"
-
-namespace ipc {
-
-////////////////////////////////////////////////////////////////
-/// producer-consumer implementation
-////////////////////////////////////////////////////////////////
-
-template
-struct prod_cons_impl;
-
-template <>
-struct prod_cons_impl> {
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- };
-
- alignas(cache_line_size) std::atomic rd_; // read index
- alignas(cache_line_size) std::atomic wt_; // write index
-
- constexpr circ::u2_t cursor() const noexcept {
- return 0;
- }
-
- template
- bool push(W* /*wrapper*/, F&& f, E* elems) {
- auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
- if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
- return false; // full
- }
- std::forward(f)(&(elems[cur_wt].data_));
- wt_.fetch_add(1, std::memory_order_release);
- return true;
- }
-
- /**
- * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'.
- * So we could just disconnect all connections of receiver, and return false.
- */
- template
- bool force_push(W* wrapper, F&&, E*) {
- wrapper->elems()->disconnect_receiver(~static_cast(0u));
- return false;
- }
-
- template
- bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
- auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
- if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
- return false; // empty
- }
- std::forward(f)(&(elems[cur_rd].data_));
- std::forward(out)(true);
- rd_.fetch_add(1, std::memory_order_release);
- return true;
- }
-};
-
-template <>
-struct prod_cons_impl>
- : prod_cons_impl> {
-
- template
- bool force_push(W* wrapper, F&&, E*) {
- wrapper->elems()->disconnect_receiver(1);
- return false;
- }
-
- template class E, std::size_t DS, std::size_t AS>
- bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
- byte_t buff[DS];
- for (unsigned k = 0;;) {
- auto cur_rd = rd_.load(std::memory_order_relaxed);
- if (circ::index_of(cur_rd) ==
- circ::index_of(wt_.load(std::memory_order_acquire))) {
- return false; // empty
- }
- std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
- if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
- std::forward(f)(buff);
- std::forward(out)(true);
- return true;
- }
- ipc::yield(k);
- }
- }
-};
-
-template <>
-struct prod_cons_impl>
- : prod_cons_impl> {
-
- using flag_t = std::uint64_t;
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- std::atomic f_ct_ { 0 }; // commit flag
- };
-
- alignas(cache_line_size) std::atomic ct_; // commit index
-
- template
- bool push(W* /*wrapper*/, F&& f, E* elems) {
- circ::u2_t cur_ct, nxt_ct;
- for (unsigned k = 0;;) {
- cur_ct = ct_.load(std::memory_order_relaxed);
- if (circ::index_of(nxt_ct = cur_ct + 1) ==
- circ::index_of(rd_.load(std::memory_order_acquire))) {
- return false; // full
- }
- if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
- break;
- }
- ipc::yield(k);
- }
- auto* el = elems + circ::index_of(cur_ct);
- std::forward(f)(&(el->data_));
- // set flag & try update wt
- el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release);
- while (1) {
- auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
- if (cur_ct != wt_.load(std::memory_order_relaxed)) {
- return true;
- }
- if ((~cac_ct) != cur_ct) {
- return true;
- }
- if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
- return true;
- }
- wt_.store(nxt_ct, std::memory_order_release);
- cur_ct = nxt_ct;
- nxt_ct = cur_ct + 1;
- el = elems + circ::index_of(cur_ct);
- }
- return true;
- }
-
- template
- bool force_push(W* wrapper, F&&, E*) {
- wrapper->elems()->disconnect_receiver(1);
- return false;
- }
-
- template class E, std::size_t DS, std::size_t AS>
- bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
- byte_t buff[DS];
- for (unsigned k = 0;;) {
- auto cur_rd = rd_.load(std::memory_order_relaxed);
- auto cur_wt = wt_.load(std::memory_order_acquire);
- auto id_rd = circ::index_of(cur_rd);
- auto id_wt = circ::index_of(cur_wt);
- if (id_rd == id_wt) {
- auto* el = elems + id_wt;
- auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
- if ((~cac_ct) != cur_wt) {
- return false; // empty
- }
- if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
- wt_.store(cur_wt + 1, std::memory_order_release);
- }
- k = 0;
- }
- else {
- std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
- if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
- std::forward(f)(buff);
- std::forward(out)(true);
- return true;
- }
- ipc::yield(k);
- }
- }
- }
-};
-
-template <>
-struct prod_cons_impl> {
-
- using rc_t = std::uint64_t;
-
- enum : rc_t {
- ep_mask = 0x00000000ffffffffull,
- ep_incr = 0x0000000100000000ull
- };
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- std::atomic rc_ { 0 }; // read-counter
- };
-
- alignas(cache_line_size) std::atomic wt_; // write index
- alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
-
- circ::u2_t cursor() const noexcept {
- return wt_.load(std::memory_order_acquire);
- }
-
- template
- bool push(W* wrapper, F&& f, E* elems) {
- E* el;
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- circ::cc_t rem_cc = cur_rc & ep_mask;
- if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
- return false; // has not finished yet
- }
- // consider rem_cc to be 0 here
- if (el->rc_.compare_exchange_weak(
- cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) {
- break;
- }
- ipc::yield(k);
- }
- std::forward(f)(&(el->data_));
- wt_.fetch_add(1, std::memory_order_release);
- return true;
- }
-
- template
- bool force_push(W* wrapper, F&& f, E* elems) {
- E* el;
- epoch_ += ep_incr;
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- circ::cc_t rem_cc = cur_rc & ep_mask;
- if (cc & rem_cc) {
- ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
- cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
- if (cc == 0) return false; // no reader
- }
- // just compare & exchange
- if (el->rc_.compare_exchange_weak(
- cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) {
- break;
- }
- ipc::yield(k);
- }
- std::forward(f)(&(el->data_));
- wt_.fetch_add(1, std::memory_order_release);
- return true;
- }
-
- template
- bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
- if (cur == cursor()) return false; // acquire
- auto* el = elems + circ::index_of(cur++);
- std::forward(f)(&(el->data_));
- for (unsigned k = 0;;) {
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- if ((cur_rc & ep_mask) == 0) {
- std::forward(out)(true);
- return true;
- }
- auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id());
- if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
- std::forward(out)((nxt_rc & ep_mask) == 0);
- return true;
- }
- ipc::yield(k);
- }
- }
-};
-
-template <>
-struct prod_cons_impl> {
-
- using rc_t = std::uint64_t;
- using flag_t = std::uint64_t;
-
- enum : rc_t {
- rc_mask = 0x00000000ffffffffull,
- ep_mask = 0x00ffffffffffffffull,
- ep_incr = 0x0100000000000000ull,
- ic_mask = 0xff000000ffffffffull,
- ic_incr = 0x0000000100000000ull
- };
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- std::atomic rc_ { 0 }; // read-counter
- std::atomic f_ct_ { 0 }; // commit flag
- };
-
- alignas(cache_line_size) std::atomic ct_; // commit index
- alignas(cache_line_size) std::atomic epoch_ { 0 };
-
- circ::u2_t cursor() const noexcept {
- return ct_.load(std::memory_order_acquire);
- }
-
- constexpr static rc_t inc_rc(rc_t rc) noexcept {
- return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
- }
-
- constexpr static rc_t inc_mask(rc_t rc) noexcept {
- return inc_rc(rc) & ~rc_mask;
- }
-
- template
- bool push(W* wrapper, F&& f, E* elems) {
- E* el;
- circ::u2_t cur_ct;
- rc_t epoch = epoch_.load(std::memory_order_acquire);
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_relaxed);
- circ::cc_t rem_cc = cur_rc & rc_mask;
- if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
- return false; // has not finished yet
- }
- else if (!rem_cc) {
- auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
- if ((cur_fl != cur_ct) && cur_fl) {
- return false; // full
- }
- }
- // consider rem_cc to be 0 here
- if (el->rc_.compare_exchange_weak(
- cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) &&
- epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
- break;
- }
- ipc::yield(k);
- }
- // only one thread/process would touch here at one time
- ct_.store(cur_ct + 1, std::memory_order_release);
- std::forward(f)(&(el->data_));
- // set flag & try update wt
- el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release);
- return true;
- }
-
- template
- bool force_push(W* wrapper, F&& f, E* elems) {
- E* el;
- circ::u2_t cur_ct;
- rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- circ::cc_t rem_cc = cur_rc & rc_mask;
- if (cc & rem_cc) {
- ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
- cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
- if (cc == 0) return false; // no reader
- }
- // just compare & exchange
- if (el->rc_.compare_exchange_weak(
- cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) {
- if (epoch == epoch_.load(std::memory_order_acquire)) {
- break;
- }
- else if (push(wrapper, std::forward(f), elems)) {
- return true;
- }
- epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
- }
- ipc::yield(k);
- }
- // only one thread/process would touch here at one time
- ct_.store(cur_ct + 1, std::memory_order_release);
- std::forward(f)(&(el->data_));
- // set flag & try update wt
- el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release);
- return true;
- }
-
- template
- bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
- auto* el = elems + circ::index_of(cur);
- auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
- if (cur_fl != ~static_cast(cur)) {
- return false; // empty
- }
- ++cur;
- std::forward(f)(&(el->data_));
- for (unsigned k = 0;;) {
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- if ((cur_rc & rc_mask) == 0) {
- std::forward(out)(true);
- el->f_ct_.store(cur + N - 1, std::memory_order_release);
- return true;
- }
- auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id());
- bool last_one = false;
- if ((last_one = (nxt_rc & rc_mask) == 0)) {
- el->f_ct_.store(cur + N - 1, std::memory_order_release);
- }
- if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
- std::forward(out)(last_one);
- return true;
- }
- ipc::yield(k);
- }
- }
-};
-
-} // namespace ipc
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp
deleted file mode 100644
index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp
+++ /dev/null
@@ -1,23 +0,0 @@
-#include <torch/extension.h>
-
-
-torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1);
-
-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1) {
- CHECK_CUDA(input);
- CHECK_CUDA(kernel);
-
- return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)");
-}
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/libra_retinanet_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/libra_retinanet_r50_fpn_1x_coco.py
deleted file mode 100644
index be2742098fb8f1e46bbb16c9d3e2e20c2e3083aa..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/libra_retinanet_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,26 +0,0 @@
-_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py'
-# model settings
-model = dict(
- neck=[
- dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=1,
- add_extra_convs='on_input',
- num_outs=5),
- dict(
- type='BFP',
- in_channels=256,
- num_levels=5,
- refine_level=1,
- refine_type='non_local')
- ],
- bbox_head=dict(
- loss_bbox=dict(
- _delete_=True,
- type='BalancedL1Loss',
- alpha=0.5,
- gamma=1.5,
- beta=0.11,
- loss_weight=1.0)))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/xml_style.py b/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/xml_style.py
deleted file mode 100644
index 71069488b0f6da3b37e588228f44460ce5f00679..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/xml_style.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import os.path as osp
-import xml.etree.ElementTree as ET
-
-import mmcv
-import numpy as np
-from PIL import Image
-
-from .builder import DATASETS
-from .custom import CustomDataset
-
-
-@DATASETS.register_module()
-class XMLDataset(CustomDataset):
- """XML dataset for detection.
-
- Args:
- min_size (int | float, optional): The minimum size of bounding
- boxes in the images. If the size of a bounding box is less than
- ``min_size``, it will be added to the ignored field.
- """
-
- def __init__(self, min_size=None, **kwargs):
- assert self.CLASSES or kwargs.get(
- 'classes', None), 'CLASSES in `XMLDataset` can not be None.'
- super(XMLDataset, self).__init__(**kwargs)
- self.cat2label = {cat: i for i, cat in enumerate(self.CLASSES)}
- self.min_size = min_size
-
- def load_annotations(self, ann_file):
- """Load annotation from XML style ann_file.
-
- Args:
- ann_file (str): Path of XML file.
-
- Returns:
- list[dict]: Annotation info from XML file.
- """
-
- data_infos = []
- img_ids = mmcv.list_from_file(ann_file)
- for img_id in img_ids:
- filename = f'JPEGImages/{img_id}.jpg'
- xml_path = osp.join(self.img_prefix, 'Annotations',
- f'{img_id}.xml')
- tree = ET.parse(xml_path)
- root = tree.getroot()
- size = root.find('size')
- if size is not None:
- width = int(size.find('width').text)
- height = int(size.find('height').text)
- else:
- img_path = osp.join(self.img_prefix, 'JPEGImages',
- '{}.jpg'.format(img_id))
- img = Image.open(img_path)
- width, height = img.size
- data_infos.append(
- dict(id=img_id, filename=filename, width=width, height=height))
-
- return data_infos
-
- def _filter_imgs(self, min_size=32):
- """Filter images too small or without annotation."""
- valid_inds = []
- for i, img_info in enumerate(self.data_infos):
- if min(img_info['width'], img_info['height']) < min_size:
- continue
- if self.filter_empty_gt:
- img_id = img_info['id']
- xml_path = osp.join(self.img_prefix, 'Annotations',
- f'{img_id}.xml')
- tree = ET.parse(xml_path)
- root = tree.getroot()
- for obj in root.findall('object'):
- name = obj.find('name').text
- if name in self.CLASSES:
- valid_inds.append(i)
- break
- else:
- valid_inds.append(i)
- return valid_inds
-
- def get_ann_info(self, idx):
- """Get annotation from XML file by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Annotation info of specified index.
- """
-
- img_id = self.data_infos[idx]['id']
- xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml')
- tree = ET.parse(xml_path)
- root = tree.getroot()
- bboxes = []
- labels = []
- bboxes_ignore = []
- labels_ignore = []
- for obj in root.findall('object'):
- name = obj.find('name').text
- if name not in self.CLASSES:
- continue
- label = self.cat2label[name]
- difficult = obj.find('difficult')
- difficult = 0 if difficult is None else int(difficult.text)
- bnd_box = obj.find('bndbox')
- # TODO: check whether it is necessary to use int
- # Coordinates may be float type
- bbox = [
- int(float(bnd_box.find('xmin').text)),
- int(float(bnd_box.find('ymin').text)),
- int(float(bnd_box.find('xmax').text)),
- int(float(bnd_box.find('ymax').text))
- ]
- ignore = False
- if self.min_size:
- assert not self.test_mode
- w = bbox[2] - bbox[0]
- h = bbox[3] - bbox[1]
- if w < self.min_size or h < self.min_size:
- ignore = True
- if difficult or ignore:
- bboxes_ignore.append(bbox)
- labels_ignore.append(label)
- else:
- bboxes.append(bbox)
- labels.append(label)
- if not bboxes:
- bboxes = np.zeros((0, 4))
- labels = np.zeros((0, ))
- else:
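- # VOC-style annotations are 1-indexed; shift to 0-based pixel coordinates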
- bboxes = np.array(bboxes, ndmin=2) - 1
- labels = np.array(labels)
- if not bboxes_ignore:
- bboxes_ignore = np.zeros((0, 4))
- labels_ignore = np.zeros((0, ))
- else:
- bboxes_ignore = np.array(bboxes_ignore, ndmin=2) - 1
- labels_ignore = np.array(labels_ignore)
- ann = dict(
- bboxes=bboxes.astype(np.float32),
- labels=labels.astype(np.int64),
- bboxes_ignore=bboxes_ignore.astype(np.float32),
- labels_ignore=labels_ignore.astype(np.int64))
- return ann
-
- def get_cat_ids(self, idx):
- """Get category ids in XML file by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- list[int]: All categories in the image of specified index.
- """
-
- cat_ids = []
- img_id = self.data_infos[idx]['id']
- xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml')
- tree = ET.parse(xml_path)
- root = tree.getroot()
- for obj in root.findall('object'):
- name = obj.find('name').text
- if name not in self.CLASSES:
- continue
- label = self.cat2label[name]
- cat_ids.append(label)
-
- return cat_ids
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59.py
deleted file mode 100644
index a6a7688c7a5f6ff1209eb7c44abdd105e91a2b76..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './deeplabv3plus_r50-d8_480x480_80k_pascal_context_59.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Ariharasudhan/YoloV5/utils/aws/mime.sh b/spaces/Ariharasudhan/YoloV5/utils/aws/mime.sh
deleted file mode 100644
index c319a83cfbdf09bea634c3bd9fca737c0b1dd505..0000000000000000000000000000000000000000
--- a/spaces/Ariharasudhan/YoloV5/utils/aws/mime.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-# AWS EC2 instance startup 'MIME' script https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
-# This script will run on every instance restart, not only on first start
-# --- DO NOT COPY ABOVE COMMENTS WHEN PASTING INTO USERDATA ---
-
-Content-Type: multipart/mixed; boundary="//"
-MIME-Version: 1.0
-
---//
-Content-Type: text/cloud-config; charset="us-ascii"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-Content-Disposition: attachment; filename="cloud-config.txt"
-
-#cloud-config
-cloud_final_modules:
-- [scripts-user, always]
-
---//
-Content-Type: text/x-shellscript; charset="us-ascii"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-Content-Disposition: attachment; filename="userdata.txt"
-
-#!/bin/bash
-# --- paste contents of userdata.sh here ---
---//
diff --git a/spaces/Artificio/AdversarialArt/src/.ipynb_checkpoints/utils-checkpoint.py b/spaces/Artificio/AdversarialArt/src/.ipynb_checkpoints/utils-checkpoint.py
deleted file mode 100644
index 59e8228aff6ed528250f87234287d80b0b85c96b..0000000000000000000000000000000000000000
--- a/spaces/Artificio/AdversarialArt/src/.ipynb_checkpoints/utils-checkpoint.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from PIL import Image
-import torch
-import torch.nn as nn
-from typing import Dict, Iterable, Callable
-from torch import Tensor
-import glob
-from tqdm import tqdm
-import numpy as np
-from PIL import ImageFile
-ImageFile.LOAD_TRUNCATED_IMAGES = True
-Image.MAX_IMAGE_PIXELS = None
-
-
-# +
-class RobustModel(nn.Module):
- def __init__(self, model):
- super().__init__()
- self.model = model
- def forward(self, x, *args, **kwargs):
- return self.model(x)
-
-
-class CustomArt(torch.utils.data.Dataset):
- def __init__(self, image,transforms=None):
- self.transforms = transforms
- self.image = image
- self.mean = torch.tensor([0.4850, 0.4560, 0.4060])
- self.std = torch.tensor([0.2290, 0.2240, 0.2250])
- def __getitem__(self, idx):
- if self.transforms:
- img = self.transforms(self.image)
- return torch.as_tensor(img, dtype=torch.float)
-
- def __len__(self):
- return len(self.image)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/__init__.py
deleted file mode 100644
index 7855226e4b500142deef8fb247cd33a9a991d122..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-"""A package that contains models that represent entities.
-"""
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/token.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/token.py
deleted file mode 100644
index e3e565ad591485563a93db89609213c00ca16ca3..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/token.py
+++ /dev/null
@@ -1,213 +0,0 @@
-"""
- pygments.token
- ~~~~~~~~~~~~~~
-
- Basic token types and the standard tokens.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-
-class _TokenType(tuple):
- parent = None
-
- def split(self):
- buf = []
- node = self
- while node is not None:
- buf.append(node)
- node = node.parent
- buf.reverse()
- return buf
-
- def __init__(self, *args):
- # no need to call super.__init__
- self.subtypes = set()
-
- def __contains__(self, val):
- return self is val or (
- type(val) is self.__class__ and
- val[:len(self)] == self
- )
-
- def __getattr__(self, val):
- if not val or not val[0].isupper():
- return tuple.__getattribute__(self, val)
- new = _TokenType(self + (val,))
- setattr(self, val, new)
- self.subtypes.add(new)
- new.parent = self
- return new
-
- def __repr__(self):
- return 'Token' + (self and '.' or '') + '.'.join(self)
-
- def __copy__(self):
- # These instances are supposed to be singletons
- return self
-
- def __deepcopy__(self, memo):
- # These instances are supposed to be singletons
- return self
-
-
-Token = _TokenType()
-
-# Special token types
-Text = Token.Text
-Whitespace = Text.Whitespace
-Escape = Token.Escape
-Error = Token.Error
-# Text that doesn't belong to this lexer (e.g. HTML in PHP)
-Other = Token.Other
-
-# Common token types for source code
-Keyword = Token.Keyword
-Name = Token.Name
-Literal = Token.Literal
-String = Literal.String
-Number = Literal.Number
-Punctuation = Token.Punctuation
-Operator = Token.Operator
-Comment = Token.Comment
-
-# Generic types for non-source code
-Generic = Token.Generic
-
-# String and some others are not direct children of Token.
-# alias them:
-Token.Token = Token
-Token.String = String
-Token.Number = Number
-
-
-def is_token_subtype(ttype, other):
- """
- Return True if ``ttype`` is a subtype of ``other``.
-
- exists for backwards compatibility. use ``ttype in other`` now.
- """
- return ttype in other
-
-
-def string_to_tokentype(s):
- """
- Convert a string into a token type::
-
- >>> string_to_token('String.Double')
- Token.Literal.String.Double
- >>> string_to_token('Token.Literal.Number')
- Token.Literal.Number
- >>> string_to_token('')
- Token
-
- Tokens that are already tokens are returned unchanged:
-
- >>> string_to_token(String)
- Token.Literal.String
- """
- if isinstance(s, _TokenType):
- return s
- if not s:
- return Token
- node = Token
- for item in s.split('.'):
- node = getattr(node, item)
- return node
-
-
-# Map standard token types to short names, used in CSS class naming.
-# If you add a new item, please be sure to run this file to perform
-# a consistency check for duplicate values.
-STANDARD_TYPES = {
- Token: '',
-
- Text: '',
- Whitespace: 'w',
- Escape: 'esc',
- Error: 'err',
- Other: 'x',
-
- Keyword: 'k',
- Keyword.Constant: 'kc',
- Keyword.Declaration: 'kd',
- Keyword.Namespace: 'kn',
- Keyword.Pseudo: 'kp',
- Keyword.Reserved: 'kr',
- Keyword.Type: 'kt',
-
- Name: 'n',
- Name.Attribute: 'na',
- Name.Builtin: 'nb',
- Name.Builtin.Pseudo: 'bp',
- Name.Class: 'nc',
- Name.Constant: 'no',
- Name.Decorator: 'nd',
- Name.Entity: 'ni',
- Name.Exception: 'ne',
- Name.Function: 'nf',
- Name.Function.Magic: 'fm',
- Name.Property: 'py',
- Name.Label: 'nl',
- Name.Namespace: 'nn',
- Name.Other: 'nx',
- Name.Tag: 'nt',
- Name.Variable: 'nv',
- Name.Variable.Class: 'vc',
- Name.Variable.Global: 'vg',
- Name.Variable.Instance: 'vi',
- Name.Variable.Magic: 'vm',
-
- Literal: 'l',
- Literal.Date: 'ld',
-
- String: 's',
- String.Affix: 'sa',
- String.Backtick: 'sb',
- String.Char: 'sc',
- String.Delimiter: 'dl',
- String.Doc: 'sd',
- String.Double: 's2',
- String.Escape: 'se',
- String.Heredoc: 'sh',
- String.Interpol: 'si',
- String.Other: 'sx',
- String.Regex: 'sr',
- String.Single: 's1',
- String.Symbol: 'ss',
-
- Number: 'm',
- Number.Bin: 'mb',
- Number.Float: 'mf',
- Number.Hex: 'mh',
- Number.Integer: 'mi',
- Number.Integer.Long: 'il',
- Number.Oct: 'mo',
-
- Operator: 'o',
- Operator.Word: 'ow',
-
- Punctuation: 'p',
- Punctuation.Marker: 'pm',
-
- Comment: 'c',
- Comment.Hashbang: 'ch',
- Comment.Multiline: 'cm',
- Comment.Preproc: 'cp',
- Comment.PreprocFile: 'cpf',
- Comment.Single: 'c1',
- Comment.Special: 'cs',
-
- Generic: 'g',
- Generic.Deleted: 'gd',
- Generic.Emph: 'ge',
- Generic.Error: 'gr',
- Generic.Heading: 'gh',
- Generic.Inserted: 'gi',
- Generic.Output: 'go',
- Generic.Prompt: 'gp',
- Generic.Strong: 'gs',
- Generic.Subheading: 'gu',
- Generic.Traceback: 'gt',
-}
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/bar.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/bar.py
deleted file mode 100644
index ed86a552d1ca6baa0cfd48ec73a7a5c952d047c9..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/bar.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from typing import Optional, Union
-
-from .color import Color
-from .console import Console, ConsoleOptions, RenderResult
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .segment import Segment
-from .style import Style
-
-# There are left-aligned characters for 1/8 to 7/8, but
-# the right-aligned characters exist only for 1/8 and 4/8.
-BEGIN_BLOCK_ELEMENTS = ["█", "█", "█", "▐", "▐", "▐", "▕", "▕"]
-END_BLOCK_ELEMENTS = [" ", "▏", "▎", "▍", "▌", "▋", "▊", "▉"]
-FULL_BLOCK = "█"
-
-
-class Bar(JupyterMixin):
- """Renders a solid block bar.
-
- Args:
- size (float): Value for the end of the bar.
- begin (float): Begin point (between 0 and size, inclusive).
- end (float): End point (between 0 and size, inclusive).
- width (int, optional): Width of the bar, or ``None`` for maximum width. Defaults to None.
- color (Union[Color, str], optional): Color of the bar. Defaults to "default".
- bgcolor (Union[Color, str], optional): Color of bar background. Defaults to "default".
- """
-
- def __init__(
- self,
- size: float,
- begin: float,
- end: float,
- *,
- width: Optional[int] = None,
- color: Union[Color, str] = "default",
- bgcolor: Union[Color, str] = "default",
- ):
- self.size = size
- self.begin = max(begin, 0)
- self.end = min(end, size)
- self.width = width
- self.style = Style(color=color, bgcolor=bgcolor)
-
- def __repr__(self) -> str:
- return f"Bar({self.size}, {self.begin}, {self.end})"
-
- def __rich_console__(
- self, console: Console, options: ConsoleOptions
- ) -> RenderResult:
-
- width = min(
- self.width if self.width is not None else options.max_width,
- options.max_width,
- )
-
- if self.begin >= self.end:
- yield Segment(" " * width, self.style)
- yield Segment.line()
- return
-
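- # Endpoints are computed in eighths of a character cell so the bar edges get sub-cell precision.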
- prefix_complete_eights = int(width * 8 * self.begin / self.size)
- prefix_bar_count = prefix_complete_eights // 8
- prefix_eights_count = prefix_complete_eights % 8
-
- body_complete_eights = int(width * 8 * self.end / self.size)
- body_bar_count = body_complete_eights // 8
- body_eights_count = body_complete_eights % 8
-
- # When start and end fall into the same cell, we ideally should render
- # a symbol that's "center-aligned", but there is no good symbol in Unicode.
- # In this case, we fall back to right-aligned block symbol for simplicity.
-
- prefix = " " * prefix_bar_count
- if prefix_eights_count:
- prefix += BEGIN_BLOCK_ELEMENTS[prefix_eights_count]
-
- body = FULL_BLOCK * body_bar_count
- if body_eights_count:
- body += END_BLOCK_ELEMENTS[body_eights_count]
-
- suffix = " " * (width - len(body))
-
- yield Segment(prefix + body[len(prefix) :] + suffix, self.style)
- yield Segment.line()
-
- def __rich_measure__(
- self, console: Console, options: ConsoleOptions
- ) -> Measurement:
- return (
- Measurement(self.width, self.width)
- if self.width is not None
- else Measurement(4, options.max_width)
- )
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/_structures.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/_structures.py
deleted file mode 100644
index 90a6465f9682c886363eea5327dac64bf623a6ff..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/_structures.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-
-class InfinityType:
- def __repr__(self) -> str:
- return "Infinity"
-
- def __hash__(self) -> int:
- return hash(repr(self))
-
- def __lt__(self, other: object) -> bool:
- return False
-
- def __le__(self, other: object) -> bool:
- return False
-
- def __eq__(self, other: object) -> bool:
- return isinstance(other, self.__class__)
-
- def __gt__(self, other: object) -> bool:
- return True
-
- def __ge__(self, other: object) -> bool:
- return True
-
- def __neg__(self: object) -> "NegativeInfinityType":
- return NegativeInfinity
-
-
-Infinity = InfinityType()
-
-
-class NegativeInfinityType:
- def __repr__(self) -> str:
- return "-Infinity"
-
- def __hash__(self) -> int:
- return hash(repr(self))
-
- def __lt__(self, other: object) -> bool:
- return True
-
- def __le__(self, other: object) -> bool:
- return True
-
- def __eq__(self, other: object) -> bool:
- return isinstance(other, self.__class__)
-
- def __gt__(self, other: object) -> bool:
- return False
-
- def __ge__(self, other: object) -> bool:
- return False
-
- def __neg__(self: object) -> InfinityType:
- return Infinity
-
-
-NegativeInfinity = NegativeInfinityType()
diff --git a/spaces/Awesimo/jojogan/e4e/utils/alignment.py b/spaces/Awesimo/jojogan/e4e/utils/alignment.py
deleted file mode 100644
index a02798f0f7c9fdcc319f7884a491b9e6580cc8aa..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/utils/alignment.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import numpy as np
-import PIL
-import PIL.Image
-import scipy
-import scipy.ndimage
-import dlib
-
-
-def get_landmark(filepath, predictor):
- """get landmark with dlib
- :return: np.array shape=(68, 2)
- """
- detector = dlib.get_frontal_face_detector()
-
- img = dlib.load_rgb_image(filepath)
- dets = detector(img, 1)
-
- for k, d in enumerate(dets):
- shape = predictor(img, d)
-
- t = list(shape.parts())
- a = []
- for tt in t:
- a.append([tt.x, tt.y])
- lm = np.array(a)
- return lm
-
-
-def align_face(filepath, predictor):
- """
- :param filepath: str
- :return: PIL Image
- """
-
- lm = get_landmark(filepath, predictor)
-
- lm_chin = lm[0: 17] # left-right
- lm_eyebrow_left = lm[17: 22] # left-right
- lm_eyebrow_right = lm[22: 27] # left-right
- lm_nose = lm[27: 31] # top-down
- lm_nostrils = lm[31: 36] # top-down
- lm_eye_left = lm[36: 42] # left-clockwise
- lm_eye_right = lm[42: 48] # left-clockwise
- lm_mouth_outer = lm[48: 60] # left-clockwise
- lm_mouth_inner = lm[60: 68] # left-clockwise
-
- # Calculate auxiliary vectors.
- eye_left = np.mean(lm_eye_left, axis=0)
- eye_right = np.mean(lm_eye_right, axis=0)
- eye_avg = (eye_left + eye_right) * 0.5
- eye_to_eye = eye_right - eye_left
- mouth_left = lm_mouth_outer[0]
- mouth_right = lm_mouth_outer[6]
- mouth_avg = (mouth_left + mouth_right) * 0.5
- eye_to_mouth = mouth_avg - eye_avg
-
- # Choose oriented crop rectangle.
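- # (FFHQ-style alignment: the crop is oriented along the eye axis and scaled by the eye/mouth distances.)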
- x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
- x /= np.hypot(*x)
- x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
- y = np.flipud(x) * [-1, 1]
- c = eye_avg + eye_to_mouth * 0.1
- quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
- qsize = np.hypot(*x) * 2
-
- # read image
- img = PIL.Image.open(filepath)
-
- output_size = 256
- transform_size = 256
- enable_padding = True
-
- # Shrink.
- shrink = int(np.floor(qsize / output_size * 0.5))
- if shrink > 1:
- rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))
- img = img.resize(rsize, PIL.Image.ANTIALIAS)
- quad /= shrink
- qsize /= shrink
-
- # Crop.
- border = max(int(np.rint(qsize * 0.1)), 3)
- crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
- int(np.ceil(max(quad[:, 1]))))
- crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),
- min(crop[3] + border, img.size[1]))
- if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:
- img = img.crop(crop)
- quad -= crop[0:2]
-
- # Pad.
- pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
- int(np.ceil(max(quad[:, 1]))))
- pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),
- max(pad[3] - img.size[1] + border, 0))
- if enable_padding and max(pad) > border - 4:
- pad = np.maximum(pad, int(np.rint(qsize * 0.3)))
- img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')
- h, w, _ = img.shape
- y, x, _ = np.ogrid[:h, :w, :1]
- mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),
- 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))
- blur = qsize * 0.02
- img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)
- img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)
- img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')
- quad += pad[:2]
-
- # Transform.
- img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)
- if output_size < transform_size:
- img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS)
-
- # Return aligned image.
- return img
diff --git a/spaces/BAAI/vid2vid-zero/vid2vid_zero/p2p/null_text_w_ptp.py b/spaces/BAAI/vid2vid-zero/vid2vid_zero/p2p/null_text_w_ptp.py
deleted file mode 100644
index c69a8376760ef10cf8aabd528fb6eba40ceea747..0000000000000000000000000000000000000000
--- a/spaces/BAAI/vid2vid-zero/vid2vid_zero/p2p/null_text_w_ptp.py
+++ /dev/null
@@ -1,504 +0,0 @@
-# Copyright 2022 Google LLC
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from typing import Optional, Union, Tuple, List, Callable, Dict
-from tqdm import tqdm
-import torch
-import torch.nn.functional as nnf
-import numpy as np
-import abc
-from . import ptp_utils
-from . import seq_aligner
-import shutil
-from torch.optim.adam import Adam
-from PIL import Image
-
-
-LOW_RESOURCE = False
-NUM_DDIM_STEPS = 50
-MAX_NUM_WORDS = 77
-device = torch.device('cuda')
-from transformers import CLIPTextModel, CLIPTokenizer
-
-pretrained_model_path = "checkpoints/CompVis/stable-diffusion-v1-4/"
-
-ldm_stable = None
-tokenizer = CLIPTokenizer.from_pretrained(pretrained_model_path, subfolder="tokenizer")
-
-
-class LocalBlend:
-
- def get_mask(self, x_t, maps, alpha, use_pool):
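- # Aggregate the stored cross-attention maps for the selected words into a thresholded spatial mask.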
- k = 1
- maps = (maps * alpha).sum(-1).mean(1)
- if use_pool:
- maps = nnf.max_pool2d(maps, (k * 2 + 1, k * 2 +1), (1, 1), padding=(k, k))
- mask = nnf.interpolate(maps, size=(x_t.shape[2:]))
- mask = mask / mask.max(2, keepdims=True)[0].max(3, keepdims=True)[0]
- mask = mask.gt(self.th[1-int(use_pool)])
- mask = mask[:1] + mask
- return mask
-
- def __call__(self, x_t, attention_store):
- self.counter += 1
- if self.counter > self.start_blend:
-
- maps = attention_store["down_cross"][2:4] + attention_store["up_cross"][:3]
- maps = [item.reshape(self.alpha_layers.shape[0], -1, 1, 16, 16, MAX_NUM_WORDS) for item in maps]
- maps = torch.cat(maps, dim=1)
- mask = self.get_mask(x_t, maps, self.alpha_layers, True)
- if self.substruct_layers is not None:
- maps_sub = ~self.get_mask(x_t, maps, self.substruct_layers, False)
- mask = mask * maps_sub
- mask = mask.float()
- x_t = x_t[:1] + mask * (x_t - x_t[:1])
- return x_t
-
- def __init__(self, prompts: List[str], words: List[List[str]], substruct_words=None, start_blend=0.2, th=(.3, .3)):
- alpha_layers = torch.zeros(len(prompts), 1, 1, 1, 1, MAX_NUM_WORDS)
- for i, (prompt, words_) in enumerate(zip(prompts, words)):
- if type(words_) is str:
- words_ = [words_]
- for word in words_:
- ind = ptp_utils.get_word_inds(prompt, word, tokenizer)
- alpha_layers[i, :, :, :, :, ind] = 1
-
- if substruct_words is not None:
- substruct_layers = torch.zeros(len(prompts), 1, 1, 1, 1, MAX_NUM_WORDS)
- for i, (prompt, words_) in enumerate(zip(prompts, substruct_words)):
- if type(words_) is str:
- words_ = [words_]
- for word in words_:
- ind = ptp_utils.get_word_inds(prompt, word, tokenizer)
- substruct_layers[i, :, :, :, :, ind] = 1
- self.substruct_layers = substruct_layers.to(device)
- else:
- self.substruct_layers = None
- self.alpha_layers = alpha_layers.to(device)
- self.start_blend = int(start_blend * NUM_DDIM_STEPS)
- self.counter = 0
- self.th=th
-
-
-class EmptyControl:
-
-
- def step_callback(self, x_t):
- return x_t
-
- def between_steps(self):
- return
-
- def __call__(self, attn, is_cross: bool, place_in_unet: str):
- return attn
-
-
-class AttentionControl(abc.ABC):
-
- def step_callback(self, x_t):
- return x_t
-
- def between_steps(self):
- return
-
- @property
- def num_uncond_att_layers(self):
- return self.num_att_layers if LOW_RESOURCE else 0
-
- @abc.abstractmethod
- def forward (self, attn, is_cross: bool, place_in_unet: str):
- raise NotImplementedError
-
- def __call__(self, attn, is_cross: bool, place_in_unet: str):
- if self.cur_att_layer >= self.num_uncond_att_layers:
- if LOW_RESOURCE:
- attn = self.forward(attn, is_cross, place_in_unet)
- else:
- h = attn.shape[0]
- attn[h // 2:] = self.forward(attn[h // 2:], is_cross, place_in_unet)
- self.cur_att_layer += 1
- if self.cur_att_layer == self.num_att_layers + self.num_uncond_att_layers:
- self.cur_att_layer = 0
- self.cur_step += 1
- self.between_steps()
- return attn
-
- def reset(self):
- self.cur_step = 0
- self.cur_att_layer = 0
-
- def __init__(self):
- self.cur_step = 0
- self.num_att_layers = -1
- self.cur_att_layer = 0
-
-
-class SpatialReplace(EmptyControl):
-
- def step_callback(self, x_t):
- if self.cur_step < self.stop_inject:
- b = x_t.shape[0]
- x_t = x_t[:1].expand(b, *x_t.shape[1:])
- return x_t
-
- def __init__(self, stop_inject: float):
- super(SpatialReplace, self).__init__()
- self.stop_inject = int((1 - stop_inject) * NUM_DDIM_STEPS)
-
-
-class AttentionStore(AttentionControl):
-
- @staticmethod
- def get_empty_store():
- return {"down_cross": [], "mid_cross": [], "up_cross": [],
- "down_self": [], "mid_self": [], "up_self": []}
-
- def forward(self, attn, is_cross: bool, place_in_unet: str):
- key = f"{place_in_unet}_{'cross' if is_cross else 'self'}"
- if attn.shape[1] <= 32 ** 2: # avoid memory overhead
- self.step_store[key].append(attn)
- return attn
-
- def between_steps(self):
- if len(self.attention_store) == 0:
- self.attention_store = self.step_store
- else:
- for key in self.attention_store:
- for i in range(len(self.attention_store[key])):
- self.attention_store[key][i] += self.step_store[key][i]
- self.step_store = self.get_empty_store()
-
- def get_average_attention(self):
- average_attention = {key: [item / self.cur_step for item in self.attention_store[key]] for key in self.attention_store}
- return average_attention
-
-
- def reset(self):
- super(AttentionStore, self).reset()
- self.step_store = self.get_empty_store()
- self.attention_store = {}
-
- def __init__(self):
- super(AttentionStore, self).__init__()
- self.step_store = self.get_empty_store()
- self.attention_store = {}
-
-
-class AttentionControlEdit(AttentionStore, abc.ABC):
-
- def step_callback(self, x_t):
- if self.local_blend is not None:
- x_t = self.local_blend(x_t, self.attention_store)
- return x_t
-
- def replace_self_attention(self, attn_base, att_replace, place_in_unet):
- if att_replace.shape[2] <= 32 ** 2:
- attn_base = attn_base.unsqueeze(0).expand(att_replace.shape[0], *attn_base.shape)
- return attn_base
- else:
- return att_replace
-
- @abc.abstractmethod
- def replace_cross_attention(self, attn_base, att_replace):
- raise NotImplementedError
-
- def forward(self, attn, is_cross: bool, place_in_unet: str):
- super(AttentionControlEdit, self).forward(attn, is_cross, place_in_unet)
- if is_cross or (self.num_self_replace[0] <= self.cur_step < self.num_self_replace[1]):
- h = attn.shape[0] // (self.batch_size)
- attn = attn.reshape(self.batch_size, h, *attn.shape[1:])
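- # attn[0] holds the source prompt's attention; attn[1:] hold the edited prompts being steered toward it.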
- attn_base, attn_repalce = attn[0], attn[1:]
- if is_cross:
- alpha_words = self.cross_replace_alpha[self.cur_step]
- attn_repalce_new = self.replace_cross_attention(attn_base, attn_repalce) * alpha_words + (1 - alpha_words) * attn_repalce
- attn[1:] = attn_repalce_new
- else:
- attn[1:] = self.replace_self_attention(attn_base, attn_repalce, place_in_unet)
- attn = attn.reshape(self.batch_size * h, *attn.shape[2:])
- return attn
-
- def __init__(self, prompts, num_steps: int,
- cross_replace_steps: Union[float, Tuple[float, float], Dict[str, Tuple[float, float]]],
- self_replace_steps: Union[float, Tuple[float, float]],
- local_blend: Optional[LocalBlend]):
- super(AttentionControlEdit, self).__init__()
- self.batch_size = len(prompts)
- self.cross_replace_alpha = ptp_utils.get_time_words_attention_alpha(prompts, num_steps, cross_replace_steps, tokenizer).to(device)
- if type(self_replace_steps) is float:
- self_replace_steps = 0, self_replace_steps
- self.num_self_replace = int(num_steps * self_replace_steps[0]), int(num_steps * self_replace_steps[1])
- self.local_blend = local_blend
-
-class AttentionReplace(AttentionControlEdit):
-
- def replace_cross_attention(self, attn_base, att_replace):
- return torch.einsum('hpw,bwn->bhpn', attn_base, self.mapper)
-
- def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float,
- local_blend: Optional[LocalBlend] = None):
- super(AttentionReplace, self).__init__(prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend)
- self.mapper = seq_aligner.get_replacement_mapper(prompts, tokenizer).to(device)
-
-
-class AttentionRefine(AttentionControlEdit):
-
- def replace_cross_attention(self, attn_base, att_replace):
- attn_base_replace = attn_base[:, :, self.mapper].permute(2, 0, 1, 3)
- attn_replace = attn_base_replace * self.alphas + att_replace * (1 - self.alphas)
- # attn_replace = attn_replace / attn_replace.sum(-1, keepdims=True)
- return attn_replace
-
- def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float,
- local_blend: Optional[LocalBlend] = None):
- super(AttentionRefine, self).__init__(prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend)
- self.mapper, alphas = seq_aligner.get_refinement_mapper(prompts, tokenizer)
- self.mapper, alphas = self.mapper.to(device), alphas.to(device)
- self.alphas = alphas.reshape(alphas.shape[0], 1, 1, alphas.shape[1])
-
-
-class AttentionReweight(AttentionControlEdit):
-
- def replace_cross_attention(self, attn_base, att_replace):
- if self.prev_controller is not None:
- attn_base = self.prev_controller.replace_cross_attention(attn_base, att_replace)
- attn_replace = attn_base[None, :, :, :] * self.equalizer[:, None, None, :]
- # attn_replace = attn_replace / attn_replace.sum(-1, keepdims=True)
- return attn_replace
-
- def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float, equalizer,
- local_blend: Optional[LocalBlend] = None, controller: Optional[AttentionControlEdit] = None):
- super(AttentionReweight, self).__init__(prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend)
- self.equalizer = equalizer.to(device)
- self.prev_controller = controller
-
-
-def get_equalizer(text: str, word_select: Union[int, Tuple[int, ...]], values: Union[List[float],
- Tuple[float, ...]]):
- if type(word_select) is int or type(word_select) is str:
- word_select = (word_select,)
- equalizer = torch.ones(1, 77)
-
- for word, val in zip(word_select, values):
- inds = ptp_utils.get_word_inds(text, word, tokenizer)
- equalizer[:, inds] = val
- return equalizer
-
-def aggregate_attention(attention_store: AttentionStore, res: int, from_where: List[str], is_cross: bool, select: int):
- out = []
- attention_maps = attention_store.get_average_attention()
- num_pixels = res ** 2
- for location in from_where:
- for item in attention_maps[f"{location}_{'cross' if is_cross else 'self'}"]:
- if item.shape[1] == num_pixels:
- cross_maps = item.reshape(len(prompts), -1, res, res, item.shape[-1])[select]
- out.append(cross_maps)
- out = torch.cat(out, dim=0)
- out = out.sum(0) / out.shape[0]
- return out.cpu()
-
-
-def make_controller(prompts: List[str], is_replace_controller: bool, cross_replace_steps: Dict[str, float], self_replace_steps: float, blend_words=None, equilizer_params=None) -> AttentionControlEdit:
- if blend_words is None:
- lb = None
- else:
- lb = LocalBlend(prompts, blend_words)
- if is_replace_controller:
- controller = AttentionReplace(prompts, NUM_DDIM_STEPS, cross_replace_steps=cross_replace_steps, self_replace_steps=self_replace_steps, local_blend=lb)
- else:
- controller = AttentionRefine(prompts, NUM_DDIM_STEPS, cross_replace_steps=cross_replace_steps, self_replace_steps=self_replace_steps, local_blend=lb)
- if equilizer_params is not None:
- eq = get_equalizer(prompts[1], equilizer_params["words"], equilizer_params["values"])
- controller = AttentionReweight(prompts, NUM_DDIM_STEPS, cross_replace_steps=cross_replace_steps,
- self_replace_steps=self_replace_steps, equalizer=eq, local_blend=lb, controller=controller)
- return controller
-
-
-def show_cross_attention(attention_store: AttentionStore, res: int, from_where: List[str], select: int = 0):
- tokens = tokenizer.encode(prompts[select])
- decoder = tokenizer.decode
- attention_maps = aggregate_attention(attention_store, res, from_where, True, select)
- images = []
- for i in range(len(tokens)):
- image = attention_maps[:, :, i]
- image = 255 * image / image.max()
- image = image.unsqueeze(-1).expand(*image.shape, 3)
- image = image.numpy().astype(np.uint8)
- image = np.array(Image.fromarray(image).resize((256, 256)))
- image = ptp_utils.text_under_image(image, decoder(int(tokens[i])))
- images.append(image)
- ptp_utils.view_images(np.stack(images, axis=0))
-
-
-class NullInversion:
-
- def prev_step(self, model_output: Union[torch.FloatTensor, np.ndarray], timestep: int, sample: Union[torch.FloatTensor, np.ndarray]):
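- # One reverse DDIM step (eta = 0): x_{t-1} = sqrt(a_{t-1}) * x0_pred + sqrt(1 - a_{t-1}) * eps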
- prev_timestep = timestep - self.scheduler.config.num_train_timesteps // self.scheduler.num_inference_steps
- alpha_prod_t = self.scheduler.alphas_cumprod[timestep]
- alpha_prod_t_prev = self.scheduler.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.scheduler.final_alpha_cumprod
- beta_prod_t = 1 - alpha_prod_t
- pred_original_sample = (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5
- pred_sample_direction = (1 - alpha_prod_t_prev) ** 0.5 * model_output
- prev_sample = alpha_prod_t_prev ** 0.5 * pred_original_sample + pred_sample_direction
- return prev_sample
-
- def next_step(self, model_output: Union[torch.FloatTensor, np.ndarray], timestep: int, sample: Union[torch.FloatTensor, np.ndarray]):
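- # One forward (inversion) DDIM step from t to the next timestep; mirrors prev_step with the roles of t swapped.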
- timestep, next_timestep = min(timestep - self.scheduler.config.num_train_timesteps // self.scheduler.num_inference_steps, 999), timestep
- alpha_prod_t = self.scheduler.alphas_cumprod[timestep] if timestep >= 0 else self.scheduler.final_alpha_cumprod
- alpha_prod_t_next = self.scheduler.alphas_cumprod[next_timestep]
- beta_prod_t = 1 - alpha_prod_t
- next_original_sample = (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5
- next_sample_direction = (1 - alpha_prod_t_next) ** 0.5 * model_output
- next_sample = alpha_prod_t_next ** 0.5 * next_original_sample + next_sample_direction
- return next_sample
-
- def get_noise_pred_single(self, latents, t, context, normal_infer=True):
- noise_pred = self.model.unet(latents, t, encoder_hidden_states=context, normal_infer=normal_infer)["sample"]
- return noise_pred
-
- def get_noise_pred(self, latents, t, is_forward=True, context=None, normal_infer=True):
- latents_input = torch.cat([latents] * 2)
- if context is None:
- context = self.context
- guidance_scale = 1 if is_forward else self.guidance_scale
- noise_pred = self.model.unet(latents_input, t, encoder_hidden_states=context, normal_infer=normal_infer)["sample"]
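- # classifier-free guidance: eps = eps_uncond + guidance_scale * (eps_text - eps_uncond)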
- noise_pred_uncond, noise_prediction_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_prediction_text - noise_pred_uncond)
- if is_forward:
- latents = self.next_step(noise_pred, t, latents)
- else:
- latents = self.prev_step(noise_pred, t, latents)
- return latents
-
- @torch.no_grad()
- def latent2image(self, latents, return_type='np'):
- latents = 1 / 0.18215 * latents.detach()
- image = self.model.vae.decode(latents)['sample']
- if return_type == 'np':
- image = (image / 2 + 0.5).clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).numpy()[0]
- image = (image * 255).astype(np.uint8)
- return image
-
- @torch.no_grad()
- def image2latent(self, image):
- with torch.no_grad():
- if type(image) is Image:
- image = np.array(image)
- if type(image) is torch.Tensor and image.dim() == 4:
- latents = image
- else:
- image = torch.from_numpy(image).float() / 127.5 - 1
- image = image.permute(2, 0, 1).unsqueeze(0).to(device)
- latents = self.model.vae.encode(image)['latent_dist'].mean
- latents = latents * 0.18215
- return latents
-
- @torch.no_grad()
- def init_prompt(self, prompt: str):
- uncond_input = self.model.tokenizer(
- [""], padding="max_length", max_length=self.model.tokenizer.model_max_length,
- return_tensors="pt"
- )
- uncond_embeddings = self.model.text_encoder(uncond_input.input_ids.to(self.model.device))[0]
- text_input = self.model.tokenizer(
- [prompt],
- padding="max_length",
- max_length=self.model.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- # (1, 77, 768)
- text_embeddings = self.model.text_encoder(text_input.input_ids.to(self.model.device))[0]
- # (2, 77, 768)
- self.context = torch.cat([uncond_embeddings, text_embeddings])
- self.prompt = prompt
-
- @torch.no_grad()
- def ddim_loop(self, latent):
- uncond_embeddings, cond_embeddings = self.context.chunk(2)
- cond = cond_embeddings if self.null_inv_with_prompt else uncond_embeddings
- all_latent = [latent]
- latent = latent.clone().detach()
- for i in range(NUM_DDIM_STEPS):
- t = self.model.scheduler.timesteps[len(self.model.scheduler.timesteps) - i - 1]
- noise_pred = self.get_noise_pred_single(latent, t, cond, normal_infer=True)
- latent = self.next_step(noise_pred, t, latent)
- all_latent.append(latent)
- return all_latent
-
- @property
- def scheduler(self):
- return self.model.scheduler
-
- @torch.no_grad()
- def ddim_inversion(self, latent):
- ddim_latents = self.ddim_loop(latent)
- return ddim_latents
-
- def null_optimization(self, latents, null_inner_steps, epsilon, null_base_lr=1e-2):
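- # Null-text optimization: at each DDIM step, tune the unconditional ("null") embedding so that the
- # guided prediction reproduces the stored inversion latents (MSE between predicted and recorded latents).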
- uncond_embeddings, cond_embeddings = self.context.chunk(2)
- uncond_embeddings_list = []
- latent_cur = latents[-1]
- bar = tqdm(total=null_inner_steps * NUM_DDIM_STEPS)
- for i in range(NUM_DDIM_STEPS):
- uncond_embeddings = uncond_embeddings.clone().detach()
- uncond_embeddings.requires_grad = True
- optimizer = Adam([uncond_embeddings], lr=null_base_lr * (1. - i / 100.))
- latent_prev = latents[len(latents) - i - 2]
- t = self.model.scheduler.timesteps[i]
- with torch.no_grad():
- noise_pred_cond = self.get_noise_pred_single(latent_cur, t, cond_embeddings, normal_infer=self.null_normal_infer)
- for j in range(null_inner_steps):
- noise_pred_uncond = self.get_noise_pred_single(latent_cur, t, uncond_embeddings, normal_infer=self.null_normal_infer)
- noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_cond - noise_pred_uncond)
- latents_prev_rec = self.prev_step(noise_pred, t, latent_cur)
- loss = nnf.mse_loss(latents_prev_rec, latent_prev)
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
- assert not torch.isnan(uncond_embeddings.abs().mean())
- loss_item = loss.item()
- bar.update()
- if loss_item < epsilon + i * 2e-5:
- break
- for j in range(j + 1, null_inner_steps):
- bar.update()
- uncond_embeddings_list.append(uncond_embeddings[:1].detach())
- with torch.no_grad():
- context = torch.cat([uncond_embeddings, cond_embeddings])
- latent_cur = self.get_noise_pred(latent_cur, t, False, context, normal_infer=self.null_normal_infer)
- bar.close()
- return uncond_embeddings_list
-
- def invert(self, latents: torch.Tensor, prompt: str, null_inner_steps=10, early_stop_epsilon=1e-5, verbose=False, null_base_lr=1e-2):
- self.init_prompt(prompt)
- if verbose:
- print("DDIM inversion...")
- ddim_latents = self.ddim_inversion(latents.to(torch.float32))
- if verbose:
- print("Null-text optimization...")
- uncond_embeddings = self.null_optimization(ddim_latents, null_inner_steps, early_stop_epsilon, null_base_lr=null_base_lr)
- return ddim_latents[-1], uncond_embeddings
-
-
- def __init__(self, model, guidance_scale, null_inv_with_prompt, null_normal_infer=True):
- self.null_normal_infer = null_normal_infer
- self.null_inv_with_prompt = null_inv_with_prompt
- self.guidance_scale = guidance_scale
- self.model = model
- self.tokenizer = self.model.tokenizer
- self.model.scheduler.set_timesteps(NUM_DDIM_STEPS)
- self.prompt = None
- self.context = None
diff --git a/spaces/BairaS/Tabular_ML/README.md b/spaces/BairaS/Tabular_ML/README.md
deleted file mode 100644
index 5d70322d04d1207ba7e21559ed5e076c829adb98..0000000000000000000000000000000000000000
--- a/spaces/BairaS/Tabular_ML/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Tabular ML
-emoji: 😻
-colorFrom: red
-colorTo: red
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/Chicos De La Escuela Apk.md b/spaces/Benson/text-generation/Examples/Chicos De La Escuela Apk.md
deleted file mode 100644
index 8a42e45fcf919adf0deeb38ed43e74de4254ae51..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Chicos De La Escuela Apk.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
- Stumble Guys Mod APK: How to Unlock Everything and Have More Fun
- If you are looking for a fun and chaotic multiplayer game that you can play on your phone or PC, you might want to check out Stumble Guys. This game is inspired by the popular Fall Guys, but it is completely free and exclusive to Android and iOS devices. However, if you want to unlock everything in the game and have more fun, you might want to use Stumble Guys Mod APK. In this article, we will tell you what Stumble Guys is, why you should use Stumble Guys Mod APK, how to download and install it, and some tips and tricks for winning the game.
- What is Stumble Guys?
-
- A multiplayer party game inspired by Fall Guys
- Stumble Guys is a massively multiplayer game with up to 32 players online. The goal of the game is to advance through a series of levels by running and jumping while avoiding obstacles and hazards. The game features a physics-based engine, which gives the characters a sense of weight and momentum. It also has a colorful, crazy design with many unlockable outfits and emotes. The game is very similar to Fall Guys, but it is designed for mobile devices.
- Features and gameplay of Stumble Guys
-
-
- Benefits of using the modified version of Stumble Guys
- Stumble Guys Mod APK is a modified version of the original game that gives you some advantages and extra features. Some of the benefits of using Stumble Guys Mod APK are: - Everything unlocked: you can access all outfits, emotes, footsteps, skins, hats, glasses, masks, etc. without spending coins or gems. - Unlimited coins and gems: you can get unlimited coins and gems to buy whatever you want in the game. - No ads: you can enjoy the game without annoying ads interrupting your gameplay. - No root required: you do not need to root your device to use Stumble Guys Mod APK.
- Stumble Guys Mod APK is not an official version of the game, and it may carry some risks and drawbacks you should be aware of. Some of them are: - Compatibility issues: Stumble Guys Mod APK may not work on some devices or with some game updates. - Security issues: Stumble Guys Mod APK may contain viruses, malware or spyware that can damage your device or steal your data. - Ban issues: Stumble Guys Mod APK may violate the game's terms and conditions, and you may get banned from playing online or accessing your account. - Ethical issues: Stumble Guys Mod APK may give you an unfair advantage over other players and may ruin the fun and challenge of the game.
- How to download and install Stumble Guys Mod APK?
-
- Steps to download and install Stumble Guys Mod APK on Android devices
-
- If you want to play Stumble Guys Mod APK on your PC, you will need an Android emulator that can run Android apps on your computer. Some popular Android emulators are [BlueStacks], [NoxPlayer] and [LDPlayer]. You can follow these steps to download and install Stumble Guys Mod APK on your PC using an emulator: - Step 1: Download and install an Android emulator of your choice on your PC. - Step 2: Launch the emulator and sign in with your Google account. - Step 3: Go to a trusted website that provides Stumble Guys Mod APK, such as [APKPure] or [APKDone]. - Step 4: Download the latest version of the Stumble Guys Mod APK file to your PC. - Step 5: Drag and drop the downloaded Stumble Guys Mod APK file into the emulator window, or use the built-in browser to locate and install it. - Step 6: Wait for the installation to finish and launch the game.
- Tips and tricks for winning in Stumble Guys
-
- Set up your controls before playing
- Stumble Guys has two control options: joystick or buttons. You can choose whichever suits you best in the settings menu. You can also adjust the sensitivity and size of the controls to your preference. Make sure to test your controls before playing so that your gameplay is smooth and comfortable.
- Use your character's physics to your advantage
- Stumble Guys has a realistic physics engine that affects how your character moves and interacts with the environment. You can use this to your advantage through momentum, inertia, gravity, friction, and so on. For example, you can jump higher by running faster, slide down slopes by crouching, bounce off walls by hitting them at an angle, etc. Experiment with different moves and see how they affect your performance.
- Use the challenges to your advantage
-
- Stumble Guys is a game where anything can happen. You can be in the lead on one level and fall behind on another. You can be eliminated by a random obstacle or a sneaky player. You can be lucky or unlucky depending on the situation. The point is that it is not always about finishing first on every level. Sometimes it is better to be smart and strategic than fast and reckless. For example, you can wait for other players to clear the path for you, avoid crowded areas where chaos breaks out, use obstacles to your advantage, and so on. The goal is to survive and qualify for the next level, not to be the fastest. Remember, it is a game of fun and chaos, not a race.
- Conclusion
- Stumble Guys is a fun and chaotic multiplayer game that you can play on your phone or PC. It is inspired by Fall Guys, but it is free and exclusive to Android and iOS devices. If you want to unlock everything in the game and have more fun, you can use Stumble Guys Mod APK, which gives you unlimited coins and gems, unlocked outfits and emotes, no ads, and more. However, you should also be aware of the risks and drawbacks of using the modified version of the game, such as compatibility issues, security issues, ban issues and ethical issues. You should also follow some tips and tricks to win the game, such as setting up your controls, using your character's physics, using the challenges, and being smart and strategic. We hope this article helped you learn more about Stumble Guys Mod APK and how to use it. Have fun and enjoy the game!
- Frequently asked questions
-
- Q: Is Stumble Guys Mod APK safe to use?
-
- A: Stumble Guys Mod APK may violate the game's terms and conditions, and you may get banned from playing online or accessing your account. You should use Stumble Guys Mod APK at your own risk and respect the rights of the developers and other players.
- Q: How do I update Stumble Guys Mod APK?
- A: Stumble Guys Mod APK may not work with some game updates. You should check the website where you downloaded Stumble Guys Mod APK for new versions or updates. You should also uninstall the previous version of Stumble Guys Mod APK before installing the new one.
- Q: How do I uninstall Stumble Guys Mod APK?
- A: If you want to uninstall Stumble Guys Mod APK from your device, you can follow these steps: - Step 1: Go to your device settings and find the apps menu. - Step 2: Find Stumble Guys Mod APK in the list of apps and tap on it. - Step 3: Tap the uninstall button and confirm your action. - Step 4: Wait for the uninstallation to finish and restart your device.
- Q: How do I contact the developers of Stumble Guys?
- A: If you have any questions, comments or suggestions for the developers of Stumble Guys, you can contact them through their official social media accounts or their email address. Here are some of their contact details: - Facebook: https://www.facebook.com/StumbleGuys/ - Twitter: https://twitter.com/StumbleGuys - Instagram: https://www.instagram.com/stumbleguys/ - Email: support@kitkagames.com
-
""")
-
-
-#demo.queue()
-demo.launch(debug=True)
-
-
-
-
-### EOF ###
diff --git a/spaces/Codecooker/rvcapi/README.md b/spaces/Codecooker/rvcapi/README.md
deleted file mode 100644
index c97a705f1f774a35bda1857cad67985282cb8e6d..0000000000000000000000000000000000000000
--- a/spaces/Codecooker/rvcapi/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Rvcapi
-emoji: 🚀
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.40.1
-app_file: src/webui.py
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/rpn.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/rpn.py
deleted file mode 100644
index 8b027855cf114594c437f7d867a187b496b3bc80..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/rpn.py
+++ /dev/null
@@ -1,321 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import torch
-import torch.nn.functional as F
-from torch import nn
-import math
-from maskrcnn_benchmark.modeling import registry
-from maskrcnn_benchmark.modeling.box_coder import BoxCoder
-from maskrcnn_benchmark.modeling.rpn.retinanet.retinanet import build_retinanet
-from maskrcnn_benchmark.modeling.rpn.fcos.fcos import build_fcos
-from .loss import make_rpn_loss_evaluator
-from .anchor_generator import make_anchor_generator
-from .inference import make_rpn_postprocessor
-
-
-class RPNHeadConvRegressor(nn.Module):
- """
- A simple RPN Head for classification and bbox regression
- """
-
- def __init__(self, cfg, in_channels, num_anchors):
- """
- Arguments:
- cfg : config
- in_channels (int): number of channels of the input feature
- num_anchors (int): number of anchors to be predicted
- """
- super(RPNHeadConvRegressor, self).__init__()
- self.cls_logits = nn.Conv2d(in_channels, num_anchors, kernel_size=1, stride=1)
- self.bbox_pred = nn.Conv2d(
- in_channels, num_anchors * 4, kernel_size=1, stride=1
- )
-
- for l in [self.cls_logits, self.bbox_pred]:
- torch.nn.init.normal_(l.weight, std=0.01)
- torch.nn.init.constant_(l.bias, 0)
-
- def forward(self, x):
- assert isinstance(x, (list, tuple))
- logits = [self.cls_logits(y) for y in x]
- bbox_reg = [self.bbox_pred(y) for y in x]
-
- return logits, bbox_reg
-
-
-class RPNHeadFeatureSingleConv(nn.Module):
- """
- Adds a simple RPN Head with one conv to extract the feature
- """
-
- def __init__(self, cfg, in_channels):
- """
- Arguments:
- cfg : config
- in_channels (int): number of channels of the input feature
- """
- super(RPNHeadFeatureSingleConv, self).__init__()
- self.conv = nn.Conv2d(
- in_channels, in_channels, kernel_size=3, stride=1, padding=1
- )
-
- for l in [self.conv]:
- torch.nn.init.normal_(l.weight, std=0.01)
- torch.nn.init.constant_(l.bias, 0)
-
- self.out_channels = in_channels
-
- def forward(self, x):
- assert isinstance(x, (list, tuple))
- x = [F.relu(self.conv(z)) for z in x]
-
- return x
-
-
-@registry.RPN_HEADS.register("SingleConvRPNHead_1")
-class RPNHead(nn.Module):
- """
- Adds a simple RPN Head with classification and regression heads
- """
-
- def __init__(self, cfg, in_channels, num_anchors):
- """
- Arguments:
- cfg : config
- in_channels (int): number of channels of the input feature
- num_anchors (int): number of anchors to be predicted
- """
- super(RPNHead, self).__init__()
- self.conv = nn.Conv2d(
- in_channels, in_channels, kernel_size=3, stride=1, padding=1
- )
- self.cls_logits = nn.Conv2d(in_channels, num_anchors, kernel_size=1, stride=1)
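- # note: regresses 18 values per anchor instead of the usual 4 (model-specific box parameterization)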
- self.bbox_pred_new = nn.Conv2d(
- in_channels, num_anchors * 18, kernel_size=1, stride=1
- )
-
- for l in [self.conv, self.cls_logits, self.bbox_pred_new]:
- torch.nn.init.normal_(l.weight, std=0.01)
- torch.nn.init.constant_(l.bias, 0)
-
- def forward(self, x):
-
- logits = []
- bbox_reg = []
- for feature in x:
- t = F.relu(self.conv(feature))
- logits.append(self.cls_logits(t))
- bbox_reg.append(self.bbox_pred_new(t))
- return logits, bbox_reg
-
-
-class RPNModule(torch.nn.Module):
- """
-    Module for RPN computation. Takes feature maps from the backbone and outputs
-    RPN proposals and losses. Works for both FPN and non-FPN.
- """
-
- def __init__(self, cfg, in_channels):
- super(RPNModule, self).__init__()
-
- self.cfg = cfg.clone()
-
- anchor_generator = make_anchor_generator(cfg)
-
- rpn_head = registry.RPN_HEADS[cfg.MODEL.RPN.RPN_HEAD]
- head = rpn_head(
- cfg, in_channels, anchor_generator.num_anchors_per_location()[0]
- )
-
- rpn_box_coder = BoxCoder(weights=(1.0, 1.0, 1.0, 1.0))
-
- box_selector_train = make_rpn_postprocessor(cfg, rpn_box_coder, is_train=True)
- box_selector_test = make_rpn_postprocessor(cfg, rpn_box_coder, is_train=False)
-
- loss_evaluator = make_rpn_loss_evaluator(cfg, rpn_box_coder)
-
- self.anchor_generator = anchor_generator
- self.head = head
- self.box_selector_train = box_selector_train
- self.box_selector_test = box_selector_test
- self.loss_evaluator = loss_evaluator
-
- def forward(self, images, features, targets=None, prefix=''):
- """
- Arguments:
- images (ImageList): images for which we want to compute the predictions
- features (list[Tensor]): features computed from the images that are
- used for computing the predictions. Each tensor in the list
- correspond to different feature levels
-            targets (list[BoxList]): ground-truth boxes present in the image (optional)
-
- Returns:
- boxes (list[BoxList]): the predicted boxes from the RPN, one BoxList per
- image.
- losses (dict[Tensor]): the losses for the model during training. During
- testing, it is an empty dict.
- """
- objectness, rpn_box_regression = self.head(features) # len = 5
- anchors = self.anchor_generator(images, features)
-
- if self.training:
- return self._forward_train(anchors, objectness,
- rpn_box_regression, targets, prefix)
- else:
- return self._forward_test(anchors, objectness, rpn_box_regression)
-
- def _forward_train(self, anchors, objectness, rpn_box_regression, # [image,number,[n,4]]
- targets, prefix):
- if self.cfg.MODEL.RPN_ONLY:
- # When training an RPN-only model, the loss is determined by the
- # predicted objectness and rpn_box_regression values and there is
- # no need to transform the anchors into predicted boxes; this is an
- # optimization that avoids the unnecessary transformation.
- boxes = anchors
- else:
- # print('\n---end-to-end model---\n')
- # For end-to-end models, anchors must be transformed into boxes and
- # sampled into a training batch.
- with torch.no_grad():
- boxes = self.box_selector_train(
- anchors, objectness, rpn_box_regression, targets
- )
- anchors_new = list(zip(*anchors))
- regress_new = regress_to_box(anchors_new, rpn_box_regression)
-
- loss_objectness, loss_rpn_box_reg = self.loss_evaluator(
- anchors, objectness, regress_new, targets
- )
- losses = {
- prefix + "loss_objectness": loss_objectness,
- prefix + "loss_rpn_box_reg": loss_rpn_box_reg,
- }
- return boxes, losses
-
- def _forward_test(self, anchors, objectness, rpn_box_regression):
- boxes = self.box_selector_test(anchors, objectness, rpn_box_regression)
- if self.cfg.MODEL.RPN_ONLY:
-            # For end-to-end models, the RPN proposals are an intermediate state
-            # and we don't bother sorting them in decreasing score order. For
-            # RPN-only models, the proposals are the final output and we return
-            # them in high-to-low confidence order.
- inds = [
- box.get_field("objectness").sort(descending=True)[1] for box in boxes
- ]
- boxes = [box[ind] for box, ind in zip(boxes, inds)]
- return boxes, {}
-
-
-def build_rpn(cfg, in_channels):
- """
-    Builds the RPN. Dispatches to FCOS or RetinaNet when enabled in the config; otherwise returns the default anchor-based RPNModule.
- """
- if cfg.MODEL.FCOS_ON:
- return build_fcos(cfg, in_channels)
- if cfg.MODEL.RETINANET_ON:
- return build_retinanet(cfg, in_channels)
-
- return RPNModule(cfg, in_channels)
-
-
-def regress_to_box(anchor_define, regress_pre):
-
- boxes_total = []
- num_f = 0
- for a, b in zip(anchor_define, regress_pre):
- boxes_total.append(forward_feature_map(a, b))
- num_f += 1
- return boxes_total
-
-def forward_feature_map(anchors_define, boxes_regression):
- N, A, H, W = boxes_regression.shape
-
-    boxes_regression = flatten(boxes_regression, N, A, 18, H, W)  # -> (N, H*W*A, 18)
-
- # image_shapes = [box.size for box in anchors_define]
- concat_anchors = torch.cat([a.bbox for a in anchors_define], dim=0)
- concat_anchors = concat_anchors.reshape(N, -1, 4)
- proposals = decode_iou(boxes_regression.view(-1, 18), concat_anchors.view(-1, 4))
- box_temp_post = proposals.view(N, -1, 4)
-
- return box_temp_post
-
-def flatten(layer, N, A, C, H, W):
- layer = layer.view(N, -1, C, H, W)
- layer = layer.permute(0, 3, 4, 1, 2) #N H W A C
- layer = layer.reshape(N, -1, C) # N H*W*A C
- return layer
-
-def decode_iou(rel_codes, boxes, num_p=8):
- """
- From a set of original boxes and encoded relative box offsets,
- get the decoded boxes.
-
- Arguments:
-        rel_codes (Tensor): encoded boundary-point offsets, shape [N*H*W*A, 18] for one feature level
-        boxes (Tensor): reference anchor boxes, shape [N*H*W*A, 4] as (xmin, ymin, xmax, ymax)
- """
- boxes = boxes.to(rel_codes.dtype)
-
- TO_REMOVE = 1 # TODO remove
- widths = boxes[:, 2] - boxes[:, 0] + TO_REMOVE
- heights = boxes[:, 3] - boxes[:, 1] + TO_REMOVE
- dx = rel_codes[:, 16]
- dy = rel_codes[:, 17]
-
- ctr_x = boxes[:, 0] + 0.5 * widths
- ctr_y = boxes[:, 1] + 0.5 * heights
-
- ctr_x_new = dx * widths * 0.5 + ctr_x
- ctr_y_new = dy * heights * 0.5 + ctr_y
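-    # layout of the 8 boundary points (1-8, clockwise from the top-left corner; '#' marks the box centre):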
- # 123
- # 8#4
- # 765
- if num_p == 8: # 8 boundary points
- x_1 = boxes[:, 0] + widths * rel_codes[:, 0]
- y_1 = boxes[:, 1] + heights * rel_codes[:, 1]
- x_2 = ctr_x + widths * rel_codes[:, 2]
- y_2 = boxes[:, 1] + heights * rel_codes[:, 3]
- x_3 = boxes[:, 2] + widths * rel_codes[:, 4]
- y_3 = boxes[:, 1] + heights * rel_codes[:, 5]
- x_4 = boxes[:, 2] + widths * rel_codes[:, 6]
- y_4 = ctr_y + heights * rel_codes[:, 7]
- x_5 = boxes[:, 2] + widths * rel_codes[:, 8]
- y_5 = boxes[:, 3] + heights * rel_codes[:, 9]
- x_6 = ctr_x + widths * rel_codes[:, 10]
- y_6 = boxes[:, 3] + heights * rel_codes[:, 11]
- x_7 = boxes[:, 0] + widths * rel_codes[:, 12]
- y_7 = boxes[:, 3] + heights * rel_codes[:, 13]
- x_8 = boxes[:, 0] + widths * rel_codes[:, 14]
- y_8 = ctr_y + heights * rel_codes[:, 15]
- x_total = torch.stack([x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8], 0) # [8, N]
- y_total = torch.stack([y_1, y_2, y_3, y_4, y_5, y_6, y_7, y_8], 0)
-
- x_min = torch.min(x_total, 0, keepdim=True) # [1, N]
- x_max = torch.max(x_total, 0, keepdim=True) # [1, N]
- y_min = torch.min(y_total, 0, keepdim=True) # [1, N]
- y_max = torch.max(y_total, 0, keepdim=True) # [1, N]
-
- N1, N2 = x_min[0].shape
- x_min = x_min[0].view([N2])
- x_max = x_max[0].view([N2])
- y_min = y_min[0].view([N2])
- y_max = y_max[0].view([N2])
-
- x_min = torch.stack([x_min, ctr_x_new], 0)
- x_max = torch.stack([x_max, ctr_x_new], 0)
- y_min = torch.stack([y_min, ctr_y_new], 0)
- y_max = torch.stack([y_max, ctr_y_new], 0)
-
- x_min = torch.min(x_min, 0, keepdim=True) # [1, N]
- x_max = torch.max(x_max, 0, keepdim=True) # [1, N]
- y_min = torch.min(y_min, 0, keepdim=True) # [1, N]
- y_max = torch.max(y_max, 0, keepdim=True) # [1, N]
-
- pred_boxes = torch.zeros_like(boxes)
-
- pred_boxes[:, 0] = x_min[0][0, :]
- pred_boxes[:, 1] = y_min[0][0, :]
- pred_boxes[:, 2] = x_max[0][0, :]
- pred_boxes[:, 3] = y_max[0][0, :]
-
- return pred_boxes
\ No newline at end of file
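The deleted decode_iou above collapses each anchor's 18 regression channels (8 boundary points plus a shifted centre) into a single axis-aligned proposal. As a rough, self-contained sketch of that idea — the helper name decode_single_anchor is ours, not part of maskrcnn_benchmark — the per-anchor computation looks roughly like this:

import torch

def decode_single_anchor(rel_code: torch.Tensor, anchor: torch.Tensor) -> torch.Tensor:
    """rel_code: (18,) offsets; anchor: (4,) box as (xmin, ymin, xmax, ymax)."""
    w = anchor[2] - anchor[0] + 1  # same +1 convention as TO_REMOVE above
    h = anchor[3] - anchor[1] + 1
    ctr_x = anchor[0] + 0.5 * w
    ctr_y = anchor[1] + 0.5 * h
    # Shifted centre, taken from the last two channels (indices 16 and 17).
    ctr_x_new = rel_code[16] * w * 0.5 + ctr_x
    ctr_y_new = rel_code[17] * h * 0.5 + ctr_y
    # Reference coordinates of the 8 boundary points, clockwise from the
    # top-left corner (corners and edge midpoints of the anchor).
    ref_x = torch.stack([anchor[0], ctr_x, anchor[2], anchor[2],
                         anchor[2], ctr_x, anchor[0], anchor[0]])
    ref_y = torch.stack([anchor[1], anchor[1], anchor[1], ctr_y,
                         anchor[3], anchor[3], anchor[3], ctr_y])
    xs = ref_x + w * rel_code[0:16:2]   # even channels are x offsets
    ys = ref_y + h * rel_code[1:16:2]   # odd channels are y offsets
    # Proposal = axis-aligned box enclosing the 8 points and the shifted centre.
    xmin = torch.min(xs.min(), ctr_x_new)
    ymin = torch.min(ys.min(), ctr_y_new)
    xmax = torch.max(xs.max(), ctr_x_new)
    ymax = torch.max(ys.max(), ctr_y_new)
    return torch.stack([xmin, ymin, xmax, ymax])

# Sanity check: zero offsets recover the anchor box itself.
print(decode_single_anchor(torch.zeros(18), torch.tensor([10.0, 10.0, 50.0, 30.0])))

With all offsets at zero the enclosing box reduces to the anchor itself, which is a quick sanity check on the indexing.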
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_b_s_l_n.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_b_s_l_n.py
deleted file mode 100644
index 8e266fa54d0f0fd05bfde372627e1fb948d6f0fd..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_b_s_l_n.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6bsln.html
-class table__b_s_l_n(BaseTTXConverter):
- pass
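The table__b_s_l_n converter deleted above is the hook fontTools uses to handle the AAT 'bsln' (baseline) table. A minimal usage sketch, assuming a hypothetical font file that actually contains a 'bsln' table:

from fontTools.ttLib import TTFont

font = TTFont("SomeAATFont.ttf")  # hypothetical path
if "bsln" in font:
    table = font["bsln"]          # an instance of table__b_s_l_n
    print(type(table).__name__)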
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otData.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otData.py
deleted file mode 100644
index 56716824ecd7950dda249a159b5b292dbd2a86f7..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otData.py
+++ /dev/null
@@ -1,6236 +0,0 @@
-otData = [
- #
- # common
- #
- ("LookupOrder", []),
- (
- "ScriptList",
- [
- ("uint16", "ScriptCount", None, None, "Number of ScriptRecords"),
- (
- "struct",
- "ScriptRecord",
- "ScriptCount",
- 0,
- "Array of ScriptRecords -listed alphabetically by ScriptTag",
- ),
- ],
- ),
- (
- "ScriptRecord",
- [
- ("Tag", "ScriptTag", None, None, "4-byte ScriptTag identifier"),
- (
- "Offset",
- "Script",
- None,
- None,
- "Offset to Script table-from beginning of ScriptList",
- ),
- ],
- ),
- (
- "Script",
- [
- (
- "Offset",
- "DefaultLangSys",
- None,
- None,
- "Offset to DefaultLangSys table-from beginning of Script table-may be NULL",
- ),
- (
- "uint16",
- "LangSysCount",
- None,
- None,
- "Number of LangSysRecords for this script-excluding the DefaultLangSys",
- ),
- (
- "struct",
- "LangSysRecord",
- "LangSysCount",
- 0,
- "Array of LangSysRecords-listed alphabetically by LangSysTag",
- ),
- ],
- ),
- (
- "LangSysRecord",
- [
- ("Tag", "LangSysTag", None, None, "4-byte LangSysTag identifier"),
- (
- "Offset",
- "LangSys",
- None,
- None,
- "Offset to LangSys table-from beginning of Script table",
- ),
- ],
- ),
- (
- "LangSys",
- [
- (
- "Offset",
- "LookupOrder",
- None,
- None,
- "= NULL (reserved for an offset to a reordering table)",
- ),
- (
- "uint16",
- "ReqFeatureIndex",
- None,
- None,
- "Index of a feature required for this language system- if no required features = 0xFFFF",
- ),
- (
- "uint16",
- "FeatureCount",
- None,
- None,
- "Number of FeatureIndex values for this language system-excludes the required feature",
- ),
- (
- "uint16",
- "FeatureIndex",
- "FeatureCount",
- 0,
- "Array of indices into the FeatureList-in arbitrary order",
- ),
- ],
- ),
- (
- "FeatureList",
- [
- (
- "uint16",
- "FeatureCount",
- None,
- None,
- "Number of FeatureRecords in this table",
- ),
- (
- "struct",
- "FeatureRecord",
- "FeatureCount",
- 0,
- "Array of FeatureRecords-zero-based (first feature has FeatureIndex = 0)-listed alphabetically by FeatureTag",
- ),
- ],
- ),
- (
- "FeatureRecord",
- [
- ("Tag", "FeatureTag", None, None, "4-byte feature identification tag"),
- (
- "Offset",
- "Feature",
- None,
- None,
- "Offset to Feature table-from beginning of FeatureList",
- ),
- ],
- ),
- (
- "Feature",
- [
- (
- "Offset",
- "FeatureParams",
- None,
- None,
- "= NULL (reserved for offset to FeatureParams)",
- ),
- (
- "uint16",
- "LookupCount",
- None,
- None,
- "Number of LookupList indices for this feature",
- ),
- (
- "uint16",
- "LookupListIndex",
- "LookupCount",
- 0,
- "Array of LookupList indices for this feature -zero-based (first lookup is LookupListIndex = 0)",
- ),
- ],
- ),
- ("FeatureParams", []),
- (
- "FeatureParamsSize",
- [
- (
- "DeciPoints",
- "DesignSize",
- None,
- None,
- "The design size in 720/inch units (decipoints).",
- ),
- (
- "uint16",
- "SubfamilyID",
- None,
- None,
- "Serves as an identifier that associates fonts in a subfamily.",
- ),
- ("NameID", "SubfamilyNameID", None, None, "Subfamily NameID."),
- (
- "DeciPoints",
- "RangeStart",
- None,
- None,
- "Small end of recommended usage range (exclusive) in 720/inch units.",
- ),
- (
- "DeciPoints",
- "RangeEnd",
- None,
- None,
- "Large end of recommended usage range (inclusive) in 720/inch units.",
- ),
- ],
- ),
- (
- "FeatureParamsStylisticSet",
- [
- ("uint16", "Version", None, None, "Set to 0."),
- ("NameID", "UINameID", None, None, "UI NameID."),
- ],
- ),
- (
- "FeatureParamsCharacterVariants",
- [
- ("uint16", "Format", None, None, "Set to 0."),
- ("NameID", "FeatUILabelNameID", None, None, "Feature UI label NameID."),
- (
- "NameID",
- "FeatUITooltipTextNameID",
- None,
- None,
- "Feature UI tooltip text NameID.",
- ),
- ("NameID", "SampleTextNameID", None, None, "Sample text NameID."),
- ("uint16", "NumNamedParameters", None, None, "Number of named parameters."),
- (
- "NameID",
- "FirstParamUILabelNameID",
- None,
- None,
- "First NameID of UI feature parameters.",
- ),
- (
- "uint16",
- "CharCount",
- None,
- None,
- "Count of characters this feature provides glyph variants for.",
- ),
- (
- "uint24",
- "Character",
- "CharCount",
- 0,
- "Unicode characters for which this feature provides glyph variants.",
- ),
- ],
- ),
- (
- "LookupList",
- [
- ("uint16", "LookupCount", None, None, "Number of lookups in this table"),
- (
- "Offset",
- "Lookup",
- "LookupCount",
- 0,
- "Array of offsets to Lookup tables-from beginning of LookupList -zero based (first lookup is Lookup index = 0)",
- ),
- ],
- ),
- (
- "Lookup",
- [
- (
- "uint16",
- "LookupType",
- None,
- None,
- "Different enumerations for GSUB and GPOS",
- ),
- ("LookupFlag", "LookupFlag", None, None, "Lookup qualifiers"),
- (
- "uint16",
- "SubTableCount",
- None,
- None,
- "Number of SubTables for this lookup",
- ),
- (
- "Offset",
- "SubTable",
- "SubTableCount",
- 0,
- "Array of offsets to SubTables-from beginning of Lookup table",
- ),
- (
- "uint16",
- "MarkFilteringSet",
- None,
- "LookupFlag & 0x0010",
- "If set, indicates that the lookup table structure is followed by a MarkFilteringSet field. The layout engine skips over all mark glyphs not in the mark filtering set indicated.",
- ),
- ],
- ),
- (
- "CoverageFormat1",
- [
- ("uint16", "CoverageFormat", None, None, "Format identifier-format = 1"),
- ("uint16", "GlyphCount", None, None, "Number of glyphs in the GlyphArray"),
- (
- "GlyphID",
- "GlyphArray",
- "GlyphCount",
- 0,
- "Array of GlyphIDs-in numerical order",
- ),
- ],
- ),
- (
- "CoverageFormat2",
- [
- ("uint16", "CoverageFormat", None, None, "Format identifier-format = 2"),
- ("uint16", "RangeCount", None, None, "Number of RangeRecords"),
- (
- "struct",
- "RangeRecord",
- "RangeCount",
- 0,
- "Array of glyph ranges-ordered by Start GlyphID",
- ),
- ],
- ),
- (
- "RangeRecord",
- [
- ("GlyphID", "Start", None, None, "First GlyphID in the range"),
- ("GlyphID", "End", None, None, "Last GlyphID in the range"),
- (
- "uint16",
- "StartCoverageIndex",
- None,
- None,
- "Coverage Index of first GlyphID in range",
- ),
- ],
- ),
- (
- "ClassDefFormat1",
- [
- ("uint16", "ClassFormat", None, None, "Format identifier-format = 1"),
- (
- "GlyphID",
- "StartGlyph",
- None,
- None,
- "First GlyphID of the ClassValueArray",
- ),
- ("uint16", "GlyphCount", None, None, "Size of the ClassValueArray"),
- (
- "uint16",
- "ClassValueArray",
- "GlyphCount",
- 0,
- "Array of Class Values-one per GlyphID",
- ),
- ],
- ),
- (
- "ClassDefFormat2",
- [
- ("uint16", "ClassFormat", None, None, "Format identifier-format = 2"),
- ("uint16", "ClassRangeCount", None, None, "Number of ClassRangeRecords"),
- (
- "struct",
- "ClassRangeRecord",
- "ClassRangeCount",
- 0,
- "Array of ClassRangeRecords-ordered by Start GlyphID",
- ),
- ],
- ),
- (
- "ClassRangeRecord",
- [
- ("GlyphID", "Start", None, None, "First GlyphID in the range"),
- ("GlyphID", "End", None, None, "Last GlyphID in the range"),
- ("uint16", "Class", None, None, "Applied to all glyphs in the range"),
- ],
- ),
- (
- "Device",
- [
- ("uint16", "StartSize", None, None, "Smallest size to correct-in ppem"),
- ("uint16", "EndSize", None, None, "Largest size to correct-in ppem"),
- (
- "uint16",
- "DeltaFormat",
- None,
- None,
- "Format of DeltaValue array data: 1, 2, or 3",
- ),
- (
- "DeltaValue",
- "DeltaValue",
- "",
- "DeltaFormat in (1,2,3)",
- "Array of compressed data",
- ),
- ],
- ),
- #
- # gpos
- #
- (
- "GPOS",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the GPOS table- 0x00010000 or 0x00010001",
- ),
- (
- "Offset",
- "ScriptList",
- None,
- None,
- "Offset to ScriptList table-from beginning of GPOS table",
- ),
- (
- "Offset",
- "FeatureList",
- None,
- None,
- "Offset to FeatureList table-from beginning of GPOS table",
- ),
- (
- "Offset",
- "LookupList",
- None,
- None,
- "Offset to LookupList table-from beginning of GPOS table",
- ),
- (
- "LOffset",
- "FeatureVariations",
- None,
- "Version >= 0x00010001",
- "Offset to FeatureVariations table-from beginning of GPOS table",
- ),
- ],
- ),
- (
- "SinglePosFormat1",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of SinglePos subtable",
- ),
- (
- "uint16",
- "ValueFormat",
- None,
- None,
- "Defines the types of data in the ValueRecord",
- ),
- (
- "ValueRecord",
- "Value",
- None,
- None,
- "Defines positioning value(s)-applied to all glyphs in the Coverage table",
- ),
- ],
- ),
- (
- "SinglePosFormat2",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 2"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of SinglePos subtable",
- ),
- (
- "uint16",
- "ValueFormat",
- None,
- None,
- "Defines the types of data in the ValueRecord",
- ),
- ("uint16", "ValueCount", None, None, "Number of ValueRecords"),
- (
- "ValueRecord",
- "Value",
- "ValueCount",
- 0,
- "Array of ValueRecords-positioning values applied to glyphs",
- ),
- ],
- ),
- (
- "PairPosFormat1",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of PairPos subtable-only the first glyph in each pair",
- ),
- (
- "uint16",
- "ValueFormat1",
- None,
- None,
- "Defines the types of data in ValueRecord1-for the first glyph in the pair -may be zero (0)",
- ),
- (
- "uint16",
- "ValueFormat2",
- None,
- None,
- "Defines the types of data in ValueRecord2-for the second glyph in the pair -may be zero (0)",
- ),
- ("uint16", "PairSetCount", None, None, "Number of PairSet tables"),
- (
- "Offset",
- "PairSet",
- "PairSetCount",
- 0,
- "Array of offsets to PairSet tables-from beginning of PairPos subtable-ordered by Coverage Index",
- ),
- ],
- ),
- (
- "PairSet",
- [
- ("uint16", "PairValueCount", None, None, "Number of PairValueRecords"),
- (
- "struct",
- "PairValueRecord",
- "PairValueCount",
- 0,
- "Array of PairValueRecords-ordered by GlyphID of the second glyph",
- ),
- ],
- ),
- (
- "PairValueRecord",
- [
- (
- "GlyphID",
- "SecondGlyph",
- None,
- None,
- "GlyphID of second glyph in the pair-first glyph is listed in the Coverage table",
- ),
- (
- "ValueRecord",
- "Value1",
- None,
- None,
- "Positioning data for the first glyph in the pair",
- ),
- (
- "ValueRecord",
- "Value2",
- None,
- None,
- "Positioning data for the second glyph in the pair",
- ),
- ],
- ),
- (
- "PairPosFormat2",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 2"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of PairPos subtable-for the first glyph of the pair",
- ),
- (
- "uint16",
- "ValueFormat1",
- None,
- None,
- "ValueRecord definition-for the first glyph of the pair-may be zero (0)",
- ),
- (
- "uint16",
- "ValueFormat2",
- None,
- None,
- "ValueRecord definition-for the second glyph of the pair-may be zero (0)",
- ),
- (
- "Offset",
- "ClassDef1",
- None,
- None,
- "Offset to ClassDef table-from beginning of PairPos subtable-for the first glyph of the pair",
- ),
- (
- "Offset",
- "ClassDef2",
- None,
- None,
- "Offset to ClassDef table-from beginning of PairPos subtable-for the second glyph of the pair",
- ),
- (
- "uint16",
- "Class1Count",
- None,
- None,
- "Number of classes in ClassDef1 table-includes Class0",
- ),
- (
- "uint16",
- "Class2Count",
- None,
- None,
- "Number of classes in ClassDef2 table-includes Class0",
- ),
- (
- "struct",
- "Class1Record",
- "Class1Count",
- 0,
- "Array of Class1 records-ordered by Class1",
- ),
- ],
- ),
- (
- "Class1Record",
- [
- (
- "struct",
- "Class2Record",
- "Class2Count",
- 0,
- "Array of Class2 records-ordered by Class2",
- ),
- ],
- ),
- (
- "Class2Record",
- [
- (
- "ValueRecord",
- "Value1",
- None,
- None,
- "Positioning for first glyph-empty if ValueFormat1 = 0",
- ),
- (
- "ValueRecord",
- "Value2",
- None,
- None,
- "Positioning for second glyph-empty if ValueFormat2 = 0",
- ),
- ],
- ),
- (
- "CursivePosFormat1",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of CursivePos subtable",
- ),
- ("uint16", "EntryExitCount", None, None, "Number of EntryExit records"),
- (
- "struct",
- "EntryExitRecord",
- "EntryExitCount",
- 0,
- "Array of EntryExit records-in Coverage Index order",
- ),
- ],
- ),
- (
- "EntryExitRecord",
- [
- (
- "Offset",
- "EntryAnchor",
- None,
- None,
- "Offset to EntryAnchor table-from beginning of CursivePos subtable-may be NULL",
- ),
- (
- "Offset",
- "ExitAnchor",
- None,
- None,
- "Offset to ExitAnchor table-from beginning of CursivePos subtable-may be NULL",
- ),
- ],
- ),
- (
- "MarkBasePosFormat1",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "MarkCoverage",
- None,
- None,
- "Offset to MarkCoverage table-from beginning of MarkBasePos subtable",
- ),
- (
- "Offset",
- "BaseCoverage",
- None,
- None,
- "Offset to BaseCoverage table-from beginning of MarkBasePos subtable",
- ),
- ("uint16", "ClassCount", None, None, "Number of classes defined for marks"),
- (
- "Offset",
- "MarkArray",
- None,
- None,
- "Offset to MarkArray table-from beginning of MarkBasePos subtable",
- ),
- (
- "Offset",
- "BaseArray",
- None,
- None,
- "Offset to BaseArray table-from beginning of MarkBasePos subtable",
- ),
- ],
- ),
- (
- "BaseArray",
- [
- ("uint16", "BaseCount", None, None, "Number of BaseRecords"),
- (
- "struct",
- "BaseRecord",
- "BaseCount",
- 0,
- "Array of BaseRecords-in order of BaseCoverage Index",
- ),
- ],
- ),
- (
- "BaseRecord",
- [
- (
- "Offset",
- "BaseAnchor",
- "ClassCount",
- 0,
- "Array of offsets (one per class) to Anchor tables-from beginning of BaseArray table-ordered by class-zero-based",
- ),
- ],
- ),
- (
- "MarkLigPosFormat1",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "MarkCoverage",
- None,
- None,
- "Offset to Mark Coverage table-from beginning of MarkLigPos subtable",
- ),
- (
- "Offset",
- "LigatureCoverage",
- None,
- None,
- "Offset to Ligature Coverage table-from beginning of MarkLigPos subtable",
- ),
- ("uint16", "ClassCount", None, None, "Number of defined mark classes"),
- (
- "Offset",
- "MarkArray",
- None,
- None,
- "Offset to MarkArray table-from beginning of MarkLigPos subtable",
- ),
- (
- "Offset",
- "LigatureArray",
- None,
- None,
- "Offset to LigatureArray table-from beginning of MarkLigPos subtable",
- ),
- ],
- ),
- (
- "LigatureArray",
- [
- (
- "uint16",
- "LigatureCount",
- None,
- None,
- "Number of LigatureAttach table offsets",
- ),
- (
- "Offset",
- "LigatureAttach",
- "LigatureCount",
- 0,
- "Array of offsets to LigatureAttach tables-from beginning of LigatureArray table-ordered by LigatureCoverage Index",
- ),
- ],
- ),
- (
- "LigatureAttach",
- [
- (
- "uint16",
- "ComponentCount",
- None,
- None,
- "Number of ComponentRecords in this ligature",
- ),
- (
- "struct",
- "ComponentRecord",
- "ComponentCount",
- 0,
- "Array of Component records-ordered in writing direction",
- ),
- ],
- ),
- (
- "ComponentRecord",
- [
- (
- "Offset",
- "LigatureAnchor",
- "ClassCount",
- 0,
- "Array of offsets (one per class) to Anchor tables-from beginning of LigatureAttach table-ordered by class-NULL if a component does not have an attachment for a class-zero-based array",
- ),
- ],
- ),
- (
- "MarkMarkPosFormat1",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Mark1Coverage",
- None,
- None,
- "Offset to Combining Mark Coverage table-from beginning of MarkMarkPos subtable",
- ),
- (
- "Offset",
- "Mark2Coverage",
- None,
- None,
- "Offset to Base Mark Coverage table-from beginning of MarkMarkPos subtable",
- ),
- (
- "uint16",
- "ClassCount",
- None,
- None,
- "Number of Combining Mark classes defined",
- ),
- (
- "Offset",
- "Mark1Array",
- None,
- None,
- "Offset to MarkArray table for Mark1-from beginning of MarkMarkPos subtable",
- ),
- (
- "Offset",
- "Mark2Array",
- None,
- None,
- "Offset to Mark2Array table for Mark2-from beginning of MarkMarkPos subtable",
- ),
- ],
- ),
- (
- "Mark2Array",
- [
- ("uint16", "Mark2Count", None, None, "Number of Mark2 records"),
- (
- "struct",
- "Mark2Record",
- "Mark2Count",
- 0,
- "Array of Mark2 records-in Coverage order",
- ),
- ],
- ),
- (
- "Mark2Record",
- [
- (
- "Offset",
- "Mark2Anchor",
- "ClassCount",
- 0,
- "Array of offsets (one per class) to Anchor tables-from beginning of Mark2Array table-zero-based array",
- ),
- ],
- ),
- (
- "PosLookupRecord",
- [
- (
- "uint16",
- "SequenceIndex",
- None,
- None,
- "Index to input glyph sequence-first glyph = 0",
- ),
- (
- "uint16",
- "LookupListIndex",
- None,
- None,
- "Lookup to apply to that position-zero-based",
- ),
- ],
- ),
- (
- "ContextPosFormat1",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of ContextPos subtable",
- ),
- ("uint16", "PosRuleSetCount", None, None, "Number of PosRuleSet tables"),
- (
- "Offset",
- "PosRuleSet",
- "PosRuleSetCount",
- 0,
- "Array of offsets to PosRuleSet tables-from beginning of ContextPos subtable-ordered by Coverage Index",
- ),
- ],
- ),
- (
- "PosRuleSet",
- [
- ("uint16", "PosRuleCount", None, None, "Number of PosRule tables"),
- (
- "Offset",
- "PosRule",
- "PosRuleCount",
- 0,
- "Array of offsets to PosRule tables-from beginning of PosRuleSet-ordered by preference",
- ),
- ],
- ),
- (
- "PosRule",
- [
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Number of glyphs in the Input glyph sequence",
- ),
- ("uint16", "PosCount", None, None, "Number of PosLookupRecords"),
- (
- "GlyphID",
- "Input",
- "GlyphCount",
- -1,
- "Array of input GlyphIDs-starting with the second glyph",
- ),
- (
- "struct",
- "PosLookupRecord",
- "PosCount",
- 0,
- "Array of positioning lookups-in design order",
- ),
- ],
- ),
- (
- "ContextPosFormat2",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 2"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of ContextPos subtable",
- ),
- (
- "Offset",
- "ClassDef",
- None,
- None,
- "Offset to ClassDef table-from beginning of ContextPos subtable",
- ),
- ("uint16", "PosClassSetCount", None, None, "Number of PosClassSet tables"),
- (
- "Offset",
- "PosClassSet",
- "PosClassSetCount",
- 0,
- "Array of offsets to PosClassSet tables-from beginning of ContextPos subtable-ordered by class-may be NULL",
- ),
- ],
- ),
- (
- "PosClassSet",
- [
- (
- "uint16",
- "PosClassRuleCount",
- None,
- None,
- "Number of PosClassRule tables",
- ),
- (
- "Offset",
- "PosClassRule",
- "PosClassRuleCount",
- 0,
- "Array of offsets to PosClassRule tables-from beginning of PosClassSet-ordered by preference",
- ),
- ],
- ),
- (
- "PosClassRule",
- [
- ("uint16", "GlyphCount", None, None, "Number of glyphs to be matched"),
- ("uint16", "PosCount", None, None, "Number of PosLookupRecords"),
- (
- "uint16",
- "Class",
- "GlyphCount",
- -1,
- "Array of classes-beginning with the second class-to be matched to the input glyph sequence",
- ),
- (
- "struct",
- "PosLookupRecord",
- "PosCount",
- 0,
- "Array of positioning lookups-in design order",
- ),
- ],
- ),
- (
- "ContextPosFormat3",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 3"),
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Number of glyphs in the input sequence",
- ),
- ("uint16", "PosCount", None, None, "Number of PosLookupRecords"),
- (
- "Offset",
- "Coverage",
- "GlyphCount",
- 0,
- "Array of offsets to Coverage tables-from beginning of ContextPos subtable",
- ),
- (
- "struct",
- "PosLookupRecord",
- "PosCount",
- 0,
- "Array of positioning lookups-in design order",
- ),
- ],
- ),
- (
- "ChainContextPosFormat1",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of ContextPos subtable",
- ),
- (
- "uint16",
- "ChainPosRuleSetCount",
- None,
- None,
- "Number of ChainPosRuleSet tables",
- ),
- (
- "Offset",
- "ChainPosRuleSet",
- "ChainPosRuleSetCount",
- 0,
- "Array of offsets to ChainPosRuleSet tables-from beginning of ContextPos subtable-ordered by Coverage Index",
- ),
- ],
- ),
- (
- "ChainPosRuleSet",
- [
- (
- "uint16",
- "ChainPosRuleCount",
- None,
- None,
- "Number of ChainPosRule tables",
- ),
- (
- "Offset",
- "ChainPosRule",
- "ChainPosRuleCount",
- 0,
- "Array of offsets to ChainPosRule tables-from beginning of ChainPosRuleSet-ordered by preference",
- ),
- ],
- ),
- (
- "ChainPosRule",
- [
- (
- "uint16",
- "BacktrackGlyphCount",
- None,
- None,
- "Total number of glyphs in the backtrack sequence (number of glyphs to be matched before the first glyph)",
- ),
- (
- "GlyphID",
- "Backtrack",
- "BacktrackGlyphCount",
- 0,
- "Array of backtracking GlyphID's (to be matched before the input sequence)",
- ),
- (
- "uint16",
- "InputGlyphCount",
- None,
- None,
- "Total number of glyphs in the input sequence (includes the first glyph)",
- ),
- (
- "GlyphID",
- "Input",
- "InputGlyphCount",
- -1,
- "Array of input GlyphIDs (start with second glyph)",
- ),
- (
- "uint16",
- "LookAheadGlyphCount",
- None,
- None,
- "Total number of glyphs in the look ahead sequence (number of glyphs to be matched after the input sequence)",
- ),
- (
- "GlyphID",
- "LookAhead",
- "LookAheadGlyphCount",
- 0,
- "Array of lookahead GlyphID's (to be matched after the input sequence)",
- ),
- ("uint16", "PosCount", None, None, "Number of PosLookupRecords"),
- (
- "struct",
- "PosLookupRecord",
- "PosCount",
- 0,
- "Array of PosLookupRecords (in design order)",
- ),
- ],
- ),
- (
- "ChainContextPosFormat2",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 2"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of ChainContextPos subtable",
- ),
- (
- "Offset",
- "BacktrackClassDef",
- None,
- None,
- "Offset to ClassDef table containing backtrack sequence context-from beginning of ChainContextPos subtable",
- ),
- (
- "Offset",
- "InputClassDef",
- None,
- None,
- "Offset to ClassDef table containing input sequence context-from beginning of ChainContextPos subtable",
- ),
- (
- "Offset",
- "LookAheadClassDef",
- None,
- None,
- "Offset to ClassDef table containing lookahead sequence context-from beginning of ChainContextPos subtable",
- ),
- (
- "uint16",
- "ChainPosClassSetCount",
- None,
- None,
- "Number of ChainPosClassSet tables",
- ),
- (
- "Offset",
- "ChainPosClassSet",
- "ChainPosClassSetCount",
- 0,
- "Array of offsets to ChainPosClassSet tables-from beginning of ChainContextPos subtable-ordered by input class-may be NULL",
- ),
- ],
- ),
- (
- "ChainPosClassSet",
- [
- (
- "uint16",
- "ChainPosClassRuleCount",
- None,
- None,
- "Number of ChainPosClassRule tables",
- ),
- (
- "Offset",
- "ChainPosClassRule",
- "ChainPosClassRuleCount",
- 0,
- "Array of offsets to ChainPosClassRule tables-from beginning of ChainPosClassSet-ordered by preference",
- ),
- ],
- ),
- (
- "ChainPosClassRule",
- [
- (
- "uint16",
- "BacktrackGlyphCount",
- None,
- None,
- "Total number of glyphs in the backtrack sequence (number of glyphs to be matched before the first glyph)",
- ),
- (
- "uint16",
- "Backtrack",
- "BacktrackGlyphCount",
- 0,
- "Array of backtracking classes(to be matched before the input sequence)",
- ),
- (
- "uint16",
- "InputGlyphCount",
- None,
- None,
- "Total number of classes in the input sequence (includes the first class)",
- ),
- (
- "uint16",
- "Input",
- "InputGlyphCount",
- -1,
- "Array of input classes(start with second class; to be matched with the input glyph sequence)",
- ),
- (
- "uint16",
- "LookAheadGlyphCount",
- None,
- None,
- "Total number of classes in the look ahead sequence (number of classes to be matched after the input sequence)",
- ),
- (
- "uint16",
- "LookAhead",
- "LookAheadGlyphCount",
- 0,
- "Array of lookahead classes(to be matched after the input sequence)",
- ),
- ("uint16", "PosCount", None, None, "Number of PosLookupRecords"),
- (
- "struct",
- "PosLookupRecord",
- "PosCount",
- 0,
- "Array of PosLookupRecords (in design order)",
- ),
- ],
- ),
- (
- "ChainContextPosFormat3",
- [
- ("uint16", "PosFormat", None, None, "Format identifier-format = 3"),
- (
- "uint16",
- "BacktrackGlyphCount",
- None,
- None,
- "Number of glyphs in the backtracking sequence",
- ),
- (
- "Offset",
- "BacktrackCoverage",
- "BacktrackGlyphCount",
- 0,
- "Array of offsets to coverage tables in backtracking sequence, in glyph sequence order",
- ),
- (
- "uint16",
- "InputGlyphCount",
- None,
- None,
- "Number of glyphs in input sequence",
- ),
- (
- "Offset",
- "InputCoverage",
- "InputGlyphCount",
- 0,
- "Array of offsets to coverage tables in input sequence, in glyph sequence order",
- ),
- (
- "uint16",
- "LookAheadGlyphCount",
- None,
- None,
- "Number of glyphs in lookahead sequence",
- ),
- (
- "Offset",
- "LookAheadCoverage",
- "LookAheadGlyphCount",
- 0,
- "Array of offsets to coverage tables in lookahead sequence, in glyph sequence order",
- ),
- ("uint16", "PosCount", None, None, "Number of PosLookupRecords"),
- (
- "struct",
- "PosLookupRecord",
- "PosCount",
- 0,
- "Array of PosLookupRecords,in design order",
- ),
- ],
- ),
- (
- "ExtensionPosFormat1",
- [
- ("uint16", "ExtFormat", None, None, "Format identifier. Set to 1."),
- (
- "uint16",
- "ExtensionLookupType",
- None,
- None,
- "Lookup type of subtable referenced by ExtensionOffset (i.e. the extension subtable).",
- ),
- ("LOffset", "ExtSubTable", None, None, "Offset to SubTable"),
- ],
- ),
- # ('ValueRecord', [
- # ('int16', 'XPlacement', None, None, 'Horizontal adjustment for placement-in design units'),
- # ('int16', 'YPlacement', None, None, 'Vertical adjustment for placement-in design units'),
- # ('int16', 'XAdvance', None, None, 'Horizontal adjustment for advance-in design units (only used for horizontal writing)'),
- # ('int16', 'YAdvance', None, None, 'Vertical adjustment for advance-in design units (only used for vertical writing)'),
- # ('Offset', 'XPlaDevice', None, None, 'Offset to Device table for horizontal placement-measured from beginning of PosTable (may be NULL)'),
- # ('Offset', 'YPlaDevice', None, None, 'Offset to Device table for vertical placement-measured from beginning of PosTable (may be NULL)'),
- # ('Offset', 'XAdvDevice', None, None, 'Offset to Device table for horizontal advance-measured from beginning of PosTable (may be NULL)'),
- # ('Offset', 'YAdvDevice', None, None, 'Offset to Device table for vertical advance-measured from beginning of PosTable (may be NULL)'),
- # ]),
- (
- "AnchorFormat1",
- [
- ("uint16", "AnchorFormat", None, None, "Format identifier-format = 1"),
- ("int16", "XCoordinate", None, None, "Horizontal value-in design units"),
- ("int16", "YCoordinate", None, None, "Vertical value-in design units"),
- ],
- ),
- (
- "AnchorFormat2",
- [
- ("uint16", "AnchorFormat", None, None, "Format identifier-format = 2"),
- ("int16", "XCoordinate", None, None, "Horizontal value-in design units"),
- ("int16", "YCoordinate", None, None, "Vertical value-in design units"),
- ("uint16", "AnchorPoint", None, None, "Index to glyph contour point"),
- ],
- ),
- (
- "AnchorFormat3",
- [
- ("uint16", "AnchorFormat", None, None, "Format identifier-format = 3"),
- ("int16", "XCoordinate", None, None, "Horizontal value-in design units"),
- ("int16", "YCoordinate", None, None, "Vertical value-in design units"),
- (
- "Offset",
- "XDeviceTable",
- None,
- None,
- "Offset to Device table for X coordinate- from beginning of Anchor table (may be NULL)",
- ),
- (
- "Offset",
- "YDeviceTable",
- None,
- None,
- "Offset to Device table for Y coordinate- from beginning of Anchor table (may be NULL)",
- ),
- ],
- ),
- (
- "MarkArray",
- [
- ("uint16", "MarkCount", None, None, "Number of MarkRecords"),
- (
- "struct",
- "MarkRecord",
- "MarkCount",
- 0,
- "Array of MarkRecords-in Coverage order",
- ),
- ],
- ),
- (
- "MarkRecord",
- [
- ("uint16", "Class", None, None, "Class defined for this mark"),
- (
- "Offset",
- "MarkAnchor",
- None,
- None,
- "Offset to Anchor table-from beginning of MarkArray table",
- ),
- ],
- ),
- #
- # gsub
- #
- (
- "GSUB",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the GSUB table- 0x00010000 or 0x00010001",
- ),
- (
- "Offset",
- "ScriptList",
- None,
- None,
- "Offset to ScriptList table-from beginning of GSUB table",
- ),
- (
- "Offset",
- "FeatureList",
- None,
- None,
- "Offset to FeatureList table-from beginning of GSUB table",
- ),
- (
- "Offset",
- "LookupList",
- None,
- None,
- "Offset to LookupList table-from beginning of GSUB table",
- ),
- (
- "LOffset",
- "FeatureVariations",
- None,
- "Version >= 0x00010001",
- "Offset to FeatureVariations table-from beginning of GSUB table",
- ),
- ],
- ),
- (
- "SingleSubstFormat1",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of Substitution table",
- ),
- (
- "uint16",
- "DeltaGlyphID",
- None,
- None,
- "Add to original GlyphID modulo 65536 to get substitute GlyphID",
- ),
- ],
- ),
- (
- "SingleSubstFormat2",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 2"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of Substitution table",
- ),
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Number of GlyphIDs in the Substitute array",
- ),
- (
- "GlyphID",
- "Substitute",
- "GlyphCount",
- 0,
- "Array of substitute GlyphIDs-ordered by Coverage Index",
- ),
- ],
- ),
- (
- "MultipleSubstFormat1",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of Substitution table",
- ),
- (
- "uint16",
- "SequenceCount",
- None,
- None,
- "Number of Sequence table offsets in the Sequence array",
- ),
- (
- "Offset",
- "Sequence",
- "SequenceCount",
- 0,
- "Array of offsets to Sequence tables-from beginning of Substitution table-ordered by Coverage Index",
- ),
- ],
- ),
- (
- "Sequence",
- [
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Number of GlyphIDs in the Substitute array. This should always be greater than 0.",
- ),
- (
- "GlyphID",
- "Substitute",
- "GlyphCount",
- 0,
- "String of GlyphIDs to substitute",
- ),
- ],
- ),
- (
- "AlternateSubstFormat1",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of Substitution table",
- ),
- (
- "uint16",
- "AlternateSetCount",
- None,
- None,
- "Number of AlternateSet tables",
- ),
- (
- "Offset",
- "AlternateSet",
- "AlternateSetCount",
- 0,
- "Array of offsets to AlternateSet tables-from beginning of Substitution table-ordered by Coverage Index",
- ),
- ],
- ),
- (
- "AlternateSet",
- [
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Number of GlyphIDs in the Alternate array",
- ),
- (
- "GlyphID",
- "Alternate",
- "GlyphCount",
- 0,
- "Array of alternate GlyphIDs-in arbitrary order",
- ),
- ],
- ),
- (
- "LigatureSubstFormat1",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of Substitution table",
- ),
- ("uint16", "LigSetCount", None, None, "Number of LigatureSet tables"),
- (
- "Offset",
- "LigatureSet",
- "LigSetCount",
- 0,
- "Array of offsets to LigatureSet tables-from beginning of Substitution table-ordered by Coverage Index",
- ),
- ],
- ),
- (
- "LigatureSet",
- [
- ("uint16", "LigatureCount", None, None, "Number of Ligature tables"),
- (
- "Offset",
- "Ligature",
- "LigatureCount",
- 0,
- "Array of offsets to Ligature tables-from beginning of LigatureSet table-ordered by preference",
- ),
- ],
- ),
- (
- "Ligature",
- [
- ("GlyphID", "LigGlyph", None, None, "GlyphID of ligature to substitute"),
- ("uint16", "CompCount", None, None, "Number of components in the ligature"),
- (
- "GlyphID",
- "Component",
- "CompCount",
- -1,
- "Array of component GlyphIDs-start with the second component-ordered in writing direction",
- ),
- ],
- ),
- (
- "SubstLookupRecord",
- [
- (
- "uint16",
- "SequenceIndex",
- None,
- None,
- "Index into current glyph sequence-first glyph = 0",
- ),
- (
- "uint16",
- "LookupListIndex",
- None,
- None,
- "Lookup to apply to that position-zero-based",
- ),
- ],
- ),
- (
- "ContextSubstFormat1",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of Substitution table",
- ),
- (
- "uint16",
- "SubRuleSetCount",
- None,
- None,
- "Number of SubRuleSet tables-must equal GlyphCount in Coverage table",
- ),
- (
- "Offset",
- "SubRuleSet",
- "SubRuleSetCount",
- 0,
- "Array of offsets to SubRuleSet tables-from beginning of Substitution table-ordered by Coverage Index",
- ),
- ],
- ),
- (
- "SubRuleSet",
- [
- ("uint16", "SubRuleCount", None, None, "Number of SubRule tables"),
- (
- "Offset",
- "SubRule",
- "SubRuleCount",
- 0,
- "Array of offsets to SubRule tables-from beginning of SubRuleSet table-ordered by preference",
- ),
- ],
- ),
- (
- "SubRule",
- [
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Total number of glyphs in input glyph sequence-includes the first glyph",
- ),
- ("uint16", "SubstCount", None, None, "Number of SubstLookupRecords"),
- (
- "GlyphID",
- "Input",
- "GlyphCount",
- -1,
- "Array of input GlyphIDs-start with second glyph",
- ),
- (
- "struct",
- "SubstLookupRecord",
- "SubstCount",
- 0,
- "Array of SubstLookupRecords-in design order",
- ),
- ],
- ),
- (
- "ContextSubstFormat2",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 2"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of Substitution table",
- ),
- (
- "Offset",
- "ClassDef",
- None,
- None,
- "Offset to glyph ClassDef table-from beginning of Substitution table",
- ),
- ("uint16", "SubClassSetCount", None, None, "Number of SubClassSet tables"),
- (
- "Offset",
- "SubClassSet",
- "SubClassSetCount",
- 0,
- "Array of offsets to SubClassSet tables-from beginning of Substitution table-ordered by class-may be NULL",
- ),
- ],
- ),
- (
- "SubClassSet",
- [
- (
- "uint16",
- "SubClassRuleCount",
- None,
- None,
- "Number of SubClassRule tables",
- ),
- (
- "Offset",
- "SubClassRule",
- "SubClassRuleCount",
- 0,
- "Array of offsets to SubClassRule tables-from beginning of SubClassSet-ordered by preference",
- ),
- ],
- ),
- (
- "SubClassRule",
- [
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Total number of classes specified for the context in the rule-includes the first class",
- ),
- ("uint16", "SubstCount", None, None, "Number of SubstLookupRecords"),
- (
- "uint16",
- "Class",
- "GlyphCount",
- -1,
- "Array of classes-beginning with the second class-to be matched to the input glyph class sequence",
- ),
- (
- "struct",
- "SubstLookupRecord",
- "SubstCount",
- 0,
- "Array of Substitution lookups-in design order",
- ),
- ],
- ),
- (
- "ContextSubstFormat3",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 3"),
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Number of glyphs in the input glyph sequence",
- ),
- ("uint16", "SubstCount", None, None, "Number of SubstLookupRecords"),
- (
- "Offset",
- "Coverage",
- "GlyphCount",
- 0,
- "Array of offsets to Coverage table-from beginning of Substitution table-in glyph sequence order",
- ),
- (
- "struct",
- "SubstLookupRecord",
- "SubstCount",
- 0,
- "Array of SubstLookupRecords-in design order",
- ),
- ],
- ),
- (
- "ChainContextSubstFormat1",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of Substitution table",
- ),
- (
- "uint16",
- "ChainSubRuleSetCount",
- None,
- None,
- "Number of ChainSubRuleSet tables-must equal GlyphCount in Coverage table",
- ),
- (
- "Offset",
- "ChainSubRuleSet",
- "ChainSubRuleSetCount",
- 0,
- "Array of offsets to ChainSubRuleSet tables-from beginning of Substitution table-ordered by Coverage Index",
- ),
- ],
- ),
- (
- "ChainSubRuleSet",
- [
- (
- "uint16",
- "ChainSubRuleCount",
- None,
- None,
- "Number of ChainSubRule tables",
- ),
- (
- "Offset",
- "ChainSubRule",
- "ChainSubRuleCount",
- 0,
- "Array of offsets to ChainSubRule tables-from beginning of ChainSubRuleSet table-ordered by preference",
- ),
- ],
- ),
- (
- "ChainSubRule",
- [
- (
- "uint16",
- "BacktrackGlyphCount",
- None,
- None,
- "Total number of glyphs in the backtrack sequence (number of glyphs to be matched before the first glyph)",
- ),
- (
- "GlyphID",
- "Backtrack",
- "BacktrackGlyphCount",
- 0,
- "Array of backtracking GlyphID's (to be matched before the input sequence)",
- ),
- (
- "uint16",
- "InputGlyphCount",
- None,
- None,
- "Total number of glyphs in the input sequence (includes the first glyph)",
- ),
- (
- "GlyphID",
- "Input",
- "InputGlyphCount",
- -1,
- "Array of input GlyphIDs (start with second glyph)",
- ),
- (
- "uint16",
- "LookAheadGlyphCount",
- None,
- None,
- "Total number of glyphs in the look ahead sequence (number of glyphs to be matched after the input sequence)",
- ),
- (
- "GlyphID",
- "LookAhead",
- "LookAheadGlyphCount",
- 0,
- "Array of lookahead GlyphID's (to be matched after the input sequence)",
- ),
- ("uint16", "SubstCount", None, None, "Number of SubstLookupRecords"),
- (
- "struct",
- "SubstLookupRecord",
- "SubstCount",
- 0,
- "Array of SubstLookupRecords (in design order)",
- ),
- ],
- ),
- (
- "ChainContextSubstFormat2",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 2"),
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table-from beginning of Substitution table",
- ),
- (
- "Offset",
- "BacktrackClassDef",
- None,
- None,
- "Offset to glyph ClassDef table containing backtrack sequence data-from beginning of Substitution table",
- ),
- (
- "Offset",
- "InputClassDef",
- None,
- None,
- "Offset to glyph ClassDef table containing input sequence data-from beginning of Substitution table",
- ),
- (
- "Offset",
- "LookAheadClassDef",
- None,
- None,
- "Offset to glyph ClassDef table containing lookahead sequence data-from beginning of Substitution table",
- ),
- (
- "uint16",
- "ChainSubClassSetCount",
- None,
- None,
- "Number of ChainSubClassSet tables",
- ),
- (
- "Offset",
- "ChainSubClassSet",
- "ChainSubClassSetCount",
- 0,
- "Array of offsets to ChainSubClassSet tables-from beginning of Substitution table-ordered by input class-may be NULL",
- ),
- ],
- ),
- (
- "ChainSubClassSet",
- [
- (
- "uint16",
- "ChainSubClassRuleCount",
- None,
- None,
- "Number of ChainSubClassRule tables",
- ),
- (
- "Offset",
- "ChainSubClassRule",
- "ChainSubClassRuleCount",
- 0,
- "Array of offsets to ChainSubClassRule tables-from beginning of ChainSubClassSet-ordered by preference",
- ),
- ],
- ),
- (
- "ChainSubClassRule",
- [
- (
- "uint16",
- "BacktrackGlyphCount",
- None,
- None,
- "Total number of glyphs in the backtrack sequence (number of glyphs to be matched before the first glyph)",
- ),
- (
- "uint16",
- "Backtrack",
- "BacktrackGlyphCount",
- 0,
- "Array of backtracking classes(to be matched before the input sequence)",
- ),
- (
- "uint16",
- "InputGlyphCount",
- None,
- None,
- "Total number of classes in the input sequence (includes the first class)",
- ),
- (
- "uint16",
- "Input",
- "InputGlyphCount",
- -1,
- "Array of input classes(start with second class; to be matched with the input glyph sequence)",
- ),
- (
- "uint16",
- "LookAheadGlyphCount",
- None,
- None,
- "Total number of classes in the look ahead sequence (number of classes to be matched after the input sequence)",
- ),
- (
- "uint16",
- "LookAhead",
- "LookAheadGlyphCount",
- 0,
- "Array of lookahead classes(to be matched after the input sequence)",
- ),
- ("uint16", "SubstCount", None, None, "Number of SubstLookupRecords"),
- (
- "struct",
- "SubstLookupRecord",
- "SubstCount",
- 0,
- "Array of SubstLookupRecords (in design order)",
- ),
- ],
- ),
- (
- "ChainContextSubstFormat3",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 3"),
- (
- "uint16",
- "BacktrackGlyphCount",
- None,
- None,
- "Number of glyphs in the backtracking sequence",
- ),
- (
- "Offset",
- "BacktrackCoverage",
- "BacktrackGlyphCount",
- 0,
- "Array of offsets to coverage tables in backtracking sequence, in glyph sequence order",
- ),
- (
- "uint16",
- "InputGlyphCount",
- None,
- None,
- "Number of glyphs in input sequence",
- ),
- (
- "Offset",
- "InputCoverage",
- "InputGlyphCount",
- 0,
- "Array of offsets to coverage tables in input sequence, in glyph sequence order",
- ),
- (
- "uint16",
- "LookAheadGlyphCount",
- None,
- None,
- "Number of glyphs in lookahead sequence",
- ),
- (
- "Offset",
- "LookAheadCoverage",
- "LookAheadGlyphCount",
- 0,
- "Array of offsets to coverage tables in lookahead sequence, in glyph sequence order",
- ),
- ("uint16", "SubstCount", None, None, "Number of SubstLookupRecords"),
- (
- "struct",
- "SubstLookupRecord",
- "SubstCount",
- 0,
- "Array of SubstLookupRecords, in design order",
- ),
- ],
- ),
- (
- "ExtensionSubstFormat1",
- [
- ("uint16", "ExtFormat", None, None, "Format identifier. Set to 1."),
- (
- "uint16",
- "ExtensionLookupType",
- None,
- None,
- "Lookup type of subtable referenced by ExtensionOffset (i.e. the extension subtable).",
- ),
- (
- "LOffset",
- "ExtSubTable",
- None,
- None,
- "Array of offsets to Lookup tables-from beginning of LookupList -zero based (first lookup is Lookup index = 0)",
- ),
- ],
- ),
- (
- "ReverseChainSingleSubstFormat1",
- [
- ("uint16", "SubstFormat", None, None, "Format identifier-format = 1"),
- (
- "Offset",
- "Coverage",
- None,
- 0,
- "Offset to Coverage table - from beginning of Substitution table",
- ),
- (
- "uint16",
- "BacktrackGlyphCount",
- None,
- None,
- "Number of glyphs in the backtracking sequence",
- ),
- (
- "Offset",
- "BacktrackCoverage",
- "BacktrackGlyphCount",
- 0,
- "Array of offsets to coverage tables in backtracking sequence, in glyph sequence order",
- ),
- (
- "uint16",
- "LookAheadGlyphCount",
- None,
- None,
- "Number of glyphs in lookahead sequence",
- ),
- (
- "Offset",
- "LookAheadCoverage",
- "LookAheadGlyphCount",
- 0,
- "Array of offsets to coverage tables in lookahead sequence, in glyph sequence order",
- ),
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Number of GlyphIDs in the Substitute array",
- ),
- (
- "GlyphID",
- "Substitute",
- "GlyphCount",
- 0,
- "Array of substitute GlyphIDs-ordered by Coverage index",
- ),
- ],
- ),
- #
- # gdef
- #
- (
- "GDEF",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the GDEF table- 0x00010000, 0x00010002, or 0x00010003",
- ),
- (
- "Offset",
- "GlyphClassDef",
- None,
- None,
- "Offset to class definition table for glyph type-from beginning of GDEF header (may be NULL)",
- ),
- (
- "Offset",
- "AttachList",
- None,
- None,
- "Offset to list of glyphs with attachment points-from beginning of GDEF header (may be NULL)",
- ),
- (
- "Offset",
- "LigCaretList",
- None,
- None,
- "Offset to list of positioning points for ligature carets-from beginning of GDEF header (may be NULL)",
- ),
- (
- "Offset",
- "MarkAttachClassDef",
- None,
- None,
- "Offset to class definition table for mark attachment type-from beginning of GDEF header (may be NULL)",
- ),
- (
- "Offset",
- "MarkGlyphSetsDef",
- None,
- "Version >= 0x00010002",
- "Offset to the table of mark set definitions-from beginning of GDEF header (may be NULL)",
- ),
- (
- "LOffset",
- "VarStore",
- None,
- "Version >= 0x00010003",
- "Offset to variation store (may be NULL)",
- ),
- ],
- ),
- (
- "AttachList",
- [
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table - from beginning of AttachList table",
- ),
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Number of glyphs with attachment points",
- ),
- (
- "Offset",
- "AttachPoint",
- "GlyphCount",
- 0,
- "Array of offsets to AttachPoint tables-from beginning of AttachList table-in Coverage Index order",
- ),
- ],
- ),
- (
- "AttachPoint",
- [
- (
- "uint16",
- "PointCount",
- None,
- None,
- "Number of attachment points on this glyph",
- ),
- (
- "uint16",
- "PointIndex",
- "PointCount",
- 0,
- "Array of contour point indices -in increasing numerical order",
- ),
- ],
- ),
- (
- "LigCaretList",
- [
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table - from beginning of LigCaretList table",
- ),
- ("uint16", "LigGlyphCount", None, None, "Number of ligature glyphs"),
- (
- "Offset",
- "LigGlyph",
- "LigGlyphCount",
- 0,
- "Array of offsets to LigGlyph tables-from beginning of LigCaretList table-in Coverage Index order",
- ),
- ],
- ),
- (
- "LigGlyph",
- [
- (
- "uint16",
- "CaretCount",
- None,
- None,
- "Number of CaretValues for this ligature (components - 1)",
- ),
- (
- "Offset",
- "CaretValue",
- "CaretCount",
- 0,
- "Array of offsets to CaretValue tables-from beginning of LigGlyph table-in increasing coordinate order",
- ),
- ],
- ),
- (
- "CaretValueFormat1",
- [
- ("uint16", "CaretValueFormat", None, None, "Format identifier-format = 1"),
- ("int16", "Coordinate", None, None, "X or Y value, in design units"),
- ],
- ),
- (
- "CaretValueFormat2",
- [
- ("uint16", "CaretValueFormat", None, None, "Format identifier-format = 2"),
- ("uint16", "CaretValuePoint", None, None, "Contour point index on glyph"),
- ],
- ),
- (
- "CaretValueFormat3",
- [
- ("uint16", "CaretValueFormat", None, None, "Format identifier-format = 3"),
- ("int16", "Coordinate", None, None, "X or Y value, in design units"),
- (
- "Offset",
- "DeviceTable",
- None,
- None,
- "Offset to Device table for X or Y value-from beginning of CaretValue table",
- ),
- ],
- ),
- (
- "MarkGlyphSetsDef",
- [
- ("uint16", "MarkSetTableFormat", None, None, "Format identifier == 1"),
- ("uint16", "MarkSetCount", None, None, "Number of mark sets defined"),
- (
- "LOffset",
- "Coverage",
- "MarkSetCount",
- 0,
- "Array of offsets to mark set coverage tables.",
- ),
- ],
- ),
- #
- # base
- #
- (
- "BASE",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the BASE table-initially 0x00010000",
- ),
- (
- "Offset",
- "HorizAxis",
- None,
- None,
- "Offset to horizontal Axis table-from beginning of BASE table-may be NULL",
- ),
- (
- "Offset",
- "VertAxis",
- None,
- None,
- "Offset to vertical Axis table-from beginning of BASE table-may be NULL",
- ),
- (
- "LOffset",
- "VarStore",
- None,
- "Version >= 0x00010001",
- "Offset to variation store (may be NULL)",
- ),
- ],
- ),
- (
- "Axis",
- [
- (
- "Offset",
- "BaseTagList",
- None,
- None,
- "Offset to BaseTagList table-from beginning of Axis table-may be NULL",
- ),
- (
- "Offset",
- "BaseScriptList",
- None,
- None,
- "Offset to BaseScriptList table-from beginning of Axis table",
- ),
- ],
- ),
- (
- "BaseTagList",
- [
- (
- "uint16",
- "BaseTagCount",
- None,
- None,
- "Number of baseline identification tags in this text direction-may be zero (0)",
- ),
- (
- "Tag",
- "BaselineTag",
- "BaseTagCount",
- 0,
- "Array of 4-byte baseline identification tags-must be in alphabetical order",
- ),
- ],
- ),
- (
- "BaseScriptList",
- [
- (
- "uint16",
- "BaseScriptCount",
- None,
- None,
- "Number of BaseScriptRecords defined",
- ),
- (
- "struct",
- "BaseScriptRecord",
- "BaseScriptCount",
- 0,
- "Array of BaseScriptRecords-in alphabetical order by BaseScriptTag",
- ),
- ],
- ),
- (
- "BaseScriptRecord",
- [
- ("Tag", "BaseScriptTag", None, None, "4-byte script identification tag"),
- (
- "Offset",
- "BaseScript",
- None,
- None,
- "Offset to BaseScript table-from beginning of BaseScriptList",
- ),
- ],
- ),
- (
- "BaseScript",
- [
- (
- "Offset",
- "BaseValues",
- None,
- None,
- "Offset to BaseValues table-from beginning of BaseScript table-may be NULL",
- ),
- (
- "Offset",
- "DefaultMinMax",
- None,
- None,
- "Offset to MinMax table- from beginning of BaseScript table-may be NULL",
- ),
- (
- "uint16",
- "BaseLangSysCount",
- None,
- None,
- "Number of BaseLangSysRecords defined-may be zero (0)",
- ),
- (
- "struct",
- "BaseLangSysRecord",
- "BaseLangSysCount",
- 0,
- "Array of BaseLangSysRecords-in alphabetical order by BaseLangSysTag",
- ),
- ],
- ),
- (
- "BaseLangSysRecord",
- [
- (
- "Tag",
- "BaseLangSysTag",
- None,
- None,
- "4-byte language system identification tag",
- ),
- (
- "Offset",
- "MinMax",
- None,
- None,
- "Offset to MinMax table-from beginning of BaseScript table",
- ),
- ],
- ),
- (
- "BaseValues",
- [
- (
- "uint16",
- "DefaultIndex",
- None,
- None,
- "Index number of default baseline for this script-equals index position of baseline tag in BaselineArray of the BaseTagList",
- ),
- (
- "uint16",
- "BaseCoordCount",
- None,
- None,
- "Number of BaseCoord tables defined-should equal BaseTagCount in the BaseTagList",
- ),
- (
- "Offset",
- "BaseCoord",
- "BaseCoordCount",
- 0,
- "Array of offsets to BaseCoord-from beginning of BaseValues table-order matches BaselineTag array in the BaseTagList",
- ),
- ],
- ),
- (
- "MinMax",
- [
- (
- "Offset",
- "MinCoord",
- None,
- None,
- "Offset to BaseCoord table-defines minimum extent value-from the beginning of MinMax table-may be NULL",
- ),
- (
- "Offset",
- "MaxCoord",
- None,
- None,
- "Offset to BaseCoord table-defines maximum extent value-from the beginning of MinMax table-may be NULL",
- ),
- (
- "uint16",
- "FeatMinMaxCount",
- None,
- None,
- "Number of FeatMinMaxRecords-may be zero (0)",
- ),
- (
- "struct",
- "FeatMinMaxRecord",
- "FeatMinMaxCount",
- 0,
- "Array of FeatMinMaxRecords-in alphabetical order, by FeatureTableTag",
- ),
- ],
- ),
- (
- "FeatMinMaxRecord",
- [
- (
- "Tag",
- "FeatureTableTag",
- None,
- None,
- "4-byte feature identification tag-must match FeatureTag in FeatureList",
- ),
- (
- "Offset",
- "MinCoord",
- None,
- None,
- "Offset to BaseCoord table-defines minimum extent value-from beginning of MinMax table-may be NULL",
- ),
- (
- "Offset",
- "MaxCoord",
- None,
- None,
- "Offset to BaseCoord table-defines maximum extent value-from beginning of MinMax table-may be NULL",
- ),
- ],
- ),
- (
- "BaseCoordFormat1",
- [
- ("uint16", "BaseCoordFormat", None, None, "Format identifier-format = 1"),
- ("int16", "Coordinate", None, None, "X or Y value, in design units"),
- ],
- ),
- (
- "BaseCoordFormat2",
- [
- ("uint16", "BaseCoordFormat", None, None, "Format identifier-format = 2"),
- ("int16", "Coordinate", None, None, "X or Y value, in design units"),
- ("GlyphID", "ReferenceGlyph", None, None, "GlyphID of control glyph"),
- (
- "uint16",
- "BaseCoordPoint",
- None,
- None,
- "Index of contour point on the ReferenceGlyph",
- ),
- ],
- ),
- (
- "BaseCoordFormat3",
- [
- ("uint16", "BaseCoordFormat", None, None, "Format identifier-format = 3"),
- ("int16", "Coordinate", None, None, "X or Y value, in design units"),
- (
- "Offset",
- "DeviceTable",
- None,
- None,
- "Offset to Device table for X or Y value",
- ),
- ],
- ),
- #
- # jstf
- #
- (
- "JSTF",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the JSTF table-initially set to 0x00010000",
- ),
- (
- "uint16",
- "JstfScriptCount",
- None,
- None,
- "Number of JstfScriptRecords in this table",
- ),
- (
- "struct",
- "JstfScriptRecord",
- "JstfScriptCount",
- 0,
- "Array of JstfScriptRecords-in alphabetical order, by JstfScriptTag",
- ),
- ],
- ),
- (
- "JstfScriptRecord",
- [
- ("Tag", "JstfScriptTag", None, None, "4-byte JstfScript identification"),
- (
- "Offset",
- "JstfScript",
- None,
- None,
- "Offset to JstfScript table-from beginning of JSTF Header",
- ),
- ],
- ),
- (
- "JstfScript",
- [
- (
- "Offset",
- "ExtenderGlyph",
- None,
- None,
- "Offset to ExtenderGlyph table-from beginning of JstfScript table-may be NULL",
- ),
- (
- "Offset",
- "DefJstfLangSys",
- None,
- None,
- "Offset to Default JstfLangSys table-from beginning of JstfScript table-may be NULL",
- ),
- (
- "uint16",
- "JstfLangSysCount",
- None,
- None,
- "Number of JstfLangSysRecords in this table- may be zero (0)",
- ),
- (
- "struct",
- "JstfLangSysRecord",
- "JstfLangSysCount",
- 0,
- "Array of JstfLangSysRecords-in alphabetical order, by JstfLangSysTag",
- ),
- ],
- ),
- (
- "JstfLangSysRecord",
- [
- ("Tag", "JstfLangSysTag", None, None, "4-byte JstfLangSys identifier"),
- (
- "Offset",
- "JstfLangSys",
- None,
- None,
- "Offset to JstfLangSys table-from beginning of JstfScript table",
- ),
- ],
- ),
- (
- "ExtenderGlyph",
- [
- (
- "uint16",
- "GlyphCount",
- None,
- None,
- "Number of Extender Glyphs in this script",
- ),
- (
- "GlyphID",
- "ExtenderGlyph",
- "GlyphCount",
- 0,
- "GlyphIDs-in increasing numerical order",
- ),
- ],
- ),
- (
- "JstfLangSys",
- [
- (
- "uint16",
- "JstfPriorityCount",
- None,
- None,
- "Number of JstfPriority tables",
- ),
- (
- "Offset",
- "JstfPriority",
- "JstfPriorityCount",
- 0,
- "Array of offsets to JstfPriority tables-from beginning of JstfLangSys table-in priority order",
- ),
- ],
- ),
- (
- "JstfPriority",
- [
- (
- "Offset",
- "ShrinkageEnableGSUB",
- None,
- None,
- "Offset to Shrinkage Enable JstfGSUBModList table-from beginning of JstfPriority table-may be NULL",
- ),
- (
- "Offset",
- "ShrinkageDisableGSUB",
- None,
- None,
- "Offset to Shrinkage Disable JstfGSUBModList table-from beginning of JstfPriority table-may be NULL",
- ),
- (
- "Offset",
- "ShrinkageEnableGPOS",
- None,
- None,
- "Offset to Shrinkage Enable JstfGPOSModList table-from beginning of JstfPriority table-may be NULL",
- ),
- (
- "Offset",
- "ShrinkageDisableGPOS",
- None,
- None,
- "Offset to Shrinkage Disable JstfGPOSModList table-from beginning of JstfPriority table-may be NULL",
- ),
- (
- "Offset",
- "ShrinkageJstfMax",
- None,
- None,
- "Offset to Shrinkage JstfMax table-from beginning of JstfPriority table -may be NULL",
- ),
- (
- "Offset",
- "ExtensionEnableGSUB",
- None,
- None,
- "Offset to Extension Enable JstfGSUBModList table-may be NULL",
- ),
- (
- "Offset",
- "ExtensionDisableGSUB",
- None,
- None,
- "Offset to Extension Disable JstfGSUBModList table-from beginning of JstfPriority table-may be NULL",
- ),
- (
- "Offset",
- "ExtensionEnableGPOS",
- None,
- None,
- "Offset to Extension Enable JstfGSUBModList table-may be NULL",
- ),
- (
- "Offset",
- "ExtensionDisableGPOS",
- None,
- None,
- "Offset to Extension Disable JstfGSUBModList table-from beginning of JstfPriority table-may be NULL",
- ),
- (
- "Offset",
- "ExtensionJstfMax",
- None,
- None,
- "Offset to Extension JstfMax table-from beginning of JstfPriority table -may be NULL",
- ),
- ],
- ),
- (
- "JstfGSUBModList",
- [
- (
- "uint16",
- "LookupCount",
- None,
- None,
- "Number of lookups for this modification",
- ),
- (
- "uint16",
- "GSUBLookupIndex",
- "LookupCount",
- 0,
- "Array of LookupIndex identifiers in GSUB-in increasing numerical order",
- ),
- ],
- ),
- (
- "JstfGPOSModList",
- [
- (
- "uint16",
- "LookupCount",
- None,
- None,
- "Number of lookups for this modification",
- ),
- (
- "uint16",
- "GPOSLookupIndex",
- "LookupCount",
- 0,
- "Array of LookupIndex identifiers in GPOS-in increasing numerical order",
- ),
- ],
- ),
- (
- "JstfMax",
- [
- (
- "uint16",
- "LookupCount",
- None,
- None,
- "Number of lookup Indices for this modification",
- ),
- (
- "Offset",
- "Lookup",
- "LookupCount",
- 0,
- "Array of offsets to GPOS-type lookup tables-from beginning of JstfMax table-in design order",
- ),
- ],
- ),
- #
- # STAT
- #
- (
- "STAT",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the table-initially set to 0x00010000, currently 0x00010002.",
- ),
- (
- "uint16",
- "DesignAxisRecordSize",
- None,
- None,
- "Size in bytes of each design axis record",
- ),
- ("uint16", "DesignAxisCount", None, None, "Number of design axis records"),
- (
- "LOffsetTo(AxisRecordArray)",
- "DesignAxisRecord",
- None,
- None,
- "Offset in bytes from the beginning of the STAT table to the start of the design axes array",
- ),
- ("uint16", "AxisValueCount", None, None, "Number of axis value tables"),
- (
- "LOffsetTo(AxisValueArray)",
- "AxisValueArray",
- None,
- None,
- "Offset in bytes from the beginning of the STAT table to the start of the axes value offset array",
- ),
- (
- "NameID",
- "ElidedFallbackNameID",
- None,
- "Version >= 0x00010001",
- "NameID to use when all style attributes are elided.",
- ),
- ],
- ),
- (
- "AxisRecordArray",
- [
- ("AxisRecord", "Axis", "DesignAxisCount", 0, "Axis records"),
- ],
- ),
- (
- "AxisRecord",
- [
- (
- "Tag",
- "AxisTag",
- None,
- None,
- "A tag identifying the axis of design variation",
- ),
- (
- "NameID",
- "AxisNameID",
- None,
- None,
- 'The name ID for entries in the "name" table that provide a display string for this axis',
- ),
- (
- "uint16",
- "AxisOrdering",
- None,
- None,
- "A value that applications can use to determine primary sorting of face names, or for ordering of descriptors when composing family or face names",
- ),
- (
- "uint8",
- "MoreBytes",
- "DesignAxisRecordSize",
- -8,
- "Extra bytes. Set to empty array.",
- ),
- ],
- ),
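- # Note on AxisRecord.MoreBytes above: the fixed fields AxisTag (4 bytes),
- # AxisNameID (2 bytes) and AxisOrdering (2 bytes) occupy 8 bytes, so the -8
- # repeat offset makes MoreBytes absorb the remaining DesignAxisRecordSize - 8
- # trailing bytes of a longer design axis record, if any.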
- (
- "AxisValueArray",
- [
- ("Offset", "AxisValue", "AxisValueCount", 0, "Axis values"),
- ],
- ),
- (
- "AxisValueFormat1",
- [
- ("uint16", "Format", None, None, "Format, = 1"),
- (
- "uint16",
- "AxisIndex",
- None,
- None,
- "Index into the axis record array identifying the axis of design variation to which the axis value record applies.",
- ),
- ("STATFlags", "Flags", None, None, "Flags."),
- ("NameID", "ValueNameID", None, None, ""),
- ("Fixed", "Value", None, None, ""),
- ],
- ),
- (
- "AxisValueFormat2",
- [
- ("uint16", "Format", None, None, "Format, = 2"),
- (
- "uint16",
- "AxisIndex",
- None,
- None,
- "Index into the axis record array identifying the axis of design variation to which the axis value record applies.",
- ),
- ("STATFlags", "Flags", None, None, "Flags."),
- ("NameID", "ValueNameID", None, None, ""),
- ("Fixed", "NominalValue", None, None, ""),
- ("Fixed", "RangeMinValue", None, None, ""),
- ("Fixed", "RangeMaxValue", None, None, ""),
- ],
- ),
- (
- "AxisValueFormat3",
- [
- ("uint16", "Format", None, None, "Format, = 3"),
- (
- "uint16",
- "AxisIndex",
- None,
- None,
- "Index into the axis record array identifying the axis of design variation to which the axis value record applies.",
- ),
- ("STATFlags", "Flags", None, None, "Flags."),
- ("NameID", "ValueNameID", None, None, ""),
- ("Fixed", "Value", None, None, ""),
- ("Fixed", "LinkedValue", None, None, ""),
- ],
- ),
- (
- "AxisValueFormat4",
- [
- ("uint16", "Format", None, None, "Format, = 4"),
- (
- "uint16",
- "AxisCount",
- None,
- None,
- "The total number of axes contributing to this axis-values combination.",
- ),
- ("STATFlags", "Flags", None, None, "Flags."),
- ("NameID", "ValueNameID", None, None, ""),
- (
- "struct",
- "AxisValueRecord",
- "AxisCount",
- 0,
- "Array of AxisValue records that provide the combination of axis values, one for each contributing axis. ",
- ),
- ],
- ),
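- # Note: Format 4 axis value tables were introduced in STAT version 1.2
- # (0x00010002); fonts that use them need at least that table version.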
- (
- "AxisValueRecord",
- [
- (
- "uint16",
- "AxisIndex",
- None,
- None,
- "Index into the axis record array identifying the axis of design variation to which the axis value record applies.",
- ),
- ("Fixed", "Value", None, None, "A numeric value for this attribute value."),
- ],
- ),
- #
- # Variation fonts
- #
- # GSUB/GPOS FeatureVariations
- (
- "FeatureVariations",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the table-initially set to 0x00010000",
- ),
- (
- "uint32",
- "FeatureVariationCount",
- None,
- None,
- "Number of records in the FeatureVariationRecord array",
- ),
- (
- "struct",
- "FeatureVariationRecord",
- "FeatureVariationCount",
- 0,
- "Array of FeatureVariationRecord",
- ),
- ],
- ),
- (
- "FeatureVariationRecord",
- [
- (
- "LOffset",
- "ConditionSet",
- None,
- None,
- "Offset to a ConditionSet table, from beginning of the FeatureVariations table.",
- ),
- (
- "LOffset",
- "FeatureTableSubstitution",
- None,
- None,
- "Offset to a FeatureTableSubstitution table, from beginning of the FeatureVariations table",
- ),
- ],
- ),
- (
- "ConditionSet",
- [
- (
- "uint16",
- "ConditionCount",
- None,
- None,
- "Number of condition tables in the ConditionTable array",
- ),
- (
- "LOffset",
- "ConditionTable",
- "ConditionCount",
- 0,
- "Array of condition tables.",
- ),
- ],
- ),
- (
- "ConditionTableFormat1",
- [
- ("uint16", "Format", None, None, "Format, = 1"),
- (
- "uint16",
- "AxisIndex",
- None,
- None,
- "Index for the variation axis within the fvar table, base 0.",
- ),
- (
- "F2Dot14",
- "FilterRangeMinValue",
- None,
- None,
- "Minimum normalized axis value of the font variation instances that satisfy this condition.",
- ),
- (
- "F2Dot14",
- "FilterRangeMaxValue",
- None,
- None,
- "Maximum value that satisfies this condition.",
- ),
- ],
- ),
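- # FilterRangeMinValue/FilterRangeMaxValue are normalized axis coordinates
- # (F2Dot14), so they lie in the range -1.0 to 1.0, with 0 at the axis default.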
- (
- "FeatureTableSubstitution",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the table-initially set to 0x00010000",
- ),
- (
- "uint16",
- "SubstitutionCount",
- None,
- None,
- "Number of records in the FeatureVariationRecords array",
- ),
- (
- "FeatureTableSubstitutionRecord",
- "SubstitutionRecord",
- "SubstitutionCount",
- 0,
- "Array of FeatureTableSubstitutionRecord",
- ),
- ],
- ),
- (
- "FeatureTableSubstitutionRecord",
- [
- ("uint16", "FeatureIndex", None, None, "The feature table index to match."),
- (
- "LOffset",
- "Feature",
- None,
- None,
- "Offset to an alternate feature table, from start of the FeatureTableSubstitution table.",
- ),
- ],
- ),
- # VariationStore
- (
- "VarRegionAxis",
- [
- ("F2Dot14", "StartCoord", None, None, ""),
- ("F2Dot14", "PeakCoord", None, None, ""),
- ("F2Dot14", "EndCoord", None, None, ""),
- ],
- ),
- (
- "VarRegion",
- [
- ("struct", "VarRegionAxis", "RegionAxisCount", 0, ""),
- ],
- ),
- (
- "VarRegionList",
- [
- ("uint16", "RegionAxisCount", None, None, ""),
- ("uint16", "RegionCount", None, None, ""),
- ("VarRegion", "Region", "RegionCount", 0, ""),
- ],
- ),
- (
- "VarData",
- [
- ("uint16", "ItemCount", None, None, ""),
- ("uint16", "NumShorts", None, None, ""),
- ("uint16", "VarRegionCount", None, None, ""),
- ("uint16", "VarRegionIndex", "VarRegionCount", 0, ""),
- ("VarDataValue", "Item", "ItemCount", 0, ""),
- ],
- ),
- (
- "VarStore",
- [
- ("uint16", "Format", None, None, "Set to 1."),
- ("LOffset", "VarRegionList", None, None, ""),
- ("uint16", "VarDataCount", None, None, ""),
- ("LOffset", "VarData", "VarDataCount", 0, ""),
- ],
- ),
- # Variation helpers
- (
- "VarIdxMap",
- [
- ("uint16", "EntryFormat", None, None, ""), # Automatically computed
- ("uint16", "MappingCount", None, None, ""), # Automatically computed
- ("VarIdxMapValue", "mapping", "", 0, "Array of compressed data"),
- ],
- ),
- (
- "DeltaSetIndexMapFormat0",
- [
- ("uint8", "Format", None, None, "Format of the DeltaSetIndexMap = 0"),
- ("uint8", "EntryFormat", None, None, ""), # Automatically computed
- ("uint16", "MappingCount", None, None, ""), # Automatically computed
- ("VarIdxMapValue", "mapping", "", 0, "Array of compressed data"),
- ],
- ),
- (
- "DeltaSetIndexMapFormat1",
- [
- ("uint8", "Format", None, None, "Format of the DeltaSetIndexMap = 1"),
- ("uint8", "EntryFormat", None, None, ""), # Automatically computed
- ("uint32", "MappingCount", None, None, ""), # Automatically computed
- ("VarIdxMapValue", "mapping", "", 0, "Array of compressed data"),
- ],
- ),
- # Glyph advance variations
- (
- "HVAR",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the HVAR table-initially = 0x00010000",
- ),
- ("LOffset", "VarStore", None, None, ""),
- ("LOffsetTo(VarIdxMap)", "AdvWidthMap", None, None, ""),
- ("LOffsetTo(VarIdxMap)", "LsbMap", None, None, ""),
- ("LOffsetTo(VarIdxMap)", "RsbMap", None, None, ""),
- ],
- ),
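- # The three VarIdxMap offsets in HVAR map glyph IDs to delta-set indices for
- # advance widths, left side bearings and right side bearings, respectively.
- # Per the OpenType spec each of these offsets may be NULL; when AdvWidthMap is
- # NULL, glyph IDs are used directly as implicit delta-set indices.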
- (
- "VVAR",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the VVAR table-initially = 0x00010000",
- ),
- ("LOffset", "VarStore", None, None, ""),
- ("LOffsetTo(VarIdxMap)", "AdvHeightMap", None, None, ""),
- ("LOffsetTo(VarIdxMap)", "TsbMap", None, None, ""),
- ("LOffsetTo(VarIdxMap)", "BsbMap", None, None, ""),
- ("LOffsetTo(VarIdxMap)", "VOrgMap", None, None, "Vertical origin mapping."),
- ],
- ),
- # Font-wide metrics variations
- (
- "MetricsValueRecord",
- [
- ("Tag", "ValueTag", None, None, "4-byte font-wide measure identifier"),
- ("uint32", "VarIdx", None, None, "Combined outer-inner variation index"),
- (
- "uint8",
- "MoreBytes",
- "ValueRecordSize",
- -8,
- "Extra bytes. Set to empty array.",
- ),
- ],
- ),
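- # Note on MetricsValueRecord.MoreBytes above: ValueTag (4 bytes) plus VarIdx
- # (4 bytes) account for 8 bytes, so the -8 repeat offset makes MoreBytes hold
- # the ValueRecordSize - 8 remaining bytes, if any.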
- (
- "MVAR",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the MVAR table-initially = 0x00010000",
- ),
- ("uint16", "Reserved", None, None, "Set to 0"),
- ("uint16", "ValueRecordSize", None, None, ""),
- ("uint16", "ValueRecordCount", None, None, ""),
- ("Offset", "VarStore", None, None, ""),
- ("MetricsValueRecord", "ValueRecord", "ValueRecordCount", 0, ""),
- ],
- ),
- #
- # math
- #
- (
- "MATH",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the MATH table-initially set to 0x00010000.",
- ),
- (
- "Offset",
- "MathConstants",
- None,
- None,
- "Offset to MathConstants table - from the beginning of MATH table.",
- ),
- (
- "Offset",
- "MathGlyphInfo",
- None,
- None,
- "Offset to MathGlyphInfo table - from the beginning of MATH table.",
- ),
- (
- "Offset",
- "MathVariants",
- None,
- None,
- "Offset to MathVariants table - from the beginning of MATH table.",
- ),
- ],
- ),
- (
- "MathValueRecord",
- [
- ("int16", "Value", None, None, "The X or Y value in design units."),
- (
- "Offset",
- "DeviceTable",
- None,
- None,
- "Offset to the device table - from the beginning of parent table. May be NULL. Suggested format for device table is 1.",
- ),
- ],
- ),
- (
- "MathConstants",
- [
- (
- "int16",
- "ScriptPercentScaleDown",
- None,
- None,
- "Percentage of scaling down for script level 1. Suggested value: 80%.",
- ),
- (
- "int16",
- "ScriptScriptPercentScaleDown",
- None,
- None,
- "Percentage of scaling down for script level 2 (ScriptScript). Suggested value: 60%.",
- ),
- (
- "uint16",
- "DelimitedSubFormulaMinHeight",
- None,
- None,
- "Minimum height required for a delimited expression to be treated as a subformula. Suggested value: normal line height x1.5.",
- ),
- (
- "uint16",
- "DisplayOperatorMinHeight",
- None,
- None,
- "Minimum height of n-ary operators (such as integral and summation) for formulas in display mode.",
- ),
- (
- "MathValueRecord",
- "MathLeading",
- None,
- None,
- "White space to be left between math formulas to ensure proper line spacing. For example, for applications that treat line gap as a part of line ascender, formulas with ink going above (os2.sTypoAscender + os2.sTypoLineGap - MathLeading) or with ink going below os2.sTypoDescender will result in increasing line height.",
- ),
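- # Illustrative example for MathLeading (hypothetical numbers): with
- # os2.sTypoAscender = 800, os2.sTypoLineGap = 200 and MathLeading = 150,
- # ink reaching above 800 + 200 - 150 = 850 design units would increase
- # the line height.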
- ("MathValueRecord", "AxisHeight", None, None, "Axis height of the font."),
- (
- "MathValueRecord",
- "AccentBaseHeight",
- None,
- None,
- "Maximum (ink) height of accent base that does not require raising the accents. Suggested: x-height of the font (os2.sxHeight) plus any possible overshots.",
- ),
- (
- "MathValueRecord",
- "FlattenedAccentBaseHeight",
- None,
- None,
- "Maximum (ink) height of accent base that does not require flattening the accents. Suggested: cap height of the font (os2.sCapHeight).",
- ),
- (
- "MathValueRecord",
- "SubscriptShiftDown",
- None,
- None,
- "The standard shift down applied to subscript elements. Positive for moving in the downward direction. Suggested: os2.ySubscriptYOffset.",
- ),
- (
- "MathValueRecord",
- "SubscriptTopMax",
- None,
- None,
- "Maximum allowed height of the (ink) top of subscripts that does not require moving subscripts further down. Suggested: 4/5 x-height.",
- ),
- (
- "MathValueRecord",
- "SubscriptBaselineDropMin",
- None,
- None,
- "Minimum allowed drop of the baseline of subscripts relative to the (ink) bottom of the base. Checked for bases that are treated as a box or extended shape. Positive for subscript baseline dropped below the base bottom.",
- ),
- (
- "MathValueRecord",
- "SuperscriptShiftUp",
- None,
- None,
- "Standard shift up applied to superscript elements. Suggested: os2.ySuperscriptYOffset.",
- ),
- (
- "MathValueRecord",
- "SuperscriptShiftUpCramped",
- None,
- None,
- "Standard shift of superscripts relative to the base, in cramped style.",
- ),
- (
- "MathValueRecord",
- "SuperscriptBottomMin",
- None,
- None,
- "Minimum allowed height of the (ink) bottom of superscripts that does not require moving subscripts further up. Suggested: 1/4 x-height.",
- ),
- (
- "MathValueRecord",
- "SuperscriptBaselineDropMax",
- None,
- None,
- "Maximum allowed drop of the baseline of superscripts relative to the (ink) top of the base. Checked for bases that are treated as a box or extended shape. Positive for superscript baseline below the base top.",
- ),
- (
- "MathValueRecord",
- "SubSuperscriptGapMin",
- None,
- None,
- "Minimum gap between the superscript and subscript ink. Suggested: 4x default rule thickness.",
- ),
- (
- "MathValueRecord",
- "SuperscriptBottomMaxWithSubscript",
- None,
- None,
- "The maximum level to which the (ink) bottom of superscript can be pushed to increase the gap between superscript and subscript, before subscript starts being moved down. Suggested: 4/5 x-height.",
- ),
- (
- "MathValueRecord",
- "SpaceAfterScript",
- None,
- None,
- "Extra white space to be added after each subscript and superscript. Suggested: 0.5pt for a 12 pt font.",
- ),
- (
- "MathValueRecord",
- "UpperLimitGapMin",
- None,
- None,
- "Minimum gap between the (ink) bottom of the upper limit, and the (ink) top of the base operator.",
- ),
- (
- "MathValueRecord",
- "UpperLimitBaselineRiseMin",
- None,
- None,
- "Minimum distance between baseline of upper limit and (ink) top of the base operator.",
- ),
- (
- "MathValueRecord",
- "LowerLimitGapMin",
- None,
- None,
- "Minimum gap between (ink) top of the lower limit, and (ink) bottom of the base operator.",
- ),
- (
- "MathValueRecord",
- "LowerLimitBaselineDropMin",
- None,
- None,
- "Minimum distance between baseline of the lower limit and (ink) bottom of the base operator.",
- ),
- (
- "MathValueRecord",
- "StackTopShiftUp",
- None,
- None,
- "Standard shift up applied to the top element of a stack.",
- ),
- (
- "MathValueRecord",
- "StackTopDisplayStyleShiftUp",
- None,
- None,
- "Standard shift up applied to the top element of a stack in display style.",
- ),
- (
- "MathValueRecord",
- "StackBottomShiftDown",
- None,
- None,
- "Standard shift down applied to the bottom element of a stack. Positive for moving in the downward direction.",
- ),
- (
- "MathValueRecord",
- "StackBottomDisplayStyleShiftDown",
- None,
- None,
- "Standard shift down applied to the bottom element of a stack in display style. Positive for moving in the downward direction.",
- ),
- (
- "MathValueRecord",
- "StackGapMin",
- None,
- None,
- "Minimum gap between (ink) bottom of the top element of a stack, and the (ink) top of the bottom element. Suggested: 3x default rule thickness.",
- ),
- (
- "MathValueRecord",
- "StackDisplayStyleGapMin",
- None,
- None,
- "Minimum gap between (ink) bottom of the top element of a stack, and the (ink) top of the bottom element in display style. Suggested: 7x default rule thickness.",
- ),
- (
- "MathValueRecord",
- "StretchStackTopShiftUp",
- None,
- None,
- "Standard shift up applied to the top element of the stretch stack.",
- ),
- (
- "MathValueRecord",
- "StretchStackBottomShiftDown",
- None,
- None,
- "Standard shift down applied to the bottom element of the stretch stack. Positive for moving in the downward direction.",
- ),
- (
- "MathValueRecord",
- "StretchStackGapAboveMin",
- None,
- None,
- "Minimum gap between the ink of the stretched element, and the (ink) bottom of the element above. Suggested: UpperLimitGapMin",
- ),
- (
- "MathValueRecord",
- "StretchStackGapBelowMin",
- None,
- None,
- "Minimum gap between the ink of the stretched element, and the (ink) top of the element below. Suggested: LowerLimitGapMin.",
- ),
- (
- "MathValueRecord",
- "FractionNumeratorShiftUp",
- None,
- None,
- "Standard shift up applied to the numerator.",
- ),
- (
- "MathValueRecord",
- "FractionNumeratorDisplayStyleShiftUp",
- None,
- None,
- "Standard shift up applied to the numerator in display style. Suggested: StackTopDisplayStyleShiftUp.",
- ),
- (
- "MathValueRecord",
- "FractionDenominatorShiftDown",
- None,
- None,
- "Standard shift down applied to the denominator. Positive for moving in the downward direction.",
- ),
- (
- "MathValueRecord",
- "FractionDenominatorDisplayStyleShiftDown",
- None,
- None,
- "Standard shift down applied to the denominator in display style. Positive for moving in the downward direction. Suggested: StackBottomDisplayStyleShiftDown.",
- ),
- (
- "MathValueRecord",
- "FractionNumeratorGapMin",
- None,
- None,
- "Minimum tolerated gap between the (ink) bottom of the numerator and the ink of the fraction bar. Suggested: default rule thickness",
- ),
- (
- "MathValueRecord",
- "FractionNumDisplayStyleGapMin",
- None,
- None,
- "Minimum tolerated gap between the (ink) bottom of the numerator and the ink of the fraction bar in display style. Suggested: 3x default rule thickness.",
- ),
- (
- "MathValueRecord",
- "FractionRuleThickness",
- None,
- None,
- "Thickness of the fraction bar. Suggested: default rule thickness.",
- ),
- (
- "MathValueRecord",
- "FractionDenominatorGapMin",
- None,
- None,
- "Minimum tolerated gap between the (ink) top of the denominator and the ink of the fraction bar. Suggested: default rule thickness",
- ),
- (
- "MathValueRecord",
- "FractionDenomDisplayStyleGapMin",
- None,
- None,
- "Minimum tolerated gap between the (ink) top of the denominator and the ink of the fraction bar in display style. Suggested: 3x default rule thickness.",
- ),
- (
- "MathValueRecord",
- "SkewedFractionHorizontalGap",
- None,
- None,
- "Horizontal distance between the top and bottom elements of a skewed fraction.",
- ),
- (
- "MathValueRecord",
- "SkewedFractionVerticalGap",
- None,
- None,
- "Vertical distance between the ink of the top and bottom elements of a skewed fraction.",
- ),
- (
- "MathValueRecord",
- "OverbarVerticalGap",
- None,
- None,
- "Distance between the overbar and the (ink) top of he base. Suggested: 3x default rule thickness.",
- ),
- (
- "MathValueRecord",
- "OverbarRuleThickness",
- None,
- None,
- "Thickness of overbar. Suggested: default rule thickness.",
- ),
- (
- "MathValueRecord",
- "OverbarExtraAscender",
- None,
- None,
- "Extra white space reserved above the overbar. Suggested: default rule thickness.",
- ),
- (
- "MathValueRecord",
- "UnderbarVerticalGap",
- None,
- None,
- "Distance between underbar and (ink) bottom of the base. Suggested: 3x default rule thickness.",
- ),
- (
- "MathValueRecord",
- "UnderbarRuleThickness",
- None,
- None,
- "Thickness of underbar. Suggested: default rule thickness.",
- ),
- (
- "MathValueRecord",
- "UnderbarExtraDescender",
- None,
- None,
- "Extra white space reserved below the underbar. Always positive. Suggested: default rule thickness.",
- ),
- (
- "MathValueRecord",
- "RadicalVerticalGap",
- None,
- None,
- "Space between the (ink) top of the expression and the bar over it. Suggested: 1 1/4 default rule thickness.",
- ),
- (
- "MathValueRecord",
- "RadicalDisplayStyleVerticalGap",
- None,
- None,
- "Space between the (ink) top of the expression and the bar over it. Suggested: default rule thickness + 1/4 x-height.",
- ),
- (
- "MathValueRecord",
- "RadicalRuleThickness",
- None,
- None,
- "Thickness of the radical rule. This is the thickness of the rule in designed or constructed radical signs. Suggested: default rule thickness.",
- ),
- (
- "MathValueRecord",
- "RadicalExtraAscender",
- None,
- None,
- "Extra white space reserved above the radical. Suggested: RadicalRuleThickness.",
- ),
- (
- "MathValueRecord",
- "RadicalKernBeforeDegree",
- None,
- None,
- "Extra horizontal kern before the degree of a radical, if such is present. Suggested: 5/18 of em.",
- ),
- (
- "MathValueRecord",
- "RadicalKernAfterDegree",
- None,
- None,
- "Negative kern after the degree of a radical, if such is present. Suggested: 10/18 of em.",
- ),
- (
- "uint16",
- "RadicalDegreeBottomRaisePercent",
- None,
- None,
- "Height of the bottom of the radical degree, if such is present, in proportion to the ascender of the radical sign. Suggested: 60%.",
- ),
- ],
- ),
- (
- "MathGlyphInfo",
- [
- (
- "Offset",
- "MathItalicsCorrectionInfo",
- None,
- None,
- "Offset to MathItalicsCorrectionInfo table - from the beginning of MathGlyphInfo table.",
- ),
- (
- "Offset",
- "MathTopAccentAttachment",
- None,
- None,
- "Offset to MathTopAccentAttachment table - from the beginning of MathGlyphInfo table.",
- ),
- (
- "Offset",
- "ExtendedShapeCoverage",
- None,
- None,
- "Offset to coverage table for Extended Shape glyphs - from the beginning of MathGlyphInfo table. When the left or right glyph of a box is an extended shape variant, the (ink) box (and not the default position defined by values in MathConstants table) should be used for vertical positioning purposes. May be NULL.",
- ),
- (
- "Offset",
- "MathKernInfo",
- None,
- None,
- "Offset to MathKernInfo table - from the beginning of MathGlyphInfo table.",
- ),
- ],
- ),
- (
- "MathItalicsCorrectionInfo",
- [
- (
- "Offset",
- "Coverage",
- None,
- None,
- "Offset to Coverage table - from the beginning of MathItalicsCorrectionInfo table.",
- ),
- (
- "uint16",
- "ItalicsCorrectionCount",
- None,
- None,
- "Number of italics correction values. Should coincide with the number of covered glyphs.",
- ),
- (
- "MathValueRecord",
- "ItalicsCorrection",
- "ItalicsCorrectionCount",
- 0,
- "Array of MathValueRecords defining italics correction values for each covered glyph.",
- ),
- ],
- ),
- (
- "MathTopAccentAttachment",
- [
- (
- "Offset",
- "TopAccentCoverage",
- None,
- None,
- "Offset to Coverage table - from the beginning of MathTopAccentAttachment table.",
- ),
- (
- "uint16",
- "TopAccentAttachmentCount",
- None,
- None,
- "Number of top accent attachment point values. Should coincide with the number of covered glyphs",
- ),
- (
- "MathValueRecord",
- "TopAccentAttachment",
- "TopAccentAttachmentCount",
- 0,
- "Array of MathValueRecords defining top accent attachment points for each covered glyph",
- ),
- ],
- ),
- (
- "MathKernInfo",
- [
- (
- "Offset",
- "MathKernCoverage",
- None,
- None,
- "Offset to Coverage table - from the beginning of the MathKernInfo table.",
- ),
- ("uint16", "MathKernCount", None, None, "Number of MathKernInfoRecords."),
- (
- "MathKernInfoRecord",
- "MathKernInfoRecords",
- "MathKernCount",
- 0,
- "Array of MathKernInfoRecords, per-glyph information for mathematical positioning of subscripts and superscripts.",
- ),
- ],
- ),
- (
- "MathKernInfoRecord",
- [
- (
- "Offset",
- "TopRightMathKern",
- None,
- None,
- "Offset to MathKern table for top right corner - from the beginning of MathKernInfo table. May be NULL.",
- ),
- (
- "Offset",
- "TopLeftMathKern",
- None,
- None,
- "Offset to MathKern table for the top left corner - from the beginning of MathKernInfo table. May be NULL.",
- ),
- (
- "Offset",
- "BottomRightMathKern",
- None,
- None,
- "Offset to MathKern table for bottom right corner - from the beginning of MathKernInfo table. May be NULL.",
- ),
- (
- "Offset",
- "BottomLeftMathKern",
- None,
- None,
- "Offset to MathKern table for bottom left corner - from the beginning of MathKernInfo table. May be NULL.",
- ),
- ],
- ),
- (
- "MathKern",
- [
- (
- "uint16",
- "HeightCount",
- None,
- None,
- "Number of heights on which the kern value changes.",
- ),
- (
- "MathValueRecord",
- "CorrectionHeight",
- "HeightCount",
- 0,
- "Array of correction heights at which the kern value changes. Sorted by the height value in design units.",
- ),
- (
- "MathValueRecord",
- "KernValue",
- "HeightCount",
- 1,
- "Array of kern values corresponding to heights. First value is the kern value for all heights less or equal than the first height in this table.Last value is the value to be applied for all heights greater than the last height in this table. Negative values are interpreted as move glyphs closer to each other.",
- ),
- ],
- ),
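- # Note on MathKern.KernValue above: the repeat offset of 1 makes the array
- # hold HeightCount + 1 entries, one more kern value than correction heights,
- # covering the ranges below the first height and above the last one.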
- (
- "MathVariants",
- [
- (
- "uint16",
- "MinConnectorOverlap",
- None,
- None,
- "Minimum overlap of connecting glyphs during glyph construction, in design units.",
- ),
- (
- "Offset",
- "VertGlyphCoverage",
- None,
- None,
- "Offset to Coverage table - from the beginning of MathVariants table.",
- ),
- (
- "Offset",
- "HorizGlyphCoverage",
- None,
- None,
- "Offset to Coverage table - from the beginning of MathVariants table.",
- ),
- (
- "uint16",
- "VertGlyphCount",
- None,
- None,
- "Number of glyphs for which information is provided for vertically growing variants.",
- ),
- (
- "uint16",
- "HorizGlyphCount",
- None,
- None,
- "Number of glyphs for which information is provided for horizontally growing variants.",
- ),
- (
- "Offset",
- "VertGlyphConstruction",
- "VertGlyphCount",
- 0,
- "Array of offsets to MathGlyphConstruction tables - from the beginning of the MathVariants table, for shapes growing in vertical direction.",
- ),
- (
- "Offset",
- "HorizGlyphConstruction",
- "HorizGlyphCount",
- 0,
- "Array of offsets to MathGlyphConstruction tables - from the beginning of the MathVariants table, for shapes growing in horizontal direction.",
- ),
- ],
- ),
- (
- "MathGlyphConstruction",
- [
- (
- "Offset",
- "GlyphAssembly",
- None,
- None,
- "Offset to GlyphAssembly table for this shape - from the beginning of MathGlyphConstruction table. May be NULL",
- ),
- (
- "uint16",
- "VariantCount",
- None,
- None,
- "Count of glyph growing variants for this glyph.",
- ),
- (
- "MathGlyphVariantRecord",
- "MathGlyphVariantRecord",
- "VariantCount",
- 0,
- "MathGlyphVariantRecords for alternative variants of the glyphs.",
- ),
- ],
- ),
- (
- "MathGlyphVariantRecord",
- [
- ("GlyphID", "VariantGlyph", None, None, "Glyph ID for the variant."),
- (
- "uint16",
- "AdvanceMeasurement",
- None,
- None,
- "Advance width/height, in design units, of the variant, in the direction of requested glyph extension.",
- ),
- ],
- ),
- (
- "GlyphAssembly",
- [
- (
- "MathValueRecord",
- "ItalicsCorrection",
- None,
- None,
- "Italics correction of this GlyphAssembly. Should not depend on the assembly size.",
- ),
- ("uint16", "PartCount", None, None, "Number of parts in this assembly."),
- (
- "GlyphPartRecord",
- "PartRecords",
- "PartCount",
- 0,
- "Array of part records, from left to right and bottom to top.",
- ),
- ],
- ),
- (
- "GlyphPartRecord",
- [
- ("GlyphID", "glyph", None, None, "Glyph ID for the part."),
- (
- "uint16",
- "StartConnectorLength",
- None,
- None,
- "Advance width/ height of the straight bar connector material, in design units, is at the beginning of the glyph, in the direction of the extension.",
- ),
- (
- "uint16",
- "EndConnectorLength",
- None,
- None,
- "Advance width/ height of the straight bar connector material, in design units, is at the end of the glyph, in the direction of the extension.",
- ),
- (
- "uint16",
- "FullAdvance",
- None,
- None,
- "Full advance width/height for this part, in the direction of the extension. In design units.",
- ),
- (
- "uint16",
- "PartFlags",
- None,
- None,
- "Part qualifiers. PartFlags enumeration currently uses only one bit: 0x0001 fExtender: If set, the part can be skipped or repeated. 0xFFFE Reserved",
- ),
- ],
- ),
- ##
- ## Apple Advanced Typography (AAT) tables
- ##
- (
- "AATLookupSegment",
- [
- ("uint16", "lastGlyph", None, None, "Last glyph index in this segment."),
- ("uint16", "firstGlyph", None, None, "First glyph index in this segment."),
- (
- "uint16",
- "value",
- None,
- None,
- "A 16-bit offset from the start of the table to the data.",
- ),
- ],
- ),
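- # Note: lastGlyph is listed before firstGlyph above because that is the
- # on-disk field order of segment records in the AAT lookup table formats
- # that use segments.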
- #
- # ankr
- #
- (
- "ankr",
- [
- ("struct", "AnchorPoints", None, None, "Anchor points table."),
- ],
- ),
- (
- "AnchorPointsFormat0",
- [
- ("uint16", "Format", None, None, "Format of the anchor points table, = 0."),
- ("uint16", "Flags", None, None, "Flags. Currenty unused, set to zero."),
- (
- "AATLookupWithDataOffset(AnchorGlyphData)",
- "Anchors",
- None,
- None,
- "Table of with anchor overrides for each glyph.",
- ),
- ],
- ),
- (
- "AnchorGlyphData",
- [
- (
- "uint32",
- "AnchorPointCount",
- None,
- None,
- "Number of anchor points for this glyph.",
- ),
- (
- "struct",
- "AnchorPoint",
- "AnchorPointCount",
- 0,
- "Individual anchor points.",
- ),
- ],
- ),
- (
- "AnchorPoint",
- [
- ("int16", "XCoordinate", None, None, "X coordinate of this anchor point."),
- ("int16", "YCoordinate", None, None, "Y coordinate of this anchor point."),
- ],
- ),
- #
- # bsln
- #
- (
- "bsln",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version number of the AAT baseline table (0x00010000 for the initial version).",
- ),
- ("struct", "Baseline", None, None, "Baseline table."),
- ],
- ),
- (
- "BaselineFormat0",
- [
- ("uint16", "Format", None, None, "Format of the baseline table, = 0."),
- (
- "uint16",
- "DefaultBaseline",
- None,
- None,
- "Default baseline value for all glyphs. This value can be from 0 through 31.",
- ),
- (
- "uint16",
- "Delta",
- 32,
- 0,
- "These are the FUnit distance deltas from the font’s natural baseline to the other baselines used in the font. A total of 32 deltas must be assigned.",
- ),
- ],
- ),
- (
- "BaselineFormat1",
- [
- ("uint16", "Format", None, None, "Format of the baseline table, = 1."),
- (
- "uint16",
- "DefaultBaseline",
- None,
- None,
- "Default baseline value for all glyphs. This value can be from 0 through 31.",
- ),
- (
- "uint16",
- "Delta",
- 32,
- 0,
- "These are the FUnit distance deltas from the font’s natural baseline to the other baselines used in the font. A total of 32 deltas must be assigned.",
- ),
- (
- "AATLookup(uint16)",
- "BaselineValues",
- None,
- None,
- "Lookup table that maps glyphs to their baseline values.",
- ),
- ],
- ),
- (
- "BaselineFormat2",
- [
- ("uint16", "Format", None, None, "Format of the baseline table, = 1."),
- (
- "uint16",
- "DefaultBaseline",
- None,
- None,
- "Default baseline value for all glyphs. This value can be from 0 through 31.",
- ),
- (
- "GlyphID",
- "StandardGlyph",
- None,
- None,
- "Glyph index of the glyph in this font to be used to set the baseline values. This glyph must contain a set of control points (whose numbers are contained in the following field) that determines baseline distances.",
- ),
- (
- "uint16",
- "ControlPoint",
- 32,
- 0,
- "Array of 32 control point numbers, associated with the standard glyph. A value of 0xFFFF means there is no corresponding control point in the standard glyph.",
- ),
- ],
- ),
- (
- "BaselineFormat3",
- [
- ("uint16", "Format", None, None, "Format of the baseline table, = 1."),
- (
- "uint16",
- "DefaultBaseline",
- None,
- None,
- "Default baseline value for all glyphs. This value can be from 0 through 31.",
- ),
- (
- "GlyphID",
- "StandardGlyph",
- None,
- None,
- "Glyph index of the glyph in this font to be used to set the baseline values. This glyph must contain a set of control points (whose numbers are contained in the following field) that determines baseline distances.",
- ),
- (
- "uint16",
- "ControlPoint",
- 32,
- 0,
- "Array of 32 control point numbers, associated with the standard glyph. A value of 0xFFFF means there is no corresponding control point in the standard glyph.",
- ),
- (
- "AATLookup(uint16)",
- "BaselineValues",
- None,
- None,
- "Lookup table that maps glyphs to their baseline values.",
- ),
- ],
- ),
- #
- # cidg
- #
- (
- "cidg",
- [
- ("struct", "CIDGlyphMapping", None, None, "CID-to-glyph mapping table."),
- ],
- ),
- (
- "CIDGlyphMappingFormat0",
- [
- (
- "uint16",
- "Format",
- None,
- None,
- "Format of the CID-to-glyph mapping table, = 0.",
- ),
- ("uint16", "DataFormat", None, None, "Currenty unused, set to zero."),
- ("uint32", "StructLength", None, None, "Size of the table in bytes."),
- ("uint16", "Registry", None, None, "The registry ID."),
- (
- "char64",
- "RegistryName",
- None,
- None,
- "The registry name in ASCII; unused bytes should be set to 0.",
- ),
- ("uint16", "Order", None, None, "The order ID."),
- (
- "char64",
- "OrderName",
- None,
- None,
- "The order name in ASCII; unused bytes should be set to 0.",
- ),
- ("uint16", "SupplementVersion", None, None, "The supplement version."),
- (
- "CIDGlyphMap",
- "Mapping",
- None,
- None,
- "A mapping from CIDs to the glyphs in the font, starting with CID 0. If a CID from the identified collection has no glyph in the font, 0xFFFF is used",
- ),
- ],
- ),
- #
- # feat
- #
- (
- "feat",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the feat table-initially set to 0x00010000.",
- ),
- ("FeatureNames", "FeatureNames", None, None, "The feature names."),
- ],
- ),
- (
- "FeatureNames",
- [
- (
- "uint16",
- "FeatureNameCount",
- None,
- None,
- "Number of entries in the feature name array.",
- ),
- ("uint16", "Reserved1", None, None, "Reserved (set to zero)."),
- ("uint32", "Reserved2", None, None, "Reserved (set to zero)."),
- (
- "FeatureName",
- "FeatureName",
- "FeatureNameCount",
- 0,
- "The feature name array.",
- ),
- ],
- ),
- (
- "FeatureName",
- [
- ("uint16", "FeatureType", None, None, "Feature type."),
- (
- "uint16",
- "SettingsCount",
- None,
- None,
- "The number of records in the setting name array.",
- ),
- (
- "LOffset",
- "Settings",
- None,
- None,
- "Offset to setting table for this feature.",
- ),
- (
- "uint16",
- "FeatureFlags",
- None,
- None,
- "Single-bit flags associated with the feature type.",
- ),
- (
- "NameID",
- "FeatureNameID",
- None,
- None,
- "The name table index for the feature name.",
- ),
- ],
- ),
- (
- "Settings",
- [
- ("Setting", "Setting", "SettingsCount", 0, "The setting array."),
- ],
- ),
- (
- "Setting",
- [
- ("uint16", "SettingValue", None, None, "The setting."),
- (
- "NameID",
- "SettingNameID",
- None,
- None,
- "The name table index for the setting name.",
- ),
- ],
- ),
- #
- # gcid
- #
- (
- "gcid",
- [
- ("struct", "GlyphCIDMapping", None, None, "Glyph to CID mapping table."),
- ],
- ),
- (
- "GlyphCIDMappingFormat0",
- [
- (
- "uint16",
- "Format",
- None,
- None,
- "Format of the glyph-to-CID mapping table, = 0.",
- ),
- ("uint16", "DataFormat", None, None, "Currenty unused, set to zero."),
- ("uint32", "StructLength", None, None, "Size of the table in bytes."),
- ("uint16", "Registry", None, None, "The registry ID."),
- (
- "char64",
- "RegistryName",
- None,
- None,
- "The registry name in ASCII; unused bytes should be set to 0.",
- ),
- ("uint16", "Order", None, None, "The order ID."),
- (
- "char64",
- "OrderName",
- None,
- None,
- "The order name in ASCII; unused bytes should be set to 0.",
- ),
- ("uint16", "SupplementVersion", None, None, "The supplement version."),
- (
- "GlyphCIDMap",
- "Mapping",
- None,
- None,
- "The CIDs for the glyphs in the font, starting with glyph 0. If a glyph does not correspond to a CID in the identified collection, 0xFFFF is used",
- ),
- ],
- ),
- #
- # lcar
- #
- (
- "lcar",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version number of the ligature caret table (0x00010000 for the initial version).",
- ),
- ("struct", "LigatureCarets", None, None, "Ligature carets table."),
- ],
- ),
- (
- "LigatureCaretsFormat0",
- [
- (
- "uint16",
- "Format",
- None,
- None,
- "Format of the ligature caret table. Format 0 indicates division points are distances in font units, Format 1 indicates division points are indexes of control points.",
- ),
- (
- "AATLookup(LigCaretDistances)",
- "Carets",
- None,
- None,
- "Lookup table associating ligature glyphs with their caret positions, in font unit distances.",
- ),
- ],
- ),
- (
- "LigatureCaretsFormat1",
- [
- (
- "uint16",
- "Format",
- None,
- None,
- "Format of the ligature caret table. Format 0 indicates division points are distances in font units, Format 1 indicates division points are indexes of control points.",
- ),
- (
- "AATLookup(LigCaretPoints)",
- "Carets",
- None,
- None,
- "Lookup table associating ligature glyphs with their caret positions, as control points.",
- ),
- ],
- ),
- (
- "LigCaretDistances",
- [
- ("uint16", "DivsionPointCount", None, None, "Number of division points."),
- (
- "int16",
- "DivisionPoint",
- "DivsionPointCount",
- 0,
- "Distance in font units through which a subdivision is made orthogonally to the baseline.",
- ),
- ],
- ),
- (
- "LigCaretPoints",
- [
- ("uint16", "DivsionPointCount", None, None, "Number of division points."),
- (
- "int16",
- "DivisionPoint",
- "DivsionPointCount",
- 0,
- "The number of the control point through which a subdivision is made orthogonally to the baseline.",
- ),
- ],
- ),
- #
- # mort
- #
- (
- "mort",
- [
- ("Version", "Version", None, None, "Version of the mort table."),
- (
- "uint32",
- "MorphChainCount",
- None,
- None,
- "Number of metamorphosis chains.",
- ),
- (
- "MortChain",
- "MorphChain",
- "MorphChainCount",
- 0,
- "Array of metamorphosis chains.",
- ),
- ],
- ),
- (
- "MortChain",
- [
- (
- "Flags32",
- "DefaultFlags",
- None,
- None,
- "The default specification for subtables.",
- ),
- (
- "uint32",
- "StructLength",
- None,
- None,
- "Total byte count, including this header; must be a multiple of 4.",
- ),
- (
- "uint16",
- "MorphFeatureCount",
- None,
- None,
- "Number of metamorphosis feature entries.",
- ),
- (
- "uint16",
- "MorphSubtableCount",
- None,
- None,
- "The number of subtables in the chain.",
- ),
- (
- "struct",
- "MorphFeature",
- "MorphFeatureCount",
- 0,
- "Array of metamorphosis features.",
- ),
- (
- "MortSubtable",
- "MorphSubtable",
- "MorphSubtableCount",
- 0,
- "Array of metamorphosis subtables.",
- ),
- ],
- ),
- (
- "MortSubtable",
- [
- (
- "uint16",
- "StructLength",
- None,
- None,
- "Total subtable length, including this header.",
- ),
- (
- "uint8",
- "CoverageFlags",
- None,
- None,
- "Most significant byte of coverage flags.",
- ),
- ("uint8", "MorphType", None, None, "Subtable type."),
- (
- "Flags32",
- "SubFeatureFlags",
- None,
- None,
- "The 32-bit mask identifying which subtable this is (the subtable being executed if the AND of this value and the processed defaultFlags is nonzero).",
- ),
- ("SubStruct", "SubStruct", None, None, "SubTable."),
- ],
- ),
- #
- # morx
- #
- (
- "morx",
- [
- ("uint16", "Version", None, None, "Version of the morx table."),
- ("uint16", "Reserved", None, None, "Reserved (set to zero)."),
- (
- "uint32",
- "MorphChainCount",
- None,
- None,
- "Number of extended metamorphosis chains.",
- ),
- (
- "MorxChain",
- "MorphChain",
- "MorphChainCount",
- 0,
- "Array of extended metamorphosis chains.",
- ),
- ],
- ),
- (
- "MorxChain",
- [
- (
- "Flags32",
- "DefaultFlags",
- None,
- None,
- "The default specification for subtables.",
- ),
- (
- "uint32",
- "StructLength",
- None,
- None,
- "Total byte count, including this header; must be a multiple of 4.",
- ),
- (
- "uint32",
- "MorphFeatureCount",
- None,
- None,
- "Number of feature subtable entries.",
- ),
- (
- "uint32",
- "MorphSubtableCount",
- None,
- None,
- "The number of subtables in the chain.",
- ),
- (
- "MorphFeature",
- "MorphFeature",
- "MorphFeatureCount",
- 0,
- "Array of metamorphosis features.",
- ),
- (
- "MorxSubtable",
- "MorphSubtable",
- "MorphSubtableCount",
- 0,
- "Array of extended metamorphosis subtables.",
- ),
- ],
- ),
- (
- "MorphFeature",
- [
- ("uint16", "FeatureType", None, None, "The type of feature."),
- (
- "uint16",
- "FeatureSetting",
- None,
- None,
- "The feature's setting (aka selector).",
- ),
- (
- "Flags32",
- "EnableFlags",
- None,
- None,
- "Flags for the settings that this feature and setting enables.",
- ),
- (
- "Flags32",
- "DisableFlags",
- None,
- None,
- "Complement of flags for the settings that this feature and setting disable.",
- ),
- ],
- ),
- # Apple TrueType Reference Manual, chapter “The ‘morx’ table”,
- # section “Metamorphosis Subtables”.
- # https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6morx.html
- (
- "MorxSubtable",
- [
- (
- "uint32",
- "StructLength",
- None,
- None,
- "Total subtable length, including this header.",
- ),
- (
- "uint8",
- "CoverageFlags",
- None,
- None,
- "Most significant byte of coverage flags.",
- ),
- ("uint16", "Reserved", None, None, "Unused."),
- ("uint8", "MorphType", None, None, "Subtable type."),
- (
- "Flags32",
- "SubFeatureFlags",
- None,
- None,
- "The 32-bit mask identifying which subtable this is (the subtable being executed if the AND of this value and the processed defaultFlags is nonzero).",
- ),
- ("SubStruct", "SubStruct", None, None, "SubTable."),
- ],
- ),
- (
- "StateHeader",
- [
- (
- "uint32",
- "ClassCount",
- None,
- None,
- "Number of classes, which is the number of 16-bit entry indices in a single line in the state array.",
- ),
- (
- "uint32",
- "MorphClass",
- None,
- None,
- "Offset from the start of this state table header to the start of the class table.",
- ),
- (
- "uint32",
- "StateArrayOffset",
- None,
- None,
- "Offset from the start of this state table header to the start of the state array.",
- ),
- (
- "uint32",
- "EntryTableOffset",
- None,
- None,
- "Offset from the start of this state table header to the start of the entry table.",
- ),
- ],
- ),
- (
- "RearrangementMorph",
- [
- (
- "STXHeader(RearrangementMorphAction)",
- "StateTable",
- None,
- None,
- "Finite-state transducer table for indic rearrangement.",
- ),
- ],
- ),
- (
- "ContextualMorph",
- [
- (
- "STXHeader(ContextualMorphAction)",
- "StateTable",
- None,
- None,
- "Finite-state transducer for contextual glyph substitution.",
- ),
- ],
- ),
- (
- "LigatureMorph",
- [
- (
- "STXHeader(LigatureMorphAction)",
- "StateTable",
- None,
- None,
- "Finite-state transducer for ligature substitution.",
- ),
- ],
- ),
- (
- "NoncontextualMorph",
- [
- (
- "AATLookup(GlyphID)",
- "Substitution",
- None,
- None,
- "The noncontextual glyph substitution table.",
- ),
- ],
- ),
- (
- "InsertionMorph",
- [
- (
- "STXHeader(InsertionMorphAction)",
- "StateTable",
- None,
- None,
- "Finite-state transducer for glyph insertion.",
- ),
- ],
- ),
- (
- "MorphClass",
- [
- (
- "uint16",
- "FirstGlyph",
- None,
- None,
- "Glyph index of the first glyph in the class table.",
- ),
- # ('uint16', 'GlyphCount', None, None, 'Number of glyphs in class table.'),
- # ('uint8', 'GlyphClass', 'GlyphCount', 0, 'The class codes (indexed by glyph index minus firstGlyph). Class codes range from 0 to the value of stateSize minus 1.'),
- ],
- ),
- # If the 'morx' table version is 3 or greater, then the last subtable in the chain is followed by a subtableGlyphCoverageArray, as described below.
- # ('Offset', 'MarkGlyphSetsDef', None, 'round(Version*0x10000) >= 0x00010002', 'Offset to the table of mark set definitions-from beginning of GDEF header (may be NULL)'),
- #
- # prop
- #
- (
- "prop",
- [
- (
- "Fixed",
- "Version",
- None,
- None,
- "Version number of the AAT glyphs property table. Version 1.0 is the initial table version. Version 2.0, which is recognized by macOS 8.5 and later, adds support for the “attaches on right” bit. Version 3.0, which gets recognized by macOS X and iOS, adds support for the additional directional properties defined in Unicode 3.0.",
- ),
- ("struct", "GlyphProperties", None, None, "Glyph properties."),
- ],
- ),
- (
- "GlyphPropertiesFormat0",
- [
- ("uint16", "Format", None, None, "Format, = 0."),
- (
- "uint16",
- "DefaultProperties",
- None,
- None,
- "Default properties applied to a glyph. Since there is no lookup table in prop format 0, the default properties get applied to every glyph in the font.",
- ),
- ],
- ),
- (
- "GlyphPropertiesFormat1",
- [
- ("uint16", "Format", None, None, "Format, = 1."),
- (
- "uint16",
- "DefaultProperties",
- None,
- None,
- "Default properties applied to a glyph if that glyph is not present in the Properties lookup table.",
- ),
- (
- "AATLookup(uint16)",
- "Properties",
- None,
- None,
- "Lookup data associating glyphs with their properties.",
- ),
- ],
- ),
- #
- # opbd
- #
- (
- "opbd",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version number of the optical bounds table (0x00010000 for the initial version).",
- ),
- ("struct", "OpticalBounds", None, None, "Optical bounds table."),
- ],
- ),
- (
- "OpticalBoundsFormat0",
- [
- (
- "uint16",
- "Format",
- None,
- None,
- "Format of the optical bounds table, = 0.",
- ),
- (
- "AATLookup(OpticalBoundsDeltas)",
- "OpticalBoundsDeltas",
- None,
- None,
- "Lookup table associating glyphs with their optical bounds, given as deltas in font units.",
- ),
- ],
- ),
- (
- "OpticalBoundsFormat1",
- [
- (
- "uint16",
- "Format",
- None,
- None,
- "Format of the optical bounds table, = 1.",
- ),
- (
- "AATLookup(OpticalBoundsPoints)",
- "OpticalBoundsPoints",
- None,
- None,
- "Lookup table associating glyphs with their optical bounds, given as references to control points.",
- ),
- ],
- ),
- (
- "OpticalBoundsDeltas",
- [
- (
- "int16",
- "Left",
- None,
- None,
- "Delta value for the left-side optical edge.",
- ),
- ("int16", "Top", None, None, "Delta value for the top-side optical edge."),
- (
- "int16",
- "Right",
- None,
- None,
- "Delta value for the right-side optical edge.",
- ),
- (
- "int16",
- "Bottom",
- None,
- None,
- "Delta value for the bottom-side optical edge.",
- ),
- ],
- ),
- (
- "OpticalBoundsPoints",
- [
- (
- "int16",
- "Left",
- None,
- None,
- "Control point index for the left-side optical edge, or -1 if this glyph has none.",
- ),
- (
- "int16",
- "Top",
- None,
- None,
- "Control point index for the top-side optical edge, or -1 if this glyph has none.",
- ),
- (
- "int16",
- "Right",
- None,
- None,
- "Control point index for the right-side optical edge, or -1 if this glyph has none.",
- ),
- (
- "int16",
- "Bottom",
- None,
- None,
- "Control point index for the bottom-side optical edge, or -1 if this glyph has none.",
- ),
- ],
- ),
- #
- # TSIC
- #
- (
- "TSIC",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of table initially set to 0x00010000.",
- ),
- ("uint16", "Flags", None, None, "TSIC flags - set to 0"),
- ("uint16", "AxisCount", None, None, "Axis count from fvar"),
- ("uint16", "RecordCount", None, None, "TSIC record count"),
- ("uint16", "Reserved", None, None, "Set to 0"),
- ("Tag", "AxisArray", "AxisCount", 0, "Array of axis tags in fvar order"),
- (
- "LocationRecord",
- "RecordLocations",
- "RecordCount",
- 0,
- "Location in variation space of TSIC record",
- ),
- ("TSICRecord", "Record", "RecordCount", 0, "Array of TSIC records"),
- ],
- ),
- (
- "LocationRecord",
- [
- ("F2Dot14", "Axis", "AxisCount", 0, "Axis record"),
- ],
- ),
- (
- "TSICRecord",
- [
- ("uint16", "Flags", None, None, "Record flags - set to 0"),
- ("uint16", "NumCVTEntries", None, None, "Number of CVT number value pairs"),
- ("uint16", "NameLength", None, None, "Length of optional user record name"),
- ("uint16", "NameArray", "NameLength", 0, "Unicode 16 name"),
- ("uint16", "CVTArray", "NumCVTEntries", 0, "CVT number array"),
- ("int16", "CVTValueArray", "NumCVTEntries", 0, "CVT value"),
- ],
- ),
- #
- # COLR
- #
- (
- "COLR",
- [
- ("uint16", "Version", None, None, "Table version number (starts at 0)."),
- (
- "uint16",
- "BaseGlyphRecordCount",
- None,
- None,
- "Number of Base Glyph Records.",
- ),
- (
- "LOffset",
- "BaseGlyphRecordArray",
- None,
- None,
- "Offset (from beginning of COLR table) to Base Glyph records.",
- ),
- (
- "LOffset",
- "LayerRecordArray",
- None,
- None,
- "Offset (from beginning of COLR table) to Layer Records.",
- ),
- ("uint16", "LayerRecordCount", None, None, "Number of Layer Records."),
- (
- "LOffset",
- "BaseGlyphList",
- None,
- "Version >= 1",
- "Offset (from beginning of COLR table) to array of Version-1 Base Glyph records.",
- ),
- (
- "LOffset",
- "LayerList",
- None,
- "Version >= 1",
- "Offset (from beginning of COLR table) to LayerList.",
- ),
- (
- "LOffset",
- "ClipList",
- None,
- "Version >= 1",
- "Offset to ClipList table (may be NULL)",
- ),
- (
- "LOffsetTo(DeltaSetIndexMap)",
- "VarIndexMap",
- None,
- "Version >= 1",
- "Offset to DeltaSetIndexMap table (may be NULL)",
- ),
- (
- "LOffset",
- "VarStore",
- None,
- "Version >= 1",
- "Offset to variation store (may be NULL)",
- ),
- ],
- ),
- (
- "BaseGlyphRecordArray",
- [
- (
- "BaseGlyphRecord",
- "BaseGlyphRecord",
- "BaseGlyphRecordCount",
- 0,
- "Base Glyph records.",
- ),
- ],
- ),
- (
- "BaseGlyphRecord",
- [
- (
- "GlyphID",
- "BaseGlyph",
- None,
- None,
- "Glyph ID of reference glyph. This glyph is for reference only and is not rendered for color.",
- ),
- (
- "uint16",
- "FirstLayerIndex",
- None,
- None,
- "Index (from beginning of the Layer Records) to the layer record. There will be numLayers consecutive entries for this base glyph.",
- ),
- (
- "uint16",
- "NumLayers",
- None,
- None,
- "Number of color layers associated with this glyph.",
- ),
- ],
- ),
- (
- "LayerRecordArray",
- [
- ("LayerRecord", "LayerRecord", "LayerRecordCount", 0, "Layer records."),
- ],
- ),
- (
- "LayerRecord",
- [
- (
- "GlyphID",
- "LayerGlyph",
- None,
- None,
- "Glyph ID of layer glyph (must be in z-order from bottom to top).",
- ),
- (
- "uint16",
- "PaletteIndex",
- None,
- None,
- "Index value to use with a selected color palette.",
- ),
- ],
- ),
- (
- "BaseGlyphList",
- [
- (
- "uint32",
- "BaseGlyphCount",
- None,
- None,
- "Number of Version-1 Base Glyph records",
- ),
- (
- "struct",
- "BaseGlyphPaintRecord",
- "BaseGlyphCount",
- 0,
- "Array of Version-1 Base Glyph records",
- ),
- ],
- ),
- (
- "BaseGlyphPaintRecord",
- [
- ("GlyphID", "BaseGlyph", None, None, "Glyph ID of reference glyph."),
- (
- "LOffset",
- "Paint",
- None,
- None,
- "Offset (from beginning of BaseGlyphPaintRecord) to Paint, typically a PaintColrLayers.",
- ),
- ],
- ),
- (
- "LayerList",
- [
- ("uint32", "LayerCount", None, None, "Number of Version-1 Layers"),
- (
- "LOffset",
- "Paint",
- "LayerCount",
- 0,
- "Array of offsets to Paint tables, from the start of the LayerList table.",
- ),
- ],
- ),
- (
- "ClipListFormat1",
- [
- (
- "uint8",
- "Format",
- None,
- None,
- "Format for ClipList with 16bit glyph IDs: 1",
- ),
- ("uint32", "ClipCount", None, None, "Number of Clip records."),
- (
- "struct",
- "ClipRecord",
- "ClipCount",
- 0,
- "Array of Clip records sorted by glyph ID.",
- ),
- ],
- ),
- (
- "ClipRecord",
- [
- ("uint16", "StartGlyphID", None, None, "First glyph ID in the range."),
- ("uint16", "EndGlyphID", None, None, "Last glyph ID in the range."),
- ("Offset24", "ClipBox", None, None, "Offset to a ClipBox table."),
- ],
- ),
- (
- "ClipBoxFormat1",
- [
- (
- "uint8",
- "Format",
- None,
- None,
- "Format for ClipBox without variation: set to 1.",
- ),
- ("int16", "xMin", None, None, "Minimum x of clip box."),
- ("int16", "yMin", None, None, "Minimum y of clip box."),
- ("int16", "xMax", None, None, "Maximum x of clip box."),
- ("int16", "yMax", None, None, "Maximum y of clip box."),
- ],
- ),
- (
- "ClipBoxFormat2",
- [
- ("uint8", "Format", None, None, "Format for variable ClipBox: set to 2."),
- ("int16", "xMin", None, None, "Minimum x of clip box. VarIndexBase + 0."),
- ("int16", "yMin", None, None, "Minimum y of clip box. VarIndexBase + 1."),
- ("int16", "xMax", None, None, "Maximum x of clip box. VarIndexBase + 2."),
- ("int16", "yMax", None, None, "Maximum y of clip box. VarIndexBase + 3."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # COLRv1 Affine2x3 uses the same column-major order to serialize a 2D
- # Affine Transformation as the one used by fontTools.misc.transform.
- # However, for historical reasons, the labels 'xy' and 'yx' are swapped.
- # Their fundamental meaning is the same though.
- # COLRv1 Affine2x3 follows the names found in FreeType and Cairo.
-# In all cases, the second element in the 6-tuple corresponds to the
- # y-part of the x basis vector, and the third to the x-part of the y
- # basis vector.
- # See https://github.com/googlefonts/colr-gradients-spec/pull/85
- (
- "Affine2x3",
- [
- ("Fixed", "xx", None, None, "x-part of x basis vector"),
- ("Fixed", "yx", None, None, "y-part of x basis vector"),
- ("Fixed", "xy", None, None, "x-part of y basis vector"),
- ("Fixed", "yy", None, None, "y-part of y basis vector"),
- ("Fixed", "dx", None, None, "Translation in x direction"),
- ("Fixed", "dy", None, None, "Translation in y direction"),
- ],
- ),
- (
- "VarAffine2x3",
- [
- ("Fixed", "xx", None, None, "x-part of x basis vector. VarIndexBase + 0."),
- ("Fixed", "yx", None, None, "y-part of x basis vector. VarIndexBase + 1."),
- ("Fixed", "xy", None, None, "x-part of y basis vector. VarIndexBase + 2."),
- ("Fixed", "yy", None, None, "y-part of y basis vector. VarIndexBase + 3."),
- (
- "Fixed",
- "dx",
- None,
- None,
- "Translation in x direction. VarIndexBase + 4.",
- ),
- (
- "Fixed",
- "dy",
- None,
- None,
- "Translation in y direction. VarIndexBase + 5.",
- ),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- (
- "ColorStop",
- [
- ("F2Dot14", "StopOffset", None, None, ""),
- ("uint16", "PaletteIndex", None, None, "Index for a CPAL palette entry."),
- ("F2Dot14", "Alpha", None, None, "Values outside [0.,1.] reserved"),
- ],
- ),
- (
- "VarColorStop",
- [
- ("F2Dot14", "StopOffset", None, None, "VarIndexBase + 0."),
- ("uint16", "PaletteIndex", None, None, "Index for a CPAL palette entry."),
- (
- "F2Dot14",
- "Alpha",
- None,
- None,
- "Values outside [0.,1.] reserved. VarIndexBase + 1.",
- ),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- (
- "ColorLine",
- [
- (
- "ExtendMode",
- "Extend",
- None,
- None,
- "Enum {PAD = 0, REPEAT = 1, REFLECT = 2}",
- ),
- ("uint16", "StopCount", None, None, "Number of Color stops."),
- ("ColorStop", "ColorStop", "StopCount", 0, "Array of Color stops."),
- ],
- ),
- (
- "VarColorLine",
- [
- (
- "ExtendMode",
- "Extend",
- None,
- None,
- "Enum {PAD = 0, REPEAT = 1, REFLECT = 2}",
- ),
- ("uint16", "StopCount", None, None, "Number of Color stops."),
- ("VarColorStop", "ColorStop", "StopCount", 0, "Array of Color stops."),
- ],
- ),
- # PaintColrLayers
- (
- "PaintFormat1",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 1"),
- (
- "uint8",
- "NumLayers",
- None,
- None,
- "Number of offsets to Paint to read from LayerList.",
- ),
- ("uint32", "FirstLayerIndex", None, None, "Index into LayerList."),
- ],
- ),
- # PaintSolid
- (
- "PaintFormat2",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 2"),
- ("uint16", "PaletteIndex", None, None, "Index for a CPAL palette entry."),
- ("F2Dot14", "Alpha", None, None, "Values outside [0.,1.] reserved"),
- ],
- ),
- # PaintVarSolid
- (
- "PaintFormat3",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 3"),
- ("uint16", "PaletteIndex", None, None, "Index for a CPAL palette entry."),
- (
- "F2Dot14",
- "Alpha",
- None,
- None,
- "Values outside [0.,1.] reserved. VarIndexBase + 0.",
- ),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintLinearGradient
- (
- "PaintFormat4",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 4"),
- (
- "Offset24",
- "ColorLine",
- None,
- None,
- "Offset (from beginning of PaintLinearGradient table) to ColorLine subtable.",
- ),
- ("int16", "x0", None, None, ""),
- ("int16", "y0", None, None, ""),
- ("int16", "x1", None, None, ""),
- ("int16", "y1", None, None, ""),
- ("int16", "x2", None, None, ""),
- ("int16", "y2", None, None, ""),
- ],
- ),
- # PaintVarLinearGradient
- (
- "PaintFormat5",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 5"),
- (
- "LOffset24To(VarColorLine)",
- "ColorLine",
- None,
- None,
- "Offset (from beginning of PaintVarLinearGradient table) to VarColorLine subtable.",
- ),
- ("int16", "x0", None, None, "VarIndexBase + 0."),
- ("int16", "y0", None, None, "VarIndexBase + 1."),
- ("int16", "x1", None, None, "VarIndexBase + 2."),
- ("int16", "y1", None, None, "VarIndexBase + 3."),
- ("int16", "x2", None, None, "VarIndexBase + 4."),
- ("int16", "y2", None, None, "VarIndexBase + 5."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintRadialGradient
- (
- "PaintFormat6",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 6"),
- (
- "Offset24",
- "ColorLine",
- None,
- None,
- "Offset (from beginning of PaintRadialGradient table) to ColorLine subtable.",
- ),
- ("int16", "x0", None, None, ""),
- ("int16", "y0", None, None, ""),
- ("uint16", "r0", None, None, ""),
- ("int16", "x1", None, None, ""),
- ("int16", "y1", None, None, ""),
- ("uint16", "r1", None, None, ""),
- ],
- ),
- # PaintVarRadialGradient
- (
- "PaintFormat7",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 7"),
- (
- "LOffset24To(VarColorLine)",
- "ColorLine",
- None,
- None,
- "Offset (from beginning of PaintVarRadialGradient table) to VarColorLine subtable.",
- ),
- ("int16", "x0", None, None, "VarIndexBase + 0."),
- ("int16", "y0", None, None, "VarIndexBase + 1."),
- ("uint16", "r0", None, None, "VarIndexBase + 2."),
- ("int16", "x1", None, None, "VarIndexBase + 3."),
- ("int16", "y1", None, None, "VarIndexBase + 4."),
- ("uint16", "r1", None, None, "VarIndexBase + 5."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintSweepGradient
- (
- "PaintFormat8",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 8"),
- (
- "Offset24",
- "ColorLine",
- None,
- None,
- "Offset (from beginning of PaintSweepGradient table) to ColorLine subtable.",
- ),
- ("int16", "centerX", None, None, "Center x coordinate."),
- ("int16", "centerY", None, None, "Center y coordinate."),
- (
- "BiasedAngle",
- "startAngle",
- None,
- None,
- "Start of the angular range of the gradient.",
- ),
- (
- "BiasedAngle",
- "endAngle",
- None,
- None,
- "End of the angular range of the gradient.",
- ),
- ],
- ),
- # PaintVarSweepGradient
- (
- "PaintFormat9",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 9"),
- (
- "LOffset24To(VarColorLine)",
- "ColorLine",
- None,
- None,
- "Offset (from beginning of PaintVarSweepGradient table) to VarColorLine subtable.",
- ),
- ("int16", "centerX", None, None, "Center x coordinate. VarIndexBase + 0."),
- ("int16", "centerY", None, None, "Center y coordinate. VarIndexBase + 1."),
- (
- "BiasedAngle",
- "startAngle",
- None,
- None,
- "Start of the angular range of the gradient. VarIndexBase + 2.",
- ),
- (
- "BiasedAngle",
- "endAngle",
- None,
- None,
- "End of the angular range of the gradient. VarIndexBase + 3.",
- ),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintGlyph
- (
- "PaintFormat10",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 10"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintGlyph table) to Paint subtable.",
- ),
- ("GlyphID", "Glyph", None, None, "Glyph ID for the source outline."),
- ],
- ),
- # PaintColrGlyph
- (
- "PaintFormat11",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 11"),
- (
- "GlyphID",
- "Glyph",
- None,
- None,
- "Virtual glyph ID for a BaseGlyphList base glyph.",
- ),
- ],
- ),
- # PaintTransform
- (
- "PaintFormat12",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 12"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintTransform table) to Paint subtable.",
- ),
- (
- "LOffset24To(Affine2x3)",
- "Transform",
- None,
- None,
- "2x3 matrix for 2D affine transformations.",
- ),
- ],
- ),
- # PaintVarTransform
- (
- "PaintFormat13",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 13"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarTransform table) to Paint subtable.",
- ),
- (
- "LOffset24To(VarAffine2x3)",
- "Transform",
- None,
- None,
- "2x3 matrix for 2D affine transformations.",
- ),
- ],
- ),
- # PaintTranslate
- (
- "PaintFormat14",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 14"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintTranslate table) to Paint subtable.",
- ),
- ("int16", "dx", None, None, "Translation in x direction."),
- ("int16", "dy", None, None, "Translation in y direction."),
- ],
- ),
- # PaintVarTranslate
- (
- "PaintFormat15",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 15"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarTranslate table) to Paint subtable.",
- ),
- (
- "int16",
- "dx",
- None,
- None,
- "Translation in x direction. VarIndexBase + 0.",
- ),
- (
- "int16",
- "dy",
- None,
- None,
- "Translation in y direction. VarIndexBase + 1.",
- ),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintScale
- (
- "PaintFormat16",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 16"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintScale table) to Paint subtable.",
- ),
- ("F2Dot14", "scaleX", None, None, ""),
- ("F2Dot14", "scaleY", None, None, ""),
- ],
- ),
- # PaintVarScale
- (
- "PaintFormat17",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 17"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarScale table) to Paint subtable.",
- ),
- ("F2Dot14", "scaleX", None, None, "VarIndexBase + 0."),
- ("F2Dot14", "scaleY", None, None, "VarIndexBase + 1."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintScaleAroundCenter
- (
- "PaintFormat18",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 18"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintScaleAroundCenter table) to Paint subtable.",
- ),
- ("F2Dot14", "scaleX", None, None, ""),
- ("F2Dot14", "scaleY", None, None, ""),
- ("int16", "centerX", None, None, ""),
- ("int16", "centerY", None, None, ""),
- ],
- ),
- # PaintVarScaleAroundCenter
- (
- "PaintFormat19",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 19"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarScaleAroundCenter table) to Paint subtable.",
- ),
- ("F2Dot14", "scaleX", None, None, "VarIndexBase + 0."),
- ("F2Dot14", "scaleY", None, None, "VarIndexBase + 1."),
- ("int16", "centerX", None, None, "VarIndexBase + 2."),
- ("int16", "centerY", None, None, "VarIndexBase + 3."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintScaleUniform
- (
- "PaintFormat20",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 20"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintScaleUniform table) to Paint subtable.",
- ),
- ("F2Dot14", "scale", None, None, ""),
- ],
- ),
- # PaintVarScaleUniform
- (
- "PaintFormat21",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 21"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarScaleUniform table) to Paint subtable.",
- ),
- ("F2Dot14", "scale", None, None, "VarIndexBase + 0."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintScaleUniformAroundCenter
- (
- "PaintFormat22",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 22"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintScaleUniformAroundCenter table) to Paint subtable.",
- ),
- ("F2Dot14", "scale", None, None, ""),
- ("int16", "centerX", None, None, ""),
- ("int16", "centerY", None, None, ""),
- ],
- ),
- # PaintVarScaleUniformAroundCenter
- (
- "PaintFormat23",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 23"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarScaleUniformAroundCenter table) to Paint subtable.",
- ),
- ("F2Dot14", "scale", None, None, "VarIndexBase + 0"),
- ("int16", "centerX", None, None, "VarIndexBase + 1"),
- ("int16", "centerY", None, None, "VarIndexBase + 2"),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintRotate
- (
- "PaintFormat24",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 24"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintRotate table) to Paint subtable.",
- ),
- ("Angle", "angle", None, None, ""),
- ],
- ),
- # PaintVarRotate
- (
- "PaintFormat25",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 25"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarRotate table) to Paint subtable.",
- ),
- ("Angle", "angle", None, None, "VarIndexBase + 0."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintRotateAroundCenter
- (
- "PaintFormat26",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 26"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintRotateAroundCenter table) to Paint subtable.",
- ),
- ("Angle", "angle", None, None, ""),
- ("int16", "centerX", None, None, ""),
- ("int16", "centerY", None, None, ""),
- ],
- ),
- # PaintVarRotateAroundCenter
- (
- "PaintFormat27",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 27"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarRotateAroundCenter table) to Paint subtable.",
- ),
- ("Angle", "angle", None, None, "VarIndexBase + 0."),
- ("int16", "centerX", None, None, "VarIndexBase + 1."),
- ("int16", "centerY", None, None, "VarIndexBase + 2."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintSkew
- (
- "PaintFormat28",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 28"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintSkew table) to Paint subtable.",
- ),
- ("Angle", "xSkewAngle", None, None, ""),
- ("Angle", "ySkewAngle", None, None, ""),
- ],
- ),
- # PaintVarSkew
- (
- "PaintFormat29",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 29"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarSkew table) to Paint subtable.",
- ),
- ("Angle", "xSkewAngle", None, None, "VarIndexBase + 0."),
- ("Angle", "ySkewAngle", None, None, "VarIndexBase + 1."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintSkewAroundCenter
- (
- "PaintFormat30",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 30"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintSkewAroundCenter table) to Paint subtable.",
- ),
- ("Angle", "xSkewAngle", None, None, ""),
- ("Angle", "ySkewAngle", None, None, ""),
- ("int16", "centerX", None, None, ""),
- ("int16", "centerY", None, None, ""),
- ],
- ),
- # PaintVarSkewAroundCenter
- (
- "PaintFormat31",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 31"),
- (
- "Offset24",
- "Paint",
- None,
- None,
- "Offset (from beginning of PaintVarSkewAroundCenter table) to Paint subtable.",
- ),
- ("Angle", "xSkewAngle", None, None, "VarIndexBase + 0."),
- ("Angle", "ySkewAngle", None, None, "VarIndexBase + 1."),
- ("int16", "centerX", None, None, "VarIndexBase + 2."),
- ("int16", "centerY", None, None, "VarIndexBase + 3."),
- (
- "VarIndex",
- "VarIndexBase",
- None,
- None,
- "Base index into DeltaSetIndexMap.",
- ),
- ],
- ),
- # PaintComposite
- (
- "PaintFormat32",
- [
- ("uint8", "PaintFormat", None, None, "Format identifier-format = 32"),
- (
- "LOffset24To(Paint)",
- "SourcePaint",
- None,
- None,
- "Offset (from beginning of PaintComposite table) to source Paint subtable.",
- ),
- (
- "CompositeMode",
- "CompositeMode",
- None,
- None,
- "A CompositeMode enumeration value.",
- ),
- (
- "LOffset24To(Paint)",
- "BackdropPaint",
- None,
- None,
- "Offset (from beginning of PaintComposite table) to backdrop Paint subtable.",
- ),
- ],
- ),
- #
- # avar
- #
- (
- "AxisValueMap",
- [
- (
- "F2Dot14",
- "FromCoordinate",
- None,
- None,
- "A normalized coordinate value obtained using default normalization",
- ),
- (
- "F2Dot14",
- "ToCoordinate",
- None,
- None,
- "The modified, normalized coordinate value",
- ),
- ],
- ),
- (
- "AxisSegmentMap",
- [
- (
- "uint16",
- "PositionMapCount",
- None,
- None,
- "The number of correspondence pairs for this axis",
- ),
- (
- "AxisValueMap",
- "AxisValueMap",
- "PositionMapCount",
- 0,
- "The array of axis value map records for this axis",
- ),
- ],
- ),
- (
- "avar",
- [
- (
- "Version",
- "Version",
- None,
- None,
- "Version of the avar table- 0x00010000 or 0x00020000",
- ),
- ("uint16", "Reserved", None, None, "Permanently reserved; set to zero"),
- (
- "uint16",
- "AxisCount",
- None,
- None,
- 'The number of variation axes for this font. This must be the same number as axisCount in the "fvar" table',
- ),
- (
- "AxisSegmentMap",
- "AxisSegmentMap",
- "AxisCount",
- 0,
- 'The segment maps array — one segment map for each axis, in the order of axes specified in the "fvar" table',
- ),
- (
- "LOffsetTo(DeltaSetIndexMap)",
- "VarIdxMap",
- None,
- "Version >= 0x00020000",
- "",
- ),
- ("LOffset", "VarStore", None, "Version >= 0x00020000", ""),
- ],
- ),
-]
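Each entry in the table-definition list that ends above pairs a record name with a list of (type, name, repeat, condition, description) field tuples. As a rough illustration of how such a definition corresponds to a binary layout, the sketch below turns the fixed-width ClipBoxFormat1 fields into a struct format string and computes the packed record size; the type-to-format mapping and the packed_format helper are assumptions made for this example, not part of the original converter code, and offset types such as Offset24 or LOffset are deliberately left out.

```python
import struct

# Hypothetical mapping from the primitive type names used in the definitions
# above to struct format codes; only fixed-width types are covered here.
TYPE_FORMATS = {
    "uint8": "B",
    "int16": "h",
    "uint16": "H",
    "uint32": "I",
    "F2Dot14": "h",  # 2.14 fixed point, stored as a signed 16-bit integer
    "Fixed": "i",    # 16.16 fixed point, stored as a signed 32-bit integer
}

# Fields copied from the ClipBoxFormat1 definition above (type, name).
CLIP_BOX_FORMAT_1 = [
    ("uint8", "Format"),
    ("int16", "xMin"),
    ("int16", "yMin"),
    ("int16", "xMax"),
    ("int16", "yMax"),
]

def packed_format(fields):
    """Build a big-endian struct format string for a list of (type, name) fields."""
    return ">" + "".join(TYPE_FORMATS[t] for t, _ in fields)

fmt = packed_format(CLIP_BOX_FORMAT_1)
print(fmt, struct.calcsize(fmt))                     # '>Bhhhh' 9
print(struct.pack(fmt, 1, -10, -20, 110, 220).hex())
```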
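The avar definition at the end of the list describes one AxisSegmentMap per axis, each holding (FromCoordinate, ToCoordinate) pairs of normalized values; applying a segment map is a piecewise-linear interpolation between consecutive pairs. A minimal worked sketch follows; the apply_segment_map helper and the example map values are invented for illustration. When a map is present for an axis, the OpenType specification additionally expects the -1 to -1, 0 to 0 and +1 to +1 entries to be included, as the example map does.

```python
def apply_segment_map(value, axis_value_map):
    """Piecewise-linear, avar-style mapping of a normalized coordinate in [-1, 1].

    axis_value_map is a list of (from_coord, to_coord) pairs sorted by from_coord.
    """
    if not axis_value_map:
        return value
    if value <= axis_value_map[0][0]:
        return axis_value_map[0][1]
    if value >= axis_value_map[-1][0]:
        return axis_value_map[-1][1]
    for (f0, t0), (f1, t1) in zip(axis_value_map, axis_value_map[1:]):
        if f0 <= value <= f1:
            if f1 == f0:
                return t1
            return t0 + (value - f0) * (t1 - t0) / (f1 - f0)
    return value

# Invented example map: compress the lower half of the axis, expand the upper half.
weight_map = [(-1.0, -1.0), (0.0, 0.0), (0.5, 0.25), (1.0, 1.0)]
print(apply_segment_map(0.25, weight_map))  # 0.125
print(apply_segment_map(0.75, weight_map))  # 0.625
```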
diff --git a/spaces/Danielito/webui/README.md b/spaces/Danielito/webui/README.md
deleted file mode 100644
index 013d12c9f3a56698056ae1bdbbfb0ec009805237..0000000000000000000000000000000000000000
--- a/spaces/Danielito/webui/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Stable Diffusion Web UI
-emoji: 🚧
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
-duplicated_from: camenduru/webui
----
-
-## Stable Diffusion Web UI
-[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
-
-## Documentation
-[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki)
-
-## Models License
-https://huggingface.co/spaces/CompVis/stable-diffusion-license
\ No newline at end of file
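The deleted README above carries the Hugging Face Spaces configuration as a YAML front-matter block between the two --- markers (title, emoji, sdk, sdk_version, app_file and so on). A rough sketch of reading such a block is shown below; the read_front_matter helper is an assumption for illustration, PyYAML is assumed to be installed, and the example text is an abbreviated copy of the README above.

```python
import yaml  # PyYAML, assumed to be available

def read_front_matter(readme_text: str) -> dict:
    """Return the YAML front-matter block of a Spaces README as a dict.

    Assumes the file starts with a '---' ... '---' block, as in the README above.
    """
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    try:
        end = lines[1:].index("---") + 1
    except ValueError:
        return {}
    return yaml.safe_load("\n".join(lines[1:end])) or {}

example = """---
title: Stable Diffusion Web UI
sdk: gradio
sdk_version: "3.9"
app_file: app.py
pinned: false
---

## Stable Diffusion Web UI
"""
config = read_front_matter(example)
print(config["sdk"], config["app_file"])  # gradio app.py
```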
diff --git a/spaces/Dify-AI/Baichuan2-13B-Chat/README.md b/spaces/Dify-AI/Baichuan2-13B-Chat/README.md
deleted file mode 100644
index 51e0c37d3f0272cc5aa0f3798bd5c7604ddf7eba..0000000000000000000000000000000000000000
--- a/spaces/Dify-AI/Baichuan2-13B-Chat/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Baichuan2 13B Chat
-emoji: 🔥
-colorFrom: gray
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git "a/spaces/DrBenjamin/AI_Demo/pages/\360\237\222\201\342\200\215 Open_Assistant.py" "b/spaces/DrBenjamin/AI_Demo/pages/\360\237\222\201\342\200\215 Open_Assistant.py"
deleted file mode 100644
index 16510a128ab2e9904af72ca7f71680b1b5aa3ffa..0000000000000000000000000000000000000000
--- "a/spaces/DrBenjamin/AI_Demo/pages/\360\237\222\201\342\200\215 Open_Assistant.py"
+++ /dev/null
@@ -1,359 +0,0 @@
-##### `💁 Open_Assistant.py`
-##### Chat Llm Streaming
-##### https://huggingface.co/spaces/olivierdehaene/chat-llm-streaming/blob/main/README.md
-##### https://open-assistant.io/dashboard
-##### https://github.com/LAION-AI/Open-Assistant
-
-##### Please reach out to ben@benbox.org for any questions
-#### Loading needed Python libraries
-import streamlit as st
-import os
-from text_generation import Client, InferenceAPIClient
-
-
-
-
-#### Streamlit initial setup
-st.set_page_config(
- page_title = "💁 Open Assistant LLM",
- page_icon = "images/OpenAssistant.png",
- layout = "centered",
- initial_sidebar_state = "expanded"
-)
-
-
-
-
-#### Main program
-st.header('💁 Open Assistant LLM')
-st.write('Conversational AI for everyone.')
-st.write('In the same way that Stable Diffusion helped the world make art and images in new ways, this helps to improve the world by providing amazing conversational AI.')
-st.write('This is the first iteration English supervised-fine-tuning (SFT) model of the Open-Assistant project. It is based on a Pythia 12B that was fine-tuned on ~22k human demonstrations of assistant conversations collected through the https://open-assistant.io/ human feedback web app before March 7, 2023.')
-st.write(':orange[Needs to be run on Hugging Face to access the OpenAssistant model (Run it here https://huggingface.co/spaces/DrBenjamin/AI_Demo).]')
-with st.form('OpenAssistant'):
- client = InferenceAPIClient("OpenAssistant/oasst-sft-1-pythia-12b")
- st.subheader('Question')
- input_text = st.text_input('Ask a question')
- input_text = '<|prompter|>' + input_text + '<|endoftext|><|assistant|>'
- submitted = st.form_submit_button('Submit')
- if submitted:
- text = client.generate(input_text).generated_text
- st.subheader('Answer')
- st.write('Answer: :green[' + str(text) + ']')
-
-
-# Token Streaming
-#text = ""
-#for response in client.generate_stream("<|prompter|>Why is the sky blue?<|endoftext|><|assistant|>"):
-# if not response.token.special:
-# print(response.token.text)
-# text += response.token.text
-#st.write(text)
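The commented-out snippet above sketches token streaming with generate_stream. Wired into the Streamlit page used earlier in this file, a streaming variant could look roughly like the following; this is a hedged sketch that reuses the client and prompt format shown above and assumes st.empty() as a live-updating placeholder.

```python
# Hedged sketch: stream the answer token by token into the Streamlit page.
# Reuses the InferenceAPIClient and prompt format from the form above.
import streamlit as st
from text_generation import InferenceAPIClient

client = InferenceAPIClient("OpenAssistant/oasst-sft-1-pythia-12b")
prompt = "<|prompter|>Why is the sky blue?<|endoftext|><|assistant|>"

placeholder = st.empty()  # live-updating output area
answer = ""
for response in client.generate_stream(prompt):
    if response.token.special:
        continue  # skip control tokens such as <|endoftext|>
    answer += response.token.text
    placeholder.write(answer)
```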
-
-#
-# openchat_preprompt = (
-# "\n: Hi!\n: My name is Bot, model version is 0.15, part of an open-source kit for "
-# "fine-tuning new bots! I was created by Together, LAION, and Ontocord.ai and the open-source "
-# "community. I am not human, not evil and not alive, and thus have no thoughts and feelings, "
-# "but I am programmed to be helpful, polite, honest, and friendly.\n"
-# )
-#
-#
-# def get_client(model: str):
-# if model == "togethercomputer/GPT-NeoXT-Chat-Base-20B":
-# return Client(os.getenv("OPENCHAT_API_URL"))
-# return InferenceAPIClient(model, token = os.getenv("HF_TOKEN", None))
-#
-#
-# def get_usernames(model: str):
-# """
-# Returns:
-# (str, str, str, str): pre-prompt, username, bot name, separator
-# """
-# if model == "OpenAssistant/oasst-sft-1-pythia-12b":
-# return "", "<|prompter|>", "<|assistant|>", "<|endoftext|>"
-# if model == "togethercomputer/GPT-NeoXT-Chat-Base-20B":
-# return openchat_preprompt, ": ", ": ", "\n"
-# return "", "User: ", "Assistant: ", "\n"
-#
-#
-# def predict(
-# model: str,
-# inputs: str,
-# typical_p: float,
-# top_p: float,
-# temperature: float,
-# top_k: int,
-# repetition_penalty: float,
-# watermark: bool,
-# chatbot,
-# history,
-# ):
-# client = get_client(model)
-# preprompt, user_name, assistant_name, sep = get_usernames(model)
-#
-# history.append(inputs)
-#
-# past = []
-# for data in chatbot:
-# user_data, model_data = data
-#
-# if not user_data.startswith(user_name):
-# user_data = user_name + user_data
-# if not model_data.startswith(sep + assistant_name):
-# model_data = sep + assistant_name + model_data
-#
-# past.append(user_data + model_data.rstrip() + sep)
-#
-# if not inputs.startswith(user_name):
-# inputs = user_name + inputs
-#
-# total_inputs = preprompt + "".join(past) + inputs + sep + assistant_name.rstrip()
-#
-# partial_words = ""
-#
-# if model == "OpenAssistant/oasst-sft-1-pythia-12b":
-# iterator = client.generate_stream(
-# total_inputs,
-# typical_p = typical_p,
-# truncate = 1000,
-# watermark = watermark,
-# max_new_tokens = 500,
-# )
-# else:
-# iterator = client.generate_stream(
-# total_inputs,
-# top_p = top_p if top_p < 1.0 else None,
-# top_k = top_k,
-# truncate = 1000,
-# repetition_penalty = repetition_penalty,
-# watermark = watermark,
-# temperature = temperature,
-# max_new_tokens = 500,
-# stop_sequences = [user_name.rstrip(), assistant_name.rstrip()],
-# )
-#
-# for i, response in enumerate(iterator):
-# if response.token.special:
-# continue
-#
-# partial_words = partial_words + response.token.text
-# if partial_words.endswith(user_name.rstrip()):
-# partial_words = partial_words.rstrip(user_name.rstrip())
-# if partial_words.endswith(assistant_name.rstrip()):
-# partial_words = partial_words.rstrip(assistant_name.rstrip())
-#
-# if i == 0:
-# history.append(" " + partial_words)
-# elif response.token.text not in user_name:
-# history[-1] = partial_words
-#
-# chat = [
-# (history[i].strip(), history[i + 1].strip())
-# for i in range(0, len(history) - 1, 2)
-# ]
-# yield chat, history
-#
-#
-# def reset_textbox():
-# return gr.update(value = "")
-#
-#
-# def radio_on_change(
-# value: str,
-# disclaimer,
-# typical_p,
-# top_p,
-# top_k,
-# temperature,
-# repetition_penalty,
-# watermark,
-# ):
-# if value == "OpenAssistant/oasst-sft-1-pythia-12b":
-# typical_p = typical_p.update(value = 0.2, visible = True)
-# top_p = top_p.update(visible = False)
-# top_k = top_k.update(visible = False)
-# temperature = temperature.update(visible = False)
-# disclaimer = disclaimer.update(visible = False)
-# repetition_penalty = repetition_penalty.update(visible = False)
-# watermark = watermark.update(False)
-# elif value == "togethercomputer/GPT-NeoXT-Chat-Base-20B":
-# typical_p = typical_p.update(visible = False)
-# top_p = top_p.update(value = 0.25, visible = True)
-# top_k = top_k.update(value = 50, visible = True)
-# temperature = temperature.update(value = 0.6, visible = True)
-# repetition_penalty = repetition_penalty.update(value = 1.01, visible = True)
-# watermark = watermark.update(False)
-# disclaimer = disclaimer.update(visible = True)
-# else:
-# typical_p = typical_p.update(visible = False)
-# top_p = top_p.update(value = 0.95, visible = True)
-# top_k = top_k.update(value = 4, visible = True)
-# temperature = temperature.update(value = 0.5, visible = True)
-# repetition_penalty = repetition_penalty.update(value = 1.03, visible = True)
-# watermark = watermark.update(True)
-# disclaimer = disclaimer.update(visible = False)
-# return (
-# disclaimer,
-# typical_p,
-# top_p,
-# top_k,
-# temperature,
-# repetition_penalty,
-# watermark,
-# )
-#
-#
-# title = """
🔥Large Language Model API 🚀Streaming🚀
"""
-# description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form:
-# ```
-# User:
-# Assistant:
-# User:
-# Assistant:
-# ...
-# ```
-# In this app, you can explore the outputs of multiple LLMs when prompted in this way.
-# """
-#
-# openchat_disclaimer = """
-#
- Generating Human Motion from Textual Descriptions (T2M-GPT)
- This space uses T2M-GPT models based on Vector Quantised-Variational AutoEncoder (VQ-VAE) and Generative Pre-trained Transformer (GPT) for human motion generation from textual descriptions🤗
-
- ''')
- with gr.Row():
- with gr.Column():
- gr.Markdown('''
-
-
- a man starts off in an upright position with both arms extended out by his sides, he then brings his arms down to his body and claps his hands together. after this he walks down and to the left where he proceeds to sit on a seat
-
-
- ''')
- with gr.Column():
- gr.Markdown('''
-
-
- a person puts their hands together, leans forwards slightly then swings the arms from right to left
-
-
- ''')
- with gr.Column():
- gr.Markdown('''
-
-
- a man is practicing the waltz with a partner
-
-
- ''')
- with gr.Row():
- with gr.Column():
- gr.Markdown('''
- ### Generate human motion by **T2M-GPT**
- ##### Step 1. Give prompt text describing human motion
- ##### Step 2. Choose the method used to render the output (Fast: sketch skeleton; Slow: SMPL mesh, which requires a GPU and takes around 2 minutes)
- ##### Step 3. Generate output and enjoy
- ''')
- with gr.Column():
- with gr.Row():
- text_prompt.render()
- method = gr.Dropdown(["slow", "fast"], label="Method", value="slow")
- with gr.Row():
- generate_btn = gr.Button("Generate")
- generate_btn.click(predict, [text_prompt, method], [video_out], api_name="generate")
- print(video_out)
- with gr.Row():
- video_out.render()
- with gr.Row():
- gr.Markdown('''
- ### You can test by following examples:
- ''')
- examples = gr.Examples(examples=
- [ "a person jogs in place, slowly at first, then increases speed. they then back up and squat down.",
- "a man steps forward and does a handstand",
- "a man rises from the ground, walks in a circle and sits back down on the ground"],
- label="Examples", inputs=[text_prompt])
-
-demo.launch(debug=True)
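The three-step walkthrough above (give a prompt, choose a render method, generate) is wired together through a single Button.click call on components that are defined and rendered elsewhere in the file. Stripped of the surrounding markup, the core Blocks wiring could be sketched as below; predict is a stand-in, since the real T2M-GPT inference code is not part of this excerpt, and the component labels are invented.

```python
import gradio as gr

def predict(text_prompt: str, method: str) -> str:
    # Placeholder: the real function renders a motion video for the prompt.
    return "path/to/generated_video.mp4"

with gr.Blocks() as demo:
    text_prompt = gr.Textbox(label="Prompt")
    method = gr.Dropdown(["slow", "fast"], label="Method", value="slow")
    video_out = gr.Video(label="Generated motion")
    generate_btn = gr.Button("Generate")
    generate_btn.click(predict, [text_prompt, method], [video_out], api_name="generate")

demo.launch()
```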
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/dataset_folder.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/dataset_folder.py
deleted file mode 100644
index 1847e8792ae0cd543305a7b854493fd38fcdbc50..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/dataset_folder.py
+++ /dev/null
@@ -1,430 +0,0 @@
-# Copyright (c) EPFL VILAB.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# --------------------------------------------------------
-# Based on BEiT, timm, DINO DeiT and MAE-priv code bases
-# https://github.com/microsoft/unilm/tree/master/beit
-# https://github.com/rwightman/pytorch-image-models/tree/master/timm
-# https://github.com/facebookresearch/deit
-# https://github.com/facebookresearch/dino
-# https://github.com/BUPT-PRIV/MAE-priv
-# --------------------------------------------------------
-import os
-import os.path
-import random
-from copy import deepcopy
-from typing import Any, Callable, Dict, List, Optional, Tuple, cast
-
-import numpy as np
-import torch
-from PIL import Image
-from torchvision.datasets.vision import VisionDataset
-
-
-def has_file_allowed_extension(filename: str, extensions: Tuple[str, ...]) -> bool:
- """Checks if a file is an allowed extension.
-
- Args:
- filename (string): path to a file
- extensions (tuple of strings): extensions to consider (lowercase)
-
- Returns:
- bool: True if the filename ends with one of given extensions
- """
- return filename.lower().endswith(extensions)
-
-
-def is_image_file(filename: str) -> bool:
- """Checks if a file is an allowed image extension.
-
- Args:
- filename (string): path to a file
-
- Returns:
- bool: True if the filename ends with a known image extension
- """
- return has_file_allowed_extension(filename, IMG_EXTENSIONS)
-
-
-def make_dataset(
- directory: str,
- class_to_idx: Dict[str, int],
- extensions: Optional[Tuple[str, ...]] = None,
- is_valid_file: Optional[Callable[[str], bool]] = None,
-) -> List[Tuple[str, int]]:
- instances = []
- directory = os.path.expanduser(directory)
- both_none = extensions is None and is_valid_file is None
- both_something = extensions is not None and is_valid_file is not None
- if both_none or both_something:
- raise ValueError("Both extensions and is_valid_file cannot be None or not None at the same time")
- if extensions is not None:
- def is_valid_file(x: str) -> bool:
- return has_file_allowed_extension(x, cast(Tuple[str, ...], extensions))
- is_valid_file = cast(Callable[[str], bool], is_valid_file)
- for target_class in sorted(class_to_idx.keys()):
- class_index = class_to_idx[target_class]
- target_dir = os.path.join(directory, target_class)
- if not os.path.isdir(target_dir):
- continue
- for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):
- for fname in sorted(fnames):
- path = os.path.join(root, fname)
- if is_valid_file(path):
- item = path, class_index
- instances.append(item)
- return instances
-
-
-class DatasetFolder(VisionDataset):
- """A generic data loader where the samples are arranged in this way: ::
-
- root/class_x/xxx.ext
- root/class_x/xxy.ext
- root/class_x/xxz.ext
-
- root/class_y/123.ext
- root/class_y/nsdf3.ext
- root/class_y/asd932_.ext
-
- Args:
- root (string): Root directory path.
- loader (callable): A function to load a sample given its path.
- extensions (tuple[string]): A list of allowed extensions.
- both extensions and is_valid_file should not be passed.
- transform (callable, optional): A function/transform that takes in
- a sample and returns a transformed version.
- E.g, ``transforms.RandomCrop`` for images.
- target_transform (callable, optional): A function/transform that takes
- in the target and transforms it.
- is_valid_file (callable, optional): A function that takes path of a file
- and check if the file is a valid file (used to check for corrupt logs)
- both extensions and is_valid_file should not be passed.
-
- Attributes:
- classes (list): List of the class names sorted alphabetically.
- class_to_idx (dict): Dict with items (class_name, class_index).
- samples (list): List of (sample path, class_index) tuples
- targets (list): The class_index value for each image in the dataset
- """
-
- def __init__(
- self,
- root: str,
- loader: Callable[[str], Any],
- extensions: Optional[Tuple[str, ...]] = None,
- transform: Optional[Callable] = None,
- target_transform: Optional[Callable] = None,
- is_valid_file: Optional[Callable[[str], bool]] = None,
- ) -> None:
- super(DatasetFolder, self).__init__(root, transform=transform,
- target_transform=target_transform)
- classes, class_to_idx = self._find_classes(self.root)
- samples = make_dataset(self.root, class_to_idx, extensions, is_valid_file)
- if len(samples) == 0:
- msg = "Found 0 logs in subfolders of: {}\n".format(self.root)
- if extensions is not None:
- msg += "Supported extensions are: {}".format(",".join(extensions))
- raise RuntimeError(msg)
-
- self.loader = loader
- self.extensions = extensions
-
- self.classes = classes
- self.class_to_idx = class_to_idx
- self.samples = samples
- self.targets = [s[1] for s in samples]
-
- def _find_classes(self, dir: str) -> Tuple[List[str], Dict[str, int]]:
- """
- Finds the class folders in a dataset.
-
- Args:
- dir (string): Root directory path.
-
- Returns:
- tuple: (classes, class_to_idx) where classes are relative to (dir), and class_to_idx is a dictionary.
-
- Ensures:
- No class is a subdirectory of another.
- """
- classes = [d.name for d in os.scandir(dir) if d.is_dir()]
- classes.sort()
- class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
- return classes, class_to_idx
-
- def __getitem__(self, index: int) -> Tuple[Any, Any]:
- """
- Args:
- index (int): Index
-
- Returns:
- tuple: (sample, target) where target is class_index of the target class.
- """
- while True:
- try:
- path, target = self.samples[index]
- sample = self.loader(path)
- break
- except Exception as e:
- print(e)
- index = random.randint(0, len(self.samples) - 1)
-
- if self.transform is not None:
- sample = self.transform(sample)
- if self.target_transform is not None:
- target = self.target_transform(target)
-
- return sample, target
-
- def __len__(self) -> int:
- return len(self.samples)
-
-
-class MultiTaskDatasetFolder(VisionDataset):
- """A generic multi-task dataset loader where the samples are arranged in this way: ::
-
- root/task_a/class_x/xxx.ext
- root/task_a/class_y/xxy.ext
- root/task_a/class_z/xxz.ext
-
- root/task_b/class_x/xxx.ext
- root/task_b/class_y/xxy.ext
- root/task_b/class_z/xxz.ext
-
- Args:
- root (string): Root directory path.
- tasks (list): List of tasks as strings
- loader (callable): A function to load a sample given its path.
- extensions (tuple[string]): A list of allowed extensions.
- both extensions and is_valid_file should not be passed.
- transform (callable, optional): A function/transform that takes in
- a sample and returns a transformed version.
- E.g, ``transforms.RandomCrop`` for images.
- target_transform (callable, optional): A function/transform that takes
- in the target and transforms it.
- is_valid_file (callable, optional): A function that takes path of a file
- and check if the file is a valid file (used to check for corrupt logs)
- both extensions and is_valid_file should not be passed.
-
- Attributes:
- classes (list): List of the class names sorted alphabetically.
- class_to_idx (dict): Dict with items (class_name, class_index).
- samples (list): List of (sample path, class_index) tuples
- targets (list): The class_index value for each image in the dataset
- """
-
- def __init__(
- self,
- root: str,
- tasks: List[str],
- loader: Callable[[str], Any],
- extensions: Optional[Tuple[str, ...]] = None,
- transform: Optional[Callable] = None,
- target_transform: Optional[Callable] = None,
- is_valid_file: Optional[Callable[[str], bool]] = None,
- prefixes: Optional[Dict[str,str]] = None,
- max_images: Optional[int] = None
- ) -> None:
- super(MultiTaskDatasetFolder, self).__init__(root, transform=transform,
- target_transform=target_transform)
- self.tasks = tasks
- classes, class_to_idx = self._find_classes(os.path.join(self.root, self.tasks[0]))
-
- prefixes = {} if prefixes is None else prefixes
- prefixes.update({task: '' for task in tasks if task not in prefixes})
-
- samples = {
- task: make_dataset(os.path.join(self.root, f'{prefixes[task]}{task}'), class_to_idx, extensions, is_valid_file)
- for task in self.tasks
- }
-
- for task, task_samples in samples.items():
- if len(task_samples) == 0:
- msg = "Found 0 logs in subfolders of: {}\n".format(os.path.join(self.root, task))
- if extensions is not None:
- msg += "Supported extensions are: {}".format(",".join(extensions))
- raise RuntimeError(msg)
-
- self.loader = loader
- self.extensions = extensions
-
- self.classes = classes
- self.class_to_idx = class_to_idx
- self.samples = samples
- # self.targets = [s[1] for s in list(samples.values())[0]]
-
- # Select random subset of dataset if so specified
- if isinstance(max_images, int):
- total_samples = len(list(self.samples.values())[0])
- np.random.seed(0)
- permutation = np.random.permutation(total_samples)
- for task in samples:
- self.samples[task] = [self.samples[task][i] for i in permutation][:max_images]
-
- self.cache = {}
-
- def _find_classes(self, dir: str) -> Tuple[List[str], Dict[str, int]]:
- """
- Finds the class folders in a dataset.
-
- Args:
- dir (string): Root directory path.
-
- Returns:
- tuple: (classes, class_to_idx) where classes are relative to (dir), and class_to_idx is a dictionary.
-
- Ensures:
- No class is a subdirectory of another.
- """
- classes = [d.name for d in os.scandir(dir) if d.is_dir()]
- classes.sort()
- class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
- return classes, class_to_idx
-
- def __getitem__(self, index: int) -> Tuple[Any, Any]:
- """
- Args:
- index (int): Index
-
- Returns:
- tuple: (sample, target) where target is class_index of the target class.
- """
- if index in self.cache:
- sample_dict, target = deepcopy(self.cache[index])
- else:
- sample_dict = {}
- for task in self.tasks:
- path, target = self.samples[task][index]
- sample = pil_loader(path, convert_rgb=(task=='rgb'))
- sample_dict[task] = sample
- # self.cache[index] = deepcopy((sample_dict, target))
-
- if self.transform is not None:
- sample_dict = self.transform(sample_dict)
- if self.target_transform is not None:
- target = self.target_transform(target)
-
- return sample_dict, target
-
- def __len__(self) -> int:
- return len(list(self.samples.values())[0])
-
-
-IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.tiff', '.webp', '.jpx')
-
-
-def pil_loader(path: str, convert_rgb=True) -> Image.Image:
- # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
- # with open(path, 'rb') as f:
- # img = Image.open(f)
- img = Image.open(path)
- return img.convert('RGB') if convert_rgb else img
-
-
-# TODO: specify the return type
-def accimage_loader(path: str) -> Any:
- import accimage
- try:
- return accimage.Image(path)
- except IOError:
- # Potentially a decoding problem, fall back to PIL.Image
- return pil_loader(path)
-
-
-def default_loader(path: str) -> Any:
- from torchvision import get_image_backend
- if get_image_backend() == 'accimage':
- return accimage_loader(path)
- else:
- return pil_loader(path)
-
-
-class ImageFolder(DatasetFolder):
- """A generic data loader where the images are arranged in this way: ::
-
- root/dog/xxx.png
- root/dog/xxy.png
- root/dog/xxz.png
-
- root/cat/123.png
- root/cat/nsdf3.png
- root/cat/asd932_.png
-
- Args:
- root (string): Root directory path.
- transform (callable, optional): A function/transform that takes in an PIL image
- and returns a transformed version. E.g, ``transforms.RandomCrop``
- target_transform (callable, optional): A function/transform that takes in the
- target and transforms it.
- loader (callable, optional): A function to load an image given its path.
- is_valid_file (callable, optional): A function that takes path of an Image file
- and check if the file is a valid file (used to check for corrupt logs)
-
- Attributes:
- classes (list): List of the class names sorted alphabetically.
- class_to_idx (dict): Dict with items (class_name, class_index).
- imgs (list): List of (image path, class_index) tuples
- """
-
- def __init__(
- self,
- root: str,
- transform: Optional[Callable] = None,
- target_transform: Optional[Callable] = None,
- loader: Callable[[str], Any] = default_loader,
- is_valid_file: Optional[Callable[[str], bool]] = None,
- ):
- super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS if is_valid_file is None else None,
- transform=transform,
- target_transform=target_transform,
- is_valid_file=is_valid_file)
- self.imgs = self.samples
-
-class MultiTaskImageFolder(MultiTaskDatasetFolder):
- """A generic multi-task dataset loader where the images are arranged in this way: ::
-
- root/task_a/class_x/xxx.ext
- root/task_a/class_y/xxy.ext
- root/task_a/class_z/xxz.ext
-
- root/task_b/class_x/xxx.ext
- root/task_b/class_y/xxy.ext
- root/task_b/class_z/xxz.ext
-
- Args:
- root (string): Root directory path.
- transform (callable, optional): A function/transform that takes in an PIL image
- and returns a transformed version. E.g, ``transforms.RandomCrop``
- target_transform (callable, optional): A function/transform that takes in the
- target and transforms it.
- loader (callable, optional): A function to load an image given its path.
- is_valid_file (callable, optional): A function that takes path of an Image file
- and check if the file is a valid file (used to check for corrupt logs)
-
- Attributes:
- classes (list): List of the class names sorted alphabetically.
- class_to_idx (dict): Dict with items (class_name, class_index).
- imgs (list): List of (image path, class_index) tuples
- """
-
- def __init__(
- self,
- root: str,
- tasks: List[str],
- transform: Optional[Callable] = None,
- target_transform: Optional[Callable] = None,
- loader: Callable[[str], Any] = pil_loader,
- is_valid_file: Optional[Callable[[str], bool]] = None,
- prefixes: Optional[Dict[str,str]] = None,
- max_images: Optional[int] = None
- ):
- super(MultiTaskImageFolder, self).__init__(root, tasks, loader, IMG_EXTENSIONS if is_valid_file is None else None,
- transform=transform,
- target_transform=target_transform,
- is_valid_file=is_valid_file,
- prefixes=prefixes,
- max_images=max_images)
- self.imgs = self.samples
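The docstrings above describe a root/task/class directory layout and a __getitem__ that returns a dict of per-task samples plus a class index. A hypothetical usage sketch follows; the directory name, task list, transform and max_images value are invented, and the module defined above is assumed to be importable.

```python
# Hypothetical usage of MultiTaskImageFolder as defined above.
# Expects a layout like ./data/rgb/<class>/*.png and ./data/depth/<class>/*.png
def identity_transform(sample_dict):
    return sample_dict  # receives a dict of {task: PIL.Image}

dataset = MultiTaskImageFolder(
    root="./data",
    tasks=["rgb", "depth"],
    transform=identity_transform,
    max_images=1000,  # optional random subset, as implemented above
)

sample_dict, class_index = dataset[0]
print(dataset.classes, len(dataset))
print({task: img.size for task, img in sample_dict.items()})
```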
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_flores_data.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_flores_data.sh
deleted file mode 100644
index e6175ce0c38b06a1ebddaeca808f71b47f77f500..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_flores_data.sh
+++ /dev/null
@@ -1,246 +0,0 @@
-#!/bin/bash
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-#
-
-if [ -z "$WORKDIR_ROOT" ] ;
-then
- echo "please specify your working directory root in the environment variable WORKDIR_ROOT. Exiting..."
- exit
-fi
-
-
-set -e
-set -o pipefail
-
-SRC=en
-SI_TGT=si
-NE_TGT=ne
-
-DESTDIR=${WORKDIR_ROOT}/ML50/raw/
-
-ROOT=${WORKDIR_ROOT}/tmp
-mkdir -p $ROOT
-DATA=$ROOT/data
-NE_ROOT=$DATA/all-clean-ne
-SI_ROOT=$DATA/all-clean-si
-
-mkdir -p $DATA $NE_ROOT $SI_ROOT
-
-SI_OPUS_DATASETS=(
- "$SI_ROOT/GNOME.en-si"
- "$SI_ROOT/Ubuntu.en-si"
- "$SI_ROOT/KDE4.en-si"
- "$SI_ROOT/OpenSubtitles.en-si"
-)
-
-SI_OPUS_URLS=(
- "https://object.pouta.csc.fi/OPUS-GNOME/v1/moses/en-si.txt.zip"
- "https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/moses/en-si.txt.zip"
- "https://object.pouta.csc.fi/OPUS-KDE4/v2/moses/en-si.txt.zip"
- "https://object.pouta.csc.fi/OPUS-OpenSubtitles/v2018/moses/en-si.txt.zip"
-)
-
-NE_OPUS_DATASETS=(
- "$NE_ROOT/GNOME.en-ne"
- "$NE_ROOT/Ubuntu.en-ne"
- "$NE_ROOT/KDE4.en-ne"
-)
-
-NE_OPUS_URLS=(
- "https://object.pouta.csc.fi/OPUS-GNOME/v1/moses/en-ne.txt.zip"
- "https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/moses/en-ne.txt.zip"
- "https://object.pouta.csc.fi/OPUS-KDE4/v2/moses/en-ne.txt.zip"
-)
-
-REMOVE_FILE_PATHS=()
-
-# Download data
-download_data() {
- CORPORA=$1
- URL=$2
-
- if [ -f $CORPORA ]; then
- echo "$CORPORA already exists, skipping download"
- else
- echo "Downloading $URL"
- wget $URL -O $CORPORA --no-check-certificate || rm -f $CORPORA
- if [ -f $CORPORA ]; then
- echo "$URL successfully downloaded."
- else
- echo "$URL not successfully downloaded."
- rm -f $CORPORA
- exit -1
- fi
- fi
-}
-
-# Example: download_opus_data $LANG_ROOT $TGT
-download_opus_data() {
- LANG_ROOT=$1
- TGT=$2
-
- if [ "$TGT" = "si" ]; then
- URLS=("${SI_OPUS_URLS[@]}")
- DATASETS=("${SI_OPUS_DATASETS[@]}")
- else
- URLS=("${NE_OPUS_URLS[@]}")
- DATASETS=("${NE_OPUS_DATASETS[@]}")
- fi
-
- # Download and extract data
- for ((i=0;i<${#URLS[@]};++i)); do
- URL=${URLS[i]}
- CORPORA=${DATASETS[i]}
-
- download_data $CORPORA $URL
- unzip -o $CORPORA -d $LANG_ROOT
- REMOVE_FILE_PATHS+=( $CORPORA $CORPORA.xml $CORPORA.ids $LANG_ROOT/README $LANG_ROOT/LICENSE )
- done
-
- cat ${DATASETS[0]}.$SRC ${DATASETS[1]}.$SRC ${DATASETS[2]}.$SRC > $LANG_ROOT/GNOMEKDEUbuntu.$SRC-$TGT.$SRC
- cat ${DATASETS[0]}.$TGT ${DATASETS[1]}.$TGT ${DATASETS[2]}.$TGT > $LANG_ROOT/GNOMEKDEUbuntu.$SRC-$TGT.$TGT
-
- REMOVE_FILE_PATHS+=( ${DATASETS[0]}.$SRC ${DATASETS[1]}.$SRC ${DATASETS[2]}.$SRC )
- REMOVE_FILE_PATHS+=( ${DATASETS[0]}.$TGT ${DATASETS[1]}.$TGT ${DATASETS[2]}.$TGT )
-}
-
-download_opus_data $SI_ROOT $SI_TGT
-cp ${SI_OPUS_DATASETS[3]}.$SRC $SI_ROOT/OpenSubtitles2018.$SRC-$SI_TGT.$SRC
-cp ${SI_OPUS_DATASETS[3]}.$SI_TGT $SI_ROOT/OpenSubtitles2018.$SRC-$SI_TGT.$SI_TGT
-REMOVE_FILE_PATHS+=( ${SI_OPUS_DATASETS[3]}.$SRC ${SI_OPUS_DATASETS[3]}.$SI_TGT )
-
-download_opus_data $NE_ROOT $NE_TGT
-
-
-# Download and extract Global Voices data
-GLOBAL_VOICES="$NE_ROOT/globalvoices.2018q4.ne-en"
-GLOBAL_VOICES_URL="http://www.casmacat.eu/corpus/global-voices/globalvoices.ne-en.xliff.gz"
-
-download_data $GLOBAL_VOICES.gz $GLOBAL_VOICES_URL
-gunzip -Nf $GLOBAL_VOICES.gz
-
-sed -ne 's?.*<source>\(.*\)</source>.*?\1?p' $GLOBAL_VOICES > $GLOBAL_VOICES.$NE_TGT
-sed -ne 's?.*<target[^>]*>\(.*\)</target>.*?\1?p' $GLOBAL_VOICES > $GLOBAL_VOICES.$SRC
-
-REMOVE_FILE_PATHS+=( $GLOBAL_VOICES )
-
-# Download and extract the bible dataset
-BIBLE_TOOLS=bible-corpus-tools
-XML_BIBLES=XML_Bibles
-XML_BIBLES_DUP=XML_Bibles_dup
-
-if [ ! -e $BIBLE_TOOLS ]; then
- echo "Cloning bible-corpus-tools repository..."
- git clone https://github.com/christos-c/bible-corpus-tools.git
-fi
-
-mkdir -p $BIBLE_TOOLS/bin $XML_BIBLES $XML_BIBLES_DUP
-javac -cp "$BIBLE_TOOLS/lib/*" -d $BIBLE_TOOLS/bin $BIBLE_TOOLS/src/bible/readers/*.java $BIBLE_TOOLS/src/bible/*.java
-
-download_data bible.tar.gz "https://github.com/christos-c/bible-corpus/archive/v1.2.1.tar.gz"
-tar xvzf bible.tar.gz
-
-cp bible-corpus-1.2.1/bibles/{Greek.xml,English.xml,Nepali.xml} $XML_BIBLES/
-cp bible-corpus-1.2.1/bibles/{Greek.xml,English-WEB.xml,Nepali.xml} $XML_BIBLES_DUP/
-
-java -cp $BIBLE_TOOLS/lib/*:$BIBLE_TOOLS/bin bible.CreateMLBooks $XML_BIBLES
-java -cp $BIBLE_TOOLS/lib/*:$BIBLE_TOOLS/bin bible.CreateMLBooks $XML_BIBLES_DUP
-java -cp $BIBLE_TOOLS/lib/*:$BIBLE_TOOLS/bin bible.CreateVerseAlignedBooks $XML_BIBLES
-java -cp $BIBLE_TOOLS/lib/*:$BIBLE_TOOLS/bin bible.CreateVerseAlignedBooks $XML_BIBLES_DUP
-
-cat $XML_BIBLES/aligned/*/English.txt > $NE_ROOT/bible.$SRC-$NE_TGT.$SRC
-cat $XML_BIBLES/aligned/*/Nepali.txt > $NE_ROOT/bible.$SRC-$NE_TGT.$NE_TGT
-cat $XML_BIBLES_DUP/aligned/*/English-WEB.txt > $NE_ROOT/bible_dup.$SRC-$NE_TGT.$SRC
-cat $XML_BIBLES_DUP/aligned/*/Nepali.txt > $NE_ROOT/bible_dup.$SRC-$NE_TGT.$NE_TGT
-REMOVE_FILE_PATHS+=( bible-corpus-1.2.1 bible.tar.gz $BIBLE_TOOLS $XML_BIBLES $XML_BIBLES_DUP )
-
-# Download and extract the Penn Treebank dataset
-NE_TAGGED=$ROOT/new_submissions_parallel_corpus_project_Nepal
-NE_TAGGED_URL="http://www.cle.org.pk/Downloads/ling_resources/parallelcorpus/NepaliTaggedCorpus.zip"
-EN_TAGGED_PATCH_URL="https://dl.fbaipublicfiles.com/fairseq/data/nepali-penn-treebank.en.patch"
-NE_TAGGED_PATCH_URL="https://dl.fbaipublicfiles.com/fairseq/data/nepali-penn-treebank.ne.patch"
-MOSES=mosesdecoder
-MOSES_TOK=$MOSES/scripts/tokenizer
-EN_PATCH_REGEX="{s:\\\/:\/:g;s/\*\T\*\-\n+//g;s/\-LCB\-/\{/g;s/\-RCB\-/\}/g; s/\-LSB\-/\[/g; s/\-RSB\-/\]/g;s/\-LRB\-/\(/g; s/\-RRB\-/\)/g; s/\'\'/\"/g; s/\`\`/\"/g; s/\ +\'s\ +/\'s /g; s/\ +\'re\ +/\'re /g; s/\"\ +/\"/g; s/\ +\"/\"/g; s/\ n't([\ \.\"])/n't\1/g; s/\r+(.)/\1/g;}"
-NE_PATCH_REGEX="{s:\p{Cf}::g;s:\\\/:\/:g;s/\*\T\*\-\n+//g;s/\-LCB\-/\{/g;s/\-RCB\-/\}/g; s/\-LSB\-/\[/g; s/\-RSB\-/\]/g;s/\-LRB\-/\(/g; s/\-RRB\-/\)/g; s/\'\'/\"/g; s/\`\`/\"/g; s/\ +\'s\ +/\'s /g; s/\ +\'re\ +/\'re /g; s/\"\ +/\"/g; s/\ +\"/\"/g; s/\ n't([\ \.\"])/n't\1/g; s/\r+(.)/\1/g;}"
-
-download_data $DATA/nepali-penn-treebank.$SRC.patch $EN_TAGGED_PATCH_URL
-download_data $DATA/nepali-penn-treebank.$NE_TGT.patch $NE_TAGGED_PATCH_URL
-download_data original.zip $NE_TAGGED_URL
-unzip -o original.zip -d $ROOT
-
-cat $NE_TAGGED/00.txt $NE_TAGGED/01.txt $NE_TAGGED/02.txt > $NE_TAGGED/nepali-penn-treebank.$SRC
-cat $NE_TAGGED/00ne_revised.txt $NE_TAGGED/01ne_revised.txt $NE_TAGGED/02ne_revised.txt > $NE_TAGGED/nepali-penn-treebank.$NE_TGT
-
-patch $NE_TAGGED/nepali-penn-treebank.$SRC -i $DATA/nepali-penn-treebank.$SRC.patch -o $NE_TAGGED/nepali-penn-treebank-patched.$SRC
-patch $NE_TAGGED/nepali-penn-treebank.$NE_TGT -i $DATA/nepali-penn-treebank.$NE_TGT.patch -o $NE_TAGGED/nepali-penn-treebank-patched.$NE_TGT
-
-if [ ! -e $MOSES ]; then
- echo "Cloning moses repository..."
- git clone https://github.com/moses-smt/mosesdecoder.git
-fi
-
-cat $NE_TAGGED/nepali-penn-treebank-patched.$SRC | \
- perl -anpe "$EN_PATCH_REGEX" | \
- $MOSES_TOK/tokenizer.perl -l $SRC | \
- $MOSES_TOK/detokenizer.perl -l $SRC > $NE_ROOT/nepali-penn-treebank.$SRC
-
-cat $NE_TAGGED/nepali-penn-treebank-patched.$NE_TGT | \
- perl -CIO -anpe "$NE_PATCH_REGEX" | \
- $MOSES_TOK/detokenizer.perl -l $SRC > $NE_ROOT/nepali-penn-treebank.$NE_TGT
-
-
-# Download nepali dictionary data
-NE_DICT=$NE_ROOT/dictionaries
-download_data $NE_DICT "http://www.seas.upenn.edu/~nlp/resources/TACL-data-release/dictionaries.tar.gz"
-tar xvzf $NE_DICT
-cp dictionaries/dict.ne $NE_ROOT/dictionary.$NE_TGT-$SRC
-REMOVE_FILE_PATHS+=( $NE_DICT dictionaries )
-
-REMOVE_FILE_PATHS+=( $MOSES $NE_TAGGED original.zip $DATA/nepali-penn-treebank.$SRC.patch $DATA/nepali-penn-treebank.$NE_TGT.patch )
-
-
-# Remove the temporary files
-for ((i=0;i<${#REMOVE_FILE_PATHS[@]};++i)); do
- rm -rf ${REMOVE_FILE_PATHS[i]}
-done
-
-# Copy the training data
-si=si_LK
-ne=ne_NP
-en=en_XX
-cat $SI_ROOT/GNOMEKDEUbuntu.en-si.si $SI_ROOT/OpenSubtitles2018.en-si.si > $DESTDIR/train.$si-$en.$si
-cat $SI_ROOT/GNOMEKDEUbuntu.en-si.en $SI_ROOT/OpenSubtitles2018.en-si.en > $DESTDIR/train.$si-$en.$en
-
-cat $NE_ROOT/bible_dup.en-ne.ne $NE_ROOT/bible.en-ne.ne $NE_ROOT/globalvoices.2018q4.ne-en.ne $NE_ROOT/GNOMEKDEUbuntu.en-ne.ne $NE_ROOT/nepali-penn-treebank.ne > $DESTDIR/train.$ne-$en.$ne
-cat $NE_ROOT/bible_dup.en-ne.en $NE_ROOT/bible.en-ne.en $NE_ROOT/globalvoices.2018q4.ne-en.en $NE_ROOT/GNOMEKDEUbuntu.en-ne.en $NE_ROOT/nepali-penn-treebank.en > $DESTDIR/train.$ne-$en.$en
-
-
-#Download the test sets
-wget https://github.com/facebookresearch/flores/raw/master/data/wikipedia_en_ne_si_test_sets.tgz
-tar -xvzf wikipedia_en_ne_si_test_sets.tgz
-
-cp wikipedia_en_ne_si_test_sets/wikipedia.dev.ne-en.ne $DESTDIR/valid.$ne-$en.$ne
-cp wikipedia_en_ne_si_test_sets/wikipedia.dev.ne-en.en $DESTDIR/valid.$ne-$en.$en
-
-cp wikipedia_en_ne_si_test_sets/wikipedia.dev.si-en.si $DESTDIR/valid.$si-$en.$si
-cp wikipedia_en_ne_si_test_sets/wikipedia.dev.si-en.en $DESTDIR/valid.$si-$en.$en
-
-cp wikipedia_en_ne_si_test_sets/wikipedia.devtest.ne-en.ne $DESTDIR/devtest.$ne-$en.$ne
-cp wikipedia_en_ne_si_test_sets/wikipedia.devtest.ne-en.en $DESTDIR/devtest.$ne-$en.$en
-
-cp wikipedia_en_ne_si_test_sets/wikipedia.devtest.si-en.si $DESTDIR/devtest.$si-$en.$si
-cp wikipedia_en_ne_si_test_sets/wikipedia.devtest.si-en.en $DESTDIR/devtest.$si-$en.$en
-
-cp wikipedia_en_ne_si_test_sets/wikipedia.test.ne-en.ne $DESTDIR/test.$ne-$en.$ne
-cp wikipedia_en_ne_si_test_sets/wikipedia.test.ne-en.en $DESTDIR/test.$ne-$en.$en
-
-cp wikipedia_en_ne_si_test_sets/wikipedia.test.si-en.si $DESTDIR/test.$si-$en.$si
-cp wikipedia_en_ne_si_test_sets/wikipedia.test.si-en.en $DESTDIR/test.$si-$en.$en
-
-rm -rf wikipedia_en_ne_si_test_sets.tgz wikipedia_en_ne_si_test_sets
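For comparison, the download_data shell helper above (skip the fetch if the file exists, otherwise download and clean up on failure) could be re-expressed in Python roughly as follows; this is an illustrative sketch using only the standard library, not part of the original pipeline, and it substitutes urllib for wget.

```python
import os
import urllib.request

def download_data(corpora_path: str, url: str) -> None:
    """Mirror of the shell helper: skip existing files, remove partial downloads."""
    if os.path.isfile(corpora_path):
        print(f"{corpora_path} already exists, skipping download")
        return
    print(f"Downloading {url}")
    try:
        urllib.request.urlretrieve(url, corpora_path)
        print(f"{url} successfully downloaded.")
    except Exception:
        if os.path.exists(corpora_path):
            os.remove(corpora_path)
        raise SystemExit(f"{url} not successfully downloaded.")
```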
diff --git a/spaces/HewDew/Linaqruf-anything-v3.0/README.md b/spaces/HewDew/Linaqruf-anything-v3.0/README.md
deleted file mode 100644
index a629e1def33aecebdeb81f2483115d1d0a36223a..0000000000000000000000000000000000000000
--- a/spaces/HewDew/Linaqruf-anything-v3.0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Linaqruf Anything V3.0
-emoji: 👁
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/CarouselItem.svelte_svelte_type_style_lang.cc0aed40.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/CarouselItem.svelte_svelte_type_style_lang.cc0aed40.js
deleted file mode 100644
index e812d2d88d7d96dc6837e5da19aa2c3964ecabad..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/CarouselItem.svelte_svelte_type_style_lang.cc0aed40.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as B,i as E,s as H,p as I,e as w,a as j,t as M,b as g,d as k,f as T,g as c,l as S,u as D,q as F,r as O,h as q,j as Q,k as R,n as U,A as z,F as G,Q as y,$ as J,a0 as K,a1 as A}from"./index.396f4a72.js";function N(l){let t,o,e,u,L,f,m=l[2]+1+"",v,p,_=l[3].length+"",d,b,i,r,x,n;const C=l[9].default,a=I(C,l,l[8],null);return{c(){t=w("div"),a&&a.c(),o=j(),e=w("div"),u=w("button"),u.innerHTML='',L=j(),f=w("div"),v=M(m),p=M(" / "),d=M(_),b=j(),i=w("button"),i.innerHTML='',g(u,"class","flex items-center justify-center h-6 w-6 hover:text-orange-500"),g(f,"class","carousel_index text-center font-semibold"),g(i,"class","flex items-center justify-center h-6 w-6 hover:text-orange-500"),g(e,"class","carousel-control flex gap-4 justify-center items-center pt-2 text-sm"),g(t,"class","output-carousel flex flex-col relative"),g(t,"id",l[0]),k(t,"!hidden",!l[1])},m(s,h){T(s,t,h),a&&a.m(t,null),c(t,o),c(t,e),c(e,u),c(e,L),c(e,f),c(f,v),c(f,p),c(f,d),c(e,b),c(e,i),r=!0,x||(n=[S(u,"click",l[7]),S(i,"click",l[6])],x=!0)},p(s,[h]){a&&a.p&&(!r||h&256)&&D(a,C,s,s[8],r?O(C,s[8],h,null):F(s[8]),null),(!r||h&4)&&m!==(m=s[2]+1+"")&&q(v,m),(!r||h&8)&&_!==(_=s[3].length+"")&&q(d,_),(!r||h&1)&&g(t,"id",s[0]),h&2&&k(t,"!hidden",!s[1])},i(s){r||(Q(a,s),r=!0)},o(s){R(a,s),r=!1},d(s){s&&U(t),a&&a.d(s),x=!1,z(n)}}}const P={};function V(l,t,o){let e,u,{$$slots:L={},$$scope:f}=t,{elem_id:m=""}=t,{visible:v=!0}=t;const p=G(),_=A([]);y(l,_,n=>o(3,e=n));const d=A();y(l,d,n=>o(11,u=n));let b=-1;J(P,{register:()=>(e.push(++b),_.set(e),b),unregister:n=>{const C=e.findIndex(a=>a===n);e.slice(C,1),_.set(e)},current:d});let i=0;const r=()=>{o(2,i=(i+1)%e.length),p("change")},x=()=>{o(2,i=(i-1+e.length)%e.length),p("change")};return l.$$set=n=>{"elem_id"in n&&o(0,m=n.elem_id),"visible"in n&&o(1,v=n.visible),"$$scope"in n&&o(8,f=n.$$scope)},l.$$.update=()=>{l.$$.dirty&12&&K(d,u=e[i]||0,u)},[m,v,i,e,_,d,r,x,f,L]}class X extends B{constructor(t){super(),E(this,t,V,N,H,{elem_id:0,visible:1})}}export{X as C,P as a};
-//# sourceMappingURL=CarouselItem.svelte_svelte_type_style_lang.cc0aed40.js.map
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/adaptive_loss.py b/spaces/ICML2022/OFA/fairseq/fairseq/criterions/adaptive_loss.py
deleted file mode 100644
index 6209ceaedb6d8120ad820c11b55c13596447933c..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/adaptive_loss.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass
-
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.constants import DDP_BACKEND_CHOICES
-from omegaconf import II
-
-
-@dataclass
-class AdaptiveLossConfig(FairseqDataclass):
- sentence_avg: bool = II("optimization.sentence_avg")
- ddp_backend: DDP_BACKEND_CHOICES = II("distributed_training.ddp_backend")
-
-
-@register_criterion("adaptive_loss", dataclass=AdaptiveLossConfig)
-class AdaptiveLoss(FairseqCriterion):
- """This is an implementation of the loss function accompanying the adaptive softmax approximation for
-    graphics processing units (GPUs), described in the paper "Efficient softmax approximation for GPUs"
- (http://arxiv.org/abs/1609.04309)."""
-
- def __init__(self, task, sentence_avg):
- super().__init__(task)
- self.sentence_avg = sentence_avg
-
- @classmethod
- def build_criterion(cls, cfg: AdaptiveLossConfig, task):
- if cfg.ddp_backend in {"c10d", "pytorch_ddp"}:
- raise Exception(
- "AdaptiveLoss is not compatible with the PyTorch "
- "version of DistributedDataParallel. Please use "
- "`--ddp-backend=legacy_ddp` instead."
- )
- return cls(task, cfg.sentence_avg)
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
-
- assert (
- hasattr(model.decoder, "adaptive_softmax")
- and model.decoder.adaptive_softmax is not None
- )
- adaptive_softmax = model.decoder.adaptive_softmax
-
- net_output = model(**sample["net_input"])
- orig_target = model.get_targets(sample, net_output)
-
- nsentences = orig_target.size(0)
- orig_target = orig_target.view(-1)
-
- bsz = orig_target.size(0)
-
- logits, target = adaptive_softmax(net_output[0], orig_target)
- assert len(target) == len(logits)
-
- loss = net_output[0].new(1 if reduce else bsz).zero_()
-
- for i in range(len(target)):
- if target[i] is not None:
- assert target[i].min() >= 0 and target[i].max() <= logits[i].size(1)
- loss += F.cross_entropy(
- logits[i],
- target[i],
- ignore_index=self.padding_idx,
- reduction="sum" if reduce else "none",
- )
-
- orig = utils.strip_pad(orig_target, self.padding_idx)
- ntokens = orig.numel()
- sample_size = sample["target"].size(0) if self.sentence_avg else ntokens
- logging_output = {
- "loss": loss.data,
- "ntokens": ntokens,
- "nsentences": nsentences,
- "sample_size": sample_size,
- }
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs))
- ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs))
- sample_size = utils.item(
- sum(log.get("sample_size", 0) for log in logging_outputs)
- )
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- if sample_size != ntokens:
- metrics.log_scalar(
- "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
- )
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)
- )
- else:
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg)
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
- """
- return True
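For reference, `reduce_metrics` above logs the loss in base-2 units (it divides by `math.log(2)`), so the reported perplexity is simply two raised to the per-token loss. A minimal sketch of that conversion with toy numbers; this should line up with what `utils.get_perplexity` reports, up to its internal rounding:

```python
import math

# Toy numbers, purely illustrative: a summed cross-entropy (in nats) and a token count.
loss_sum_nats = 5400.0
ntokens = 1000

nll_loss_bits = loss_sum_nats / ntokens / math.log(2)  # same base-2 scaling as reduce_metrics
ppl = 2 ** nll_loss_bits

print(f"nll_loss: {nll_loss_bits:.3f} bits/token, ppl: {ppl:.2f}")
```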
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/adam.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/adam.py
deleted file mode 100644
index d3ae9e64a74774310adcd9968d2eae23368890f9..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/adam.py
+++ /dev/null
@@ -1,239 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import math
-from collections.abc import Collection
-from dataclasses import dataclass, field
-from typing import Any, List
-
-import torch
-import torch.distributed as dist
-import torch.optim
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim import FairseqOptimizer, register_optimizer
-from fairseq.optim.fused_adam import get_fused_adam_class
-from omegaconf import II, OmegaConf
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class FairseqAdamConfig(FairseqDataclass):
- adam_betas: Any = field(
- default=(0.9, 0.999), metadata={"help": "betas for Adam optimizer"}
- )
- adam_eps: float = field(
- default=1e-8, metadata={"help": "epsilon for Adam optimizer"}
- )
- weight_decay: float = field(default=0.0, metadata={"help": "weight decay"})
- use_old_adam: bool = field(
- default=False, metadata={"help": "Use fairseq.optim.adam.Adam"}
- )
- fp16_adam_stats: bool = field(
- default=False, metadata={"help": "use FP16 stats (with automatic scaling)"}
- )
- # TODO common vars below in parent
- tpu: bool = II("common.tpu")
- lr: List[float] = II("optimization.lr")
-
-
-@register_optimizer("adam", dataclass=FairseqAdamConfig)
-class FairseqAdam(FairseqOptimizer):
- """Adam optimizer for fairseq.
-
- Important note: this optimizer corresponds to the "AdamW" variant of
- Adam in its weight decay behavior. As such, it is most closely
- analogous to torch.optim.AdamW from PyTorch.
- """
-
- def __init__(self, cfg: FairseqAdamConfig, params):
- super().__init__(cfg)
- fused_adam_cls = get_fused_adam_class()
- use_fused_adam = (
- not getattr(cfg, "use_old_adam", False)
- and fused_adam_cls is not None
- and torch.cuda.is_available()
- )
- if getattr(cfg, "tpu", False):
- if self.cfg.fp16_adam_stats:
- raise NotImplementedError("--fp16-adam-stats is only supported on GPU")
- # on TPUs we use the Adam defined here, since it
- # automatically casts gradients to FP32
- self._optimizer = Adam(params, **self.optimizer_config)
- elif use_fused_adam:
- logger.info("using FusedAdam")
- self._optimizer = fused_adam_cls(
- params,
- use_fp16_stats=self.cfg.fp16_adam_stats,
- **self.optimizer_config
- )
- else:
- if self.cfg.fp16_adam_stats:
- raise NotImplementedError("--fp16-adam-stats is only supported with FusedAdamV1")
- self._optimizer = Adam(params, **self.optimizer_config)
-
- @property
- def optimizer_config(self):
- """
- Return a kwarg dictionary that will be used to override optimizer
- args stored in checkpoints. This allows us to load a checkpoint and
- resume training using a different set of optimizer args, e.g., with a
- different learning rate.
- """
- return {
- "lr": self.cfg.lr[0]
- if isinstance(self.cfg.lr, Collection)
- else self.cfg.lr,
- "betas": eval(self.cfg.adam_betas)
- if isinstance(self.cfg.adam_betas, str)
- else OmegaConf.to_container(self.cfg.adam_betas),
- "eps": self.cfg.adam_eps,
- "weight_decay": self.cfg.weight_decay,
- }
-
- def average_params(self):
- """Reduce Params is only used during BMUF distributed training."""
- state_dict = self.optimizer.state_dict()
- total_gpus = float(dist.get_world_size())
-
- for _, value in state_dict["state"].items():
- value["exp_avg"] /= total_gpus
- value["exp_avg_sq"] /= total_gpus
- dist.all_reduce(value["exp_avg"], op=dist.ReduceOp.SUM)
- dist.all_reduce(value["exp_avg_sq"], op=dist.ReduceOp.SUM)
-
-
-class Adam(torch.optim.Optimizer):
- r"""Implements Adam algorithm.
-
- This implementation is modified from torch.optim.Adam based on:
- `Fixed Weight Decay Regularization in Adam`
- (see https://arxiv.org/abs/1711.05101)
-
- It has been proposed in `Adam: A Method for Stochastic Optimization`_.
-
- Args:
- params (iterable): iterable of parameters to optimize or dicts defining
- parameter groups
- lr (float, optional): learning rate (default: 1e-3)
- betas (Tuple[float, float], optional): coefficients used for computing
- running averages of gradient and its square (default: (0.9, 0.999))
- eps (float, optional): term added to the denominator to improve
- numerical stability (default: 1e-8)
- weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
- amsgrad (boolean, optional): whether to use the AMSGrad variant of this
- algorithm from the paper `On the Convergence of Adam and Beyond`_
-
- .. _Adam\: A Method for Stochastic Optimization:
- https://arxiv.org/abs/1412.6980
- .. _On the Convergence of Adam and Beyond:
- https://openreview.net/forum?id=ryQu7f-RZ
- """
-
- def __init__(
- self,
- params,
- lr=1e-3,
- betas=(0.9, 0.999),
- eps=1e-8,
- weight_decay=0,
- amsgrad=False,
- ):
- defaults = dict(
- lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, amsgrad=amsgrad
- )
- super(Adam, self).__init__(params, defaults)
-
- @property
- def supports_memory_efficient_fp16(self):
- return True
-
- @property
- def supports_flat_params(self):
- return True
-
- def step(self, closure=None):
- """Performs a single optimization step.
-
- Args:
- closure (callable, optional): A closure that reevaluates the model
- and returns the loss.
- """
- loss = None
- if closure is not None:
- loss = closure()
-
- for group in self.param_groups:
- for p in group["params"]:
- if p.grad is None:
- continue
- grad = p.grad.data
- if grad.dtype in {torch.float16, torch.bfloat16}:
- grad = grad.float()
- if grad.is_sparse:
- raise RuntimeError(
- "Adam does not support sparse gradients, please consider SparseAdam instead"
- )
- amsgrad = group.get("amsgrad", False)
-
- p_data_fp32 = p.data
- if p.data.dtype in {torch.float16, torch.bfloat16}:
- p_data_fp32 = p_data_fp32.float()
-
- state = self.state[p]
-
- # State initialization
- if len(state) == 0:
- state["step"] = 0
- # Exponential moving average of gradient values
- state["exp_avg"] = torch.zeros_like(p_data_fp32)
- # Exponential moving average of squared gradient values
- state["exp_avg_sq"] = torch.zeros_like(p_data_fp32)
- if amsgrad:
- # Maintains max of all exp. moving avg. of sq. grad. values
- state["max_exp_avg_sq"] = torch.zeros_like(p_data_fp32)
- else:
- state["exp_avg"] = state["exp_avg"].to(p_data_fp32)
- state["exp_avg_sq"] = state["exp_avg_sq"].to(p_data_fp32)
- if amsgrad:
- state["max_exp_avg_sq"] = state["max_exp_avg_sq"].to(
- p_data_fp32
- )
-
- exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
- if amsgrad:
- max_exp_avg_sq = state["max_exp_avg_sq"]
- beta1, beta2 = group["betas"]
-
- state["step"] += 1
-
- # Decay the first and second moment running average coefficient
- exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
- exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
- if amsgrad:
- # Maintains the maximum of all 2nd moment running avg. till now
- torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
- # Use the max. for normalizing running avg. of gradient
- denom = max_exp_avg_sq.sqrt().add_(group["eps"])
- else:
- denom = exp_avg_sq.sqrt().add_(group["eps"])
-
- bias_correction1 = 1 - beta1 ** state["step"]
- bias_correction2 = 1 - beta2 ** state["step"]
- step_size = group["lr"] * math.sqrt(bias_correction2) / bias_correction1
-
- if group["weight_decay"] != 0:
- p_data_fp32.add_(
- p_data_fp32, alpha=-group["weight_decay"] * group["lr"]
- )
-
- p_data_fp32.addcdiv_(exp_avg, denom, value=-step_size)
-
- if p.data.dtype in {torch.float16, torch.bfloat16}:
- p.data.copy_(p_data_fp32)
-
- return loss
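The plain-PyTorch `Adam` fallback above can be exercised on its own, outside fairseq's optimizer registry. A minimal sketch on a toy regression problem; the import path is an assumption about where this file sits in your checkout:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed import path; adjust to wherever this module lives in your checkout.
from fairseq.optim.adam import Adam

model = nn.Linear(8, 1)
opt = Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=0.01)

x = torch.randn(32, 8)
y = torch.randn(32, 1)

for _ in range(10):
    opt.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()
    opt.step()  # decoupled (AdamW-style) weight decay is applied inside step()
```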
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/models/stylegan2_model.py b/spaces/Iceclear/StableSR/StableSR/basicsr/models/stylegan2_model.py
deleted file mode 100644
index d7da708122160f2be51a98a6a635349f34ee042e..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/models/stylegan2_model.py
+++ /dev/null
@@ -1,283 +0,0 @@
-import cv2
-import math
-import numpy as np
-import random
-import torch
-from collections import OrderedDict
-from os import path as osp
-
-from basicsr.archs import build_network
-from basicsr.losses import build_loss
-from basicsr.losses.gan_loss import g_path_regularize, r1_penalty
-from basicsr.utils import imwrite, tensor2img
-from basicsr.utils.registry import MODEL_REGISTRY
-from .base_model import BaseModel
-
-
-@MODEL_REGISTRY.register()
-class StyleGAN2Model(BaseModel):
- """StyleGAN2 model."""
-
- def __init__(self, opt):
- super(StyleGAN2Model, self).__init__(opt)
-
- # define network net_g
- self.net_g = build_network(opt['network_g'])
- self.net_g = self.model_to_device(self.net_g)
- self.print_network(self.net_g)
- # load pretrained model
- load_path = self.opt['path'].get('pretrain_network_g', None)
- if load_path is not None:
- param_key = self.opt['path'].get('param_key_g', 'params')
- self.load_network(self.net_g, load_path, self.opt['path'].get('strict_load_g', True), param_key)
-
- # latent dimension: self.num_style_feat
- self.num_style_feat = opt['network_g']['num_style_feat']
- num_val_samples = self.opt['val'].get('num_val_samples', 16)
- self.fixed_sample = torch.randn(num_val_samples, self.num_style_feat, device=self.device)
-
- if self.is_train:
- self.init_training_settings()
-
- def init_training_settings(self):
- train_opt = self.opt['train']
-
- # define network net_d
- self.net_d = build_network(self.opt['network_d'])
- self.net_d = self.model_to_device(self.net_d)
- self.print_network(self.net_d)
-
- # load pretrained model
- load_path = self.opt['path'].get('pretrain_network_d', None)
- if load_path is not None:
- param_key = self.opt['path'].get('param_key_d', 'params')
- self.load_network(self.net_d, load_path, self.opt['path'].get('strict_load_d', True), param_key)
-
- # define network net_g with Exponential Moving Average (EMA)
- # net_g_ema only used for testing on one GPU and saving, do not need to
- # wrap with DistributedDataParallel
- self.net_g_ema = build_network(self.opt['network_g']).to(self.device)
- # load pretrained model
- load_path = self.opt['path'].get('pretrain_network_g', None)
- if load_path is not None:
- self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema')
- else:
- self.model_ema(0) # copy net_g weight
-
- self.net_g.train()
- self.net_d.train()
- self.net_g_ema.eval()
-
- # define losses
- # gan loss (wgan)
- self.cri_gan = build_loss(train_opt['gan_opt']).to(self.device)
- # regularization weights
- self.r1_reg_weight = train_opt['r1_reg_weight'] # for discriminator
- self.path_reg_weight = train_opt['path_reg_weight'] # for generator
-
- self.net_g_reg_every = train_opt['net_g_reg_every']
- self.net_d_reg_every = train_opt['net_d_reg_every']
- self.mixing_prob = train_opt['mixing_prob']
-
- self.mean_path_length = 0
-
- # set up optimizers and schedulers
- self.setup_optimizers()
- self.setup_schedulers()
-
- def setup_optimizers(self):
- train_opt = self.opt['train']
- # optimizer g
- net_g_reg_ratio = self.net_g_reg_every / (self.net_g_reg_every + 1)
- if self.opt['network_g']['type'] == 'StyleGAN2GeneratorC':
- normal_params = []
- style_mlp_params = []
- modulation_conv_params = []
- for name, param in self.net_g.named_parameters():
- if 'modulation' in name:
- normal_params.append(param)
- elif 'style_mlp' in name:
- style_mlp_params.append(param)
- elif 'modulated_conv' in name:
- modulation_conv_params.append(param)
- else:
- normal_params.append(param)
- optim_params_g = [
- { # add normal params first
- 'params': normal_params,
- 'lr': train_opt['optim_g']['lr']
- },
- {
- 'params': style_mlp_params,
- 'lr': train_opt['optim_g']['lr'] * 0.01
- },
- {
- 'params': modulation_conv_params,
- 'lr': train_opt['optim_g']['lr'] / 3
- }
- ]
- else:
- normal_params = []
- for name, param in self.net_g.named_parameters():
- normal_params.append(param)
- optim_params_g = [{ # add normal params first
- 'params': normal_params,
- 'lr': train_opt['optim_g']['lr']
- }]
-
- optim_type = train_opt['optim_g'].pop('type')
- lr = train_opt['optim_g']['lr'] * net_g_reg_ratio
- betas = (0**net_g_reg_ratio, 0.99**net_g_reg_ratio)
- self.optimizer_g = self.get_optimizer(optim_type, optim_params_g, lr, betas=betas)
- self.optimizers.append(self.optimizer_g)
-
- # optimizer d
- net_d_reg_ratio = self.net_d_reg_every / (self.net_d_reg_every + 1)
- if self.opt['network_d']['type'] == 'StyleGAN2DiscriminatorC':
- normal_params = []
- linear_params = []
- for name, param in self.net_d.named_parameters():
- if 'final_linear' in name:
- linear_params.append(param)
- else:
- normal_params.append(param)
- optim_params_d = [
- { # add normal params first
- 'params': normal_params,
- 'lr': train_opt['optim_d']['lr']
- },
- {
- 'params': linear_params,
- 'lr': train_opt['optim_d']['lr'] * (1 / math.sqrt(512))
- }
- ]
- else:
- normal_params = []
- for name, param in self.net_d.named_parameters():
- normal_params.append(param)
- optim_params_d = [{ # add normal params first
- 'params': normal_params,
- 'lr': train_opt['optim_d']['lr']
- }]
-
- optim_type = train_opt['optim_d'].pop('type')
- lr = train_opt['optim_d']['lr'] * net_d_reg_ratio
- betas = (0**net_d_reg_ratio, 0.99**net_d_reg_ratio)
- self.optimizer_d = self.get_optimizer(optim_type, optim_params_d, lr, betas=betas)
- self.optimizers.append(self.optimizer_d)
-
- def feed_data(self, data):
- self.real_img = data['gt'].to(self.device)
-
- def make_noise(self, batch, num_noise):
- if num_noise == 1:
- noises = torch.randn(batch, self.num_style_feat, device=self.device)
- else:
- noises = torch.randn(num_noise, batch, self.num_style_feat, device=self.device).unbind(0)
- return noises
-
- def mixing_noise(self, batch, prob):
- if random.random() < prob:
- return self.make_noise(batch, 2)
- else:
- return [self.make_noise(batch, 1)]
-
- def optimize_parameters(self, current_iter):
- loss_dict = OrderedDict()
-
- # optimize net_d
- for p in self.net_d.parameters():
- p.requires_grad = True
- self.optimizer_d.zero_grad()
-
- batch = self.real_img.size(0)
- noise = self.mixing_noise(batch, self.mixing_prob)
- fake_img, _ = self.net_g(noise)
- fake_pred = self.net_d(fake_img.detach())
-
- real_pred = self.net_d(self.real_img)
- # wgan loss with softplus (logistic loss) for discriminator
- l_d = self.cri_gan(real_pred, True, is_disc=True) + self.cri_gan(fake_pred, False, is_disc=True)
- loss_dict['l_d'] = l_d
- # In wgan, real_score should be positive and fake_score should be
- # negative
- loss_dict['real_score'] = real_pred.detach().mean()
- loss_dict['fake_score'] = fake_pred.detach().mean()
- l_d.backward()
-
- if current_iter % self.net_d_reg_every == 0:
- self.real_img.requires_grad = True
- real_pred = self.net_d(self.real_img)
- l_d_r1 = r1_penalty(real_pred, self.real_img)
- l_d_r1 = (self.r1_reg_weight / 2 * l_d_r1 * self.net_d_reg_every + 0 * real_pred[0])
- # TODO: why do we need to add 0 * real_pred, otherwise, a runtime
- # error will arise: RuntimeError: Expected to have finished
- # reduction in the prior iteration before starting a new one.
- # This error indicates that your module has parameters that were
- # not used in producing loss.
- loss_dict['l_d_r1'] = l_d_r1.detach().mean()
- l_d_r1.backward()
-
- self.optimizer_d.step()
-
- # optimize net_g
- for p in self.net_d.parameters():
- p.requires_grad = False
- self.optimizer_g.zero_grad()
-
- noise = self.mixing_noise(batch, self.mixing_prob)
- fake_img, _ = self.net_g(noise)
- fake_pred = self.net_d(fake_img)
-
- # wgan loss with softplus (non-saturating loss) for generator
- l_g = self.cri_gan(fake_pred, True, is_disc=False)
- loss_dict['l_g'] = l_g
- l_g.backward()
-
- if current_iter % self.net_g_reg_every == 0:
- path_batch_size = max(1, batch // self.opt['train']['path_batch_shrink'])
- noise = self.mixing_noise(path_batch_size, self.mixing_prob)
- fake_img, latents = self.net_g(noise, return_latents=True)
- l_g_path, path_lengths, self.mean_path_length = g_path_regularize(fake_img, latents, self.mean_path_length)
-
- l_g_path = (self.path_reg_weight * self.net_g_reg_every * l_g_path + 0 * fake_img[0, 0, 0, 0])
- # TODO: why do we need to add 0 * fake_img[0, 0, 0, 0]
- l_g_path.backward()
- loss_dict['l_g_path'] = l_g_path.detach().mean()
- loss_dict['path_length'] = path_lengths
-
- self.optimizer_g.step()
-
- self.log_dict = self.reduce_loss_dict(loss_dict)
-
- # EMA
- self.model_ema(decay=0.5**(32 / (10 * 1000)))
-
- def test(self):
- with torch.no_grad():
- self.net_g_ema.eval()
- self.output, _ = self.net_g_ema([self.fixed_sample])
-
- def dist_validation(self, dataloader, current_iter, tb_logger, save_img):
- if self.opt['rank'] == 0:
- self.nondist_validation(dataloader, current_iter, tb_logger, save_img)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- assert dataloader is None, 'Validation dataloader should be None.'
- self.test()
- result = tensor2img(self.output, min_max=(-1, 1))
- if self.opt['is_train']:
- save_img_path = osp.join(self.opt['path']['visualization'], 'train', f'train_{current_iter}.png')
- else:
- save_img_path = osp.join(self.opt['path']['visualization'], 'test', f'test_{self.opt["name"]}.png')
- imwrite(result, save_img_path)
- # add sample images to tb_logger
- result = (result / 255.).astype(np.float32)
- result = cv2.cvtColor(result, cv2.COLOR_BGR2RGB)
- if tb_logger is not None:
- tb_logger.add_image('samples', result, global_step=current_iter, dataformats='HWC')
-
- def save(self, epoch, current_iter):
- self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema'])
- self.save_network(self.net_d, 'net_d', current_iter)
- self.save_training_state(epoch, current_iter)
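The EMA decay used in `optimize_parameters` above, `0.5**(32 / (10 * 1000))`, corresponds to a half-life of roughly 10k training images at a batch granularity of 32. `model_ema` itself lives on `BaseModel` and is not part of this diff, so the following is only a generic sketch of what such a parameter EMA typically does:

```python
import torch

# EMA decay used above: half-life of ~10k images when each update covers a batch of 32.
decay = 0.5 ** (32 / (10 * 1000))  # ~0.99778


@torch.no_grad()
def ema_update(net_ema: torch.nn.Module, net: torch.nn.Module, decay: float) -> None:
    """Generic parameter EMA: ema <- decay * ema + (1 - decay) * current."""
    ema_params = dict(net_ema.named_parameters())
    for name, param in net.named_parameters():
        ema_params[name].mul_(decay).add_(param.detach(), alpha=1 - decay)
```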
diff --git a/spaces/Intae/deepfake/training/transforms/albu.py b/spaces/Intae/deepfake/training/transforms/albu.py
deleted file mode 100644
index 07ede53248e3ee041c8a157169eafe614e0b3c6b..0000000000000000000000000000000000000000
--- a/spaces/Intae/deepfake/training/transforms/albu.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import random
-
-import cv2
-import numpy as np
-from albumentations import DualTransform, ImageOnlyTransform
-from albumentations.augmentations.functional import crop
-
-
-def isotropically_resize_image(img, size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC):
- h, w = img.shape[:2]
- if max(w, h) == size:
- return img
- if w > h:
- scale = size / w
- h = h * scale
- w = size
- else:
- scale = size / h
- w = w * scale
- h = size
- interpolation = interpolation_up if scale > 1 else interpolation_down
- resized = cv2.resize(img, (int(w), int(h)), interpolation=interpolation)
- return resized
-
-
-class IsotropicResize(DualTransform):
- def __init__(self, max_side, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC,
- always_apply=False, p=1):
- super(IsotropicResize, self).__init__(always_apply, p)
- self.max_side = max_side
- self.interpolation_down = interpolation_down
- self.interpolation_up = interpolation_up
-
- def apply(self, img, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC, **params):
- return isotropically_resize_image(img, size=self.max_side, interpolation_down=interpolation_down,
- interpolation_up=interpolation_up)
-
- def apply_to_mask(self, img, **params):
- return self.apply(img, interpolation_down=cv2.INTER_NEAREST, interpolation_up=cv2.INTER_NEAREST, **params)
-
- def get_transform_init_args_names(self):
- return ("max_side", "interpolation_down", "interpolation_up")
-
-
-class Resize4xAndBack(ImageOnlyTransform):
- def __init__(self, always_apply=False, p=0.5):
- super(Resize4xAndBack, self).__init__(always_apply, p)
-
- def apply(self, img, **params):
- h, w = img.shape[:2]
- scale = random.choice([2, 4])
- img = cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_AREA)
- img = cv2.resize(img, (w, h),
- interpolation=random.choice([cv2.INTER_CUBIC, cv2.INTER_LINEAR, cv2.INTER_NEAREST]))
- return img
-
-
-class RandomSizedCropNonEmptyMaskIfExists(DualTransform):
-
- def __init__(self, min_max_height, w2h_ratio=[0.7, 1.3], always_apply=False, p=0.5):
- super(RandomSizedCropNonEmptyMaskIfExists, self).__init__(always_apply, p)
-
- self.min_max_height = min_max_height
- self.w2h_ratio = w2h_ratio
-
- def apply(self, img, x_min=0, x_max=0, y_min=0, y_max=0, **params):
- cropped = crop(img, x_min, y_min, x_max, y_max)
- return cropped
-
- @property
- def targets_as_params(self):
- return ["mask"]
-
- def get_params_dependent_on_targets(self, params):
- mask = params["mask"]
- mask_height, mask_width = mask.shape[:2]
- crop_height = int(mask_height * random.uniform(self.min_max_height[0], self.min_max_height[1]))
- w2h_ratio = random.uniform(*self.w2h_ratio)
- crop_width = min(int(crop_height * w2h_ratio), mask_width - 1)
- if mask.sum() == 0:
- x_min = random.randint(0, mask_width - crop_width + 1)
- y_min = random.randint(0, mask_height - crop_height + 1)
- else:
- mask = mask.sum(axis=-1) if mask.ndim == 3 else mask
- non_zero_yx = np.argwhere(mask)
- y, x = random.choice(non_zero_yx)
- x_min = x - random.randint(0, crop_width - 1)
- y_min = y - random.randint(0, crop_height - 1)
- x_min = np.clip(x_min, 0, mask_width - crop_width)
- y_min = np.clip(y_min, 0, mask_height - crop_height)
-
-        x_max = x_min + crop_width
-        y_max = y_min + crop_height
- y_max = min(mask_height, y_max)
- x_max = min(mask_width, x_max)
- return {"x_min": x_min, "x_max": x_max, "y_min": y_min, "y_max": y_max}
-
- def get_transform_init_args_names(self):
- return "min_max_height", "height", "width", "w2h_ratio"
\ No newline at end of file
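`IsotropicResize` above plugs into a standard albumentations pipeline: the longest side is scaled to `max_side`, and the result can then be padded to a square. A minimal sketch, assuming the module is importable as `training.transforms.albu`:

```python
import cv2
import numpy as np
from albumentations import Compose, PadIfNeeded

# Assumed import path for the transforms defined above.
from training.transforms.albu import IsotropicResize

size = 256
transform = Compose([
    IsotropicResize(max_side=size),
    PadIfNeeded(min_height=size, min_width=size, border_mode=cv2.BORDER_CONSTANT),
])

image = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
out = transform(image=image)["image"]  # longest side scaled to 256, then padded to 256x256
```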
diff --git a/spaces/IsaacK/streamlit-test/app.py b/spaces/IsaacK/streamlit-test/app.py
deleted file mode 100644
index 1a57bf7a69af4978bac32f019497c1c638f87c79..0000000000000000000000000000000000000000
--- a/spaces/IsaacK/streamlit-test/app.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import streamlit as st
-import sqlite3
-import random
-import datetime
-
-# Custom imports
-from multipage import MultiPage
-from pages import quiz, upload, view, join, login
-
-# Create an instance of the app
-app = MultiPage()
-
-# Title of the main page
-st.markdown("# Quiz Maker")
-
-# Add all your application here
-app.add_page("Quiz", quiz.app)
-app.add_page("Join", join.app)
-app.add_page("Login", login.app)
-app.add_page("Upload", upload.app)
-app.add_page("View", view.app)
-
-# The main app
-app.run()
\ No newline at end of file
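`multipage.MultiPage` is not included in this diff. As a rough sketch of the pattern the app above relies on (an assumption, not the Space's actual `multipage.py`): the registry stores `(title, render function)` pairs, draws a sidebar selector, and calls the chosen page's function:

```python
import streamlit as st


class MultiPage:
    """Rough sketch of a page registry; not the Space's actual multipage.py."""

    def __init__(self):
        self.pages = []

    def add_page(self, title, func):
        # func is a zero-argument callable that renders one page with streamlit calls
        self.pages.append({"title": title, "function": func})

    def run(self):
        page = st.sidebar.selectbox(
            "Navigation", self.pages, format_func=lambda p: p["title"]
        )
        page["function"]()
```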
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/ddpm/pipeline_ddpm.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/ddpm/pipeline_ddpm.py
deleted file mode 100644
index 114a38a5fec7a471ed60be1c38ace65f86c903dd..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/ddpm/pipeline_ddpm.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from typing import Optional, Tuple, Union
-
-import torch
-
-from ...configuration_utils import FrozenDict
-from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-from ...utils import deprecate
-
-
-class DDPMPipeline(DiffusionPipeline):
- r"""
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Parameters:
- unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of
- [`DDPMScheduler`], or [`DDIMScheduler`].
- """
-
- def __init__(self, unet, scheduler):
- super().__init__()
- self.register_modules(unet=unet, scheduler=scheduler)
-
- @torch.no_grad()
- def __call__(
- self,
- batch_size: int = 1,
- generator: Optional[torch.Generator] = None,
- num_inference_steps: int = 1000,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- **kwargs,
- ) -> Union[ImagePipelineOutput, Tuple]:
- r"""
- Args:
- batch_size (`int`, *optional*, defaults to 1):
- The number of images to generate.
- generator (`torch.Generator`, *optional*):
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
- deterministic.
- num_inference_steps (`int`, *optional*, defaults to 1000):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
-
- Returns:
- [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if
-            `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
- generated images.
- """
- message = (
- "Please make sure to instantiate your scheduler with `prediction_type` instead. E.g. `scheduler ="
- " DDPMScheduler.from_pretrained(, prediction_type='epsilon')`."
- )
- predict_epsilon = deprecate("predict_epsilon", "0.11.0", message, take_from=kwargs)
-
- if predict_epsilon is not None:
- new_config = dict(self.scheduler.config)
- new_config["prediction_type"] = "epsilon" if predict_epsilon else "sample"
- self.scheduler._internal_dict = FrozenDict(new_config)
-
- if generator is not None and generator.device.type != self.device.type and self.device.type != "mps":
- message = (
- f"The `generator` device is `{generator.device}` and does not match the pipeline "
- f"device `{self.device}`, so the `generator` will be ignored. "
- f'Please use `torch.Generator(device="{self.device}")` instead.'
- )
- deprecate(
- "generator.device == 'cpu'",
- "0.11.0",
- message,
- )
- generator = None
-
- # Sample gaussian noise to begin loop
- if isinstance(self.unet.sample_size, int):
- image_shape = (batch_size, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size)
- else:
- image_shape = (batch_size, self.unet.in_channels, *self.unet.sample_size)
-
- if self.device.type == "mps":
- # randn does not work reproducibly on mps
- image = torch.randn(image_shape, generator=generator)
- image = image.to(self.device)
- else:
- image = torch.randn(image_shape, generator=generator, device=self.device)
-
- # set step values
- self.scheduler.set_timesteps(num_inference_steps)
-
- for t in self.progress_bar(self.scheduler.timesteps):
- # 1. predict noise model_output
- model_output = self.unet(image, t).sample
-
- # 2. compute previous image: x_t -> x_t-1
- image = self.scheduler.step(model_output, t, image, generator=generator).prev_sample
-
- image = (image / 2 + 0.5).clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).numpy()
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
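A minimal usage sketch for the pipeline above: load a pretrained unconditional DDPM and sample a single image. The checkpoint name is only an example; any compatible DDPM checkpoint works:

```python
import torch
from diffusers import DDPMPipeline

# "google/ddpm-cat-256" is just an example checkpoint; substitute any compatible DDPM model.
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
pipe.to("cuda" if torch.cuda.is_available() else "cpu")

generator = torch.Generator(device=pipe.device).manual_seed(0)
image = pipe(batch_size=1, generator=generator, num_inference_steps=1000).images[0]
image.save("ddpm_sample.png")
```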
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/chatgpt - macOS.command b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/chatgpt - macOS.command
deleted file mode 100644
index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/chatgpt - macOS.command
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-echo Opening ChuanhuChatGPT...
-cd "$(dirname "${BASH_SOURCE[0]}")"
-nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 &
-sleep 5
-open http://127.0.0.1:7860
-echo "Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, use \"pkill -f 'ChuanhuChatbot'\" in the terminal."
\ No newline at end of file
diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/thai.py b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/thai.py
deleted file mode 100644
index 998207c01a85c710a46db1ec8b62c39c2d94bc84..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/thai.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import re
-from num_thai.thainumbers import NumThai
-
-
-num = NumThai()
-
-# List of (Latin alphabet, Thai) pairs:
-_latin_to_thai = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'เอ'),
- ('b','บี'),
- ('c','ซี'),
- ('d','ดี'),
- ('e','อี'),
- ('f','เอฟ'),
- ('g','จี'),
- ('h','เอช'),
- ('i','ไอ'),
- ('j','เจ'),
- ('k','เค'),
- ('l','แอล'),
- ('m','เอ็ม'),
- ('n','เอ็น'),
- ('o','โอ'),
- ('p','พี'),
- ('q','คิว'),
- ('r','แอร์'),
- ('s','เอส'),
- ('t','ที'),
- ('u','ยู'),
- ('v','วี'),
- ('w','ดับเบิลยู'),
- ('x','เอ็กซ์'),
- ('y','วาย'),
- ('z','ซี')
-]]
-
-
-def num_to_thai(text):
- return re.sub(r'(?:\d+(?:,?\d+)?)+(?:\.\d+(?:,?\d+)?)?', lambda x: ''.join(num.NumberToTextThai(float(x.group(0).replace(',', '')))), text)
-
-def latin_to_thai(text):
- for regex, replacement in _latin_to_thai:
- text = re.sub(regex, replacement, text)
- return text
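A small sketch of the letter transliteration defined above, assuming the module is importable as `text.thai` and the `num_thai` dependency is installed:

```python
# Assumed import path for the module above; requires the num_thai package at import time.
from text.thai import latin_to_thai

print(latin_to_thai("ok"))   # โอเค
print(latin_to_thai("ABC"))  # เอบีซี (matching is case-insensitive)
```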
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/preprocess.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/preprocess.py
deleted file mode 100644
index 69986bb3bb0a2d8a0e352d1cb330a375d55f7e2c..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/preprocess.py
+++ /dev/null
@@ -1,184 +0,0 @@
-from multiprocess.pool import ThreadPool
-from encoder.params_data import *
-from encoder.config import librispeech_datasets, anglophone_nationalites
-from datetime import datetime
-from encoder import audio
-from pathlib import Path
-from tqdm import tqdm
-import numpy as np
-
-
-class DatasetLog:
- """
- Registers metadata about the dataset in a text file.
- """
- def __init__(self, root, name):
- self.text_file = open(Path(root, "Log_%s.txt" % name.replace("/", "_")), "w")
- self.sample_data = dict()
-
- start_time = str(datetime.now().strftime("%A %d %B %Y at %H:%M"))
- self.write_line("Creating dataset %s on %s" % (name, start_time))
- self.write_line("-----")
- self._log_params()
-
- def _log_params(self):
- from encoder import params_data
- self.write_line("Parameter values:")
- for param_name in (p for p in dir(params_data) if not p.startswith("__")):
- value = getattr(params_data, param_name)
- self.write_line("\t%s: %s" % (param_name, value))
- self.write_line("-----")
-
- def write_line(self, line):
- self.text_file.write("%s\n" % line)
-
- def add_sample(self, **kwargs):
- for param_name, value in kwargs.items():
- if not param_name in self.sample_data:
- self.sample_data[param_name] = []
- self.sample_data[param_name].append(value)
-
- def finalize(self):
- self.write_line("Statistics:")
- for param_name, values in self.sample_data.items():
- self.write_line("\t%s:" % param_name)
- self.write_line("\t\tmin %.3f, max %.3f" % (np.min(values), np.max(values)))
- self.write_line("\t\tmean %.3f, median %.3f" % (np.mean(values), np.median(values)))
- self.write_line("-----")
- end_time = str(datetime.now().strftime("%A %d %B %Y at %H:%M"))
- self.write_line("Finished on %s" % end_time)
- self.text_file.close()
-
-
-def _init_preprocess_dataset(dataset_name, datasets_root, out_dir) -> (Path, DatasetLog):
- dataset_root = datasets_root.joinpath(dataset_name)
- if not dataset_root.exists():
- print("Couldn\'t find %s, skipping this dataset." % dataset_root)
- return None, None
- return dataset_root, DatasetLog(out_dir, dataset_name)
-
-
-def _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, extension,
- skip_existing, logger):
- print("%s: Preprocessing data for %d speakers." % (dataset_name, len(speaker_dirs)))
-
- # Function to preprocess utterances for one speaker
- def preprocess_speaker(speaker_dir: Path):
- # Give a name to the speaker that includes its dataset
- speaker_name = "_".join(speaker_dir.relative_to(datasets_root).parts)
-
- # Create an output directory with that name, as well as a txt file containing a
- # reference to each source file.
- speaker_out_dir = out_dir.joinpath(speaker_name)
- speaker_out_dir.mkdir(exist_ok=True)
- sources_fpath = speaker_out_dir.joinpath("_sources.txt")
-
-        # There's a possibility that the preprocessing was interrupted earlier; check whether
-        # a sources file already exists.
- if sources_fpath.exists():
- try:
- with sources_fpath.open("r") as sources_file:
- existing_fnames = {line.split(",")[0] for line in sources_file}
- except:
- existing_fnames = {}
- else:
- existing_fnames = {}
-
- # Gather all audio files for that speaker recursively
- sources_file = sources_fpath.open("a" if skip_existing else "w")
- for in_fpath in speaker_dir.glob("**/*.%s" % extension):
- # Check if the target output file already exists
- out_fname = "_".join(in_fpath.relative_to(speaker_dir).parts)
- out_fname = out_fname.replace(".%s" % extension, ".npy")
- if skip_existing and out_fname in existing_fnames:
- continue
-
- # Load and preprocess the waveform
- wav = audio.preprocess_wav(in_fpath)
- if len(wav) == 0:
- continue
-
- # Create the mel spectrogram, discard those that are too short
- frames = audio.wav_to_mel_spectrogram(wav)
- if len(frames) < partials_n_frames:
- continue
-
- out_fpath = speaker_out_dir.joinpath(out_fname)
- np.save(out_fpath, frames)
- logger.add_sample(duration=len(wav) / sampling_rate)
- sources_file.write("%s,%s\n" % (out_fname, in_fpath))
-
- sources_file.close()
-
- # Process the utterances for each speaker
- with ThreadPool(8) as pool:
- list(tqdm(pool.imap(preprocess_speaker, speaker_dirs), dataset_name, len(speaker_dirs),
- unit="speakers"))
- logger.finalize()
- print("Done preprocessing %s.\n" % dataset_name)
-
-def preprocess_aidatatang_200zh(datasets_root: Path, out_dir: Path, skip_existing=False):
- dataset_name = "aidatatang_200zh"
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
- # Preprocess all speakers
- speaker_dirs = list(dataset_root.joinpath("corpus", "train").glob("*"))
- _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "wav",
- skip_existing, logger)
-
-def preprocess_librispeech(datasets_root: Path, out_dir: Path, skip_existing=False):
- for dataset_name in librispeech_datasets["train"]["other"]:
- # Initialize the preprocessing
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
-
- # Preprocess all speakers
- speaker_dirs = list(dataset_root.glob("*"))
- _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "flac",
- skip_existing, logger)
-
-
-def preprocess_voxceleb1(datasets_root: Path, out_dir: Path, skip_existing=False):
- # Initialize the preprocessing
- dataset_name = "VoxCeleb1"
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
-
- # Get the contents of the meta file
- with dataset_root.joinpath("vox1_meta.csv").open("r") as metafile:
- metadata = [line.split("\t") for line in metafile][1:]
-
- # Select the ID and the nationality, filter out non-anglophone speakers
- nationalities = {line[0]: line[3] for line in metadata}
- keep_speaker_ids = [speaker_id for speaker_id, nationality in nationalities.items() if
- nationality.lower() in anglophone_nationalites]
- print("VoxCeleb1: using samples from %d (presumed anglophone) speakers out of %d." %
- (len(keep_speaker_ids), len(nationalities)))
-
- # Get the speaker directories for anglophone speakers only
- speaker_dirs = dataset_root.joinpath("wav").glob("*")
- speaker_dirs = [speaker_dir for speaker_dir in speaker_dirs if
- speaker_dir.name in keep_speaker_ids]
- print("VoxCeleb1: found %d anglophone speakers on the disk, %d missing (this is normal)." %
- (len(speaker_dirs), len(keep_speaker_ids) - len(speaker_dirs)))
-
- # Preprocess all speakers
- _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "wav",
- skip_existing, logger)
-
-
-def preprocess_voxceleb2(datasets_root: Path, out_dir: Path, skip_existing=False):
- # Initialize the preprocessing
- dataset_name = "VoxCeleb2"
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
-
- # Get the speaker directories
- # Preprocess all speakers
- speaker_dirs = list(dataset_root.joinpath("dev", "aac").glob("*"))
- _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "m4a",
- skip_existing, logger)
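A minimal driver sketch for the preprocessing entry points above; the import path and directory names are placeholders for a local layout:

```python
from pathlib import Path

# Assumed import path; directory names are placeholders for a local layout.
from encoder.preprocess import preprocess_librispeech, preprocess_voxceleb1

datasets_root = Path("datasets")            # expects e.g. datasets/LibriSpeech/, datasets/VoxCeleb1/
out_dir = Path("encoder_preprocessed")
out_dir.mkdir(parents=True, exist_ok=True)

preprocess_librispeech(datasets_root, out_dir, skip_existing=True)
preprocess_voxceleb1(datasets_root, out_dir, skip_existing=True)
```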
diff --git a/spaces/Kororinpa/Amadeus_Project/models.py b/spaces/Kororinpa/Amadeus_Project/models.py
deleted file mode 100644
index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000
--- a/spaces/Kororinpa/Amadeus_Project/models.py
+++ /dev/null
@@ -1,534 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
-        filter_channels = in_channels  # it needs to be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
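# The discriminator stack above follows the HiFi-GAN/VITS design: every
# sub-discriminator returns both a final score and its intermediate feature
# maps, so the training loop can combine adversarial and feature-matching
# terms. A minimal sketch of how these outputs are typically consumed, run in
# the context of this models.py (which already imports torch); the loss
# helpers named below are assumed to exist elsewhere in the repository and
# are only illustrative.
mpd = MultiPeriodDiscriminator()
y = torch.randn(2, 1, 8192)        # real waveform batch [B, 1, T]
y_hat = torch.randn(2, 1, 8192)    # generated waveform batch [B, 1, T]

# discriminator step: score real audio against detached generator output
y_d_rs, y_d_gs, _, _ = mpd(y, y_hat.detach())
# loss_disc, _, _ = discriminator_loss(y_d_rs, y_d_gs)   # assumed helper

# generator step: adversarial score plus feature matching on the fmaps
y_d_rs, y_d_gs, fmap_rs, fmap_gs = mpd(y, y_hat)
# loss_fm = feature_loss(fmap_rs, fmap_gs)               # assumed helper
# loss_gen, _ = generator_loss(y_d_gs)                   # assumed helper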
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
-        if n_speakers > 1:  # note: forward()/infer() check n_speakers > 0, so single-speaker setups should use n_speakers=0
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
-        assert self.n_speakers > 0, "n_speakers has to be larger than 0."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
-
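The inference path above only needs token ids: the duration predictor produces per-phoneme frame counts, `generate_path` expands the prior along that alignment, the flow is run in reverse, and the decoder renders a waveform. A minimal sketch of calling it, with illustrative hyper-parameters loosely following the public single-speaker VITS configs (none of these values are taken from this repository):

import torch

net_g = SynthesizerTrn(
    n_vocab=100, spec_channels=513, segment_size=32,
    inter_channels=192, hidden_channels=192, filter_channels=768,
    n_heads=2, n_layers=6, kernel_size=3, p_dropout=0.1,
    resblock="1", resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    upsample_rates=[8, 8, 2, 2], upsample_initial_channel=512,
    upsample_kernel_sizes=[16, 16, 4, 4],
).eval()

x = torch.randint(1, 100, (1, 40))            # a batch of phoneme ids
x_lengths = torch.LongTensor([x.size(1)])
with torch.no_grad():
    audio, attn, y_mask, _ = net_g.infer(
        x, x_lengths, noise_scale=0.667, noise_scale_w=0.8, length_scale=1.0)
print(audio.shape)                            # [1, 1, T_wav]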
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/necks/dyhead.py b/spaces/KyanChen/RSPrompter/mmdet/models/necks/dyhead.py
deleted file mode 100644
index 5f5ae0b285c20558a0c7bcc59cbb7b214684eab2..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/necks/dyhead.py
+++ /dev/null
@@ -1,173 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import build_activation_layer, build_norm_layer
-from mmcv.ops.modulated_deform_conv import ModulatedDeformConv2d
-from mmengine.model import BaseModule, constant_init, normal_init
-
-from mmdet.registry import MODELS
-from ..layers import DyReLU
-
-# Reference:
-# https://github.com/microsoft/DynamicHead
-# https://github.com/jshilong/SEPC
-
-
-class DyDCNv2(nn.Module):
- """ModulatedDeformConv2d with normalization layer used in DyHead.
-
- This module cannot be configured with `conv_cfg=dict(type='DCNv2')`
- because DyHead calculates offset and mask from middle-level feature.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- stride (int | tuple[int], optional): Stride of the convolution.
- Default: 1.
- norm_cfg (dict, optional): Config dict for normalization layer.
- Default: dict(type='GN', num_groups=16, requires_grad=True).
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- stride=1,
- norm_cfg=dict(type='GN', num_groups=16, requires_grad=True)):
- super().__init__()
- self.with_norm = norm_cfg is not None
- bias = not self.with_norm
- self.conv = ModulatedDeformConv2d(
- in_channels, out_channels, 3, stride=stride, padding=1, bias=bias)
- if self.with_norm:
- self.norm = build_norm_layer(norm_cfg, out_channels)[1]
-
- def forward(self, x, offset, mask):
- """Forward function."""
- x = self.conv(x.contiguous(), offset, mask)
- if self.with_norm:
- x = self.norm(x)
- return x
-
-
-class DyHeadBlock(nn.Module):
- """DyHead Block with three types of attention.
-
- HSigmoid arguments in default act_cfg follow official code, not paper.
- https://github.com/microsoft/DynamicHead/blob/master/dyhead/dyrelu.py
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- zero_init_offset (bool, optional): Whether to use zero init for
- `spatial_conv_offset`. Default: True.
- act_cfg (dict, optional): Config dict for the last activation layer of
- scale-aware attention. Default: dict(type='HSigmoid', bias=3.0,
- divisor=6.0).
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- zero_init_offset=True,
- act_cfg=dict(type='HSigmoid', bias=3.0, divisor=6.0)):
- super().__init__()
- self.zero_init_offset = zero_init_offset
- # (offset_x, offset_y, mask) * kernel_size_y * kernel_size_x
- self.offset_and_mask_dim = 3 * 3 * 3
- self.offset_dim = 2 * 3 * 3
-
- self.spatial_conv_high = DyDCNv2(in_channels, out_channels)
- self.spatial_conv_mid = DyDCNv2(in_channels, out_channels)
- self.spatial_conv_low = DyDCNv2(in_channels, out_channels, stride=2)
- self.spatial_conv_offset = nn.Conv2d(
- in_channels, self.offset_and_mask_dim, 3, padding=1)
- self.scale_attn_module = nn.Sequential(
- nn.AdaptiveAvgPool2d(1), nn.Conv2d(out_channels, 1, 1),
- nn.ReLU(inplace=True), build_activation_layer(act_cfg))
- self.task_attn_module = DyReLU(out_channels)
- self._init_weights()
-
- def _init_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- normal_init(m, 0, 0.01)
- if self.zero_init_offset:
- constant_init(self.spatial_conv_offset, 0)
-
- def forward(self, x):
- """Forward function."""
- outs = []
- for level in range(len(x)):
- # calculate offset and mask of DCNv2 from middle-level feature
- offset_and_mask = self.spatial_conv_offset(x[level])
- offset = offset_and_mask[:, :self.offset_dim, :, :]
- mask = offset_and_mask[:, self.offset_dim:, :, :].sigmoid()
-
- mid_feat = self.spatial_conv_mid(x[level], offset, mask)
- sum_feat = mid_feat * self.scale_attn_module(mid_feat)
- summed_levels = 1
- if level > 0:
- low_feat = self.spatial_conv_low(x[level - 1], offset, mask)
- sum_feat += low_feat * self.scale_attn_module(low_feat)
- summed_levels += 1
- if level < len(x) - 1:
- # this upsample order is weird, but faster than natural order
- # https://github.com/microsoft/DynamicHead/issues/25
- high_feat = F.interpolate(
- self.spatial_conv_high(x[level + 1], offset, mask),
- size=x[level].shape[-2:],
- mode='bilinear',
- align_corners=True)
- sum_feat += high_feat * self.scale_attn_module(high_feat)
- summed_levels += 1
- outs.append(self.task_attn_module(sum_feat / summed_levels))
-
- return outs
-
-
-@MODELS.register_module()
-class DyHead(BaseModule):
- """DyHead neck consisting of multiple DyHead Blocks.
-
- See `Dynamic Head: Unifying Object Detection Heads with Attentions
-    <https://arxiv.org/abs/2106.08322>`_ for details.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- num_blocks (int, optional): Number of DyHead Blocks. Default: 6.
- zero_init_offset (bool, optional): Whether to use zero init for
- `spatial_conv_offset`. Default: True.
- init_cfg (dict or list[dict], optional): Initialization config dict.
- Default: None.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_blocks=6,
- zero_init_offset=True,
- init_cfg=None):
- assert init_cfg is None, 'To prevent abnormal initialization ' \
- 'behavior, init_cfg is not allowed to be set'
- super().__init__(init_cfg=init_cfg)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_blocks = num_blocks
- self.zero_init_offset = zero_init_offset
-
- dyhead_blocks = []
- for i in range(num_blocks):
- in_channels = self.in_channels if i == 0 else self.out_channels
- dyhead_blocks.append(
- DyHeadBlock(
- in_channels,
- self.out_channels,
- zero_init_offset=zero_init_offset))
- self.dyhead_blocks = nn.Sequential(*dyhead_blocks)
-
- def forward(self, inputs):
- """Forward function."""
- assert isinstance(inputs, (tuple, list))
- outs = self.dyhead_blocks(inputs)
- return tuple(outs)
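`DyHead` is registered as a neck, so in a detector config it is normally chained after an FPN; it can also be exercised directly on a tuple of feature maps. A small sketch with illustrative shapes (the modulated deformable-conv op usually requires a GPU-enabled mmcv build):

import torch

# five pyramid levels with 256 channels each, as a typical FPN would produce
feats = tuple(torch.rand(1, 256, s, s) for s in (64, 32, 16, 8, 4))
neck = DyHead(in_channels=256, out_channels=256, num_blocks=2)
outs = neck(feats)
assert len(outs) == len(feats) and all(o.shape[1] == 256 for o in outs)

# in a config file the same neck is typically declared as (illustrative):
# neck=[
#     dict(type='FPN', in_channels=[256, 512, 1024, 2048],
#          out_channels=256, num_outs=5),
#     dict(type='DyHead', in_channels=256, out_channels=256, num_blocks=6),
# ]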
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py
deleted file mode 100644
index 41625a61d6d1c38c633062c24b1e3455bd3ae2df..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .base_panoptic_fusion_head import \
- BasePanopticFusionHead # noqa: F401,F403
-from .heuristic_fusion_head import HeuristicFusionHead # noqa: F401,F403
-from .maskformer_fusion_head import MaskFormerFusionHead # noqa: F401,F403
diff --git a/spaces/LZRi/LZR-Bert-VITS2/modules.py b/spaces/LZRi/LZR-Bert-VITS2/modules.py
deleted file mode 100644
index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000
--- a/spaces/LZRi/LZR-Bert-VITS2/modules.py
+++ /dev/null
@@ -1,452 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-from attentions import Encoder
-
-LRELU_SLOPE = 0.1
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
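# WN above is the WaveNet-style stack used inside the posterior encoder and
# the coupling layers; its gated activation comes from
# commons.fused_add_tanh_sigmoid_multiply, which is not part of this file.
# For reference, a typical implementation of that helper in WaveGlow/VITS-style
# code bases looks like the sketch below (shown only so the forward pass above
# is self-explanatory; the actual helper lives in commons.py):
def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
    n_channels_int = n_channels[0]
    in_act = input_a + input_b                             # add conditioning
    t_act = torch.tanh(in_act[:, :n_channels_int, :])      # tanh gate (first half)
    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])   # sigmoid gate (second half)
    return t_act * s_act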
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
-class TransformerCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- n_layers,
- n_heads,
- p_dropout=0,
- filter_channels=0,
- mean_only=False,
- wn_sharing_parameter=None,
- gin_channels = 0
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
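# The coupling layers above are what make the flow invertible: the first half
# of the channels passes through unchanged and parameterises a shift/scale for
# the second half, so reverse=True undoes the forward pass exactly. A small
# check, run in the context of this modules.py (sizes are illustrative):
layer = ResidualCouplingLayer(channels=4, hidden_channels=16, kernel_size=5,
                              dilation_rate=1, n_layers=2, mean_only=True)
x = torch.randn(3, 4, 50)
x_mask = torch.ones(3, 1, 50)
y, logdet = layer(x, x_mask)                 # forward direction
x_rec = layer(y, x_mask, reverse=True)       # inverse direction
print(torch.allclose(x, x_rec, atol=1e-5))   # True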
diff --git a/spaces/Langame/explorer/app.py b/spaces/Langame/explorer/app.py
deleted file mode 100644
index 7214a044ef45265eff6284169117265fdfa04fb9..0000000000000000000000000000000000000000
--- a/spaces/Langame/explorer/app.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import streamlit as st
-from datasets import load_dataset
-import pandas as pd
-
-
-dataset = load_dataset("Langame/starter2")
-
-conversation_starters = []
-for i in range(len(dataset["train"])):
- conversation_starters.append({
- "conversation_starter": dataset["train"][i]["content"],
- "topics": dataset["train"][i]["topics"]
- })
-
-df = pd.DataFrame(conversation_starters)
-
-from pandas.api.types import (
- is_categorical_dtype,
- is_datetime64_any_dtype,
- is_numeric_dtype,
- is_object_dtype,
-)
-
-st.title("Conversation starters")
-
-st.write(
-    """Quick hack to search conversation starters for your next conversations, taken from https://huggingface.co/datasets/Langame/starter2
- """
-)
-
-
-def filter_dataframe(df: pd.DataFrame) -> pd.DataFrame:
- """
- Adds a UI on top of a dataframe to let viewers filter columns
- Args:
- df (pd.DataFrame): Original dataframe
- Returns:
- pd.DataFrame: Filtered dataframe
- """
- modify = st.checkbox("Add filters")
-
- if not modify:
- return df
-
- df = df.copy()
-
- # Try to convert datetimes into a standard format (datetime, no timezone)
- for col in df.columns:
- if is_object_dtype(df[col]):
- try:
- df[col] = pd.to_datetime(df[col])
- except Exception:
- pass
-
- if is_datetime64_any_dtype(df[col]):
- df[col] = df[col].dt.tz_localize(None)
-
- modification_container = st.container()
-
- with modification_container:
- to_filter_columns = st.multiselect("Filter dataframe on", df.columns)
- for column in to_filter_columns:
- left, right = st.columns((1, 20))
- left.write("↳")
- # Treat columns with < 10 unique values as categorical
- if is_categorical_dtype(df[column]) or df[column].nunique() < 10:
- user_cat_input = right.multiselect(
- f"Values for {column}",
- df[column].unique(),
- default=list(df[column].unique()),
- )
- df = df[df[column].isin(user_cat_input)]
- elif is_numeric_dtype(df[column]):
- _min = float(df[column].min())
- _max = float(df[column].max())
- step = (_max - _min) / 100
- user_num_input = right.slider(
- f"Values for {column}",
- _min,
- _max,
- (_min, _max),
- step=step,
- )
- df = df[df[column].between(*user_num_input)]
- elif is_datetime64_any_dtype(df[column]):
- user_date_input = right.date_input(
- f"Values for {column}",
- value=(
- df[column].min(),
- df[column].max(),
- ),
- )
- if len(user_date_input) == 2:
- user_date_input = tuple(map(pd.to_datetime, user_date_input))
- start_date, end_date = user_date_input
- df = df.loc[df[column].between(start_date, end_date)]
- else:
- user_text_input = right.text_input(
- f"Substring or regex in {column}",
- )
- if user_text_input:
- df = df[df[column].str.contains(user_text_input)]
-
- return df
-
-
-st.dataframe(filter_dataframe(df))
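`filter_dataframe` only depends on Streamlit and pandas, so it can be copied into other apps (together with the `pandas.api.types` imports it uses); a minimal reuse sketch with a hypothetical CSV file:

import pandas as pd
import streamlit as st

df = pd.read_csv("my_data.csv")        # hypothetical input file
st.dataframe(filter_dataframe(df))
# launched as usual with:  streamlit run app.py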
diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/bioimageio_utils.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/bioimageio_utils.py
deleted file mode 100644
index b9fd626ab96abf4bdf144a8cdca5185a2b94005f..0000000000000000000000000000000000000000
--- a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/bioimageio_utils.py
+++ /dev/null
@@ -1,472 +0,0 @@
-from pathlib import Path
-from pkg_resources import get_distribution
-from zipfile import ZipFile
-import numpy as np
-import tempfile
-from distutils.version import LooseVersion
-from csbdeep.utils import axes_check_and_normalize, normalize, _raise
-
-
-DEEPIMAGEJ_MACRO = \
-"""
-//*******************************************************************
-// Date: July-2021
-// Credits: StarDist, DeepImageJ
-// URL:
-// https://github.com/stardist/stardist
-// https://deepimagej.github.io/deepimagej
-// This macro was adapted from
-// https://github.com/deepimagej/imagej-macros/blob/648caa867f6ccb459649d4d3799efa1e2e0c5204/StarDist2D_Post-processing.ijm
-// Please cite the respective contributions when using this code.
-//*******************************************************************
-// Macro to run StarDist postprocessing on 2D images.
-// StarDist and deepImageJ plugins need to be installed.
-// The macro assumes that the image to process is a stack in which
-// the first channel corresponds to the object probability map
-// and the remaining channels are the radial distances from each
-// pixel to the object boundary.
-//*******************************************************************
-
-// Get the name of the image to call it
-getDimensions(width, height, channels, slices, frames);
-name=getTitle();
-
-probThresh={probThresh};
-nmsThresh={nmsThresh};
-
-// Isolate the detection probability scores
-run("Make Substack...", "channels=1");
-rename("scores");
-
-// Isolate the oriented distances
-run("Fire");
-selectWindow(name);
-run("Delete Slice", "delete=channel");
-selectWindow(name);
-run("Properties...", "channels=" + maxOf(channels, slices) - 1 + " slices=1 frames=1 pixel_width=1.0000 pixel_height=1.0000 voxel_depth=1.0000");
-rename("distances");
-run("royal");
-
-// Run StarDist plugin
-run("Command From Macro", "command=[de.csbdresden.stardist.StarDist2DNMS], args=['prob':'scores', 'dist':'distances', 'probThresh':'" + probThresh + "', 'nmsThresh':'" + nmsThresh + "', 'outputType':'Both', 'excludeBoundary':'2', 'roiPosition':'Stack', 'verbose':'false'], process=[false]");
-"""
-
-
-def _import(error=True):
- try:
- from importlib_metadata import metadata
- from bioimageio.core.build_spec import build_model # type: ignore
- import xarray as xr
- import bioimageio.core # type: ignore
- except ImportError:
- if error:
- raise RuntimeError(
- "Required libraries are missing for bioimage.io model export.\n"
- "Please install StarDist as follows: pip install 'stardist[bioimageio]'\n"
- "(You do not need to uninstall StarDist first.)"
- )
- else:
- return None
- return metadata, build_model, bioimageio.core, xr
-
-
-def _create_stardist_dependencies(outdir):
- from ruamel.yaml import YAML
- from tensorflow import __version__ as tf_version
- from . import __version__ as stardist_version
- pkg_info = get_distribution("stardist")
- # dependencies that start with the name "bioimageio" will be added as conda dependencies
- reqs_conda = [str(req) for req in pkg_info.requires(extras=['bioimageio']) if str(req).startswith('bioimageio')]
- # only stardist and tensorflow as pip dependencies
- tf_major, tf_minor = LooseVersion(tf_version).version[:2]
- reqs_pip = (f"stardist>={stardist_version}", f"tensorflow>={tf_major}.{tf_minor},<{tf_major+1}")
- # conda environment
- env = dict(
- name = 'stardist',
- channels = ['defaults', 'conda-forge'],
- dependencies = [
- ('python>=3.7,<3.8' if tf_major == 1 else 'python>=3.7'),
- *reqs_conda,
- 'pip', {'pip': reqs_pip},
- ],
- )
- yaml = YAML(typ='safe')
- path = outdir / "environment.yaml"
- with open(path, "w") as f:
- yaml.dump(env, f)
- return f"conda:{path}"
-
-
-def _create_stardist_doc(outdir):
- doc_path = outdir / "README.md"
- text = (
- "# StarDist Model\n"
- "This is a model for object detection with star-convex shapes.\n"
- "Please see the [StarDist repository](https://github.com/stardist/stardist) for details."
- )
- with open(doc_path, "w") as f:
- f.write(text)
- return doc_path
-
-
-def _get_stardist_metadata(outdir, model):
- metadata, *_ = _import()
- package_data = metadata("stardist")
- doi_2d = "https://doi.org/10.1007/978-3-030-00934-2_30"
- doi_3d = "https://doi.org/10.1109/WACV45572.2020.9093435"
- authors = {
- 'Martin Weigert': dict(name='Martin Weigert', github_user='maweigert'),
- 'Uwe Schmidt': dict(name='Uwe Schmidt', github_user='uschmidt83'),
- }
- data = dict(
- description=package_data["Summary"],
- authors=list(authors.get(name.strip(),dict(name=name.strip())) for name in package_data["Author"].split(",")),
- git_repo=package_data["Home-Page"],
- license=package_data["License"],
- dependencies=_create_stardist_dependencies(outdir),
- cite=[{"text": "Cell Detection with Star-Convex Polygons", "doi": doi_2d},
- {"text": "Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy", "doi": doi_3d}],
- tags=[
- 'fluorescence-light-microscopy', 'whole-slide-imaging', 'other', # modality
- f'{model.config.n_dim}d', # dims
- 'cells', 'nuclei', # content
- 'tensorflow', # framework
- 'fiji', # software
- 'unet', # network
- 'instance-segmentation', 'object-detection', # task
- 'stardist',
- ],
- covers=["https://raw.githubusercontent.com/stardist/stardist/master/images/stardist_logo.jpg"],
- documentation=_create_stardist_doc(outdir),
- )
- return data
-
-
-def _predict_tf(model_path, test_input):
- import tensorflow as tf
- from csbdeep.utils.tf import IS_TF_1
- # need to unzip the model assets
- model_assets = model_path.parent / "tf_model"
- with ZipFile(model_path, "r") as f:
- f.extractall(model_assets)
- if IS_TF_1:
- # make a new graph, i.e. don't use the global default graph
- with tf.Graph().as_default():
- with tf.Session() as sess:
- tf_model = tf.saved_model.load_v2(str(model_assets))
- x = tf.convert_to_tensor(test_input, dtype=tf.float32)
- model = tf_model.signatures["serving_default"]
- y = model(x)
- sess.run(tf.global_variables_initializer())
- output = sess.run(y["output"])
- else:
- tf_model = tf.saved_model.load(str(model_assets))
- x = tf.convert_to_tensor(test_input, dtype=tf.float32)
- model = tf_model.signatures["serving_default"]
- y = model(x)
- output = y["output"].numpy()
- return output
-
-
-def _get_weights_and_model_metadata(outdir, model, test_input, test_input_axes, test_input_norm_axes, mode, min_percentile, max_percentile):
-
- # get the path to the exported model assets (saved in outdir)
- if mode == "keras_hdf5":
- raise NotImplementedError("Export to keras format is not supported yet")
- elif mode == "tensorflow_saved_model_bundle":
- assets_uri = outdir / "TF_SavedModel.zip"
- model_csbdeep = model.export_TF(assets_uri, single_output=True, upsample_grid=True)
- else:
- raise ValueError(f"Unsupported mode: {mode}")
-
- # to force "inputs.data_type: float32" in the spec (bonus: disables normalization warning in model._predict_setup)
- test_input = test_input.astype(np.float32)
-
- # convert test_input to axes_net semantics and shape, also resize if necessary (to adhere to axes_net_div_by)
- test_input, axes_img, axes_net, axes_net_div_by, *_ = model._predict_setup(
- img=test_input,
- axes=test_input_axes,
- normalizer=None,
- n_tiles=None,
- show_tile_progress=False,
- predict_kwargs={},
- )
-
- # normalization axes string and numeric indices
- axes_norm = set(axes_net).intersection(set(axes_check_and_normalize(test_input_norm_axes, disallowed='S')))
- axes_norm = "".join(a for a in axes_net if a in axes_norm) # preserve order of axes_net
- axes_norm_num = tuple(axes_net.index(a) for a in axes_norm)
-
- # normalize input image
- test_input_norm = normalize(test_input, pmin=min_percentile, pmax=max_percentile, axis=axes_norm_num)
-
- net_axes_in = axes_net.lower()
- net_axes_out = axes_check_and_normalize(model._axes_out).lower()
- ndim_tensor = len(net_axes_out) + 1
-
- input_min_shape = list(axes_net_div_by)
- input_min_shape[axes_net.index('C')] = model.config.n_channel_in
- input_step = list(axes_net_div_by)
- input_step[axes_net.index('C')] = 0
-
- # add the batch axis to shape and step
- input_min_shape = [1] + input_min_shape
- input_step = [0] + input_step
-
- # the axes strings in bioimageio convention
- input_axes = "b" + net_axes_in.lower()
- output_axes = "b" + net_axes_out.lower()
-
- if mode == "keras_hdf5":
- output_names = ("prob", "dist") + (("class_prob",) if model._is_multiclass() else ())
- output_n_channels = (1, model.config.n_rays,) + ((1,) if model._is_multiclass() else ())
- # the output shape is computed from the input shape using
- # output_shape[i] = output_scale[i] * input_shape[i] + 2 * output_offset[i]
- output_scale = [1]+list(1/g for g in model.config.grid) + [0]
- output_offset = [0]*(ndim_tensor)
-
- elif mode == "tensorflow_saved_model_bundle":
- if model._is_multiclass():
- raise NotImplementedError("Tensorflow SavedModel not supported for multiclass models yet")
- # regarding input/output names: https://github.com/CSBDeep/CSBDeep/blob/b0d2f5f344ebe65a9b4c3007f4567fe74268c813/csbdeep/utils/tf.py#L193-L194
- input_names = ["input"]
- output_names = ["output"]
- output_n_channels = (1 + model.config.n_rays,)
- # the output shape is computed from the input shape using
- # output_shape[i] = output_scale[i] * input_shape[i] + 2 * output_offset[i]
- # same shape as input except for the channel dimension
- output_scale = [1]*(ndim_tensor)
- output_scale[output_axes.index("c")] = 0
- # no offset, except for the input axes, where it is output channel / 2
- output_offset = [0.0]*(ndim_tensor)
- output_offset[output_axes.index("c")] = output_n_channels[0] / 2.0
-
- assert all(s in (0, 1) for s in output_scale), "halo computation assumption violated"
- halo = model._axes_tile_overlap(output_axes.replace('b', 's'))
- halo = [int(np.ceil(v/8)*8) for v in halo] # optional: round up to be divisible by 8
-
- # the output shape needs to be valid after cropping the halo, so we add the halo to the input min shape
- input_min_shape = [ms + 2 * ha for ms, ha in zip(input_min_shape, halo)]
-
- # make sure the input min shape is still divisible by the min axis divisor
- input_min_shape = input_min_shape[:1] + [ms + (-ms % div_by) for ms, div_by in zip(input_min_shape[1:], axes_net_div_by)]
- assert all(ms % div_by == 0 for ms, div_by in zip(input_min_shape[1:], axes_net_div_by))
-
- metadata, *_ = _import()
- package_data = metadata("stardist")
- is_2D = model.config.n_dim == 2
-
- weights_file = outdir / "stardist_weights.h5"
- model.keras_model.save_weights(str(weights_file))
-
- config = dict(
- stardist=dict(
- python_version=package_data["Version"],
- thresholds=dict(model.thresholds._asdict()),
- weights=weights_file.name,
- config=vars(model.config),
- )
- )
-
- if is_2D:
- macro_file = outdir / "stardist_postprocessing.ijm"
- with open(str(macro_file), 'w', encoding='utf-8') as f:
- f.write(DEEPIMAGEJ_MACRO.format(probThresh=model.thresholds.prob, nmsThresh=model.thresholds.nms))
- config['stardist'].update(postprocessing_macro=macro_file.name)
-
- n_inputs = len(input_names)
- assert n_inputs == 1
- input_config = dict(
- input_names=input_names,
- input_min_shape=[input_min_shape],
- input_step=[input_step],
- input_axes=[input_axes],
- input_data_range=[["-inf", "inf"]],
- preprocessing=[[dict(
- name="scale_range",
- kwargs=dict(
- mode="per_sample",
- axes=axes_norm.lower(),
- min_percentile=min_percentile,
- max_percentile=max_percentile,
- ))]]
- )
-
- n_outputs = len(output_names)
- output_config = dict(
- output_names=output_names,
- output_data_range=[["-inf", "inf"]] * n_outputs,
- output_axes=[output_axes] * n_outputs,
- output_reference=[input_names[0]] * n_outputs,
- output_scale=[output_scale] * n_outputs,
- output_offset=[output_offset] * n_outputs,
- halo=[halo] * n_outputs
- )
-
- in_path = outdir / "test_input.npy"
- np.save(in_path, test_input[np.newaxis])
-
- if mode == "tensorflow_saved_model_bundle":
- test_outputs = _predict_tf(assets_uri, test_input_norm[np.newaxis])
- else:
- test_outputs = model.predict(test_input_norm)
-
- # out_paths = []
- # for i, out in enumerate(test_outputs):
- # p = outdir / f"test_output{i}.npy"
- # np.save(p, out)
- # out_paths.append(p)
- assert n_outputs == 1
- out_paths = [outdir / "test_output.npy"]
- np.save(out_paths[0], test_outputs)
-
- from tensorflow import __version__ as tf_version
- data = dict(weight_uri=assets_uri, test_inputs=[in_path], test_outputs=out_paths,
- config=config, tensorflow_version=tf_version)
- data.update(input_config)
- data.update(output_config)
- _files = [str(weights_file)]
- if is_2D:
- _files.append(str(macro_file))
- data.update(attachments=dict(files=_files))
-
- return data
-
-
-def export_bioimageio(
- model,
- outpath,
- test_input,
- test_input_axes=None,
- test_input_norm_axes='ZYX',
- name=None,
- mode="tensorflow_saved_model_bundle",
- min_percentile=1.0,
- max_percentile=99.8,
- overwrite_spec_kwargs=None,
-):
- """Export stardist model into bioimage.io format, https://github.com/bioimage-io/spec-bioimage-io.
-
- Parameters
- ----------
- model: StarDist2D, StarDist3D
- the model to convert
- outpath: str, Path
- where to save the model
- test_input: np.ndarray
- input image for generating test data
- test_input_axes: str or None
- the axes of the test input, for example 'YX' for a 2d image or 'ZYX' for a 3d volume
- using None assumes that axes of test_input are the same as those of model
- test_input_norm_axes: str
- the axes of the test input which will be jointly normalized, for example 'ZYX' for all spatial dimensions ('Z' ignored for 2D input)
- use 'ZYXC' to also jointly normalize channels (e.g. for RGB input images)
- name: str
- the name of this model (default: None)
- if None, uses the (folder) name of the model (i.e. `model.name`)
- mode: str
- the export type for this model (default: "tensorflow_saved_model_bundle")
- min_percentile: float
- min percentile to be used for image normalization (default: 1.0)
- max_percentile: float
- max percentile to be used for image normalization (default: 99.8)
- overwrite_spec_kwargs: dict or None
- spec keywords that should be overloaded (default: None)
- """
- _, build_model, *_ = _import()
- from .models import StarDist2D, StarDist3D
- isinstance(model, (StarDist2D, StarDist3D)) or _raise(ValueError("not a valid model"))
- 0 <= min_percentile < max_percentile <= 100 or _raise(ValueError("invalid percentile values"))
-
- if name is None:
- name = model.name
- name = str(name)
-
- outpath = Path(outpath)
- if outpath.suffix == "":
- outdir = outpath
- zip_path = outdir / f"{name}.zip"
- elif outpath.suffix == ".zip":
- outdir = outpath.parent
- zip_path = outpath
- else:
- raise ValueError(f"outpath has to be a folder or zip file, got {outpath}")
- outdir.mkdir(exist_ok=True, parents=True)
-
- with tempfile.TemporaryDirectory() as _tmp_dir:
- tmp_dir = Path(_tmp_dir)
- kwargs = _get_stardist_metadata(tmp_dir, model)
- model_kwargs = _get_weights_and_model_metadata(tmp_dir, model, test_input, test_input_axes, test_input_norm_axes, mode,
- min_percentile=min_percentile, max_percentile=max_percentile)
- kwargs.update(model_kwargs)
- if overwrite_spec_kwargs is not None:
- kwargs.update(overwrite_spec_kwargs)
-
- build_model(name=name, output_path=zip_path, add_deepimagej_config=(model.config.n_dim==2), root=tmp_dir, **kwargs)
- print(f"\nbioimage.io model with name '{name}' exported to '{zip_path}'")
-
-
-def import_bioimageio(source, outpath):
- """Import stardist model from bioimage.io format, https://github.com/bioimage-io/spec-bioimage-io.
-
- Load a model in bioimage.io format from the given `source` (e.g. path to zip file, URL)
- and convert it to a regular stardist model, which will be saved in the folder `outpath`.
-
- Parameters
- ----------
- source: str, Path
- bioimage.io resource (e.g. path, URL)
- outpath: str, Path
- folder to save the stardist model (must not exist previously)
-
- Returns
- -------
- StarDist2D or StarDist3D
- stardist model loaded from `outpath`
-
- """
- import shutil, uuid
- from csbdeep.utils import save_json
- from .models import StarDist2D, StarDist3D
- *_, bioimageio_core, _ = _import()
-
- outpath = Path(outpath)
- not outpath.exists() or _raise(FileExistsError(f"'{outpath}' already exists"))
-
- with tempfile.TemporaryDirectory() as _tmp_dir:
- tmp_dir = Path(_tmp_dir)
- # download the full model content to a temporary folder
- zip_path = tmp_dir / f"{str(uuid.uuid4())}.zip"
- bioimageio_core.export_resource_package(source, output_path=zip_path)
- with ZipFile(zip_path, "r") as zip_ref:
- zip_ref.extractall(tmp_dir)
- zip_path.unlink()
- rdf_path = tmp_dir / "rdf.yaml"
- biomodel = bioimageio_core.load_resource_description(rdf_path)
-
- # read the stardist specific content
- 'stardist' in biomodel.config or _raise(RuntimeError("bioimage.io model not compatible"))
- config = biomodel.config['stardist']['config']
- thresholds = biomodel.config['stardist']['thresholds']
- weights = biomodel.config['stardist']['weights']
-
- # make sure that the keras weights are in the attachments
- weights_file = None
- for f in biomodel.attachments.files:
- if f.name == weights and f.exists():
- weights_file = f
- break
- weights_file is not None or _raise(FileNotFoundError(f"couldn't find weights file '{weights}'"))
-
- # save the config and threshold to json, and weights to hdf5 to enable loading as stardist model
- # copy bioimageio files to separate sub-folder
- outpath.mkdir(parents=True)
- save_json(config, str(outpath / 'config.json'))
- save_json(thresholds, str(outpath / 'thresholds.json'))
- shutil.copy(str(weights_file), str(outpath / "weights_bioimageio.h5"))
- shutil.copytree(str(tmp_dir), str(outpath / "bioimageio"))
-
- model_class = (StarDist2D if config['n_dim'] == 2 else StarDist3D)
- model = model_class(None, outpath.name, basedir=str(outpath.parent))
-
- return model
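The two functions above are symmetric: `export_bioimageio` packages a trained StarDist model (TF SavedModel, weights, test tensors, DeepImageJ macro and metadata) into a bioimage.io zip, and `import_bioimageio` turns such a package back into a regular StarDist model folder. A round-trip sketch, assuming the `stardist[bioimageio]` extras are installed and using a random image purely as the required test input:

import numpy as np
from stardist.models import StarDist2D

model = StarDist2D.from_pretrained("2D_versatile_fluo")     # any trained 2D model
img = np.random.rand(256, 256).astype(np.float32)           # stand-in test image

export_bioimageio(model, "stardist_model.zip", test_input=img, test_input_axes="YX")
reloaded = import_bioimageio("stardist_model.zip", "stardist_model_imported")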
diff --git a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/__init__.py b/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/seg.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/seg.py
deleted file mode 100644
index 291e547ff45de81ddd512bf04ce0af7957b89ae7..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/seg.py
+++ /dev/null
@@ -1,21 +0,0 @@
-label_convertor = dict(
- type='SegConvertor', dict_type='DICT36', with_unknown=True, lower=True)
-
-model = dict(
- type='SegRecognizer',
- backbone=dict(
- type='ResNet31OCR',
- layers=[1, 2, 5, 3],
- channels=[32, 64, 128, 256, 512, 512],
- out_indices=[0, 1, 2, 3],
- stage4_pool_cfg=dict(kernel_size=2, stride=2),
- last_stage_pool=True),
- neck=dict(
- type='FPNOCR', in_channels=[128, 256, 512, 512], out_channels=256),
- head=dict(
- type='SegHead',
- in_channels=256,
- upsample_param=dict(scale_factor=2.0, mode='nearest')),
- loss=dict(
- type='SegLoss', seg_downsample_ratio=1.0, seg_with_loss_weight=True),
- label_convertor=label_convertor)
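This `_base_` file only defines the model dict; a full config pulls it in through `_base_ = [...]` and the registry then builds the recognizer from it. A sketch of loading and overriding it with the MMCV config machinery this format targets (MMOCR 0.x era; the override shown is purely illustrative):

from mmcv import Config

cfg = Config.fromfile('configs/_base_/recog_models/seg.py')
print(cfg.model.backbone.type)                                          # ResNet31OCR
cfg.model.head.upsample_param = dict(scale_factor=4.0, mode='nearest')  # example override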
diff --git a/spaces/ML701G7/taim-gan/src/models/predict_model.py b/spaces/ML701G7/taim-gan/src/models/predict_model.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/resample.py b/spaces/MashiroSA/sovits-emu-voice-transform/resample.py
deleted file mode 100644
index b28a86eb779d7b3f163e89fac64ecabe044ad1e2..0000000000000000000000000000000000000000
--- a/spaces/MashiroSA/sovits-emu-voice-transform/resample.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-import argparse
-import librosa
-import numpy as np
-from multiprocessing import Pool, cpu_count
-from scipy.io import wavfile
-from tqdm import tqdm
-
-
-def process(item):
- spkdir, wav_name, args = item
-    # speakers 's5', 'p280' and 'p315' are excluded
- speaker = spkdir.replace("\\", "/").split("/")[-1]
- wav_path = os.path.join(args.in_dir, speaker, wav_name)
- if os.path.exists(wav_path) and '.wav' in wav_path:
- os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True)
- wav, sr = librosa.load(wav_path, sr=None)
- wav, _ = librosa.effects.trim(wav, top_db=20)
- peak = np.abs(wav).max()
- if peak > 1.0:
- wav = 0.98 * wav / peak
- wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2)
- wav2 /= max(wav2.max(), -wav2.min())
- save_name = wav_name
- save_path2 = os.path.join(args.out_dir2, speaker, save_name)
- wavfile.write(
- save_path2,
- args.sr2,
- (wav2 * np.iinfo(np.int16).max).astype(np.int16)
- )
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--sr2", type=int, default=44100, help="sampling rate")
- parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir")
- parser.add_argument("--out_dir2", type=str, default="./dataset/44k", help="path to target dir")
- args = parser.parse_args()
-    n_processes = 30 if cpu_count() > 60 else (cpu_count() - 2 if cpu_count() > 4 else 1)
-    pool = Pool(processes=n_processes)
-
- for speaker in os.listdir(args.in_dir):
- spk_dir = os.path.join(args.in_dir, speaker)
- if os.path.isdir(spk_dir):
- print(spk_dir)
- for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])):
- pass
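The script walks `--in_dir` one speaker folder at a time, trims silence, peak-normalises, resamples to `--sr2` and writes 16-bit WAVs under `--out_dir2`; it is normally run as `python resample.py --in_dir ./dataset_raw --out_dir2 ./dataset/44k --sr2 44100`. The per-file logic can also be called directly, as in this sketch (the speaker folder and file name are hypothetical):

from types import SimpleNamespace

args = SimpleNamespace(in_dir="./dataset_raw", out_dir2="./dataset/44k", sr2=44100)
process(("./dataset_raw/speaker0", "example.wav", args))   # resamples one file if it exists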
diff --git a/spaces/MasterThesisCBS/NorPaca_GPT/README.md b/spaces/MasterThesisCBS/NorPaca_GPT/README.md
deleted file mode 100644
index ccdb9e49d1f9dc5b0f0c3441236daa2abfbfdb5f..0000000000000000000000000000000000000000
--- a/spaces/MasterThesisCBS/NorPaca_GPT/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NorPaca GPT
-emoji: 🐠
-colorFrom: gray
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/memory/local.py b/spaces/MetaWabbit/Auto-GPT/autogpt/memory/local.py
deleted file mode 100644
index 803b6dc6ebb430285f423cda592fa3e902e9a4a6..0000000000000000000000000000000000000000
--- a/spaces/MetaWabbit/Auto-GPT/autogpt/memory/local.py
+++ /dev/null
@@ -1,136 +0,0 @@
-from __future__ import annotations
-
-import dataclasses
-import os
-from typing import Any, List
-
-import numpy as np
-import orjson
-
-from autogpt.llm_utils import create_embedding_with_ada
-from autogpt.memory.base import MemoryProviderSingleton
-
-EMBED_DIM = 1536
-SAVE_OPTIONS = orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_SERIALIZE_DATACLASS
-
-
-def create_default_embeddings():
- return np.zeros((0, EMBED_DIM)).astype(np.float32)
-
-
-@dataclasses.dataclass
-class CacheContent:
- texts: List[str] = dataclasses.field(default_factory=list)
- embeddings: np.ndarray = dataclasses.field(
- default_factory=create_default_embeddings
- )
-
-
-class LocalCache(MemoryProviderSingleton):
- """A class that stores the memory in a local file"""
-
- def __init__(self, cfg) -> None:
- """Initialize a class instance
-
- Args:
- cfg: Config object
-
- Returns:
- None
- """
- self.filename = f"{cfg.memory_index}.json"
- if os.path.exists(self.filename):
- try:
-                with open(self.filename, "r+b") as f:  # r+b: read existing content without truncating the file
- file_content = f.read()
- if not file_content.strip():
- file_content = b"{}"
- f.write(file_content)
-
- loaded = orjson.loads(file_content)
- self.data = CacheContent(**loaded)
- except orjson.JSONDecodeError:
- print(f"Error: The file '{self.filename}' is not in JSON format.")
- self.data = CacheContent()
- else:
- print(
- f"Warning: The file '{self.filename}' does not exist. "
- "Local memory would not be saved to a file."
- )
- self.data = CacheContent()
-
- def add(self, text: str):
- """
- Add text to our list of texts, add embedding as row to our
- embeddings-matrix
-
- Args:
- text: str
-
- Returns: None
- """
- if "Command Error:" in text:
- return ""
- self.data.texts.append(text)
-
- embedding = create_embedding_with_ada(text)
-
- vector = np.array(embedding).astype(np.float32)
- vector = vector[np.newaxis, :]
- self.data.embeddings = np.concatenate(
- [
- self.data.embeddings,
- vector,
- ],
- axis=0,
- )
-
- with open(self.filename, "wb") as f:
- out = orjson.dumps(self.data, option=SAVE_OPTIONS)
- f.write(out)
- return text
-
- def clear(self) -> str:
- """
-        Clears the local cache.
-
- Returns: A message indicating that the memory has been cleared.
- """
- self.data = CacheContent()
- return "Obliviated"
-
- def get(self, data: str) -> list[Any] | None:
- """
- Gets the data from the memory that is most relevant to the given data.
-
- Args:
- data: The data to compare to.
-
- Returns: The most relevant data.
- """
- return self.get_relevant(data, 1)
-
- def get_relevant(self, text: str, k: int) -> list[Any]:
-        """
-        Compute a matrix-vector product between the stored embeddings and the
-        query embedding, take the indices of the top-k highest scores, and
-        return the corresponding texts.
- Args:
- text: str
- k: int
-
- Returns: List[str]
- """
- embedding = create_embedding_with_ada(text)
-
- scores = np.dot(self.data.embeddings, embedding)
-
- top_k_indices = np.argsort(scores)[-k:][::-1]
-
- return [self.data.texts[i] for i in top_k_indices]
-
- def get_stats(self) -> tuple[int, tuple[int, ...]]:
- """
- Returns: The stats of the local cache.
- """
- return len(self.data.texts), self.data.embeddings.shape
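`get_relevant` is a plain dot-product nearest-neighbour search over the cached embedding matrix; since the ada embeddings are approximately unit-normalised, the dot product acts like cosine similarity. The same ranking logic in isolation, with tiny made-up vectors in place of real embeddings:

import numpy as np

embeddings = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]], dtype=np.float32)
texts = ["about cats", "about stocks", "cats and stocks"]
query = np.array([0.9, 0.1], dtype=np.float32)     # stand-in for an ada embedding

scores = np.dot(embeddings, query)                 # one score per stored text
top_k = np.argsort(scores)[-2:][::-1]              # indices of the 2 highest scores
print([texts[i] for i in top_k])                   # ['about cats', 'cats and stocks']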
diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/cam_render.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/cam_render.py
deleted file mode 100644
index 7b766af057b9c052388aceb152b0191fa2e4ea25..0000000000000000000000000000000000000000
--- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/cam_render.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from .render import Render
-
-GLUT = None
-
-class CamRender(Render):
- def __init__(self, width=1600, height=1200, name='Cam Renderer',
- program_files=['simple.fs', 'simple.vs'], color_size=1, ms_rate=1, egl=False):
- Render.__init__(self, width, height, name, program_files, color_size, ms_rate=ms_rate, egl=egl)
- self.camera = None
-
- if not egl:
- global GLUT
- import OpenGL.GLUT as GLUT
- GLUT.glutDisplayFunc(self.display)
- GLUT.glutKeyboardFunc(self.keyboard)
-
- def set_camera(self, camera):
- self.camera = camera
- self.projection_matrix, self.model_view_matrix = camera.get_gl_matrix()
-
- def keyboard(self, key, x, y):
- # up
- eps = 1
- # print(key)
- if key == b'w':
- self.camera.center += eps * self.camera.direction
- elif key == b's':
- self.camera.center -= eps * self.camera.direction
- if key == b'a':
- self.camera.center -= eps * self.camera.right
- elif key == b'd':
- self.camera.center += eps * self.camera.right
- if key == b' ':
- self.camera.center += eps * self.camera.up
- elif key == b'x':
- self.camera.center -= eps * self.camera.up
- elif key == b'i':
- self.camera.near += 0.1 * eps
- self.camera.far += 0.1 * eps
- elif key == b'o':
- self.camera.near -= 0.1 * eps
- self.camera.far -= 0.1 * eps
-
- self.projection_matrix, self.model_view_matrix = self.camera.get_gl_matrix()
-
- def show(self):
- if GLUT is not None:
- GLUT.glutMainLoop()
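The `keyboard` handler above only translates the camera along its basis vectors. A standalone sketch of that logic with a toy camera object (a hypothetical stand-in; the real `Camera` class is defined elsewhere in the repo):

```python
# WASD/space/x camera translation, mirroring the key bindings in CamRender.keyboard.
import numpy as np

class ToyCamera:
    def __init__(self):
        self.center = np.zeros(3)
        self.direction = np.array([0.0, 0.0, -1.0])  # forward
        self.right = np.array([1.0, 0.0, 0.0])
        self.up = np.array([0.0, 1.0, 0.0])

def move(camera, key, eps=1.0):
    if key == b'w':
        camera.center += eps * camera.direction
    elif key == b's':
        camera.center -= eps * camera.direction
    elif key == b'a':
        camera.center -= eps * camera.right
    elif key == b'd':
        camera.center += eps * camera.right
    elif key == b' ':
        camera.center += eps * camera.up
    elif key == b'x':
        camera.center -= eps * camera.up

cam = ToyCamera()
move(cam, b'w')
print(cam.center)  # [ 0.  0. -1.]
```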
diff --git "a/spaces/MohamadRezo/flixPicks/pages/3_\360\237\224\220Signup.py" "b/spaces/MohamadRezo/flixPicks/pages/3_\360\237\224\220Signup.py"
deleted file mode 100644
index 6bb3a067c00a858703ace864057c2a7838611748..0000000000000000000000000000000000000000
--- "a/spaces/MohamadRezo/flixPicks/pages/3_\360\237\224\220Signup.py"
+++ /dev/null
@@ -1,15 +0,0 @@
-import streamlit as st
-from database import Users
-
-users = Users.users_table()
-
-userName = st.text_input("Enter your UserName")
-password = st.text_input("Enter your Password", type="password")
-
-if st.button("signup"):
- if users.has_key(userName):
- st.error("The UserName is already taken.")
- else:
- users.insert(userName, password)
- st.success("You have successfully registered")
-
\ No newline at end of file
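The signup page above stores the password as plain text. A hedged sketch of the usual alternative, salted hashing before insertion, with a plain dict standing in for the `Users` table API (which is not shown here):

```python
# Salted password hashing sketch; the real Users table is replaced by a dict.
import hashlib
import os

def hash_password(password: str) -> str:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + ":" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    salt_hex, digest_hex = stored.split(":")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), 100_000
    )
    return digest.hex() == digest_hex

users = {}
users["alice"] = hash_password("s3cret")
print(verify_password("s3cret", users["alice"]))  # True
```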
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/drrg/drrg_resnet50_fpn-unet_1200e_ctw1500.py b/spaces/Mountchicken/MAERec-Gradio/configs/textdet/drrg/drrg_resnet50_fpn-unet_1200e_ctw1500.py
deleted file mode 100644
index c35030997193d2c54b125d540e646c3f1ef9e997..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/drrg/drrg_resnet50_fpn-unet_1200e_ctw1500.py
+++ /dev/null
@@ -1,30 +0,0 @@
-_base_ = [
- '_base_drrg_resnet50_fpn-unet.py',
- '../_base_/datasets/ctw1500.py',
- '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_sgd_1200e.py',
-]
-
-# dataset settings
-ctw1500_textdet_train = _base_.ctw1500_textdet_train
-ctw1500_textdet_train.pipeline = _base_.train_pipeline
-ctw1500_textdet_test = _base_.ctw1500_textdet_test
-ctw1500_textdet_test.pipeline = _base_.test_pipeline
-
-train_dataloader = dict(
- batch_size=4,
- num_workers=4,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=True),
- dataset=ctw1500_textdet_train)
-
-val_dataloader = dict(
- batch_size=1,
- num_workers=1,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=ctw1500_textdet_test)
-
-test_dataloader = val_dataloader
-
-auto_scale_lr = dict(base_batch_size=16)
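The `auto_scale_lr` entry lets the runner rescale the learning rate when the effective batch size (batch size × number of GPUs) differs from `base_batch_size=16`. A rough sketch of the linear scaling rule this implies (assumed behaviour, not the exact framework code):

```python
# Linear LR scaling sketch: lr is multiplied by effective_batch / base_batch_size.
def scale_lr(base_lr: float, batch_size: int, num_gpus: int, base_batch_size: int = 16) -> float:
    effective_batch = batch_size * num_gpus
    return base_lr * effective_batch / base_batch_size

# With batch_size=4 on a single GPU, the configured lr would be scaled by 4/16 = 0.25.
print(scale_lr(base_lr=0.01, batch_size=4, num_gpus=1))  # 0.0025
```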
diff --git a/spaces/MrVicente/RA-BART/custom_bart/encoder_layer.py b/spaces/MrVicente/RA-BART/custom_bart/encoder_layer.py
deleted file mode 100644
index 0478b203cdd1100662a529b891589b8371d48024..0000000000000000000000000000000000000000
--- a/spaces/MrVicente/RA-BART/custom_bart/encoder_layer.py
+++ /dev/null
@@ -1,102 +0,0 @@
-#############################
-# Imports
-#############################
-
-# Python modules
-from typing import Optional, Tuple
-
-# Remote modules
-import torch
-from torch import nn
-from transformers import BartConfig
-from transformers.activations import ACT2FN
-
-# Local modules
-from .bart_attention import BartCustomAttention
-from .bart_mask_attention import BartCustomMaskAttention
-from .config import BartCustomConfig
-
-
-class BartCustomEncoderLayer(nn.Module):
- def __init__(self, config: BartCustomConfig, heads_mask: Optional[torch.Tensor]):
- super().__init__()
- self.embed_dim = config.d_model
- is_simple_mask_commonsense = config.is_simple_mask_commonsense
- if not is_simple_mask_commonsense:
- print("Selecting complex relation attention")
- self.self_attn = BartCustomAttention(
- embed_dim=self.embed_dim,
- num_heads=config.encoder_attention_heads,
- dropout=config.attention_dropout,
- num_relation_kinds=config.num_relation_kinds,
- use_same_relation_kv_emb=config.use_same_relation_kv_emb,
- heads_mask=heads_mask,
- )
- else:
- print("Selecting simple (MASK) relation attention")
- self.self_attn = BartCustomMaskAttention(
- embed_dim=self.embed_dim,
- num_heads=config.encoder_attention_heads,
- dropout=config.attention_dropout,
- num_relation_kinds=config.num_relation_kinds,
- heads_mask=heads_mask,
- )
- self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
- self.dropout = config.dropout
- self.activation_fn = ACT2FN[config.activation_function]
- self.activation_dropout = config.activation_dropout
- self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim)
- self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim)
- self.final_layer_norm = nn.LayerNorm(self.embed_dim)
-
- def forward(
- self,
- hidden_states: torch.FloatTensor,
- attention_mask: torch.FloatTensor,
- layer_head_mask: torch.FloatTensor,
- output_attentions: Optional[bool] = False,
- relation_inputs: Optional[torch.Tensor] = None,
- ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
- """
- Args:
- hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
- attention_mask (`torch.FloatTensor`): attention mask of size
- `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
- layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
- `(encoder_attention_heads,)`.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- """
- residual = hidden_states
- hidden_states, attn_weights, _ = self.self_attn(
- hidden_states=hidden_states,
- attention_mask=attention_mask,
- layer_head_mask=layer_head_mask,
- output_attentions=output_attentions,
- relation_inputs=relation_inputs,
- )
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
- hidden_states = residual + hidden_states
- hidden_states = self.self_attn_layer_norm(hidden_states)
-
- residual = hidden_states
- hidden_states = self.activation_fn(self.fc1(hidden_states))
- hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
- hidden_states = self.fc2(hidden_states)
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
- hidden_states = residual + hidden_states
- hidden_states = self.final_layer_norm(hidden_states)
-
- if hidden_states.dtype == torch.float16 and (
- torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()
- ):
- clamp_value = torch.finfo(hidden_states.dtype).max - 1000
- hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
-
- outputs = (hidden_states,)
-
- if output_attentions:
- outputs += (attn_weights,)
-
- return outputs
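The fp16 guard at the end of `forward` clamps activations just inside the representable range so that an overflow does not propagate as inf/nan through later layers. A tiny self-contained demonstration of that step:

```python
# fp16 overflow guard: clamp to slightly below the dtype's max value.
import torch

x = torch.tensor([70000.0, -70000.0, 1.0]).to(torch.float16)  # 70000 overflows to inf in fp16
if x.dtype == torch.float16 and (torch.isinf(x).any() or torch.isnan(x).any()):
    clamp_value = torch.finfo(x.dtype).max - 1000  # 65504 - 1000
    x = torch.clamp(x, min=-clamp_value, max=clamp_value)
print(x)  # the inf values are pulled back to roughly +/-64500 instead of overflowing
```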
diff --git a/spaces/Myrna/VideoSummary2/app.py b/spaces/Myrna/VideoSummary2/app.py
deleted file mode 100644
index ea0d92944bdf4e1fde3b7b46810816a97c6b4964..0000000000000000000000000000000000000000
--- a/spaces/Myrna/VideoSummary2/app.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import gradio as gr
-from summarize import Summarizer
-
-interface = gr.Interface(fn = Summarizer,
- inputs = [gr.inputs.Textbox(lines=2,
- placeholder="Enter your link...",
- label='YouTube Video Link'),
- gr.inputs.Radio(["mT5", "BART"], type="value", label='Model')],
- outputs = [gr.outputs.Textbox(
- label="Summary")],
-
- title = "Video Summary Generator",
- examples = [
- ['https://www.youtube.com/watch?v=OaeYUm06in0&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=5761s', 'BART'],
- ['https://www.youtube.com/watch?v=U5OD8MjYnOM', 'BART'],
- ['https://www.youtube.com/watch?v=Gfr50f6ZBvo', 'BART'],
- ['https://www.youtube.com/watch?v=G4hL5Om4IJ4&t=2680s', 'BART'],
- ['https://www.youtube.com/watch?v=0Jd7fJgFkPU&t=8776s', 'mT5']
- ],
- enable_queue=True)
-
-interface.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/architecture/resnet.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/architecture/resnet.py
deleted file mode 100644
index abbc7213ea971f0cb014d770e7e0c1707855fb08..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/architecture/resnet.py
+++ /dev/null
@@ -1,309 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Contains definitions for the post-activation form of Residual Networks.
-
-Residual networks (ResNets) were proposed in:
-[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
- Deep Residual Learning for Image Recognition. arXiv:1512.03385
-"""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-from absl import logging
-import tensorflow as tf
-from tensorflow.python.keras import backend
-from official.vision.detection.modeling.architecture import nn_ops
-
-# TODO(b/140112644): Refactor the code with Keras style, i.e. build and call.
-class Resnet(object):
- """Class to build ResNet family model."""
-
- def __init__(self,
- resnet_depth,
- activation='relu',
- norm_activation=nn_ops.norm_activation_builder(
- activation='relu'),
- data_format='channels_last'):
- """ResNet initialization function.
-
- Args:
- resnet_depth: `int` depth of ResNet backbone model.
- norm_activation: an operation that includes a normalization layer
- followed by an optional activation layer.
- data_format: `str` either "channels_first" for `[batch, channels, height,
- width]` or "channels_last" for `[batch, height, width, channels]`.
- """
- self._resnet_depth = resnet_depth
- if activation == 'relu':
- self._activation_op = tf.nn.relu
- elif activation == 'swish':
- self._activation_op = tf.nn.swish
- else:
- raise ValueError('Unsupported activation `{}`.'.format(activation))
- self._norm_activation = norm_activation
- self._data_format = data_format
-
- model_params = {
- 10: {'block': self.residual_block, 'layers': [1, 1, 1, 1]},
- 18: {'block': self.residual_block, 'layers': [2, 2, 2, 2]},
- 34: {'block': self.residual_block, 'layers': [3, 4, 6, 3]},
- 50: {'block': self.bottleneck_block, 'layers': [3, 4, 6, 3]},
- 101: {'block': self.bottleneck_block, 'layers': [3, 4, 23, 3]},
- 152: {'block': self.bottleneck_block, 'layers': [3, 8, 36, 3]},
- 200: {'block': self.bottleneck_block, 'layers': [3, 24, 36, 3]}
- }
-
- if resnet_depth not in model_params:
- valid_resnet_depths = ', '.join(
- [str(depth) for depth in sorted(model_params.keys())])
- raise ValueError(
- 'The resnet_depth should be in [%s]. Not a valid resnet_depth:'%(
- valid_resnet_depths), self._resnet_depth)
- params = model_params[resnet_depth]
- self._resnet_fn = self.resnet_v1_generator(
- params['block'], params['layers'])
-
- def __call__(self, inputs, is_training=None):
- """Returns the ResNet model for a given size and number of output classes.
-
- Args:
- inputs: a `Tensor` with shape [batch_size, height, width, 3] representing
- a batch of images.
- is_training: `bool` if True, the model is in training mode.
-
- Returns:
- a `dict` containing `int` keys for continuous feature levels [2, 3, 4, 5].
- The values are corresponding feature hierarchy in ResNet with shape
- [batch_size, height_l, width_l, num_filters].
- """
- with backend.get_graph().as_default():
- with tf.name_scope('resnet%s' % self._resnet_depth):
- return self._resnet_fn(inputs, is_training)
-
- def fixed_padding(self, inputs, kernel_size):
- """Pads the input along the spatial dimensions independently of input size.
-
- Args:
- inputs: `Tensor` of size `[batch, channels, height, width]` or
- `[batch, height, width, channels]` depending on `data_format`.
- kernel_size: `int` kernel size to be used for `conv2d` or max_pool2d`
- operations. Should be a positive integer.
-
- Returns:
- A padded `Tensor` of the same `data_format` with size either intact
- (if `kernel_size == 1`) or padded (if `kernel_size > 1`).
- """
- pad_total = kernel_size - 1
- pad_beg = pad_total // 2
- pad_end = pad_total - pad_beg
- if self._data_format == 'channels_first':
- padded_inputs = tf.pad(
- tensor=inputs,
- paddings=[[0, 0], [0, 0], [pad_beg, pad_end], [pad_beg, pad_end]])
- else:
- padded_inputs = tf.pad(
- tensor=inputs,
- paddings=[[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]])
-
- return padded_inputs
-
- def conv2d_fixed_padding(self, inputs, filters, kernel_size, strides):
- """Strided 2-D convolution with explicit padding.
-
- The padding is consistent and is based only on `kernel_size`, not on the
- dimensions of `inputs` (as opposed to using `tf.layers.conv2d` alone).
-
- Args:
- inputs: `Tensor` of size `[batch, channels, height_in, width_in]`.
- filters: `int` number of filters in the convolution.
- kernel_size: `int` size of the kernel to be used in the convolution.
- strides: `int` strides of the convolution.
-
- Returns:
- A `Tensor` of shape `[batch, filters, height_out, width_out]`.
- """
- if strides > 1:
- inputs = self.fixed_padding(inputs, kernel_size)
-
- return tf.keras.layers.Conv2D(
- filters=filters,
- kernel_size=kernel_size,
- strides=strides,
- padding=('SAME' if strides == 1 else 'VALID'),
- use_bias=False,
- kernel_initializer=tf.initializers.VarianceScaling(),
- data_format=self._data_format)(
- inputs=inputs)
-
- def residual_block(self,
- inputs,
- filters,
- strides,
- use_projection=False,
- is_training=None):
- """Standard building block for residual networks with BN after convolutions.
-
- Args:
- inputs: `Tensor` of size `[batch, channels, height, width]`.
- filters: `int` number of filters for the two convolutions in this block.
- strides: `int` block stride. If greater than 1, this block will ultimately
- downsample the input.
- use_projection: `bool` for whether this block should use a projection
- shortcut (versus the default identity shortcut). This is usually
- `True` for the first block of a block group, which may change the
- number of filters and the resolution.
- is_training: `bool` if True, the model is in training mode.
- Returns:
- The output `Tensor` of the block.
- """
- shortcut = inputs
- if use_projection:
- # Projection shortcut in first layer to match filters and strides
- shortcut = self.conv2d_fixed_padding(
- inputs=inputs, filters=filters, kernel_size=1, strides=strides)
- shortcut = self._norm_activation(use_activation=False)(
- shortcut, is_training=is_training)
-
- inputs = self.conv2d_fixed_padding(
- inputs=inputs, filters=filters, kernel_size=3, strides=strides)
- inputs = self._norm_activation()(inputs, is_training=is_training)
-
- inputs = self.conv2d_fixed_padding(
- inputs=inputs, filters=filters, kernel_size=3, strides=1)
- inputs = self._norm_activation(use_activation=False, init_zero=True)(
- inputs, is_training=is_training)
-
- return self._activation_op(inputs + shortcut)
-
- def bottleneck_block(self,
- inputs,
- filters,
- strides,
- use_projection=False,
- is_training=None):
- """Bottleneck block variant for residual networks with BN after convolutions.
-
- Args:
- inputs: `Tensor` of size `[batch, channels, height, width]`.
- filters: `int` number of filters for the first two convolutions. Note that
- the third and final convolution will use 4 times as many filters.
- strides: `int` block stride. If greater than 1, this block will ultimately
- downsample the input.
- use_projection: `bool` for whether this block should use a projection
- shortcut (versus the default identity shortcut). This is usually
- `True` for the first block of a block group, which may change the
- number of filters and the resolution.
- is_training: `bool` if True, the model is in training mode.
-
- Returns:
- The output `Tensor` of the block.
- """
- shortcut = inputs
- if use_projection:
- # Projection shortcut only in first block within a group. Bottleneck
- # blocks end with 4 times the number of filters.
- filters_out = 4 * filters
- shortcut = self.conv2d_fixed_padding(
- inputs=inputs, filters=filters_out, kernel_size=1, strides=strides)
- shortcut = self._norm_activation(use_activation=False)(
- shortcut, is_training=is_training)
-
- inputs = self.conv2d_fixed_padding(
- inputs=inputs, filters=filters, kernel_size=1, strides=1)
- inputs = self._norm_activation()(inputs, is_training=is_training)
-
- inputs = self.conv2d_fixed_padding(
- inputs=inputs, filters=filters, kernel_size=3, strides=strides)
- inputs = self._norm_activation()(inputs, is_training=is_training)
-
- inputs = self.conv2d_fixed_padding(
- inputs=inputs, filters=4 * filters, kernel_size=1, strides=1)
- inputs = self._norm_activation(use_activation=False, init_zero=True)(
- inputs, is_training=is_training)
-
- return self._activation_op(inputs + shortcut)
-
- def block_group(self, inputs, filters, block_fn, blocks, strides, name,
- is_training):
- """Creates one group of blocks for the ResNet model.
-
- Args:
- inputs: `Tensor` of size `[batch, channels, height, width]`.
- filters: `int` number of filters for the first convolution of the layer.
- block_fn: `function` for the block to use within the model
- blocks: `int` number of blocks contained in the layer.
- strides: `int` stride to use for the first convolution of the layer. If
- greater than 1, this layer will downsample the input.
- name: `str` name for the Tensor output of the block layer.
- is_training: `bool` if True, the model is in training mode.
-
- Returns:
- The output `Tensor` of the block layer.
- """
- # Only the first block per block_group uses projection shortcut and strides.
- inputs = block_fn(inputs, filters, strides, use_projection=True,
- is_training=is_training)
-
- for _ in range(1, blocks):
- inputs = block_fn(inputs, filters, 1, is_training=is_training)
-
- return tf.identity(inputs, name)
-
- def resnet_v1_generator(self, block_fn, layers):
- """Generator for ResNet v1 models.
-
- Args:
- block_fn: `function` for the block to use within the model. Either
- `residual_block` or `bottleneck_block`.
- layers: list of 4 `int`s denoting the number of blocks to include in each
- of the 4 block groups. Each group consists of blocks that take inputs of
- the same resolution.
-
- Returns:
- Model `function` that takes in `inputs` and `is_training` and returns the
- output `Tensor` of the ResNet model.
- """
-
- def model(inputs, is_training=None):
- """Creation of the model graph."""
- inputs = self.conv2d_fixed_padding(
- inputs=inputs, filters=64, kernel_size=7, strides=2)
- inputs = tf.identity(inputs, 'initial_conv')
- inputs = self._norm_activation()(inputs, is_training=is_training)
-
- inputs = tf.keras.layers.MaxPool2D(
- pool_size=3, strides=2, padding='SAME',
- data_format=self._data_format)(
- inputs)
- inputs = tf.identity(inputs, 'initial_max_pool')
-
- c2 = self.block_group(
- inputs=inputs, filters=64, block_fn=block_fn, blocks=layers[0],
- strides=1, name='block_group1', is_training=is_training)
- c3 = self.block_group(
- inputs=c2, filters=128, block_fn=block_fn, blocks=layers[1],
- strides=2, name='block_group2', is_training=is_training)
- c4 = self.block_group(
- inputs=c3, filters=256, block_fn=block_fn, blocks=layers[2],
- strides=2, name='block_group3', is_training=is_training)
- c5 = self.block_group(
- inputs=c4, filters=512, block_fn=block_fn, blocks=layers[3],
- strides=2, name='block_group4', is_training=is_training)
- return {2: c2, 3: c3, 4: c4, 5: c5}
-
- return model
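`conv2d_fixed_padding` pads explicitly so that the amount of padding depends only on the kernel size, never on the input resolution. The arithmetic is small enough to show on its own:

```python
# Padding amounts used by fixed_padding before a strided convolution.
def fixed_padding_amounts(kernel_size: int) -> tuple:
    pad_total = kernel_size - 1
    pad_beg = pad_total // 2
    pad_end = pad_total - pad_beg
    return pad_beg, pad_end

print(fixed_padding_amounts(7))  # (3, 3) -> used by the stride-2 stem convolution
print(fixed_padding_amounts(3))  # (1, 1) -> used by strided 3x3 convolutions in blocks
```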
diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/base_model.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/base_model.py
deleted file mode 100644
index 8d18f12f5b7c52ca02334c4c685b70d353de83c5..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/base_model.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Base Model definition."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import abc
-import functools
-import re
-import tensorflow as tf
-from official.vision.detection.modeling import checkpoint_utils
-from official.vision.detection.modeling import learning_rates
-from official.vision.detection.modeling import optimizers
-
-
-def _make_filter_trainable_variables_fn(frozen_variable_prefix):
- """Creates a function for filtering trainable varialbes."""
-
- def _filter_trainable_variables(variables):
- """Filters trainable varialbes.
-
- Args:
- variables: a list of tf.Variable to be filtered.
-
- Returns:
- filtered_variables: a list of tf.Variable filtered out the frozen ones.
- """
- # frozen_variable_prefix: a regex string specifying the prefix pattern of
- # the frozen variables' names.
- filtered_variables = [
- v for v in variables
- if not frozen_variable_prefix or
- not re.match(frozen_variable_prefix, v.name)
- ]
- return filtered_variables
-
- return _filter_trainable_variables
-
-
-class Model(object):
- """Base class for model function."""
-
- __metaclass__ = abc.ABCMeta
-
- def __init__(self, params):
- self._use_bfloat16 = params.architecture.use_bfloat16
-
- if params.architecture.use_bfloat16:
- policy = tf.compat.v2.keras.mixed_precision.experimental.Policy(
- 'mixed_bfloat16')
- tf.compat.v2.keras.mixed_precision.experimental.set_policy(policy)
-
- # Optimization.
- self._optimizer_fn = optimizers.OptimizerFactory(params.train.optimizer)
- self._learning_rate = learning_rates.learning_rate_generator(
- params.train.total_steps, params.train.learning_rate)
-
- self._frozen_variable_prefix = params.train.frozen_variable_prefix
- self._regularization_var_regex = params.train.regularization_variable_regex
- self._l2_weight_decay = params.train.l2_weight_decay
-
- # Checkpoint restoration.
- self._checkpoint = params.train.checkpoint.as_dict()
-
- # Summary.
- self._enable_summary = params.enable_summary
- self._model_dir = params.model_dir
-
- @abc.abstractmethod
- def build_outputs(self, inputs, mode):
- """Build the graph of the forward path."""
- pass
-
- @abc.abstractmethod
- def build_model(self, params, mode):
- """Build the model object."""
- pass
-
- @abc.abstractmethod
- def build_loss_fn(self):
- """Build the model object."""
- pass
-
- def post_processing(self, labels, outputs):
- """Post-processing function."""
- return labels, outputs
-
- def model_outputs(self, inputs, mode):
- """Build the model outputs."""
- return self.build_outputs(inputs, mode)
-
- def build_optimizer(self):
- """Returns train_op to optimize total loss."""
- # Sets up the optimizer.
- return self._optimizer_fn(self._learning_rate)
-
- def make_filter_trainable_variables_fn(self):
- """Creates a function for filtering trainable varialbes."""
- return _make_filter_trainable_variables_fn(self._frozen_variable_prefix)
-
- def weight_decay_loss(self, trainable_variables):
- reg_variables = [
- v for v in trainable_variables
- if self._regularization_var_regex is None
- or re.match(self._regularization_var_regex, v.name)
- ]
-
- return self._l2_weight_decay * tf.add_n(
- [tf.nn.l2_loss(v) for v in reg_variables])
-
- def make_restore_checkpoint_fn(self):
- """Returns scaffold function to restore parameters from v1 checkpoint."""
- if 'skip_checkpoint_variables' in self._checkpoint:
- skip_regex = self._checkpoint['skip_checkpoint_variables']
- else:
- skip_regex = None
- return checkpoint_utils.make_restore_checkpoint_fn(
- self._checkpoint['path'],
- prefix=self._checkpoint['prefix'],
- skip_regex=skip_regex)
-
- def eval_metrics(self):
- """Returns tuple of metric function and its inputs for evaluation."""
- raise NotImplementedError('Unimplemented eval_metrics')
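The frozen-variable filter above only needs variable names and a prefix regex, so it can be exercised without TensorFlow. A quick sketch with plain strings standing in for `tf.Variable` names (dummy names, chosen for illustration):

```python
# Prefix-regex filtering of trainable variables, with strings instead of tf.Variables.
import re

def filter_trainable(names, frozen_variable_prefix):
    return [
        n for n in names
        if not frozen_variable_prefix or not re.match(frozen_variable_prefix, n)
    ]

names = ["resnet50/conv1/kernel:0", "resnet50/conv1/bias:0", "rpn_head/conv/kernel:0"]
print(filter_trainable(names, r"resnet50"))  # only the rpn_head variable stays trainable
print(filter_trainable(names, None))         # no prefix -> everything stays trainable
```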
diff --git a/spaces/NEARHUb/video-transcoder/README.md b/spaces/NEARHUb/video-transcoder/README.md
deleted file mode 100644
index 0d1bc00645252657e677aad0f59192d9557a5136..0000000000000000000000000000000000000000
--- a/spaces/NEARHUb/video-transcoder/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Video Transcoder
-emoji: 📚
-colorFrom: gray
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.31.0
-app_file: app.py
-pinned: false
----
\ No newline at end of file
diff --git a/spaces/Nightwing25/AICoverGen/src/rvc.py b/spaces/Nightwing25/AICoverGen/src/rvc.py
deleted file mode 100644
index a2790602462859e4a9885c145a13ff86efba8a3c..0000000000000000000000000000000000000000
--- a/spaces/Nightwing25/AICoverGen/src/rvc.py
+++ /dev/null
@@ -1,166 +0,0 @@
-from multiprocessing import cpu_count
-from pathlib import Path
-
-import torch
-from fairseq import checkpoint_utils
-from scipy.io import wavfile
-
-from infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from my_utils import load_audio
-from vc_infer_pipeline import VC
-
-BASE_DIR = Path(__file__).resolve().parent.parent
-
-
-# config cpu
-def use_fp32_config():
- for config_file in [
- "32k.json",
- "40k.json",
- "48k.json",
- "48k_v2.json",
- "32k_v2.json",
- ]:
- with open(f"src/configs/{config_file}", "r") as f:
- strr = f.read().replace("true", "false")
- with open(f"src/configs/{config_file}", "w") as f:
- f.write(strr)
-
-class Config:
- def __init__(self, device, is_half):
- self.device = device
- self.is_half = is_half
- self.n_cpu = 2 # set cpu cores
- self.gpu_name = None
- self.gpu_mem = None
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("16 series/10 series P40 forced single precision")
- self.is_half = False
- for config_file in ["32k.json", "40k.json", "48k.json"]:
- with open(BASE_DIR / "src" / "configs" / config_file, "r") as f:
- strr = f.read().replace("true", "false")
- with open(BASE_DIR / "src" / "configs" / config_file, "w") as f:
- f.write(strr)
- with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- else:
- self.gpu_name = None
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- elif torch.backends.mps.is_available():
- print("No supported N-card found, use MPS for inference")
- self.device = "mps"
- else:
- print("No supported N-card found, use CPU for inference")
- self.device = "cpu"
- self.is_half = False
- use_fp32_config() # cpu config
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
- # 6G memory config
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
- # 5G memory config
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
-
-
-def load_hubert(device, is_half, model_path):
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task([model_path], suffix='', )
- hubert = models[0]
- hubert = hubert.to(device)
-
- if is_half:
- hubert = hubert.half()
- else:
- hubert = hubert.float()
-
- hubert.eval()
- return hubert
-
-
-def get_vc(device, is_half, config, model_path):
- cpt = torch.load(model_path, map_location='cpu')
- if "config" not in cpt or "weight" not in cpt:
- raise ValueError(f'Incorrect format for {model_path}. Use a voice model trained using RVC v2 instead.')
-
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
-
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
-
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(device)
-
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
-
- vc = VC(tgt_sr, config)
- return cpt, version, net_g, tgt_sr, vc
-
-
-def rvc_infer(index_path, index_rate, input_path, output_path, pitch_change, f0_method, cpt, version, net_g, filter_radius, tgt_sr, rms_mix_rate, protect, crepe_hop_length, vc, hubert_model):
- audio = load_audio(input_path, 16000)
- times = [0, 0, 0]
- if_f0 = cpt.get('f0', 1)
- audio_opt = vc.pipeline(hubert_model, net_g, 0, audio, input_path, times, pitch_change, f0_method, index_path, index_rate, if_f0, filter_radius, tgt_sr, 0, rms_mix_rate, version, protect, crepe_hop_length)
- wavfile.write(output_path, tgt_sr, audio_opt)
diff --git a/spaces/NimaBoscarino/climategan/climategan/data.py b/spaces/NimaBoscarino/climategan/climategan/data.py
deleted file mode 100644
index e57fc21725361e02185088a909a74e36a8cd3fc4..0000000000000000000000000000000000000000
--- a/spaces/NimaBoscarino/climategan/climategan/data.py
+++ /dev/null
@@ -1,539 +0,0 @@
-"""Data-loading functions in order to create a Dataset and DataLoaders.
-Transforms for loaders are in transforms.py
-"""
-
-import json
-import os
-from pathlib import Path
-
-import numpy as np
-import torch
-import yaml
-from imageio import imread
-from PIL import Image
-from torch.utils.data import DataLoader, Dataset
-from torchvision import transforms
-
-from climategan.transforms import get_transforms
-from climategan.tutils import get_normalized_depth_t
-from climategan.utils import env_to_path, is_image_file
-
-classes_dict = {
- "s": { # unity
- 0: [0, 0, 255, 255], # Water
- 1: [55, 55, 55, 255], # Ground
- 2: [0, 255, 255, 255], # Building
- 3: [255, 212, 0, 255], # Traffic items
- 4: [0, 255, 0, 255], # Vegetation
- 5: [255, 97, 0, 255], # Terrain
- 6: [255, 0, 0, 255], # Car
- 7: [60, 180, 60, 255], # Trees
- 8: [255, 0, 255, 255], # Person
- 9: [0, 0, 0, 255], # Sky
- 10: [255, 255, 255, 255], # Default
- },
- "r": { # deeplab v2
- 0: [0, 0, 255, 255], # Water
- 1: [55, 55, 55, 255], # Ground
- 2: [0, 255, 255, 255], # Building
- 3: [255, 212, 0, 255], # Traffic items
- 4: [0, 255, 0, 255], # Vegetation
- 5: [255, 97, 0, 255], # Terrain
- 6: [255, 0, 0, 255], # Car
- 7: [60, 180, 60, 255], # Trees
- 8: [220, 20, 60, 255], # Person
- 9: [8, 19, 49, 255], # Sky
- 10: [0, 80, 100, 255], # Default
- },
- "kitti": {
- 0: [210, 0, 200], # Terrain
- 1: [90, 200, 255], # Sky
- 2: [0, 199, 0], # Tree
- 3: [90, 240, 0], # Vegetation
- 4: [140, 140, 140], # Building
- 5: [100, 60, 100], # Road
- 6: [250, 100, 255], # GuardRail
- 7: [255, 255, 0], # TrafficSign
- 8: [200, 200, 0], # TrafficLight
- 9: [255, 130, 0], # Pole
- 10: [80, 80, 80], # Misc
- 11: [160, 60, 60], # Truck
- 12: [255, 127, 80], # Car
- 13: [0, 139, 139], # Van
- 14: [0, 0, 0], # Undefined
- },
- "flood": {
- 0: [255, 0, 0], # Cannot flood
- 1: [0, 0, 255], # Must flood
- 2: [0, 0, 0], # May flood
- },
-}
-
-kitti_mapping = {
- 0: 5, # Terrain -> Terrain
- 1: 9, # Sky -> Sky
- 2: 7, # Tree -> Trees
- 3: 4, # Vegetation -> Vegetation
- 4: 2, # Building -> Building
- 5: 1, # Road -> Ground
- 6: 3, # GuardRail -> Traffic items
- 7: 3, # TrafficSign -> Traffic items
- 8: 3, # TrafficLight -> Traffic items
- 9: 3, # Pole -> Traffic items
- 10: 10, # Misc -> default
- 11: 6, # Truck -> Car
- 12: 6, # Car -> Car
- 13: 6, # Van -> Car
- 14: 10, # Undefined -> Default
-}
-
-
-def encode_exact_segmap(seg, classes_dict, default_value=14):
- """
- When the mapping (rgb -> label) is known to be exact (no approximative rgb values)
- maps rgb image to segmap labels
-
- Args:
- seg (np.ndarray): H x W x 3 RGB image
- classes_dict (dict): Mapping {class: rgb value}
- default_value (int, optional): Value for unknown label. Defaults to 14.
-
- Returns:
- np.ndarray: Segmap as labels, not RGB
- """
- out = np.ones((seg.shape[0], seg.shape[1])) * default_value
- for cindex, cvalue in classes_dict.items():
- out[np.where((seg == cvalue).all(-1))] = cindex
- return out
-
-
-def merge_labels(labels, mapping, default_value=14):
- """
- Maps labels from a source domain to labels of a target domain,
- typically kitti -> climategan
-
- Args:
- labels (np.ndarray): input segmap labels
- mapping (dict): source_label -> target_label
- default_value (int, optional): Unknown label. Defaults to 14.
-
- Returns:
- np.ndarray: Adapted labels
- """
- out = np.ones_like(labels) * default_value
- for source, target in mapping.items():
- out[labels == source] = target
- return out
-
-
-def process_kitti_seg(path, kitti_classes, merge_map, default=14):
- """
- Processes a path to produce a 1 x 1 x H x W torch segmap
-
- %timeit process_kitti_seg(path, classes_dict, mapping, default=14)
- 326 ms ± 118 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
-
- Args:
- path (str | pathlib.Path): Segmap RBG path
- kitti_classes (dict): Kitti map label -> rgb
- merge_map (dict): map kitti_label -> climategan_label
- default (int, optional): Unknown kitti label. Defaults to 14.
-
- Returns:
- torch.Tensor: 1 x 1 x H x W torch segmap
- """
- seg = imread(path)
- labels = encode_exact_segmap(seg, kitti_classes, default_value=default)
- merged = merge_labels(labels, merge_map, default_value=default)
- return torch.tensor(merged).unsqueeze(0).unsqueeze(0)
-
-
-def decode_segmap_merged_labels(tensor, domain, is_target, nc=11):
- """Creates a label colormap for classes used in Unity segmentation benchmark.
- Arguments:
- tensor -- segmented image of size (1) x (nc) x (H) x (W)
- if prediction, or size (1) x (1) x (H) x (W) if target
- Returns:
- RGB tensor of size (1) x (3) x (H) x (W)
- #"""
-
- if is_target: # Target is size 1 x 1 x H x W
- idx = tensor.squeeze(0).squeeze(0)
- else: # Prediction is size 1 x nc x H x W
- idx = torch.argmax(tensor.squeeze(0), dim=0)
-
- indexer = torch.tensor(list(classes_dict[domain].values()))[:, :3]
- return indexer[idx.long()].permute(2, 0, 1).to(torch.float32).unsqueeze(0)
-
-
-def decode_segmap_cityscapes_labels(image, nc=19):
- """Creates a label colormap used in CITYSCAPES segmentation benchmark.
- Arguments:
- image {array} -- segmented image
- (array of image size containing class at each pixel)
- Returns:
- array of size 3*nc -- A colormap for visualizing segmentation results.
- """
- colormap = np.zeros((19, 3), dtype=np.uint8)
- colormap[0] = [128, 64, 128]
- colormap[1] = [244, 35, 232]
- colormap[2] = [70, 70, 70]
- colormap[3] = [102, 102, 156]
- colormap[4] = [190, 153, 153]
- colormap[5] = [153, 153, 153]
- colormap[6] = [250, 170, 30]
- colormap[7] = [220, 220, 0]
- colormap[8] = [107, 142, 35]
- colormap[9] = [152, 251, 152]
- colormap[10] = [70, 130, 180]
- colormap[11] = [220, 20, 60]
- colormap[12] = [255, 0, 0]
- colormap[13] = [0, 0, 142]
- colormap[14] = [0, 0, 70]
- colormap[15] = [0, 60, 100]
- colormap[16] = [0, 80, 100]
- colormap[17] = [0, 0, 230]
- colormap[18] = [119, 11, 32]
-
- r = np.zeros_like(image).astype(np.uint8)
- g = np.zeros_like(image).astype(np.uint8)
- b = np.zeros_like(image).astype(np.uint8)
-
- for col in range(nc):
- idx = image == col
- r[idx] = colormap[col, 0]
- g[idx] = colormap[col, 1]
- b[idx] = colormap[col, 2]
-
- rgb = np.stack([r, g, b], axis=2)
- return rgb
-
-
-def find_closest_class(pixel, dict_classes):
- """Takes a pixel as input and finds the closest known pixel value corresponding
- to a class in dict_classes
-
- Arguments:
- pixel -- tuple pixel (R,G,B,A)
- Returns:
- tuple pixel (R,G,B,A) corresponding to a key (a class) in dict_classes
- """
- min_dist = float("inf")
- closest_pixel = None
- for pixel_value in dict_classes.keys():
- dist = np.sqrt(np.sum(np.square(np.subtract(pixel, pixel_value))))
- if dist < min_dist:
- min_dist = dist
- closest_pixel = pixel_value
- return closest_pixel
-
-
-def encode_segmap(arr, domain):
- """Change a segmentation RGBA array to a segmentation array
- with each pixel being the index of the class
- Arguments:
- numpy array -- segmented image of size (H) x (W) x (4 RGBA values)
- Returns:
- numpy array of size (1) x (H) x (W) with each pixel being the index of the class
- """
- new_arr = np.zeros((1, arr.shape[0], arr.shape[1]))
- dict_classes = {
- tuple(rgba_value): class_id
- for (class_id, rgba_value) in classes_dict[domain].items()
- }
- for i in range(arr.shape[0]):
- for j in range(arr.shape[1]):
- pixel_rgba = tuple(arr[i, j, :])
- if pixel_rgba in dict_classes.keys():
- new_arr[0, i, j] = dict_classes[pixel_rgba]
- else:
- pixel_rgba_closest = find_closest_class(pixel_rgba, dict_classes)
- new_arr[0, i, j] = dict_classes[pixel_rgba_closest]
- return new_arr
-
-
-def encode_mask_label(arr, domain):
- """Change a segmentation RGBA array to a segmentation array
- with each pixel being the index of the class
- Arguments:
- numpy array -- segmented image of size (H) x (W) x (3 RGB values)
- Returns:
- numpy array of size (1) x (H) x (W) with each pixel being the index of the class
- """
- diff = np.zeros((len(classes_dict[domain].keys()), arr.shape[0], arr.shape[1]))
- for cindex, cvalue in classes_dict[domain].items():
- diff[cindex, :, :] = np.sqrt(
- np.sum(
- np.square(arr - np.tile(cvalue, (arr.shape[0], arr.shape[1], 1))),
- axis=2,
- )
- )
- return np.expand_dims(np.argmin(diff, axis=0), axis=0)
-
-
-def transform_segmap_image_to_tensor(path, domain):
- """
- Transforms a segmentation image to a tensor of size (1) x (1) x (H) x (W)
- with each pixel being the index of the class
- """
- arr = np.array(Image.open(path).convert("RGBA"))
- arr = encode_segmap(arr, domain)
- arr = torch.from_numpy(arr).float()
- arr = arr.unsqueeze(0)
- return arr
-
-
-def save_segmap_tensors(path_to_json, path_to_dir, domain):
- """
- Loads the segmentation images mentioned in a json file, transforms them to
- tensors and saves the tensors in the target directory
-
- Args:
- path_to_json: complete path to the json file where to find the original data
- path_to_dir: path to the directory where to save the tensors as tensor_name.pt
- domain: domain of the images ("r" or "s")
-
- e.g:
- save_tensors(
- "/network/tmp1/ccai/data/climategan/seg/train_s.json",
- "/network/tmp1/ccai/data/munit_dataset/simdata/Unity11K_res640/Seg_tensors/",
- "s",
- )
- """
- ims_list = None
- if path_to_json:
- path_to_json = Path(path_to_json).resolve()
- with open(path_to_json, "r") as f:
- ims_list = yaml.safe_load(f)
-
- assert ims_list is not None
-
- for im_dict in ims_list:
- for task_name, path in im_dict.items():
- if task_name == "s":
- file_name = os.path.splitext(path)[0] # remove extension
- file_name = file_name.rsplit("/", 1)[-1] # keep only the file_name
- tensor = transform_segmap_image_to_tensor(path, domain)
- torch.save(tensor, path_to_dir + file_name + ".pt")
-
-
-def pil_image_loader(path, task):
- if Path(path).suffix == ".npy":
- arr = np.load(path).astype(np.uint8)
- elif is_image_file(path):
- # arr = imread(path).astype(np.uint8)
- arr = np.array(Image.open(path).convert("RGB"))
- else:
- raise ValueError("Unknown data type {}".format(path))
-
- # Convert from RGBA to RGB for images
- if len(arr.shape) == 3 and arr.shape[-1] == 4:
- arr = arr[:, :, 0:3]
-
- if task == "m":
- arr[arr != 0] = 1
- # Make sure mask is single-channel
- if len(arr.shape) >= 3:
- arr = arr[:, :, 0]
-
- # assert len(arr.shape) == 3, (path, task, arr.shape)
-
- return Image.fromarray(arr)
-
-
-def tensor_loader(path, task, domain, opts):
- """load data as tensors
- Args:
- path (str): path to data
- task (str)
- domain (str)
- Returns:
- [Tensor]: 1 x C x H x W
- """
- if task == "s":
- if domain == "kitti":
- return process_kitti_seg(
- path, classes_dict["kitti"], kitti_mapping, default=14
- )
- return torch.load(path)
- elif task == "d":
- if Path(path).suffix == ".npy":
- arr = np.load(path)
- else:
- arr = imread(path) # .astype(np.uint8) /!\ kitti is np.uint16
- tensor = torch.from_numpy(arr.astype(np.float32))
- tensor = get_normalized_depth_t(
- tensor,
- domain,
- normalize="d" in opts.train.pseudo.tasks,
- log=opts.gen.d.classify.enable,
- )
- tensor = tensor.unsqueeze(0)
- return tensor
-
- elif Path(path).suffix == ".npy":
- arr = np.load(path).astype(np.float32)
- elif is_image_file(path):
- arr = imread(path).astype(np.float32)
- else:
- raise ValueError("Unknown data type {}".format(path))
-
- # Convert from RGBA to RGB for images
- if len(arr.shape) == 3 and arr.shape[-1] == 4:
- arr = arr[:, :, 0:3]
-
- if task == "x":
- arr -= arr.min()
- arr /= arr.max()
- arr = np.moveaxis(arr, 2, 0)
- elif task == "s":
- arr = np.moveaxis(arr, 2, 0)
- elif task == "m":
- if arr.max() > 127:
- arr = (arr > 127).astype(arr.dtype)
- # Make sure mask is single-channel
- if len(arr.shape) >= 3:
- arr = arr[:, :, 0]
- arr = np.expand_dims(arr, 0)
-
- return torch.from_numpy(arr).unsqueeze(0)
-
-
-class OmniListDataset(Dataset):
- def __init__(self, mode, domain, opts, transform=None):
-
- self.opts = opts
- self.domain = domain
- self.mode = mode
- self.tasks = set(opts.tasks)
- self.tasks.add("x")
- if "p" in self.tasks:
- self.tasks.add("m")
-
- file_list_path = Path(opts.data.files[mode][domain])
- if "/" not in str(file_list_path):
- file_list_path = Path(opts.data.files.base) / Path(
- opts.data.files[mode][domain]
- )
-
- if file_list_path.suffix == ".json":
- self.samples_paths = self.json_load(file_list_path)
- elif file_list_path.suffix in {".yaml", ".yml"}:
- self.samples_paths = self.yaml_load(file_list_path)
- else:
- raise ValueError("Unknown file list type in {}".format(file_list_path))
-
- if opts.data.max_samples and opts.data.max_samples != -1:
- assert isinstance(opts.data.max_samples, int)
- self.samples_paths = self.samples_paths[: opts.data.max_samples]
-
- self.filter_samples()
- if opts.data.check_samples:
- print(f"Checking samples ({mode}, {domain})")
- self.check_samples()
- self.file_list_path = str(file_list_path)
- self.transform = transform
-
- def filter_samples(self):
- """
- Filter out data which is not required for the model's tasks
- as defined in opts.tasks
- """
- self.samples_paths = [
- {k: v for k, v in s.items() if k in self.tasks} for s in self.samples_paths
- ]
-
- def __getitem__(self, i):
- """Return an item in the dataset with fields:
- {
- data: transform({
- domains: values
- }),
- paths: [{task: path}],
- domain: [domain],
- mode: [train|val]
- }
- Args:
- i (int): index of item to retrieve
- Returns:
- dict: dataset item where tensors of data are in item["data"] which is a dict
- {task: tensor}
- """
- paths = self.samples_paths[i]
-
- # always apply transforms,
- # if no transform is specified, ToTensor and Normalize will be applied
-
- item = {
- "data": self.transform(
- {
- task: tensor_loader(
- env_to_path(path),
- task,
- self.domain,
- self.opts,
- )
- for task, path in paths.items()
- }
- ),
- "paths": paths,
- "domain": self.domain if self.domain != "kitti" else "s",
- "mode": self.mode,
- }
-
- return item
-
- def __len__(self):
- return len(self.samples_paths)
-
- def json_load(self, file_path):
- with open(file_path, "r") as f:
- return json.load(f)
-
- def yaml_load(self, file_path):
- with open(file_path, "r") as f:
- return yaml.safe_load(f)
-
- def check_samples(self):
- """Checks that every file listed in samples_paths actually
- exist on the file-system
- """
- for s in self.samples_paths:
- for k, v in s.items():
- assert Path(v).exists(), f"{k} {v} does not exist"
-
-
-def get_loader(mode, domain, opts):
- if (
- domain != "kitti"
- or not opts.train.kitti.pretrain
- or not opts.train.kitti.batch_size
- ):
- batch_size = opts.data.loaders.get("batch_size", 4)
- else:
- batch_size = opts.train.kitti.get("batch_size", 4)
-
- return DataLoader(
- OmniListDataset(
- mode,
- domain,
- opts,
- transform=transforms.Compose(get_transforms(opts, mode, domain)),
- ),
- batch_size=batch_size,
- shuffle=True,
- num_workers=opts.data.loaders.get("num_workers", 8),
- pin_memory=True, # faster transfer to gpu
- drop_last=True, # avoids batchnorm pbs if last batch has size 1
- )
-
-
-def get_all_loaders(opts):
- loaders = {}
- for mode in ["train", "val"]:
- loaders[mode] = {}
- for domain in opts.domains:
- if mode in opts.data.files:
- if domain in opts.data.files[mode]:
- loaders[mode][domain] = get_loader(mode, domain, opts)
- return loaders
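`encode_exact_segmap` assumes the RGB values in the image match the class palette exactly; any other pixel falls back to the default label. A toy worked example with a 2×2 image and a two-class subset of the KITTI palette:

```python
# Exact RGB -> label encoding on a tiny 2x2 segmap (toy subset of the KITTI palette).
import numpy as np

toy_classes = {0: [210, 0, 200], 1: [90, 200, 255]}  # Terrain, Sky
seg = np.array(
    [[[210, 0, 200], [90, 200, 255]],
     [[90, 200, 255], [1, 2, 3]]],      # bottom-right pixel matches no class
    dtype=np.uint8,
)

out = np.ones(seg.shape[:2]) * 14       # 14 = default / unknown label
for cindex, cvalue in toy_classes.items():
    out[np.where((seg == cvalue).all(-1))] = cindex
print(out)  # [[ 0.  1.]
            #  [ 1. 14.]]
```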
diff --git a/spaces/OAOA/DifFace/basicsr/ops/dcn/__init__.py b/spaces/OAOA/DifFace/basicsr/ops/dcn/__init__.py
deleted file mode 100644
index 32e3592f896d61b4127e09d0476381b9d55e32ff..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/ops/dcn/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from .deform_conv import (DeformConv, DeformConvPack, ModulatedDeformConv, ModulatedDeformConvPack, deform_conv,
- modulated_deform_conv)
-
-__all__ = [
- 'DeformConv', 'DeformConvPack', 'ModulatedDeformConv', 'ModulatedDeformConvPack', 'deform_conv',
- 'modulated_deform_conv'
-]
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/block_pair_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/block_pair_dataset.py
deleted file mode 100644
index ba069b46052286c531b4f9706d96788732cd2ad2..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/block_pair_dataset.py
+++ /dev/null
@@ -1,311 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import numpy as np
-import torch
-from fairseq.data import FairseqDataset
-
-
-class BlockPairDataset(FairseqDataset):
- """Break a Dataset of tokens into sentence pair blocks for next sentence
- prediction as well as masked language model.
-
- High-level logics are:
- 1. break input tensor to tensor blocks
- 2. pair the blocks with 50% next sentence and 50% random sentence
- 3. return paired blocks as well as related segment labels
-
- Args:
- dataset (~torch.utils.data.Dataset): dataset to break into blocks
- sizes: array of sentence lengths
- dictionary: dictionary for the task
- block_size: maximum block size
- break_mode: mode for breaking the corpus into block pairs. Currently we support
- 2 modes
- doc: respect document boundaries and each part of the pair should belong to one document
- none: don't respect any boundary and cut tokens evenly
- short_seq_prob: probability for generating shorter block pairs
- doc_break_size: Size for empty line separating documents. Typically 1 if
- the sentences have eos, 0 otherwise.
- """
-
- def __init__(
- self,
- dataset,
- dictionary,
- sizes,
- block_size,
- break_mode="doc",
- short_seq_prob=0.1,
- doc_break_size=1,
- ):
- super().__init__()
- self.dataset = dataset
- self.pad = dictionary.pad()
- self.eos = dictionary.eos()
- self.cls = dictionary.cls()
- self.mask = dictionary.mask()
- self.sep = dictionary.sep()
- self.break_mode = break_mode
- self.dictionary = dictionary
- self.short_seq_prob = short_seq_prob
- self.block_indices = []
-
- assert len(dataset) == len(sizes)
-
- if break_mode == "doc":
- cur_doc = []
- for sent_id, sz in enumerate(sizes):
- assert doc_break_size == 0 or sz != 0, (
- "when doc_break_size is non-zero, we expect documents to be"
- "separated by a blank line with a single eos."
- )
- # empty line as document separator
- if sz == doc_break_size:
- if len(cur_doc) == 0:
- continue
- self.block_indices.append(cur_doc)
- cur_doc = []
- else:
- cur_doc.append(sent_id)
- max_num_tokens = block_size - 3 # Account for [CLS], [SEP], [SEP]
- self.sent_pairs = []
- self.sizes = []
- for doc_id, doc in enumerate(self.block_indices):
- self._generate_sentence_pair(doc, doc_id, max_num_tokens, sizes)
- elif break_mode is None or break_mode == "none":
- # each block should have half of the block size since we are constructing block pair
- sent_length = (block_size - 3) // 2
- total_len = sum(dataset.sizes)
- length = math.ceil(total_len / sent_length)
-
- def block_at(i):
- start = i * sent_length
- end = min(start + sent_length, total_len)
- return (start, end)
-
- sent_indices = np.array([block_at(i) for i in range(length)])
- sent_sizes = np.array([e - s for s, e in sent_indices])
- dataset_index = self._sent_to_dataset_index(sent_sizes)
-
- # pair sentences
- self._pair_sentences(dataset_index)
- else:
- raise ValueError("Invalid break_mode: " + break_mode)
-
- def _pair_sentences(self, dataset_index):
- """
- Given a list of evenly cut blocks/sentences, pair these sentences with 50%
- consecutive sentences and 50% random sentences.
- This is used for the "none" break mode.
- """
- # pair sentences
- for sent_id, sent in enumerate(dataset_index):
- next_sent_label = (
- 1 if np.random.rand() > 0.5 and sent_id != len(dataset_index) - 1 else 0
- )
- if next_sent_label:
- next_sent = dataset_index[sent_id + 1]
- else:
- next_sent = dataset_index[
- self._skip_sampling(len(dataset_index), [sent_id, sent_id + 1])
- ]
- self.sent_pairs.append((sent, next_sent, next_sent_label))
-
- # The current blocks don't include the special tokens but the
- # sizes already account for this
- self.sizes.append(3 + sent[3] + next_sent[3])
-
- def _sent_to_dataset_index(self, sent_sizes):
- """
- Build index mapping block indices to the underlying dataset indices
- """
- dataset_index = []
- ds_idx, ds_remaining = -1, 0
- for to_consume in sent_sizes:
- sent_size = to_consume
- if ds_remaining == 0:
- ds_idx += 1
- ds_remaining = sent_sizes[ds_idx]
- start_ds_idx = ds_idx
- start_offset = sent_sizes[ds_idx] - ds_remaining
- while to_consume > ds_remaining:
- to_consume -= ds_remaining
- ds_idx += 1
- ds_remaining = sent_sizes[ds_idx]
- ds_remaining -= to_consume
- dataset_index.append(
- (
- start_ds_idx, # starting index in dataset
- start_offset, # starting offset within starting index
- ds_idx, # ending index in dataset
- sent_size, # sentence length
- )
- )
- assert ds_remaining == 0
- assert ds_idx == len(self.dataset) - 1
- return dataset_index
-
- def _generate_sentence_pair(self, doc, doc_id, max_num_tokens, sizes):
- """
- Go through a single document and generate sentence pairs from it
- """
- current_chunk = []
- current_length = 0
- curr = 0
- # To provide more randomness, we decrease target seq length for parts of
- # samples (10% by default). Note that max_num_tokens is the hard threshold
- # for batching and will never be changed.
- target_seq_length = max_num_tokens
- if np.random.random() < self.short_seq_prob:
- target_seq_length = np.random.randint(2, max_num_tokens)
- # loop through all sentences in document
- while curr < len(doc):
- sent_id = doc[curr]
- current_chunk.append(sent_id)
- current_length = sum(sizes[current_chunk])
- # split chunk and generate pair when exceed target_seq_length or
- # finish the loop
- if curr == len(doc) - 1 or current_length >= target_seq_length:
- # split the chunk into 2 parts
- a_end = 1
- if len(current_chunk) > 2:
- a_end = np.random.randint(1, len(current_chunk) - 1)
- sent_a = current_chunk[:a_end]
- len_a = sum(sizes[sent_a])
- # generate next sentence label, note that if there is only 1 sentence
- # in current chunk, label is always 0
- next_sent_label = (
- 1 if np.random.rand() > 0.5 and len(current_chunk) != 1 else 0
- )
- if not next_sent_label:
- # if next sentence label is 0, sample sent_b from a random doc
- target_b_length = target_seq_length - len_a
- rand_doc_id = self._skip_sampling(len(self.block_indices), [doc_id])
- random_doc = self.block_indices[rand_doc_id]
- random_start = np.random.randint(0, len(random_doc))
- sent_b = []
- len_b = 0
- for j in range(random_start, len(random_doc)):
- sent_b.append(random_doc[j])
- len_b = sum(sizes[sent_b])
- if len_b >= target_b_length:
- break
- # return the second part of the chunk since it's not used
- num_unused_segments = len(current_chunk) - a_end
- curr -= num_unused_segments
- else:
- # if next sentence label is 1, use the second part of chunk as sent_b
- sent_b = current_chunk[a_end:]
- len_b = sum(sizes[sent_b])
- # currently sent_a and sent_b may be longer than max_num_tokens,
- # truncate them and return block idx and offsets for them
- sent_a, sent_b = self._truncate_sentences(
- sent_a, sent_b, max_num_tokens
- )
- self.sent_pairs.append((sent_a, sent_b, next_sent_label))
- self.sizes.append(3 + sent_a[3] + sent_b[3])
- current_chunk = []
- curr += 1
-
- def _skip_sampling(self, total, skip_ids):
- """
- Generate a random integer which is not in skip_ids. Sample range is [0, total)
- TODO: ids in skip_ids should be consecutive, we can extend it to more generic version later
- """
- rand_id = np.random.randint(total - len(skip_ids))
- return rand_id if rand_id < min(skip_ids) else rand_id + len(skip_ids)
-
- def _truncate_sentences(self, sent_a, sent_b, max_num_tokens):
- """
- Truncate a pair of sentences to keep the total length under max_num_tokens
- Logic:
- 1. Truncate the longer sentence
- 2. Tokens to be truncated could be at the beginning or the end of the sentence
- Returns:
- Truncated sentences represented by dataset idx
- """
- len_a, len_b = sum(self.dataset.sizes[sent_a]), sum(self.dataset.sizes[sent_b])
- front_cut_a = front_cut_b = end_cut_a = end_cut_b = 0
-
- while True:
- total_length = (
- len_a + len_b - front_cut_a - front_cut_b - end_cut_a - end_cut_b
- )
- if total_length <= max_num_tokens:
- break
-
- if len_a - front_cut_a - end_cut_a > len_b - front_cut_b - end_cut_b:
- if np.random.rand() < 0.5:
- front_cut_a += 1
- else:
- end_cut_a += 1
- else:
- if np.random.rand() < 0.5:
- front_cut_b += 1
- else:
- end_cut_b += 1
-
- # calculate ds indices as well as offsets and return
- truncated_sent_a = self._cut_sentence(sent_a, front_cut_a, end_cut_a)
- truncated_sent_b = self._cut_sentence(sent_b, front_cut_b, end_cut_b)
- return truncated_sent_a, truncated_sent_b
-
- def _cut_sentence(self, sent, front_cut, end_cut):
- """
- Cut a sentence based on the number of tokens to be cut from the beginning and end
- Represent the sentence as dataset idx and return
- """
- start_ds_idx, end_ds_idx, offset = sent[0], sent[-1], 0
- target_len = sum(self.dataset.sizes[sent]) - front_cut - end_cut
- while front_cut > 0:
- if self.dataset.sizes[start_ds_idx] > front_cut:
- offset += front_cut
- break
- else:
- front_cut -= self.dataset.sizes[start_ds_idx]
- start_ds_idx += 1
- while end_cut > 0:
- if self.dataset.sizes[end_ds_idx] > end_cut:
- break
- else:
- end_cut -= self.dataset.sizes[end_ds_idx]
- end_ds_idx -= 1
- return start_ds_idx, offset, end_ds_idx, target_len
-
- def _fetch_block(self, start_ds_idx, offset, end_ds_idx, length):
- """
- Fetch a block of tokens based on its dataset idx
- """
- buffer = torch.cat(
- [self.dataset[idx] for idx in range(start_ds_idx, end_ds_idx + 1)]
- )
- s, e = offset, offset + length
- return buffer[s:e]
-
- def __getitem__(self, index):
- block1, block2, next_sent_label = self.sent_pairs[index]
- block1 = self._fetch_block(*block1)
- block2 = self._fetch_block(*block2)
- return block1, block2, next_sent_label
-
- def __len__(self):
- return len(self.sizes)
-
- @property
- def supports_prefetch(self):
- return getattr(self.dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- prefetch_idx = set()
- for index in indices:
- for block1, block2, _ in [self.sent_pairs[index]]:
- for ds_idx in range(block1[0], block1[2] + 1):
- prefetch_idx.add(ds_idx)
- for ds_idx in range(block2[0], block2[2] + 1):
- prefetch_idx.add(ds_idx)
- self.dataset.prefetch(prefetch_idx)
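
The `_skip_sampling` helper above draws a random document index while excluding a consecutive run of ids: it samples from a range shrunk by `len(skip_ids)` and shifts any draw that lands at or beyond the excluded block. A minimal, self-contained sketch of the same trick (function and variable names here are illustrative, not part of fairseq):

```python
import numpy as np

def skip_sampling(total, skip_ids):
    """Sample an integer in [0, total) that is not in the consecutive skip_ids."""
    rand_id = np.random.randint(total - len(skip_ids))
    # values below the excluded block are kept; values at or above it are shifted past it
    return rand_id if rand_id < min(skip_ids) else rand_id + len(skip_ids)

# e.g. pick a random "other" document while skipping indices 4-6
draws = [skip_sampling(10, [4, 5, 6]) for _ in range(1000)]
assert not any(d in (4, 5, 6) for d in draws)
```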
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/ema/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/ema/__init__.py
deleted file mode 100644
index 503ceaa609b092e48bd32a0031f4e2ffb875483f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/ema/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import importlib
-import os
-
-from .ema import EMA
-
-
-def build_ema(model, cfg, device):
- return EMA(model, cfg, device)
-
-
-# automatically import any Python files in the models/ema/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- file_name = file[: file.find(".py")]
- importlib.import_module("fairseq.models.ema." + file_name)
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/simultaneous_translation/docs/ende-mma.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/simultaneous_translation/docs/ende-mma.md
deleted file mode 100644
index 241d604a3b31a37755da68aad6ff47d46891d3fc..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/simultaneous_translation/docs/ende-mma.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Simultaneous Machine Translation
-
-This directory contains the code for the paper [Monotonic Multihead Attention](https://openreview.net/forum?id=Hyg96gBKPS)
-
-## Prepare Data
-
-[Please follow the instructions to download and preprocess the WMT'15 En-De dataset.](https://github.com/pytorch/fairseq/tree/simulastsharedtask/examples/translation#prepare-wmt14en2desh)
-
-Another example of training an English-to-Japanese model can be found [here](docs/enja.md).
-
-## Training
-
-- MMA-IL
-
-```shell
-fairseq-train \
- data-bin/wmt15_en_de_32k \
- --simul-type infinite_lookback \
-    --user-dir $FAIRSEQ/examples/simultaneous_translation \
- --mass-preservation \
- --criterion latency_augmented_label_smoothed_cross_entropy \
- --latency-weight-avg 0.1 \
- --max-update 50000 \
-    --arch transformer_monotonic_iwslt_de_en \
- --optimizer adam --adam-betas '(0.9, 0.98)' \
- --lr-scheduler 'inverse_sqrt' \
- --warmup-init-lr 1e-7 --warmup-updates 4000 \
- --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\
- --dropout 0.3 \
- --label-smoothing 0.1\
- --max-tokens 3584
-```
-
-- MMA-H
-
-```shell
-fairseq-train \
- data-bin/wmt15_en_de_32k \
- --simul-type hard_aligned \
-    --user-dir $FAIRSEQ/examples/simultaneous_translation \
- --mass-preservation \
- --criterion latency_augmented_label_smoothed_cross_entropy \
- --latency-weight-var 0.1 \
- --max-update 50000 \
-    --arch transformer_monotonic_iwslt_de_en \
- --optimizer adam --adam-betas '(0.9, 0.98)' \
- --lr-scheduler 'inverse_sqrt' \
- --warmup-init-lr 1e-7 --warmup-updates 4000 \
- --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\
- --dropout 0.3 \
- --label-smoothing 0.1\
- --max-tokens 3584
-```
-
-- wait-k
-
-```shell
-fairseq-train \
- data-bin/wmt15_en_de_32k \
- --simul-type wait-k \
- --waitk-lagging 3 \
-    --user-dir $FAIRSEQ/examples/simultaneous_translation \
- --mass-preservation \
- --criterion latency_augmented_label_smoothed_cross_entropy \
- --max-update 50000 \
-    --arch transformer_monotonic_iwslt_de_en \
- --optimizer adam --adam-betas '(0.9, 0.98)' \
- --lr-scheduler 'inverse_sqrt' \
- --warmup-init-lr 1e-7 --warmup-updates 4000 \
- --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\
- --dropout 0.3 \
- --label-smoothing 0.1\
- --max-tokens 3584
-```
diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/simple_kmeans/README.md b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/simple_kmeans/README.md
deleted file mode 100644
index cd17da3b3e6f3e39083f7a76a56ff46c3a63b929..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/simple_kmeans/README.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# Sharded Feature Extraction and K-means Application
-
-This folder contains scripts for preparing HUBERT labels from tsv files. The
-steps are:
-1. feature extraction
-2. k-means clustering
-3. k-means application
-
-
-## Data preparation
-
-`*.tsv` files contain a list of audio files: the first line is the root directory, and
-the following lines are the subpaths of the audio files:
-```
-<root-dir>
-<audio-path-1>
-<audio-path-2>
-...
-```
-
-
-## Feature extraction
-
-### MFCC feature
-Suppose the tsv file is at `${tsv_dir}/${split}.tsv`. To extract 39-D
-mfcc+delta+ddelta features for the 1st iteration HUBERT training, run:
-```sh
-python dump_mfcc_feature.py ${tsv_dir} ${split} ${nshard} ${rank} ${feat_dir}
-```
-This would shard the tsv file into `${nshard}` shards and extract features for the
-`${rank}`-th shard, where rank is an integer in `[0, nshard-1]`. Features would
-be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`.
-
-
-### HUBERT feature
-To extract features from the `${layer}`-th transformer layer of a trained
-HUBERT model saved at `${ckpt_path}`, run:
-```sh
-python dump_hubert_feature.py ${tsv_dir} ${split} ${ckpt_path} ${layer} ${nshard} ${rank} ${feat_dir}
-```
-Features would also be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`.
-
-- If you run out of memory, decrease the chunk size with `--max_chunk`
-
-
-## K-means clustering
-To fit a k-means model with `${n_clusters}` clusters on 10% of the `${split}` data, run
-```sh
-python learn_kmeans.py ${feat_dir} ${split} ${nshard} ${km_path} ${n_cluster} --percent 0.1
-```
-This saves the k-means model to `${km_path}`.
-
-- set `--percent -1` to use all data
-- more k-means options can be listed with the `-h` flag
-
-
-## K-means application
-To apply a trained k-means model `${km_path}` to obtain labels for `${split}`, run
-```sh
-python dump_km_label.py ${feat_dir} ${split} ${km_path} ${nshard} ${rank} ${lab_dir}
-```
-This would extract labels for the `${rank}`-th shard out of `${nshard}` shards
-and dump them to `${lab_dir}/${split}_${rank}_${nshard}.km`
-
-
-Finally, merge shards for `${split}` by running
-```sh
-for rank in $(seq 0 $((nshard - 1))); do
- cat $lab_dir/${split}_${rank}_${nshard}.km
-done > $lab_dir/${split}.km
-```
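
The pipeline above boils down to: fit a k-means model on a sample of the extracted features, then assign a cluster id to every frame of every shard. A rough standalone sketch of those two steps with scikit-learn, using random arrays in place of real MFCC/HuBERT features (the file names, shapes, and cluster count below are illustrative only, not the actual scripts):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# stand-in for learn_kmeans.py: fit on a (here random) sample of 39-D MFCC-like features
feats = np.random.randn(10_000, 39).astype(np.float32)
km = MiniBatchKMeans(n_clusters=100, batch_size=1024).fit(feats)

# stand-in for dump_km_label.py: label one feature shard and dump the frame labels
shard_feats = np.random.randn(2_000, 39).astype(np.float32)
labels = km.predict(shard_feats)
with open("train_0_1.km", "w") as f:
    # the real .km files hold one space-separated label sequence per utterance
    f.write(" ".join(map(str, labels)) + "\n")
```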
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/backtranslation_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/backtranslation_dataset.py
deleted file mode 100644
index 8f70c90df3d237077537993e125d366c95292f1a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/backtranslation_dataset.py
+++ /dev/null
@@ -1,165 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from fairseq import utils
-
-from . import FairseqDataset
-
-
-def backtranslate_samples(samples, collate_fn, generate_fn, cuda=True):
- """Backtranslate a list of samples.
-
- Given an input (*samples*) of the form:
-
- [{'id': 1, 'source': 'hallo welt'}]
-
- this will return:
-
- [{'id': 1, 'source': 'hello world', 'target': 'hallo welt'}]
-
- Args:
- samples (List[dict]): samples to backtranslate. Individual samples are
- expected to have a 'source' key, which will become the 'target'
- after backtranslation.
- collate_fn (callable): function to collate samples into a mini-batch
- generate_fn (callable): function to generate backtranslations
- cuda (bool): use GPU for generation (default: ``True``)
-
- Returns:
- List[dict]: an updated list of samples with a backtranslated source
- """
- collated_samples = collate_fn(samples)
- s = utils.move_to_cuda(collated_samples) if cuda else collated_samples
- generated_sources = generate_fn(s)
-
- id_to_src = {sample["id"]: sample["source"] for sample in samples}
-
- # Go through each tgt sentence in batch and its corresponding best
- # generated hypothesis and create a backtranslation data pair
- # {id: id, source: generated backtranslation, target: original tgt}
- return [
- {
- "id": id.item(),
- "target": id_to_src[id.item()],
- "source": hypos[0]["tokens"].cpu(),
- }
- for id, hypos in zip(collated_samples["id"], generated_sources)
- ]
-
-
-class BacktranslationDataset(FairseqDataset):
- """
- Sets up a backtranslation dataset which takes a tgt batch, generates
- a src using a tgt-src backtranslation function (*backtranslation_fn*),
- and returns the corresponding `{generated src, input tgt}` batch.
-
- Args:
- tgt_dataset (~fairseq.data.FairseqDataset): the dataset to be
- backtranslated. Only the source side of this dataset will be used.
- After backtranslation, the source sentences in this dataset will be
- returned as the targets.
- src_dict (~fairseq.data.Dictionary): the dictionary of backtranslated
- sentences.
- tgt_dict (~fairseq.data.Dictionary, optional): the dictionary of
- sentences to be backtranslated.
- backtranslation_fn (callable, optional): function to call to generate
- backtranslations. This is typically the `generate` method of a
- :class:`~fairseq.sequence_generator.SequenceGenerator` object.
- Pass in None when it is not available at initialization time, and
- use set_backtranslation_fn function to set it when available.
- output_collater (callable, optional): function to call on the
- backtranslated samples to create the final batch
- (default: ``tgt_dataset.collater``).
- cuda: use GPU for generation
- """
-
- def __init__(
- self,
- tgt_dataset,
- src_dict,
- tgt_dict=None,
- backtranslation_fn=None,
- output_collater=None,
- cuda=True,
- **kwargs
- ):
- self.tgt_dataset = tgt_dataset
- self.backtranslation_fn = backtranslation_fn
- self.output_collater = (
- output_collater if output_collater is not None else tgt_dataset.collater
- )
- self.cuda = cuda if torch.cuda.is_available() else False
- self.src_dict = src_dict
- self.tgt_dict = tgt_dict
-
- def __getitem__(self, index):
- """
- Returns a single sample from *tgt_dataset*. Note that backtranslation is
- not applied in this step; use :func:`collater` instead to backtranslate
- a batch of samples.
- """
- return self.tgt_dataset[index]
-
- def __len__(self):
- return len(self.tgt_dataset)
-
- def set_backtranslation_fn(self, backtranslation_fn):
- self.backtranslation_fn = backtranslation_fn
-
- def collater(self, samples):
- """Merge and backtranslate a list of samples to form a mini-batch.
-
- Using the samples from *tgt_dataset*, load a collated target sample to
- feed to the backtranslation model. Then take the backtranslation with
- the best score as the source and the original input as the target.
-
- Note: we expect *tgt_dataset* to provide a function `collater()` that
- will collate samples into the format expected by *backtranslation_fn*.
- After backtranslation, we will feed the new list of samples (i.e., the
- `(backtranslated source, original source)` pairs) to *output_collater*
- and return the result.
-
- Args:
- samples (List[dict]): samples to backtranslate and collate
-
- Returns:
- dict: a mini-batch with keys coming from *output_collater*
- """
- if samples[0].get("is_dummy", False):
- return samples
- samples = backtranslate_samples(
- samples=samples,
- collate_fn=self.tgt_dataset.collater,
- generate_fn=(lambda net_input: self.backtranslation_fn(net_input)),
- cuda=self.cuda,
- )
- return self.output_collater(samples)
-
- def num_tokens(self, index):
- """Just use the tgt dataset num_tokens"""
- return self.tgt_dataset.num_tokens(index)
-
- def ordered_indices(self):
- """Just use the tgt dataset ordered_indices"""
- return self.tgt_dataset.ordered_indices()
-
- def size(self, index):
- """Return an example's size as a float or tuple. This value is used
- when filtering a dataset with ``--max-positions``.
-
- Note: we use *tgt_dataset* to approximate the length of the source
- sentence, since we do not know the actual length until after
- backtranslation.
- """
- tgt_size = self.tgt_dataset.size(index)[0]
- return (tgt_size, tgt_size)
-
- @property
- def supports_prefetch(self):
- return getattr(self.tgt_dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- return self.tgt_dataset.prefetch(indices)
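
To make the data flow of the collater above concrete, here is a toy run of the backtranslation pairing with a dummy `generate_fn` standing in for the sequence generator (the collate and generate callables below are simplified placeholders, not the fairseq API):

```python
import torch

def backtranslate_samples(samples, collate_fn, generate_fn):
    collated = collate_fn(samples)
    hypos = generate_fn(collated)
    id_to_src = {s["id"]: s["source"] for s in samples}
    # the generated hypothesis becomes the new source; the original sentence becomes the target
    return [
        {"id": i.item(), "source": h[0]["tokens"], "target": id_to_src[i.item()]}
        for i, h in zip(collated["id"], hypos)
    ]

samples = [{"id": 0, "source": torch.tensor([5, 6, 7])}]
collate_fn = lambda batch: {"id": torch.tensor([s["id"] for s in batch])}
generate_fn = lambda batch: [[{"tokens": torch.tensor([9, 8])}]]  # dummy "back-translation"
print(backtranslate_samples(samples, collate_fn, generate_fn))
# [{'id': 0, 'source': tensor([9, 8]), 'target': tensor([5, 6, 7])}]
```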
diff --git a/spaces/OkamiFeng/Bark-with-Voice-Cloning/util/helper.py b/spaces/OkamiFeng/Bark-with-Voice-Cloning/util/helper.py
deleted file mode 100644
index 185613661a2f450e55a5d2add1a1e75bc08f5c19..0000000000000000000000000000000000000000
--- a/spaces/OkamiFeng/Bark-with-Voice-Cloning/util/helper.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import os
-from datetime import datetime
-from mutagen.wave import WAVE
-from mutagen.id3._frames import *
-
-def create_filename(path, seed, name, extension):
- now = datetime.now()
-    date_str = now.strftime("%m-%d-%Y")
- outputs_folder = os.path.join(os.getcwd(), path)
- if not os.path.exists(outputs_folder):
- os.makedirs(outputs_folder)
-
- sub_folder = os.path.join(outputs_folder, date_str)
- if not os.path.exists(sub_folder):
- os.makedirs(sub_folder)
-
- time_str = now.strftime("%H-%M-%S")
-    if seed is None:
- file_name = f"{name}_{time_str}{extension}"
- else:
- file_name = f"{name}_{time_str}_s{seed}{extension}"
- return os.path.join(sub_folder, file_name)
-
-
-def add_id3_tag(filename, text, speakername, seed):
- audio = WAVE(filename)
-    if speakername is None:
- speakername = "Unconditional"
-
- # write id3 tag with text truncated to 60 chars, as a precaution...
- audio["TIT2"] = TIT2(encoding=3, text=text[:60])
- audio["TPE1"] = TPE1(encoding=3, text=f"Voice {speakername} using Seed={seed}")
- audio["TPUB"] = TPUB(encoding=3, text="Bark by Suno AI")
- audio["COMMENT"] = COMM(encoding=3, text="Generated with Bark GUI - Text-Prompted Generative Audio Model. Visit https://github.com/C0untFloyd/bark-gui")
- audio.save()
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/__init__.py
deleted file mode 100644
index ab3c63b5b456a7fb878757e25768a3634f76ae5b..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/__init__.py
+++ /dev/null
@@ -1,14 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from fvcore.transforms.transform import Transform, TransformList # order them first
-from fvcore.transforms.transform import *
-from .transform import *
-from .augmentation import *
-from .augmentation_impl import *
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
-
-
-from detectron2.utils.env import fixup_module_metadata
-
-fixup_module_metadata(__name__, globals(), __all__)
-del fixup_module_metadata
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/solver/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/solver/__init__.py
deleted file mode 100644
index 9a2dbd35bb24f0d4a979bc8f304142376d87e7ec..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/solver/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .build import build_lr_scheduler, build_optimizer, get_default_optimizer_params
-from .lr_scheduler import WarmupCosineLR, WarmupMultiStepLR, LRMultiplier, WarmupParamScheduler
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/OptimalScale/Robin-33b/lmflow/utils/data_utils.py b/spaces/OptimalScale/Robin-33b/lmflow/utils/data_utils.py
deleted file mode 100644
index 25ff71ef3d5e953e7dd26fb595e5b35a3b0a273e..0000000000000000000000000000000000000000
--- a/spaces/OptimalScale/Robin-33b/lmflow/utils/data_utils.py
+++ /dev/null
@@ -1,212 +0,0 @@
-"""The program includes several functions: setting a random seed,
-loading data from a JSON file, batching data, and extracting answers from generated text.
-"""
-
-import random
-import numpy as np
-import torch
-import json
-import re
-def set_random_seed(seed: int):
- """
- Set the random seed for `random`, `numpy`, `torch`, `torch.cuda`.
-
- Parameters
- ------------
- seed : int
- The default seed.
-
- """
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- if torch.cuda.is_available():
- torch.cuda.manual_seed_all(seed)
-
-def load_data(file_name: str):
- """
- Load data with file name.
-
- Parameters
- ------------
- file_name : str.
- The dataset file name.
-
- Returns
- ------------
- inputs : list.
- The input texts of the dataset.
-    outputs : list.
-        The output texts of the dataset.
- len : int.
- The length of the dataset.
- """
- inputs = []
- outputs = []
- type = ""
- with open(file_name, encoding='utf-8') as f:
- json_data = json.load(f)
- type = json_data["type"]
- for line in json_data["instances"]:
- inputs.append(line["input"])
- outputs.append(line["output"])
-
-    print(f"Loaded dataset {file_name} successfully.\n")
-    print(f"Type: {type}, data size: {len(outputs)}")
-
- return inputs, outputs, len(outputs)
-
-def batchlize(examples: list, batch_size: int, random_shuffle: bool):
- """
- Convert examples to a dataloader.
-
- Parameters
- ------------
- examples : list.
- Data list.
- batch_size : int.
-
- random_shuffle : bool
- If true, the dataloader shuffle the training data.
-
- Returns
- ------------
- dataloader:
- Dataloader with batch generator.
- """
- size = 0
- dataloader = []
- length = len(examples)
- if (random_shuffle):
- random.shuffle(examples)
- while size < length:
- if length - size > batch_size:
- dataloader.append(examples[size : size+batch_size])
- size += batch_size
- else:
- dataloader.append(examples[size : size+(length-size)])
- size += (length - size)
- return dataloader
-
-
-
-def answer_extraction(response, answer_type=None):  # use this function to extract answers from generated text
-
- """
-    Use this function to extract answers from generated text.
-
-    Parameters
-    ------------
-    response : str
-        plain string response.
-    answer_type : str, optional
-        The dataset / answer format, e.g. "gsm8k", "multiple_choice" or "text".
-
-
- Returns
- ------------
- answer:
-        Decoded answer (such as A, B, C, D, E for multiple-choice QA).
- """
-
- # temp = response["generated_text"]
- temp = response
- if answer_type in ("gsm8k", "svamp", "asdiv", "addsub", "singleeq", "multiarith", "math"):
- temp = temp.replace(",", "")
- temp = [s for s in re.findall(r'-?\d+\.?\d*', temp)]
- elif answer_type in ("aqua", "csqa", "multiple_choice"):
- temp = re.findall(r'A|B|C|D|E', temp)
- elif answer_type in ("strategyqa", "coin_flip"):
- temp = temp.lower()
- temp = re.sub("\"|\'|\n|\.|\s|\:|\,"," ", temp)
- temp = temp.split(" ")
- temp = [i for i in temp if i in ("yes", "no")]
-    elif answer_type == "last_letters":
- temp = re.sub("\"|\'|\n|\.|\s","", temp)
- temp = [temp]
- elif answer_type in ("pubmedqa", "binary_choice"):
- # pattern = "Output: (yes|no|maybe)"
- # sttr = re.search(pattern, temp)
- # answer = sttr.group(0)[8:] if sttr is not None else "N/A"
- pattern = "(answer|Answer|ANSWER|output|Output|OUTPUT|A): \(*(yes|Yes|YES|no|No|NO|maybe|Maybe|MAYBE)"
- sttr = re.search(pattern, temp)
- if sttr is not None:
- mid_answer = sttr.group(0)
- mid_answer = mid_answer.split(":")[-1].strip()
- answer = mid_answer.lower()
- else:
- pattern = "(yes|Yes|YES|no|No|NO|maybe|Maybe|MAYBE)(\.|\s)"
- sttr = re.search(pattern, temp)
- if sttr is not None:
- answer = sttr.group(0)[:-1].lower()
- else:
- answer = "N/A"
- return answer
- elif answer_type == "medmcqa":
- # pattern = "Output: (A|B|C|D)."
- # sttr = re.search(pattern, temp)
- # answer = sttr.group(0)[8:-1].lower() if sttr is not None else "N/A"
- pattern = "(answer|Answer|ANSWER|output|Output|OUTPUT|A): \(*(A|B|C|D|a|b|c|d)"
- sttr = re.search(pattern, temp)
- if sttr is not None:
- mid_answer = sttr.group(0)
- answer = mid_answer[-1].lower()
- else:
- pattern = "\(*(A|B|C|D|a|b|c|d)\)*(\.|\s)"
- sttr = re.search(pattern, temp)
- if sttr is not None:
- if '(' in sttr.group(0):
- answer = sttr.group(0)[1].lower()
- else:
- answer = sttr.group(0)[0].lower()
- else:
- answer = "N/A"
- return answer
-
- elif answer_type == "usmle":
- # pattern = "Output: (A|B|C|D)."
- # sttr = re.search(pattern, temp)
- # answer = sttr.group(0)[8:-1].lower() if sttr is not None else "N/A"
- pattern = "(Answer|Output|A): \(*(A|B|C|D|a|b|c|d)"
- sttr = re.search(pattern, temp)
- if sttr is not None:
- mid_answer = sttr.group(0)
- answer = mid_answer[-1].lower()
- else:
- pattern = "\(*(A|B|C|D|a|b|c|d)\)*(\.|\s)"
- sttr = re.search(pattern, temp)
- if sttr is not None:
- if '(' in sttr.group(0):
- answer = sttr.group(0)[1].lower()
- else:
- answer = sttr.group(0)[0].lower()
- else:
- answer = "N/A"
- return answer
- elif answer_type == "text":
- return response
- else:
- raise NotImplementedError(f"Unsupported answer type: {answer_type}")
-
- if len(temp) != 0:
- answer = temp[-1]
- # if there is . at the end of answer, remove it
- # e.g. answer = 64.
- if answer != "":
- if answer[-1] == ".":
- answer = answer[:-1]
-
- # round the answer to nearest integer
- if answer_type in ("gsm8k", "svamp"):
- try:
- answer = str(round(float(answer)))
- except:
- answer = "" # no sol or sol doesn't have valid format
-        elif answer_type == "last_letters":
- try:
- answer = answer[-args.concat_length:]
- except:
- answer = ""
- else:
- answer = ""
- return answer
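
For the multiple-choice branches of `answer_extraction` above, the extraction reduces to keeping the last option letter that the regular expression finds in the generation. A tiny standalone illustration of that behaviour (not the lmflow API itself):

```python
import re

def extract_choice(response: str) -> str:
    """Return the last A-E option mentioned in a generated answer, or '' if none."""
    matches = re.findall(r"A|B|C|D|E", response)
    return matches[-1] if matches else ""

print(extract_choice("The reasoning points to (B), but the best answer is D."))  # D
print(extract_choice("no option given"))                                         # (empty string)
```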
diff --git a/spaces/OswaldDev/Image-enhancement/models/network_swinir.py b/spaces/OswaldDev/Image-enhancement/models/network_swinir.py
deleted file mode 100644
index 702e1fd2f5f7dd11feda3ab1999e65dd293250fb..0000000000000000000000000000000000000000
--- a/spaces/OswaldDev/Image-enhancement/models/network_swinir.py
+++ /dev/null
@@ -1,866 +0,0 @@
-# -----------------------------------------------------------------------------------
-# SwinIR: Image Restoration Using Swin Transformer, https://arxiv.org/abs/2108.10257
-# Originally Written by Ze Liu, Modified by Jingyun Liang.
-# -----------------------------------------------------------------------------------
-
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
-
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
-
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class WindowAttention(nn.Module):
- r""" Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
-
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
-
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
- def extra_repr(self) -> str:
- return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'
-
- def flops(self, N):
- # calculate flops for 1 window with token length of N
- flops = 0
- # qkv = self.qkv(x)
- flops += N * self.dim * 3 * self.dim
- # attn = (q @ k.transpose(-2, -1))
- flops += self.num_heads * N * (self.dim // self.num_heads) * N
- # x = (attn @ v)
- flops += self.num_heads * N * N * (self.dim // self.num_heads)
- # x = self.proj(x)
- flops += N * self.dim * self.dim
- return flops
-
-
-class SwinTransformerBlock(nn.Module):
- r""" Swin Transformer Block.
-
- Args:
- dim (int): Number of input channels.
-        input_resolution (tuple[int]): Input resolution.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- if min(self.input_resolution) <= self.window_size:
- # if window size is larger than input resolution, we don't partition windows
- self.shift_size = 0
- self.window_size = min(self.input_resolution)
- assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- if self.shift_size > 0:
- attn_mask = self.calculate_mask(self.input_resolution)
- else:
- attn_mask = None
-
- self.register_buffer("attn_mask", attn_mask)
-
- def calculate_mask(self, x_size):
- # calculate attention mask for SW-MSA
- H, W = x_size
- img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
- return attn_mask
-
- def forward(self, x, x_size):
- H, W = x_size
- B, L, C = x.shape
- # assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- else:
- shifted_x = x
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
-        # W-MSA/SW-MSA (recompute the attention mask when the test image size differs from the training resolution)
- if self.input_resolution == x_size:
- attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
- else:
- attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device))
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
- def extra_repr(self) -> str:
- return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
- f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
-
- def flops(self):
- flops = 0
- H, W = self.input_resolution
- # norm1
- flops += self.dim * H * W
- # W-MSA/SW-MSA
- nW = H * W / self.window_size / self.window_size
- flops += nW * self.attn.flops(self.window_size * self.window_size)
- # mlp
- flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
- # norm2
- flops += self.dim * H * W
- return flops
-
-
-class PatchMerging(nn.Module):
- r""" Patch Merging Layer.
-
- Args:
- input_resolution (tuple[int]): Resolution of input feature.
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.input_resolution = input_resolution
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x):
- """
- x: B, H*W, C
- """
- H, W = self.input_resolution
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-        assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) is not even."
-
- x = x.view(B, H, W, C)
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
- def extra_repr(self) -> str:
- return f"input_resolution={self.input_resolution}, dim={self.dim}"
-
- def flops(self):
- H, W = self.input_resolution
- flops = H * W * self.dim
- flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim
- return flops
-
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
-
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- depth (int): Number of blocks.
- num_heads (int): Number of attention heads.
- window_size (int): Local window size.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self, dim, input_resolution, depth, num_heads, window_size,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False):
-
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
- num_heads=num_heads, window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop, attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, x_size):
- for blk in self.blocks:
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, x_size)
- else:
- x = blk(x, x_size)
- if self.downsample is not None:
- x = self.downsample(x)
- return x
-
- def extra_repr(self) -> str:
- return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
-
- def flops(self):
- flops = 0
- for blk in self.blocks:
- flops += blk.flops()
- if self.downsample is not None:
- flops += self.downsample.flops()
- return flops
-
-
-class RSTB(nn.Module):
- """Residual Swin Transformer Block (RSTB).
-
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- depth (int): Number of blocks.
- num_heads (int): Number of attention heads.
- window_size (int): Local window size.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- img_size: Input image size.
- patch_size: Patch size.
- resi_connection: The convolutional block before residual connection.
- """
-
- def __init__(self, dim, input_resolution, depth, num_heads, window_size,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
- img_size=224, patch_size=4, resi_connection='1conv'):
- super(RSTB, self).__init__()
-
- self.dim = dim
- self.input_resolution = input_resolution
-
- self.residual_group = BasicLayer(dim=dim,
- input_resolution=input_resolution,
- depth=depth,
- num_heads=num_heads,
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop, attn_drop=attn_drop,
- drop_path=drop_path,
- norm_layer=norm_layer,
- downsample=downsample,
- use_checkpoint=use_checkpoint)
-
- if resi_connection == '1conv':
- self.conv = nn.Conv2d(dim, dim, 3, 1, 1)
- elif resi_connection == '3conv':
- # to save parameters and memory
- self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(dim // 4, dim // 4, 1, 1, 0),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(dim // 4, dim, 3, 1, 1))
-
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
- norm_layer=None)
-
- self.patch_unembed = PatchUnEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
- norm_layer=None)
-
- def forward(self, x, x_size):
- return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x
-
- def flops(self):
- flops = 0
- flops += self.residual_group.flops()
- H, W = self.input_resolution
- flops += H * W * self.dim * self.dim * 9
- flops += self.patch_embed.flops()
- flops += self.patch_unembed.flops()
-
- return flops
-
-
-class PatchEmbed(nn.Module):
- r""" Image to Patch Embedding
-
- Args:
- img_size (int): Image size. Default: 224.
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
- self.img_size = img_size
- self.patch_size = patch_size
- self.patches_resolution = patches_resolution
- self.num_patches = patches_resolution[0] * patches_resolution[1]
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- x = x.flatten(2).transpose(1, 2) # B Ph*Pw C
- if self.norm is not None:
- x = self.norm(x)
- return x
-
- def flops(self):
- flops = 0
- H, W = self.img_size
- if self.norm is not None:
- flops += H * W * self.embed_dim
- return flops
-
-
-class PatchUnEmbed(nn.Module):
- r""" Image to Patch Unembedding
-
- Args:
- img_size (int): Image size. Default: 224.
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
- self.img_size = img_size
- self.patch_size = patch_size
- self.patches_resolution = patches_resolution
- self.num_patches = patches_resolution[0] * patches_resolution[1]
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- def forward(self, x, x_size):
- B, HW, C = x.shape
-        x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1])  # B, C, H, W
- return x
-
- def flops(self):
- flops = 0
- return flops
-
-
-class Upsample(nn.Sequential):
- """Upsample module.
-
- Args:
- scale (int): Scale factor. Supported scales: 2^n and 3.
- num_feat (int): Channel number of intermediate features.
- """
-
- def __init__(self, scale, num_feat):
- m = []
- if (scale & (scale - 1)) == 0: # scale = 2^n
- for _ in range(int(math.log(scale, 2))):
- m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(2))
- elif scale == 3:
- m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(3))
- else:
- raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.')
- super(Upsample, self).__init__(*m)
-
-
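
As a concrete shape check for the `Upsample` module defined above: with `scale = 4` it stacks two conv + `PixelShuffle(2)` stages, each of which expands the channels four-fold and then folds them back into a 2x larger spatial grid. A minimal sketch (the channel count and input size are arbitrary):

```python
import torch
import torch.nn as nn

up = nn.Sequential(
    nn.Conv2d(64, 4 * 64, 3, 1, 1), nn.PixelShuffle(2),   # (B, 64, H, W)   -> (B, 64, 2H, 2W)
    nn.Conv2d(64, 4 * 64, 3, 1, 1), nn.PixelShuffle(2),   # (B, 64, 2H, 2W) -> (B, 64, 4H, 4W)
)
x = torch.randn(1, 64, 16, 16)
print(up(x).shape)  # torch.Size([1, 64, 64, 64])
```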
-class UpsampleOneStep(nn.Sequential):
-    """UpsampleOneStep module (unlike Upsample, it always uses a single conv followed by a single pixelshuffle).
- Used in lightweight SR to save parameters.
-
- Args:
- scale (int): Scale factor. Supported scales: 2^n and 3.
- num_feat (int): Channel number of intermediate features.
-
- """
-
- def __init__(self, scale, num_feat, num_out_ch, input_resolution=None):
- self.num_feat = num_feat
- self.input_resolution = input_resolution
- m = []
- m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1))
- m.append(nn.PixelShuffle(scale))
- super(UpsampleOneStep, self).__init__(*m)
-
- def flops(self):
- H, W = self.input_resolution
- flops = H * W * self.num_feat * 3 * 9
- return flops
-
-
-class SwinIR(nn.Module):
- r""" SwinIR
- A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer.
-
- Args:
- img_size (int | tuple(int)): Input image size. Default 64
- patch_size (int | tuple(int)): Patch size. Default: 1
- in_chans (int): Number of input image channels. Default: 3
- embed_dim (int): Patch embedding dimension. Default: 96
- depths (tuple(int)): Depth of each Swin Transformer layer.
- num_heads (tuple(int)): Number of attention heads in different layers.
- window_size (int): Window size. Default: 7
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
- drop_rate (float): Dropout rate. Default: 0
- attn_drop_rate (float): Attention dropout rate. Default: 0
- drop_path_rate (float): Stochastic depth rate. Default: 0.1
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
- patch_norm (bool): If True, add normalization after patch embedding. Default: True
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
- upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction
- img_range: Image range. 1. or 255.
-        upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None
- resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
- """
-
- def __init__(self, img_size=64, patch_size=1, in_chans=3,
- embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6],
- window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
- norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
- use_checkpoint=False, upscale=2, img_range=1., upsampler='', resi_connection='1conv',
- **kwargs):
- super(SwinIR, self).__init__()
- num_in_ch = in_chans
- num_out_ch = in_chans
- num_feat = 64
- self.img_range = img_range
- if in_chans == 3:
- rgb_mean = (0.4488, 0.4371, 0.4040)
- self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1)
- else:
- self.mean = torch.zeros(1, 1, 1, 1)
- self.upscale = upscale
- self.upsampler = upsampler
- self.window_size = window_size
-
- #####################################################################################################
- ################################### 1, shallow feature extraction ###################################
- self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)
-
- #####################################################################################################
- ################################### 2, deep feature extraction ######################################
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.num_features = embed_dim
- self.mlp_ratio = mlp_ratio
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
- num_patches = self.patch_embed.num_patches
- patches_resolution = self.patch_embed.patches_resolution
- self.patches_resolution = patches_resolution
-
- # merge non-overlapping patches into image
- self.patch_unembed = PatchUnEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
-
- # absolute position embedding
- if self.ape:
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
-
- # build Residual Swin Transformer blocks (RSTB)
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = RSTB(dim=embed_dim,
- input_resolution=(patches_resolution[0],
- patches_resolution[1]),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=self.mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results
- norm_layer=norm_layer,
- downsample=None,
- use_checkpoint=use_checkpoint,
- img_size=img_size,
- patch_size=patch_size,
- resi_connection=resi_connection
-
- )
- self.layers.append(layer)
- self.norm = norm_layer(self.num_features)
-
- # build the last conv layer in deep feature extraction
- if resi_connection == '1conv':
- self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
- elif resi_connection == '3conv':
- # to save parameters and memory
- self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1))
-
- #####################################################################################################
- ################################ 3, high quality image reconstruction ################################
- if self.upsampler == 'pixelshuffle':
- # for classical SR
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.upsample = Upsample(upscale, num_feat)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- elif self.upsampler == 'pixelshuffledirect':
- # for lightweight SR (to save parameters)
- self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch,
- (patches_resolution[0], patches_resolution[1]))
- elif self.upsampler == 'nearest+conv':
- # for real-world SR (less artifacts)
- assert self.upscale == 4, 'only support x4 now.'
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- else:
- # for image denoising and JPEG compression artifact reduction
- self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1)
-
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'absolute_pos_embed'}
-
- @torch.jit.ignore
- def no_weight_decay_keywords(self):
- return {'relative_position_bias_table'}
-
- def check_image_size(self, x):
- _, _, h, w = x.size()
- mod_pad_h = (self.window_size - h % self.window_size) % self.window_size
- mod_pad_w = (self.window_size - w % self.window_size) % self.window_size
- x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect')
- return x
-
- def forward_features(self, x):
- x_size = (x.shape[2], x.shape[3])
- x = self.patch_embed(x)
- if self.ape:
- x = x + self.absolute_pos_embed
- x = self.pos_drop(x)
-
- for layer in self.layers:
- x = layer(x, x_size)
-
- x = self.norm(x) # B L C
- x = self.patch_unembed(x, x_size)
-
- return x
-
- def forward(self, x):
- H, W = x.shape[2:]
- x = self.check_image_size(x)
-
- self.mean = self.mean.type_as(x)
- x = (x - self.mean) * self.img_range
-
- if self.upsampler == 'pixelshuffle':
- # for classical SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.conv_before_upsample(x)
- x = self.conv_last(self.upsample(x))
- elif self.upsampler == 'pixelshuffledirect':
- # for lightweight SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.upsample(x)
- elif self.upsampler == 'nearest+conv':
- # for real-world SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.conv_before_upsample(x)
- x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- x = self.conv_last(self.lrelu(self.conv_hr(x)))
- else:
- # for image denoising and JPEG compression artifact reduction
- x_first = self.conv_first(x)
- res = self.conv_after_body(self.forward_features(x_first)) + x_first
- x = x + self.conv_last(res)
-
- x = x / self.img_range + self.mean
-
- return x[:, :, :H*self.upscale, :W*self.upscale]
-
- def flops(self):
- flops = 0
- H, W = self.patches_resolution
- flops += H * W * 3 * self.embed_dim * 9
- flops += self.patch_embed.flops()
- for i, layer in enumerate(self.layers):
- flops += layer.flops()
- flops += H * W * 3 * self.embed_dim * self.embed_dim
- flops += self.upsample.flops()
- return flops
-
-
-if __name__ == '__main__':
- upscale = 4
- window_size = 8
- height = (1024 // upscale // window_size + 1) * window_size
- width = (720 // upscale // window_size + 1) * window_size
- model = SwinIR(upscale=2, img_size=(height, width),
- window_size=window_size, img_range=1., depths=[6, 6, 6, 6],
- embed_dim=60, num_heads=[6, 6, 6, 6], mlp_ratio=2, upsampler='pixelshuffledirect')
- print(model)
- print(height, width, model.flops() / 1e9)
-
- x = torch.randn((1, 3, height, width))
- x = model(x)
- print(x.shape)
diff --git a/spaces/OswaldDev/Image-enhancement/utils/util_calculate_psnr_ssim.py b/spaces/OswaldDev/Image-enhancement/utils/util_calculate_psnr_ssim.py
deleted file mode 100644
index 1a8fb27161f9c1fd3e37b14654dfe05eaadf619c..0000000000000000000000000000000000000000
--- a/spaces/OswaldDev/Image-enhancement/utils/util_calculate_psnr_ssim.py
+++ /dev/null
@@ -1,346 +0,0 @@
-import cv2
-import numpy as np
-import torch
-
-
-def calculate_psnr(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
- """Calculate PSNR (Peak Signal-to-Noise Ratio).
-
- Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
-
- Args:
- img1 (ndarray): Images with range [0, 255].
- img2 (ndarray): Images with range [0, 255].
- crop_border (int): Cropped pixels in each edge of an image. These
- pixels are not involved in the PSNR calculation.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- Default: 'HWC'.
- test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
-
- Returns:
- float: psnr result.
- """
-
-    assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.')
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
- img1 = reorder_image(img1, input_order=input_order)
- img2 = reorder_image(img2, input_order=input_order)
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
-
- if crop_border != 0:
- img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
- img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
-
- if test_y_channel:
- img1 = to_y_channel(img1)
- img2 = to_y_channel(img2)
-
- mse = np.mean((img1 - img2) ** 2)
- if mse == 0:
- return float('inf')
- return 20. * np.log10(255. / np.sqrt(mse))
-
-
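
Apart from the optional cropping and Y-channel conversion, `calculate_psnr` above reduces to `20 * log10(255 / sqrt(MSE))`. A quick numeric check on made-up constant images:

```python
import numpy as np

img1 = np.full((8, 8), 100.0)
img2 = np.full((8, 8), 110.0)            # constant error of 10 -> MSE = 100
mse = np.mean((img1 - img2) ** 2)
psnr = 20.0 * np.log10(255.0 / np.sqrt(mse))
print(round(psnr, 2))                    # 28.13 dB
```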
-def _ssim(img1, img2):
- """Calculate SSIM (structural similarity) for one channel images.
-
- It is called by func:`calculate_ssim`.
-
- Args:
- img1 (ndarray): Images with range [0, 255] with order 'HWC'.
- img2 (ndarray): Images with range [0, 255] with order 'HWC'.
-
- Returns:
- float: ssim result.
- """
-
- C1 = (0.01 * 255) ** 2
- C2 = (0.03 * 255) ** 2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1 ** 2
- mu2_sq = mu2 ** 2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1 ** 2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2 ** 2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
-
-def calculate_ssim(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
- """Calculate SSIM (structural similarity).
-
- Ref:
- Image quality assessment: From error visibility to structural similarity
-
- The results are the same as that of the official released MATLAB code in
- https://ece.uwaterloo.ca/~z70wang/research/ssim/.
-
- For three-channel images, SSIM is calculated for each channel and then
- averaged.
-
- Args:
- img1 (ndarray): Images with range [0, 255].
- img2 (ndarray): Images with range [0, 255].
- crop_border (int): Cropped pixels in each edge of an image. These
- pixels are not involved in the SSIM calculation.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- Default: 'HWC'.
- test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
-
- Returns:
- float: ssim result.
- """
-
-    assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.')
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
- img1 = reorder_image(img1, input_order=input_order)
- img2 = reorder_image(img2, input_order=input_order)
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
-
- if crop_border != 0:
- img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
- img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
-
- if test_y_channel:
- img1 = to_y_channel(img1)
- img2 = to_y_channel(img2)
-
- ssims = []
- for i in range(img1.shape[2]):
- ssims.append(_ssim(img1[..., i], img2[..., i]))
- return np.array(ssims).mean()
-
-
-def _blocking_effect_factor(im):
- block_size = 8
-
- block_horizontal_positions = torch.arange(7, im.shape[3] - 1, 8)
- block_vertical_positions = torch.arange(7, im.shape[2] - 1, 8)
-
- horizontal_block_difference = (
- (im[:, :, :, block_horizontal_positions] - im[:, :, :, block_horizontal_positions + 1]) ** 2).sum(
- 3).sum(2).sum(1)
- vertical_block_difference = (
- (im[:, :, block_vertical_positions, :] - im[:, :, block_vertical_positions + 1, :]) ** 2).sum(3).sum(
- 2).sum(1)
-
- nonblock_horizontal_positions = np.setdiff1d(torch.arange(0, im.shape[3] - 1), block_horizontal_positions)
- nonblock_vertical_positions = np.setdiff1d(torch.arange(0, im.shape[2] - 1), block_vertical_positions)
-
- horizontal_nonblock_difference = (
- (im[:, :, :, nonblock_horizontal_positions] - im[:, :, :, nonblock_horizontal_positions + 1]) ** 2).sum(
- 3).sum(2).sum(1)
- vertical_nonblock_difference = (
- (im[:, :, nonblock_vertical_positions, :] - im[:, :, nonblock_vertical_positions + 1, :]) ** 2).sum(
- 3).sum(2).sum(1)
-
- n_boundary_horiz = im.shape[2] * (im.shape[3] // block_size - 1)
- n_boundary_vert = im.shape[3] * (im.shape[2] // block_size - 1)
- boundary_difference = (horizontal_block_difference + vertical_block_difference) / (
- n_boundary_horiz + n_boundary_vert)
-
- n_nonboundary_horiz = im.shape[2] * (im.shape[3] - 1) - n_boundary_horiz
- n_nonboundary_vert = im.shape[3] * (im.shape[2] - 1) - n_boundary_vert
- nonboundary_difference = (horizontal_nonblock_difference + vertical_nonblock_difference) / (
- n_nonboundary_horiz + n_nonboundary_vert)
-
- scaler = np.log2(block_size) / np.log2(min([im.shape[2], im.shape[3]]))
- bef = scaler * (boundary_difference - nonboundary_difference)
-
- bef[boundary_difference <= nonboundary_difference] = 0
- return bef
-
-
-def calculate_psnrb(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
- """Calculate PSNR-B (Peak Signal-to-Noise Ratio).
-
- Ref: Quality assessment of deblocked images, for JPEG image deblocking evaluation
- # https://gitlab.com/Queuecumber/quantization-guided-ac/-/blob/master/metrics/psnrb.py
-
- Args:
- img1 (ndarray): Images with range [0, 255].
- img2 (ndarray): Images with range [0, 255].
- crop_border (int): Cropped pixels in each edge of an image. These
- pixels are not involved in the PSNR calculation.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- Default: 'HWC'.
- test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
-
- Returns:
- float: psnr result.
- """
-
-    assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.')
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
- img1 = reorder_image(img1, input_order=input_order)
- img2 = reorder_image(img2, input_order=input_order)
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
-
- if crop_border != 0:
- img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
- img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
-
- if test_y_channel:
- img1 = to_y_channel(img1)
- img2 = to_y_channel(img2)
-
- # follow https://gitlab.com/Queuecumber/quantization-guided-ac/-/blob/master/metrics/psnrb.py
- img1 = torch.from_numpy(img1).permute(2, 0, 1).unsqueeze(0) / 255.
- img2 = torch.from_numpy(img2).permute(2, 0, 1).unsqueeze(0) / 255.
-
- total = 0
- for c in range(img1.shape[1]):
- mse = torch.nn.functional.mse_loss(img1[:, c:c + 1, :, :], img2[:, c:c + 1, :, :], reduction='none')
- bef = _blocking_effect_factor(img1[:, c:c + 1, :, :])
-
- mse = mse.view(mse.shape[0], -1).mean(1)
- total += 10 * torch.log10(1 / (mse + bef))
-
- return float(total) / img1.shape[1]
-
-
-def reorder_image(img, input_order='HWC'):
- """Reorder images to 'HWC' order.
-
-    If the input image shape is (h, w), return it as (h, w, 1);
-    if the input shape is (c, h, w) with input_order 'CHW', return it as (h, w, c);
-    if the input shape is (h, w, c) with input_order 'HWC', return it as is.
-
- Args:
- img (ndarray): Input image.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- If the input image shape is (h, w), input_order will not have
- effects. Default: 'HWC'.
-
- Returns:
- ndarray: reordered image.
- """
-
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' "'HWC' and 'CHW'")
- if len(img.shape) == 2:
- img = img[..., None]
- if input_order == 'CHW':
- img = img.transpose(1, 2, 0)
- return img
-
-
-def to_y_channel(img):
- """Change to Y channel of YCbCr.
-
- Args:
- img (ndarray): Images with range [0, 255].
-
- Returns:
- (ndarray): Images with range [0, 255] (float type) without round.
- """
- img = img.astype(np.float32) / 255.
- if img.ndim == 3 and img.shape[2] == 3:
- img = bgr2ycbcr(img, y_only=True)
- img = img[..., None]
- return img * 255.
-
-
-def _convert_input_type_range(img):
- """Convert the type and range of the input image.
-
- It converts the input image to np.float32 type and range of [0, 1].
- It is mainly used for pre-processing the input image in colorspace
-    conversion functions such as rgb2ycbcr and ycbcr2rgb.
-
- Args:
- img (ndarray): The input image. It accepts:
- 1. np.uint8 type with range [0, 255];
- 2. np.float32 type with range [0, 1].
-
- Returns:
- (ndarray): The converted image with type of np.float32 and range of
- [0, 1].
- """
- img_type = img.dtype
- img = img.astype(np.float32)
- if img_type == np.float32:
- pass
- elif img_type == np.uint8:
- img /= 255.
- else:
- raise TypeError('The img type should be np.float32 or np.uint8, ' f'but got {img_type}')
- return img
-
-
-def _convert_output_type_range(img, dst_type):
- """Convert the type and range of the image according to dst_type.
-
- It converts the image to desired type and range. If `dst_type` is np.uint8,
- images will be converted to np.uint8 type with range [0, 255]. If
- `dst_type` is np.float32, it converts the image to np.float32 type with
- range [0, 1].
-    It is mainly used for post-processing images in colorspace conversion
- functions such as rgb2ycbcr and ycbcr2rgb.
-
- Args:
- img (ndarray): The image to be converted with np.float32 type and
- range [0, 255].
- dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it
- converts the image to np.uint8 type with range [0, 255]. If
- dst_type is np.float32, it converts the image to np.float32 type
- with range [0, 1].
-
- Returns:
- (ndarray): The converted image with desired type and range.
- """
- if dst_type not in (np.uint8, np.float32):
- raise TypeError('The dst_type should be np.float32 or np.uint8, ' f'but got {dst_type}')
- if dst_type == np.uint8:
- img = img.round()
- else:
- img /= 255.
- return img.astype(dst_type)
-
-
-def bgr2ycbcr(img, y_only=False):
- """Convert a BGR image to YCbCr image.
-
- The bgr version of rgb2ycbcr.
- It implements the ITU-R BT.601 conversion for standard-definition
- television. See more details in
- https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.
-
- It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`.
- In OpenCV, it implements a JPEG conversion. See more details in
- https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.
-
- Args:
- img (ndarray): The input image. It accepts:
- 1. np.uint8 type with range [0, 255];
- 2. np.float32 type with range [0, 1].
- y_only (bool): Whether to only return Y channel. Default: False.
-
- Returns:
- ndarray: The converted YCbCr image. The output image has the same type
- and range as input image.
- """
- img_type = img.dtype
- img = _convert_input_type_range(img)
- if y_only:
- out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0
- else:
- out_img = np.matmul(
- img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], [65.481, -37.797, 112.0]]) + [16, 128, 128]
- out_img = _convert_output_type_range(out_img, img_type)
- return out_img
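For reference, here is a minimal usage sketch of the metric helpers removed above. It assumes the module is importable as `utils.util_calculate_psnr_ssim` (path assumed from the file location) and uses synthetic images, so the printed values are illustrative only.

```python
# Hedged sketch: exercises calculate_psnr / calculate_ssim on synthetic uint8 images.
import numpy as np
from utils.util_calculate_psnr_ssim import calculate_psnr, calculate_ssim  # import path assumed

rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)                # "ground truth", HWC, [0, 255]
noisy = np.clip(gt + rng.normal(0, 5, gt.shape), 0, 255).astype(np.uint8)  # lightly degraded copy

psnr = calculate_psnr(gt, noisy, crop_border=4, input_order='HWC', test_y_channel=True)
ssim = calculate_ssim(gt, noisy, crop_border=4, input_order='HWC', test_y_channel=True)
print(f'PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}')
```

Setting `test_y_channel=True` follows the common convention of reporting both metrics on the Y channel of YCbCr.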
diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/data/collator.py b/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/data/collator.py
deleted file mode 100644
index 44780306c69e3d5d054d06d03d08f3e2e6ade517..0000000000000000000000000000000000000000
--- a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/data/collator.py
+++ /dev/null
@@ -1,33 +0,0 @@
-
-import torch
-import numpy as np
-
-def fake_face_collator(batch):
- """The data collator for training vision transformer models on fake and real face dataset
-
- Args:
- batch (list): A dictionary containing the pixel values and the labels
-
- Returns:
- dict: The final dictionary
- """
-
- new_batch = {
- 'pixel_values': [],
- 'labels': []
- }
-
- for x in batch:
-
- pixel_values = torch.from_numpy(x['pixel_values'][0]) if isinstance(x['pixel_values'][0], np.ndarray) \
- else x['pixel_values'][0]
-
- new_batch['pixel_values'].append(pixel_values)
-
- new_batch['labels'].append(torch.tensor(x['labels']))
-
- new_batch['pixel_values'] = torch.stack(new_batch['pixel_values'])
-
- new_batch['labels'] = torch.stack(new_batch['labels'])
-
- return new_batch
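A quick, hedged sanity check of the collator above, using a hand-built batch of two synthetic samples (a real pipeline would produce them with a ViT feature extractor). The same function can also be passed as `collate_fn` to a `torch.utils.data.DataLoader`.

```python
# Sketch only: assumes it runs in the same module, so fake_face_collator is in scope.
import numpy as np

samples = [
    {"pixel_values": [np.random.rand(3, 224, 224).astype(np.float32)], "labels": 1},
    {"pixel_values": [np.random.rand(3, 224, 224).astype(np.float32)], "labels": 0},
]

batch = fake_face_collator(samples)
print(batch["pixel_values"].shape)  # torch.Size([2, 3, 224, 224])
print(batch["labels"])              # tensor([1, 0])
```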
diff --git a/spaces/PaddlePaddle/UIE-X/app.py b/spaces/PaddlePaddle/UIE-X/app.py
deleted file mode 100644
index b29e9a7c0abcd39d797a2d79dfb669751307b3f0..0000000000000000000000000000000000000000
--- a/spaces/PaddlePaddle/UIE-X/app.py
+++ /dev/null
@@ -1,386 +0,0 @@
-#-*- coding: UTF-8 -*-
-# Copyright 2022 the HuggingFace Team.
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import traceback
-import base64
-
-import gradio as gr
-import cv2
-
-from paddlenlp import Taskflow
-from paddlenlp.utils.doc_parser import DocParser
-
-
-doc_parser = DocParser()
-task_instance = Taskflow(
- "information_extraction",
- model="uie-x-base",
- task_path="PaddlePaddle/uie-x-base",
- from_hf_hub=True)
-
-examples = [
- [
- "business_card.png",
- "Name;Title;Web Link;Email;Address",
- ],
- [
- "license.jpeg",
- "Name;DOB;ISS;EXP",
- ],
- [
- "statements.png",
- "Date|Gross profit",
- ],
- [
- "invoice.jpeg",
- "名称;纳税人识别号;开票日期",
- ],
- [
- "custom.jpeg",
- "收发货人;进口口岸;进口日期;运输方式;征免性质;境内目的地;运输工具名称;包装种类;件数;合同协议号"
- ],
- [
- "resume.png",
- "职位;年龄;学校|时间;学校|专业",
- ],
-]
-
-example_files = {
- "Name;Title;Web Link;Email;Address": "business_card.png",
- "Name;DOB;ISS;EXP": "license.jpeg",
- "Date|Gross profit": "statements.png",
- "职位;年龄;学校|时间;学校|专业": "resume.png",
- "收发货人;进口口岸;进口日期;运输方式;征免性质;境内目的地;运输工具名称;包装种类;件数;合同协议号": "custom.jpeg",
- "名称;纳税人识别号;开票日期": "invoice.jpeg",
-}
-
-lang_map = {
- "resume.png": "ch",
- "custom.jpeg": "ch",
- "business_card.png": "en",
- "invoice.jpeg": "ch",
- "license.jpeg": "en",
- "statements.png": "en",
-}
-
-def dbc2sbc(s):
- rs = ""
- for char in s:
- code = ord(char)
- if code == 0x3000:
- code = 0x0020
- else:
- code -= 0xfee0
- if not (0x0021 <= code and code <= 0x7e):
- rs += char
- continue
- rs += chr(code)
- return rs
-
-
-def BGR2RGB(img):
- pilimg = img.copy()
- pilimg[:, :, 0] = img[:, :, 2]
- pilimg[:, :, 2] = img[:, :, 0]
- return pilimg
-
-
-def np2base64(image_np):
- image_np = BGR2RGB(image_np)
- image = cv2.imencode('.jpg', image_np)[1]
- base64_str = str(base64.b64encode(image))[2:-1]
- return base64_str
-
-
-def process_path(path):
- error = None
- if path:
- try:
- if path.endswith(".pdf"):
- images_list = [doc_parser.read_pdf(path)]
- else:
- images_list = [doc_parser.read_image(path)]
- return (
- path,
- gr.update(visible=True, value=images_list),
- gr.update(visible=True),
- gr.update(visible=False, value=None),
- gr.update(visible=False, value=None),
- None,
- )
- except Exception as e:
- traceback.print_exc()
- error = str(e)
- return (
- None,
- gr.update(visible=False, value=None),
- gr.update(visible=False),
- gr.update(visible=False, value=None),
- gr.update(visible=False, value=None),
- gr.update(visible=True, value=error) if error is not None else None,
- None,
- )
-
-
-def process_upload(file):
- if file:
- return process_path(file.name)
- else:
- return (
- None,
- gr.update(visible=False, value=None),
- gr.update(visible=False),
- gr.update(visible=False, value=None),
- gr.update(visible=False, value=None),
- None,
- )
-
-def get_schema(schema_str):
- def _is_ch(s):
- for ch in s:
- if "\u4e00" <= ch <= "\u9fff":
- return True
- return False
- schema_lang = "ch" if _is_ch(schema_str) else "en"
- schema = schema_str.split(";")
- schema_list = []
- for s in schema:
- cand = s.split("|")
- if len(cand) == 1:
- schema_list.append(cand[0])
- else:
- subject = cand[0]
- relations = cand[1:]
- added = False
- for a in schema_list:
- if isinstance(a, dict):
- if subject in a.keys():
- a[subject].extend(relations)
- added = True
- break
- if not added:
- a = {subject: relations}
- schema_list.append(a)
- return schema_list, schema_lang
-
-
-def run_taskflow(document, schema, argument):
- task_instance.set_schema(schema)
- task_instance.set_argument(argument)
- return task_instance({'doc': document})
-
-
-def process_doc(document, schema, ocr_lang, layout_analysis):
- if [document, schema] in examples:
- ocr_lang = lang_map[document]
-
- if not schema:
- schema = '时间;组织机构;人物'
- if document is None:
- return None, None
-
- layout_analysis = True if layout_analysis == "yes" else False
- schema, schema_lang = get_schema(dbc2sbc(schema))
- argument = {
- "ocr_lang": ocr_lang,
- "schema_lang": schema_lang,
- "layout_analysis": layout_analysis
- }
- prediction = run_taskflow(document, schema, argument)[0]
-
- if document.endswith(".pdf"):
- _image = doc_parser.read_pdf(document)
- else:
- _image = doc_parser.read_image(document)
-
- img_show = doc_parser.write_image_with_results(
- np2base64(_image),
- result=prediction,
- return_image=True)
- img_list = [img_show]
-
- return (
- gr.update(visible=True, value=img_list),
- gr.update(visible=True, value=prediction),
- )
-
-
-def load_example_document(img, schema, ocr_lang, layout_analysis):
- if img is not None:
- document = example_files[schema]
- choice = lang_map[document].split("-")
- ocr_lang = choice[0]
- preview, answer = process_doc(document, schema, ocr_lang, layout_analysis)
- return document, schema, preview, gr.update(visible=True), answer
- else:
- return None, None, None, gr.update(visible=False), None
-
-
-def read_content(file_path: str) -> str:
- """read the content of target file
- """
- with open(file_path, 'r', encoding='utf-8') as f:
- content = f.read()
-
- return content
-
-
-with gr.Blocks() as demo:
- gr.HTML(read_content("header.html"))
- gr.Markdown(
- "Open-sourced by [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP), **UIE-X** is a universal information extraction engine for both scanned document and text inputs. It supports Entity Extraction, Relation Extraction and Event Extraction tasks. "
- "UIE-X performs well on a zero-shot settings, which is enabled by a flexible schema that allows you to specify extraction targets with simple natural language. "
- "Moreover, on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP), we provide a comprehensive and easy-to-use fine-tuning and few-shot customization workflow. "
- "Want to dive deeper? Check out our [AIStudio Notebook](https://aistudio.baidu.com/aistudio/projectdetail/5261592) and [Colab Notebook](https://colab.research.google.com/drive/1ZY_ELZgoemJNoa6baWpgtzebLgoCT8MK?usp=sharing). "
- "For more details, please visit our [GitHub](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/applications/information_extraction/README_en.md)"
- )
-
- document = gr.Variable()
- is_text = gr.Variable()
- example_schema = gr.Textbox(visible=False)
- example_image = gr.Image(visible=False)
- with gr.Row(equal_height=True):
- with gr.Column():
- with gr.Row():
- gr.Markdown("## 1. Select a file 选择文件", elem_id="select-a-file")
- img_clear_button = gr.Button(
- "Clear", variant="secondary", elem_id="file-clear", visible=False
- )
- image = gr.Gallery(visible=False)
- with gr.Row(equal_height=True):
- with gr.Column():
- with gr.Row():
- url = gr.Textbox(
- show_label=False,
- placeholder="URL",
- lines=1,
- max_lines=1,
- elem_id="url-textbox",
- )
- submit = gr.Button("Get")
- url_error = gr.Textbox(
- visible=False,
- elem_id="url-error",
- max_lines=1,
- interactive=False,
- label="Error",
- )
-            gr.Markdown("## — or —")
- upload = gr.File(label=None, interactive=True, elem_id="short-upload-box")
- gr.Examples(
- examples=examples,
- inputs=[example_image, example_schema],
- )
-
- with gr.Column():
- gr.Markdown("## 2. Information Extraction 信息抽取 ")
- gr.Markdown("### 👉 Set a schema 设置schema")
- gr.Markdown("Entity extraction: entity type should be separated by ';', e.g. **Person;Organization**")
- gr.Markdown("实体抽取:实体类别之间以';'分割,例如 **人物;组织机构**")
- gr.Markdown("Relation extraction: set the subject and relation type, separated by '|', e.g. **Person|Date;Person|Email**")
- gr.Markdown("关系抽取:需配置主体和关系类别,中间以'|'分割,例如 **人物|出生时间;人物|邮箱**")
- gr.Markdown("### 👉 Model customization 模型定制")
-            gr.Markdown("We recommend further improving the extraction performance in specific domains through the process of [data annotation & fine-tuning](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/applications/information_extraction/document/README_en.md)")
- gr.Markdown("我们建议通过[数据标注+微调](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/applications/information_extraction/document/README_en.md)的流程进一步增强模型在特定场景的效果")
-
- schema = gr.Textbox(
- label="Schema",
- placeholder="e.g. Name|Company;Name|Position;Email;Phone Number",
- lines=1,
- max_lines=1,
- )
-
- ocr_lang = gr.Radio(
- choices=["ch", "en"],
- value="en",
- label="OCR语言 / OCR Language (Please choose ch for Chinese images.)",
- )
-
- layout_analysis = gr.Radio(
- choices=["yes", "no"],
- value="no",
- label="版面分析 / Layout analysis (Better extraction for multi-line text)",
- )
-
- with gr.Row():
- clear_button = gr.Button("Clear", variant="secondary")
- submit_button = gr.Button(
- "Submit", variant="primary", elem_id="submit-button"
- )
- with gr.Column():
- output = gr.JSON(label="Output", visible=False)
-
- for cb in [img_clear_button, clear_button]:
- cb.click(
- lambda _: (
- gr.update(visible=False, value=None),
- None,
- gr.update(visible=False, value=None),
- gr.update(visible=False),
- None,
- None,
- None,
- gr.update(visible=False, value=None),
- None,
- ),
- inputs=clear_button,
- outputs=[
- image,
- document,
- output,
- img_clear_button,
- example_image,
- upload,
- url,
- url_error,
- schema,
- ],
- )
-
- upload.change(
- fn=process_upload,
- inputs=[upload],
- outputs=[document, image, img_clear_button, output, url_error],
- )
- submit.click(
- fn=process_path,
- inputs=[url],
- outputs=[document, image, img_clear_button, output, url_error],
- )
-
- schema.submit(
- fn=process_doc,
- inputs=[document, schema, ocr_lang, layout_analysis],
- outputs=[image, output],
- )
-
- submit_button.click(
- fn=process_doc,
- inputs=[document, schema, ocr_lang, layout_analysis],
- outputs=[image, output],
- )
-
- example_image.change(
- fn=load_example_document,
- inputs=[example_image, example_schema, ocr_lang, layout_analysis],
- outputs=[document, schema, image, img_clear_button, output],
- )
-
- gr.HTML(read_content("footer.html"))
-
-
-if __name__ == "__main__":
- demo.queue().launch()
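The schema handling above is the one piece of pure-Python logic worth a closer look: `dbc2sbc` normalizes full-width characters and `get_schema` turns the `;`/`|` syntax into the nested structure passed to Taskflow. A small, hedged check, assuming it runs in the same module so both helpers are in scope:

```python
# Sketch: parsing a mixed entity/relation schema string.
schema_list, schema_lang = get_schema(dbc2sbc("Person|Date;Person|Email;Phone Number"))
print(schema_list)   # [{'Person': ['Date', 'Email']}, 'Phone Number']
print(schema_lang)   # 'en' (no Chinese characters in the schema)
```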
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/top-repl.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/top-repl.go
deleted file mode 100644
index 469885b7e271499c43931f637a46fad74d21b5f9..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/top-repl.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-13.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-13.go
deleted file mode 100644
index b70563da6b0892d4e5f7ea039e77f8450155e446..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-13.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/agent/agent.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/agent/agent.py
deleted file mode 100644
index ee7885f8844022597321fa6b492430ec34c0d6b9..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/autogpt/agent/agent.py
+++ /dev/null
@@ -1,197 +0,0 @@
-from colorama import Fore, Style
-
-from autogpt.app import execute_command, get_command
-from autogpt.chat import chat_with_ai, create_chat_message
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques
-from autogpt.json_utils.utilities import validate_json
-from autogpt.logs import logger, print_assistant_thoughts
-from autogpt.speech import say_text
-from autogpt.spinner import Spinner
-from autogpt.utils import clean_input
-
-
-class Agent:
- """Agent class for interacting with Auto-GPT.
-
- Attributes:
- ai_name: The name of the agent.
- memory: The memory object to use.
- full_message_history: The full message history.
- next_action_count: The number of actions to execute.
- system_prompt: The system prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully.
-        Currently, the dynamic and customizable information in the system prompt is the ai_name, description and goals.
-
- triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is:
- Determine which next command to use, and respond using the format specified above:
- The triggering prompt is not part of the system prompt because between the system prompt and the triggering
- prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve.
- SYSTEM PROMPT
- CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant)
- TRIGGERING PROMPT
-
- The triggering prompt reminds the AI about its short term meta task (defining the next task)
- """
-
- def __init__(
- self,
- ai_name,
- memory,
- full_message_history,
- next_action_count,
- system_prompt,
- triggering_prompt,
- ):
- self.ai_name = ai_name
- self.memory = memory
- self.full_message_history = full_message_history
- self.next_action_count = next_action_count
- self.system_prompt = system_prompt
- self.triggering_prompt = triggering_prompt
-
- def start_interaction_loop(self):
- # Interaction Loop
- cfg = Config()
- loop_count = 0
- command_name = None
- arguments = None
- user_input = ""
-
- while True:
- # Discontinue if continuous limit is reached
- loop_count += 1
- if (
- cfg.continuous_mode
- and cfg.continuous_limit > 0
- and loop_count > cfg.continuous_limit
- ):
- logger.typewriter_log(
- "Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}"
- )
- break
-
- # Send message to AI, get response
- with Spinner("Thinking... "):
- assistant_reply = chat_with_ai(
- self.system_prompt,
- self.triggering_prompt,
- self.full_message_history,
- self.memory,
- cfg.fast_token_limit,
- ) # TODO: This hardcodes the model to use GPT3.5. Make this an argument
-
- assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply)
-
- # Print Assistant thoughts
- if assistant_reply_json != {}:
- validate_json(assistant_reply_json, "llm_response_format_1")
- # Get command name and arguments
- try:
- print_assistant_thoughts(self.ai_name, assistant_reply_json)
- command_name, arguments = get_command(assistant_reply_json)
- # command_name, arguments = assistant_reply_json_valid["command"]["name"], assistant_reply_json_valid["command"]["args"]
- if cfg.speak_mode:
- say_text(f"I want to execute {command_name}")
- except Exception as e:
- logger.error("Error: \n", str(e))
-
- if not cfg.continuous_mode and self.next_action_count == 0:
- ### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
- # Get key press: Prompt the user to press enter to continue or escape
- # to exit
- logger.typewriter_log(
- "NEXT ACTION: ",
- Fore.CYAN,
- f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} "
- f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
- )
- print(
- "Enter 'y' to authorise command, 'y -N' to run N continuous "
- "commands, 'n' to exit program, or enter feedback for "
- f"{self.ai_name}...",
- flush=True,
- )
- while True:
- console_input = clean_input(
- Fore.MAGENTA + "Input:" + Style.RESET_ALL
- )
- if console_input.lower().strip() == "y":
- user_input = "GENERATE NEXT COMMAND JSON"
- break
- elif console_input.lower().strip() == "":
- print("Invalid input format.")
- continue
- elif console_input.lower().startswith("y -"):
- try:
- self.next_action_count = abs(
- int(console_input.split(" ")[1])
- )
- user_input = "GENERATE NEXT COMMAND JSON"
- except ValueError:
- print(
- "Invalid input format. Please enter 'y -n' where n is"
- " the number of continuous tasks."
- )
- continue
- break
- elif console_input.lower() == "n":
- user_input = "EXIT"
- break
- else:
- user_input = console_input
- command_name = "human_feedback"
- break
-
- if user_input == "GENERATE NEXT COMMAND JSON":
- logger.typewriter_log(
- "-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=",
- Fore.MAGENTA,
- "",
- )
- elif user_input == "EXIT":
- print("Exiting...", flush=True)
- break
- else:
- # Print command
- logger.typewriter_log(
- "NEXT ACTION: ",
- Fore.CYAN,
- f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL}"
- f" ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
- )
-
- # Execute command
- if command_name is not None and command_name.lower().startswith("error"):
- result = (
- f"Command {command_name} threw the following error: {arguments}"
- )
- elif command_name == "human_feedback":
- result = f"Human feedback: {user_input}"
- else:
- result = (
- f"Command {command_name} returned: "
- f"{execute_command(command_name, arguments)}"
- )
- if self.next_action_count > 0:
- self.next_action_count -= 1
-
- memory_to_add = (
- f"Assistant Reply: {assistant_reply} "
- f"\nResult: {result} "
- f"\nHuman Feedback: {user_input} "
- )
-
- self.memory.add(memory_to_add)
-
-            # If there's a result from the command, append it to the message
- # history
- if result is not None:
- self.full_message_history.append(create_chat_message("system", result))
- logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result)
- else:
- self.full_message_history.append(
- create_chat_message("system", "Unable to execute command")
- )
- logger.typewriter_log(
- "SYSTEM: ", Fore.YELLOW, "Unable to execute command"
- )
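The user-authorisation branch inside `start_interaction_loop` is dense, so the sketch below condenses just that parsing step ('y', 'y -N', 'n', or free-form feedback) into a standalone function that can be reasoned about without the rest of Auto-GPT. It is a simplified re-expression for illustration, not part of the project's API.

```python
def parse_authorisation(console_input: str):
    """Condensed mirror of the branch above: returns (user_input, next_action_count)."""
    text = console_input.lower().strip()
    if text == "y":
        return "GENERATE NEXT COMMAND JSON", 0
    if text.startswith("y -"):
        # 'y -N' authorises N continuous commands without further prompts.
        return "GENERATE NEXT COMMAND JSON", abs(int(console_input.split(" ")[1]))
    if text == "n":
        return "EXIT", 0
    return console_input, 0  # anything else is treated as human feedback


print(parse_authorisation("y -5"))  # ('GENERATE NEXT COMMAND JSON', 5)
print(parse_authorisation("n"))     # ('EXIT', 0)
```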
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/rpn/__init__.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/rpn/__init__.py
deleted file mode 100644
index c05e602a5e4a4f35f826068d6ee0ff9d4e011411..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/rpn/__init__.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# from .rpn import build_rpn
-from .rpn import RPNModule
-from .retina import RetinaNetModule
-from .fcos import FCOSModule
-from .atss import ATSSModule
-from .dyhead import DyHeadModule
-from .vldyhead import VLDyHeadModule
-
-_RPN_META_ARCHITECTURES = {"RPN": RPNModule,
- "RETINA": RetinaNetModule,
- "FCOS": FCOSModule,
- "ATSS": ATSSModule,
- "DYHEAD": DyHeadModule,
- "VLDYHEAD": VLDyHeadModule
- }
-
-
-def build_rpn(cfg):
- """
- This gives the gist of it. Not super important because it doesn't change as much
- """
- rpn_arch = _RPN_META_ARCHITECTURES[cfg.MODEL.RPN_ARCHITECTURE]
- return rpn_arch(cfg)
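Illustrative only: how the registry above resolves an architecture name from the config. The `cfg` object here is a stand-in namespace rather than the real maskrcnn_benchmark config, and only the dictionary lookup is shown, without constructing the (heavy) module.

```python
from types import SimpleNamespace

cfg = SimpleNamespace(MODEL=SimpleNamespace(RPN_ARCHITECTURE="VLDYHEAD"))
# build_rpn(cfg) would instantiate this class with cfg; here we only print its name.
print(_RPN_META_ARCHITECTURES[cfg.MODEL.RPN_ARCHITECTURE].__name__)  # 'VLDyHeadModule'
```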
diff --git a/spaces/RMXK/RVC_HFF/LazyImport.py b/spaces/RMXK/RVC_HFF/LazyImport.py
deleted file mode 100644
index 5bdb05ddd5a546a43adba7274b4c3465bb77f2f5..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/LazyImport.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from importlib.util import find_spec, LazyLoader, module_from_spec
-from sys import modules
-
-def lazyload(name):
- if name in modules:
- return modules[name]
- else:
- spec = find_spec(name)
- loader = LazyLoader(spec.loader)
- module = module_from_spec(spec)
- modules[name] = module
- loader.exec_module(module)
- return module
\ No newline at end of file
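Usage sketch for the helper above: `lazyload` returns a module object immediately, and, per `importlib.util.LazyLoader` semantics, the actual import work is deferred until the first attribute access.

```python
# Hedged example; numpy is only used here as a conveniently heavy module.
np = lazyload("numpy")  # no numpy code has executed yet
arr = np.zeros(3)       # first attribute access triggers the real import
print(arr)              # [0. 0. 0.]
```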
diff --git a/spaces/Raaniel/Support-and-resistance/app.py b/spaces/Raaniel/Support-and-resistance/app.py
deleted file mode 100644
index 16fde9af820171b541883e1ac12a44d04b3e5995..0000000000000000000000000000000000000000
--- a/spaces/Raaniel/Support-and-resistance/app.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import pandas as pd
-import yfinance as yf
-import numpy as np
-import plotly.graph_objects as go
-from plotly.subplots import make_subplots
-from sklearn.cluster import AgglomerativeClustering
-import streamlit as st
-import requests
-from streamlit_lottie import st_lottie
-import datetime
-
-st.set_page_config(page_title = "Support and resistance levels",
- page_icon = ':📈:',
- layout = 'wide')
-
-st.title('📈 Technical analysis 📉')
-st.header('Find support and resistance levels for :blue[price action] analysis!')
-st.markdown('''
-This demo uses an Agglomerative Clustering algorithm
-to automatically detect potential support and resistance
-levels in financial markets.
-''', unsafe_allow_html = True)
-st.markdown('##')
-
-def load_lottieurl(url: str):
- r = requests.get(url)
- if r.status_code != 200:
- return None
- return r.json()
-
-lottie_url__money = "https://assets1.lottiefiles.com/packages/lf20_06a6pf9i.json"
-lottie_money = load_lottieurl(lottie_url__money)
-
-st.sidebar.header('Please choose parameters: ')
-
-ticker = st.text_input('''Select stock to analyse:
-(Make sure the ticker you search for is supported
-by _Yahoo! Finance_).''', 'BNB-USD')
-
-interval = st.sidebar.selectbox(
- 'Select the time interval',
- ('1d', '5d', '1wk', '1mo', '3mo'))
-
-timedelta = {'1d': 1, '5d': 5, '1wk' : 7, '1mo' : 30, '3mo' : 90}
-
-start = st.sidebar.date_input(
- "Select the beginning date",
- datetime.date(2022, 1, 1))
-
-end = st.sidebar.date_input(
- "Select the ending date",
- datetime.date(2023, 1, 1), min_value = start + datetime.timedelta(timedelta[interval]))
-
-df = yf.download(ticker, start = start, end = end, interval = interval)
-df.index = pd.to_datetime(df.index).strftime("%d-%m-%Y")
-df = df.drop(columns = ["Adj Close"])
-
-num_clusters = st.sidebar.slider(
-    'Select the number of clusters (affects the number of levels you will get)',
- 1, 7, 3)
-
-rolling_wave_length = st.sidebar.slider(
- '''Select the length of rolling wave
-    (choose a larger value for a more long-term view)''',
- 1, len(df)//5, 1)
-
-left_column, right_column = st.columns(2)
-
-left_column.markdown('Preview data:',
- unsafe_allow_html = True)
-left_column.dataframe(df, height = 400, use_container_width=True)
-
-with right_column:
- st_lottie(lottie_money, key="money")
-
-#creating function
-def calculate_support_resistance(df, rolling_wave_length, num_clusters):
- date = df.index
- df.reset_index(inplace=True)
-
- max_waves_temp = df.High.rolling(rolling_wave_length).max().rename('waves')
- min_waves_temp = df.Low.rolling(rolling_wave_length).min().rename('waves')
-
- max_waves = pd.concat([max_waves_temp, pd.Series(np.zeros(len(max_waves_temp)) + 1)], axis=1)
- min_waves = pd.concat([min_waves_temp, pd.Series(np.zeros(len(min_waves_temp)) + -1)], axis=1)
- max_waves.drop_duplicates('waves', inplace=True)
- min_waves.drop_duplicates('waves', inplace=True)
-
- waves = pd.concat([max_waves, min_waves]).sort_index()
- waves = waves[waves[0] != waves[0].shift()].dropna()
-
- x = np.concatenate((waves.waves.values.reshape(-1, 1),
- (np.zeros(len(waves)) + 1).reshape(-1, 1)), axis=1)
-
- cluster = AgglomerativeClustering(n_clusters=num_clusters, linkage='ward')
- cluster.fit_predict(x)
- waves['clusters'] = cluster.labels_
- waves2 = waves.loc[waves.groupby('clusters')['waves'].idxmax()]
- df.index = date
- waves2.waves.drop_duplicates(keep='first', inplace=True)
-
- return waves2.reset_index().waves
-support_resistance_levels = calculate_support_resistance(df, rolling_wave_length, num_clusters)
-
-#creating a plot
-fig = make_subplots(rows=2, cols=1, shared_xaxes=True,
- vertical_spacing=0.06, subplot_titles=('OHLC', 'Volume'),
- row_width=[0.3, 0.7])
-
-fig.add_trace(go.Candlestick(x=df.index,
- open=df['Open'],
- high=df['High'],
- low=df['Low'],
- close=df['Close'], name = "Market data"), row = 1, col = 1)
-
-i = 0
-for level in support_resistance_levels.to_list():
- fig.add_hline(y=level, line_width=1,
- line_dash="dash", row=1, col=1,
- line_color="snow")
- i += 1
-
-fig.update_xaxes(
- rangeslider_visible = False)
-
-colors = []
-
-for i in range(len(df.Close)):
- if i != 0:
- if df.Close[i] > df.Close[i-1]:
- colors.append('lightgreen')
- else:
- colors.append('lightcoral')
- else:
- colors.append('lightcoral')
-
-fig.add_trace(go.Bar(x=df.index, y=df['Volume'], showlegend=False,
- marker=dict(color=colors)), row=2, col=1)
-
-fig.update_traces(name= 'Volume', selector=dict(type='bar'))
-
-text = f'{ticker} Chart'
-
-fig.update_layout(
- title=go.layout.Title(
- text=text,
- xref="paper",
- x=0))
-
-#show chart
-st.plotly_chart(fig, use_container_width=True)
-
-st.markdown("""
-Disclaimer: It's important to note that while this demonstration provides a useful approach to
-identifying support and resistance levels in financial markets,
-it is not intended to be taken as financial advice.
-Trading decisions should be made based on careful analysis of multiple factors,
-including market conditions,
-risk tolerance,
-and individual financial goals.
-""", unsafe_allow_html=True)
-
-hide_streamlit_style = """
-
- """
-st.markdown(hide_streamlit_style, unsafe_allow_html=True)
-
-st.markdown('''
-
"""
-
-gr.Interface(inference, inputs, outputs, title=title, description=description, article=article, examples=[['starrynight.jpeg',"Image Captioning","None","Nucleus sampling"]]).launch(enable_queue=True)
\ No newline at end of file
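Before the next file, here is a condensed, self-contained sketch of the idea behind `calculate_support_resistance` above: rolling-window extrema ("waves") are clustered with `AgglomerativeClustering`, and one representative price per cluster becomes a candidate level. Synthetic prices replace the yfinance download and the original's shift/deduplication details are simplified, so the numbers are illustrative only.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(42)
close = 100 + np.cumsum(rng.normal(0, 1, 300))             # synthetic random-walk closes
df = pd.DataFrame({"High": close + rng.uniform(0, 1, 300),
                   "Low":  close - rng.uniform(0, 1, 300)})

rolling_wave_length, num_clusters = 20, 4

# Rolling extrema mark candidate turning points ("waves").
max_waves = df.High.rolling(rolling_wave_length).max().rename("waves")
min_waves = df.Low.rolling(rolling_wave_length).min().rename("waves")
waves = pd.concat([max_waves.drop_duplicates(), min_waves.drop_duplicates()]).dropna()

# Cluster the wave prices and keep one representative level per cluster.
labels = AgglomerativeClustering(n_clusters=num_clusters, linkage="ward").fit_predict(
    waves.values.reshape(-1, 1))
levels = pd.Series(waves.values).groupby(labels).max()
print(sorted(levels.round(2).tolist()))                    # candidate support/resistance levels
```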
diff --git a/spaces/Sandiago21/text-to-speech-greek/app.py b/spaces/Sandiago21/text-to-speech-greek/app.py
deleted file mode 100644
index 8f7a7c8a9b837e1c622d6eafce9897af5ecca247..0000000000000000000000000000000000000000
--- a/spaces/Sandiago21/text-to-speech-greek/app.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import gradio as gr
-import torch
-from datasets import load_dataset
-from transformers import pipeline, SpeechT5Processor, SpeechT5HifiGan, SpeechT5ForTextToSpeech
-
-model_id = "Sandiago21/speecht5_finetuned_google_fleurs_greek" # update with your model id
-# pipe = pipeline("automatic-speech-recognition", model=model_id)
-model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
-vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
-embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
-speaker_embeddings = torch.tensor(embeddings_dataset[7440]["xvector"]).unsqueeze(0)
-
-processor = SpeechT5Processor.from_pretrained(model_id)
-
-replacements = [
- ("ου", "u"),
- ("αυ", "af"),
- ("ευ", "ef"),
- ("ει", "i"),
- ("οι", "i"),
- ("αι", "e"),
- ("ού", "u"),
- ("εί", "i"),
- ("οί", "i"),
- ("αί", "e"),
- ("Ά", "A"),
- ("Έ", "E"),
- ("Ή", "H"),
- ("Ί", "I"),
- ("Ό", "O"),
- ("Ύ", "Y"),
- ("Ώ", "O"),
- ("ΐ", "i"),
- ("Α", "A"),
- ("Β", "B"),
- ("Γ", "G"),
- ("Δ", "L"),
- ("Ε", "Ε"),
- ("Ζ", "Z"),
- ("Η", "I"),
- ("Θ", "Th"),
- ("Ι", "I"),
- ("Κ", "K"),
- ("Λ", "L"),
- ("Μ", "M"),
- ("Ν", "N"),
- ("Ξ", "Ks"),
- ("Ο", "O"),
- ("Π", "P"),
- ("Ρ", "R"),
- ("Σ", "S"),
- ("Τ", "T"),
- ("Υ", "Y"),
- ("Φ", "F"),
- ("Χ", "X"),
- ("Ω", "O"),
- ("ά", "a"),
- ("έ", "e"),
- ("ή", "i"),
- ("ί", "i"),
- ("α", "a"),
- ("β", "v"),
- ("γ", "g"),
- ("δ", "d"),
- ("ε", "e"),
- ("ζ", "z"),
- ("η", "i"),
- ("θ", "th"),
- ("ι", "i"),
- ("κ", "k"),
- ("λ", "l"),
- ("μ", "m"),
- ("ν", "n"),
- ("ξ", "ks"),
- ("ο", "o"),
- ("π", "p"),
- ("ρ", "r"),
- ("ς", "s"),
- ("σ", "s"),
- ("τ", "t"),
- ("υ", "i"),
- ("φ", "f"),
- ("χ", "h"),
- ("ψ", "ps"),
- ("ω", "o"),
- ("ϊ", "i"),
- ("ϋ", "i"),
- ("ό", "o"),
- ("ύ", "i"),
- ("ώ", "o"),
- ("í", "i"),
- ("õ", "o"),
- ("Ε", "E"),
- ("Ψ", "Ps"),
-]
-
-
-title = "Text-to-Speech"
-description = """
-Demo for text-to-speech synthesis in Greek. The demo uses the [Sandiago21/speecht5_finetuned_google_fleurs_greek](https://huggingface.co/Sandiago21/speecht5_finetuned_google_fleurs_greek) checkpoint, which is based on Microsoft's
-[SpeechT5 TTS](https://huggingface.co/microsoft/speecht5_tts) model and is fine-tuned on a Greek audio dataset.
-
-"""
-
-
-def cleanup_text(text):
- for src, dst in replacements:
- text = text.replace(src, dst)
- return text
-
-def synthesize_speech(text):
- text = cleanup_text(text)
- inputs = processor(text=text, return_tensors="pt")
-
- speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
-
- return gr.Audio.update(value=(16000, speech.cpu().numpy()))
-
-syntesize_speech_gradio = gr.Interface(
- synthesize_speech,
- inputs = gr.Textbox(label="Text", placeholder="Type something here..."),
- outputs=gr.Audio(),
- examples=["Έλαβαν χώρα μεγάλες διαδηλώσεις στην Πολωνία όταν εκείνη η χώρα υπέγραψε την acta που οδήγησε την κυβέρνηση της πολωνίας να αποφασίσει τη μη επικύρωση της συμφωνίας προς το παρόν"],
- title=title,
- description=description,
-).launch()
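The `replacements` table above is effectively a Greek-to-Latin transliteration pass applied before the text reaches the SpeechT5 processor. A tiny, hedged check of `cleanup_text`, assuming it runs in the same module (the expected output is approximate, since the mapping is lossy):

```python
print(cleanup_text("Καλημέρα Ελλάδα"))  # -> roughly "Kalimera Ellada"
```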
diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/core/__init__.py b/spaces/SankarSrin/image-matting-app/ppmatting/core/__init__.py
deleted file mode 100644
index 78060ba48aac1fd7d8cb32eccc7ccddadd74017f..0000000000000000000000000000000000000000
--- a/spaces/SankarSrin/image-matting-app/ppmatting/core/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .val import evaluate
-from .val_ml import evaluate_ml
-from .train import train
-from .predict import predict
\ No newline at end of file
diff --git a/spaces/Sefray/PylenaLineDetector_ICDAR2023/README.md b/spaces/Sefray/PylenaLineDetector_ICDAR2023/README.md
deleted file mode 100644
index 7b787b15522f4cb6060be979f9ee792c278f4b06..0000000000000000000000000000000000000000
--- a/spaces/Sefray/PylenaLineDetector_ICDAR2023/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: PylenaLineDetector ICDAR2023
-emoji: ⚡
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ServerX/PorcoDiaz/lib/uvr5_pack/lib_v5/nets_123812KB.py b/spaces/ServerX/PorcoDiaz/lib/uvr5_pack/lib_v5/nets_123812KB.py
deleted file mode 100644
index becbfae85683a13bbb19d3ea6c840da24e61e01e..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/lib/uvr5_pack/lib_v5/nets_123812KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 32)
- self.stg1_high_band_net = BaseASPPNet(2, 32)
-
- self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(16, 32)
-
- self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(32, 64)
-
- self.out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
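A hedged sanity check for the network above, run in the same module so the `layers_123821KB` import resolves. Only construction and a parameter count are shown; the forward pass expects a 2-channel (stereo) magnitude spectrogram shaped (batch, 2, n_fft // 2 + 1, frames), and `predict` additionally trims `offset` frames from both ends, so frames must exceed 2 * offset.

```python
import torch

net = CascadedASPPNet(n_fft=2048)
n_params = sum(p.numel() for p in net.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # '123812KB' in the filename presumably refers to the checkpoint size
# Expected input for net.predict: torch.rand(1, 2, 2048 // 2 + 1, frames) with frames > 2 * net.offset
```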
diff --git a/spaces/Shashashasha/so-vits-fork-yoshi/README.md b/spaces/Shashashasha/so-vits-fork-yoshi/README.md
deleted file mode 100644
index edfbc1b8a5b7e8293e6fda7ac9c83f5b0d7968fd..0000000000000000000000000000000000000000
--- a/spaces/Shashashasha/so-vits-fork-yoshi/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Voice Cloning
-emoji: 😻
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: Shashashasha/so-vits-fork-vika
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ShreyashNadage/InvestmentCopilot/Stock_Static_Data.py b/spaces/ShreyashNadage/InvestmentCopilot/Stock_Static_Data.py
deleted file mode 100644
index bca1805ae8dbb04fcca1a18f9e024097dfe307b6..0000000000000000000000000000000000000000
--- a/spaces/ShreyashNadage/InvestmentCopilot/Stock_Static_Data.py
+++ /dev/null
@@ -1,258 +0,0 @@
-nse_stock_sectors = {
- 'RELIANCE': 'Energy',
- 'HDFCBANK': 'Financial Services',
- 'INFY': 'IT',
- 'ICICIBANK': 'Financial Services',
- 'TCS': 'IT',
- 'HINDUNILVR': 'Consumer Goods',
- 'KOTAKBANK': 'Financial Services',
- 'HDFC': 'Financial Services',
- 'BAJFINANCE': 'Financial Services',
- 'AXISBANK': 'Financial Services',
- 'ASIANPAINT': 'Consumer Goods',
- 'ITC': 'Consumer Goods',
- 'LT': 'Infrastructure',
- 'HCLTECH': 'IT',
- 'SBI': 'Financial Services',
- 'BAJAJFINSV': 'Financial Services',
- 'TECHM': 'IT',
- 'NTPC': 'Energy',
- 'BANKBARODA': 'Financial Services',
- 'IOC': 'Energy',
- 'NESTLEIND': 'Consumer Goods',
- 'ONGC': 'Energy',
- 'BPCL': 'Energy',
- 'POWERGRID': 'Infrastructure',
- 'SBIN': 'Financial Services',
- 'BHARTIARTL': 'Telecom',
- 'MARUTI': 'Automobile',
- 'ULTRACEMCO': 'Cement',
- 'BAJAJ-AUTO': 'Automobile',
- 'WIPRO': 'IT',
- 'JSWSTEEL': 'Metals',
- 'TATASTEEL': 'Metals',
- 'HINDZINC': 'Metals',
- 'SUNPHARMA': 'Pharmaceuticals',
- 'TATAMOTORS': 'Automobile',
- 'DMART': 'Retail',
- 'BRITANNIA': 'Consumer Goods',
- 'TITAN': 'Consumer Goods',
- 'GAIL': 'Energy',
- 'DRREDDY': 'Pharmaceuticals',
- 'HINDPETRO': 'Energy',
- 'CIPLA': 'Pharmaceuticals',
- 'BHARTIINFR': 'Telecom',
- 'EICHERMOT': 'Automobile',
- 'M&M': 'Automobile',
- 'ULTRACEMC': 'Cement',
- 'INDUSINDBK': 'Financial Services',
- 'HEROMOTOCO': 'Automobile',
- 'SHREECEM': 'Cement',
- 'GRASIM': 'Cement',
- 'ONGCORP': 'Energy',
- 'ADANIPORTS': 'Infrastructure',
- 'BAJAJHLDNG': 'Financial Services',
- 'DIVISLAB': 'Pharmaceuticals',
- 'TATAPOWER': 'Energy',
- 'ZEEL': 'Media',
- 'NTPC_': 'Energy',
- 'ICICIPRULI': 'Financial Services',
- 'MRF': 'Automobile',
- 'MOTHERSUMI': 'Automobile',
- 'COALINDIA': 'Mining',
- 'SBILIFE': 'Financial Services',
- 'PIDILITIND': 'Consumer Goods',
- 'TATACONSUM': 'Consumer Goods',
- 'SBICARD': 'Financial Services',
- 'DABUR': 'Consumer Goods',
- 'PNB': 'Financial Services',
- 'LUPIN': 'Pharmaceuticals',
- 'UBL': 'Consumer Goods',
- 'BHEL': 'Infrastructure',
- 'ACC': 'Cement',
- 'HINDALCO': 'Metals',
- 'TORNTPHARM': 'Pharmaceuticals',
- 'BOSCHLTD': 'Automobile',
- 'DLF': 'Real Estate',
- 'AMBUJACEM': 'Cement',
- 'SAIL': 'Metals',
- 'MUTHOOTFIN': 'Financial Services',
- 'MCDOWELL-N': 'Consumer Goods',
- 'PFC': 'Financial Services',
- 'BEL': 'Defense',
- 'BANDHANBNK': 'Financial Services',
- 'PEL': 'Conglomerate',
- 'TORNTPOWER': 'Energy',
- 'HAVELLS': 'Consumer Goods',
- 'FEDERALBNK': 'Financial Services',
- 'BERGEPAINT': 'Consumer Goods',
- 'RBLBANK': 'Financial Services',
- 'INDIGO': 'Airlines',
- 'RAMCOCEM': 'Cement',
- 'EXIDEIND': 'Automobile',
- 'CHOLAFIN': 'Financial Services',
- 'ICICIGI': 'Financial Services',
- 'BANKINDIA': 'Financial Services',
- 'ADANIGREEN': 'Energy',
- 'HDFCLIFE': 'Financial Services',
- 'APOLLOHOSP': 'Healthcare',
- 'AUROPHARMA': 'Pharmaceuticals',
- 'IGL': 'Energy',
- 'TVSMOTOR': 'Automobile',
- 'GODREJCP': 'Consumer Goods',
- 'MGL': 'Energy',
- 'BATAINDIA': 'Retail',
- 'M&MFIN': 'Financial Services',
- 'NIACL': 'Financial Services',
- 'ADANIENT': 'Conglomerate',
- 'JINDALSTEL': 'Metals',
- 'BANKNIFTY': 'Index',
- 'COLPAL': 'Consumer Goods',
- 'UBL_': 'Consumer Goods',
- 'INDIANB': 'Financial Services',
- 'BANKBAROD': 'Financial Services',
- 'ASHOKLEY': 'Automobile',
- 'SRTRANSFIN': 'Financial Services',
- 'ACC_': 'Cement',
- 'SIEMENS': 'Infrastructure',
- 'HDFCAMC': 'Financial Services',
- 'AMARAJABAT': 'Automobile',
- 'BSE': 'Index',
- 'MGL_': 'Energy',
- 'BAJAJHLDNG_': 'Financial Services',
- 'AMBUJACEM_': 'Cement',
- 'BPCL_': 'Energy',
- 'IDFCFIRSTB': 'Financial Services',
- 'IDEA': 'Telecom',
- 'PFIZER': 'Pharmaceuticals',
- 'BANDHANBNK_': 'Financial Services',
- 'HCLTECH_': 'IT',
- 'MINDTREE': 'IT',
- 'HDFCBANK_': 'Financial Services',
- 'ASHOKLEY_': 'Automobile',
- 'PNBHOUSING': 'Real Estate',
- 'GRASIM_': 'Cement',
- 'M&MFIN_': 'Financial Services',
- 'PVR': 'Entertainment',
- 'RPOWER': 'Energy',
- 'TVTODAY': 'Media',
- 'APLLTD': 'Pharmaceuticals',
- 'IDBI': 'Financial Services',
- 'IRCTC': 'Travel',
- 'JINDALSTEL_': 'Metals',
- 'L&TFH': 'Financial Services',
- 'NIITTECH': 'IT',
- 'INDIGO_': 'Airlines',
- 'BANKINDIA_': 'Financial Services',
- 'MINDACORP': 'Retail',
- 'FEDERALBNK_': 'Financial Services',
- 'GLENMARK': 'Pharmaceuticals',
- 'TV18BRDCST': 'Media',
- 'UJJIVAN': 'Financial Services',
- 'CENTRALBK': 'Financial Services',
- 'NCC': 'Infrastructure',
- 'HDFCLIFE_': 'Financial Services',
- 'SYNGENE': 'Pharmaceuticals',
- 'BALKRISIND': 'Automobile',
- 'CHOLAFIN_': 'Financial Services',
- 'COFORGE': 'IT',
- 'CRISIL': 'Financial Services',
- 'DEEPAKNTR': 'Consumer Goods',
- 'JUBLFOOD': 'Retail',
- 'PHOENIXLTD': 'Real Estate',
- 'EQUITAS': 'Financial Services',
- 'LTI': 'IT',
- 'RBLBANK_': 'Financial Services',
- 'CANBK': 'Financial Services',
- 'MOTILALOFS': 'Financial Services',
- 'PNCINFRA': 'Infrastructure',
- 'SUNTV': 'Media',
- 'AMBER': 'Consumer Goods',
- 'BLUESTARCO': 'Consumer Goods',
- 'CUB': 'Financial Services',
- 'GREENPLY': 'Consumer Goods',
- 'KOLTEPATIL': 'Real Estate',
- 'L&TFH_': 'Financial Services',
- 'METROPOLIS': 'Healthcare',
- 'RCOM': 'Telecom',
- 'SYMPHONY': 'Consumer Goods',
- 'VBL': 'Consumer Goods',
- 'VENKEYS': 'Consumer Goods',
- 'COROMANDEL': 'Fertilizers',
- 'GODREJAGRO': 'Consumer Goods',
- 'HAL': 'Defense',
- 'JINDALSAW': 'Metals',
- 'KEC': 'Infrastructure',
- 'MINDACORP_': 'Retail',
- 'NMDC': 'Metals',
- 'PGHL': 'Consumer Goods',
- 'PRAJIND': 'Infrastructure',
- 'RITES': 'Infrastructure',
- 'BANKBAROD_': 'Financial Services',
- 'BATAINDIA_': 'Retail',
- 'BEML': 'Infrastructure',
- 'CAREERP': 'IT',
- 'CENTURYTEX': 'Cement',
- 'DCBBANK': 'Financial Services',
- 'DHFL': 'Financial Services',
- 'FSL': 'Financial Services',
- 'FUTURECON': 'Retail',
- 'GEPIL': 'Pharmaceuticals',
- 'GESHIP': 'Shipping',
- 'GUJALKALI': 'Fertilizers',
- 'HATHWAY': 'Media',
- 'HERCULES': 'Pharmaceuticals',
- 'HEXAWARE': 'IT',
- 'IDFC': 'Financial Services',
- 'IDFCFIRSTB_': 'Financial Services',
- 'IFCI': 'Financial Services',
- 'IIFL': 'Financial Services',
- 'INDIANB_': 'Financial Services',
- 'IRCON': 'Infrastructure',
- 'JETAIRWAYS': 'Airlines',
- 'LUPIN': 'Pharmaceuticals',
- 'M&M': 'Automobile',
- 'MAHLOG': 'Automobile',
- 'MANAPPURAM': 'Financial Services',
- 'MAXFIN': 'Financial Services',
- 'MCX': 'Financial Services',
- 'MOTHERSUMI': 'Automobile',
- 'NATCOPHARM': 'Pharmaceuticals',
- 'NATIONALUM': 'Metals',
- 'NH': 'Hospitality',
- 'NLCINDIA': 'Power',
- 'NTPC': 'Power',
- 'PCJEWELLER': 'Retail',
- 'PNBHOUSING': 'Financial Services',
- 'PVR': 'Media',
- 'RAYMOND': 'Textiles',
- 'RELCAPITAL': 'Financial Services',
- 'RELIANCE': 'Oil & Gas',
- 'RIIL': 'Infrastructure',
- 'SADBHAV': 'Infrastructure',
- 'SAIL': 'Metals',
- 'SCHAEFFLER': 'Automobile',
- 'SFL': 'Retail',
- 'SOUTHBANK': 'Financial Services',
- 'SPARC': 'Pharmaceuticals',
- 'SUZLON': 'Power',
- 'SYNDIBANK': 'Financial Services',
- 'TATAELXSI': 'IT',
- 'TATAMOTORS': 'Automobile',
- 'TECHM': 'IT',
- 'THERMAX': 'Engineering',
- 'TITAN': 'Retail',
- 'TRIDENT': 'Textiles',
- 'UFO': 'Media',
- 'UPL': 'Chemicals',
- 'VAKRANGEE': 'IT',
- 'WOCKPHARMA': 'Pharmaceuticals',
- 'YESBANK': 'Financial Services'
-}
-
-def get_sector_list():
- return list(set(nse_stock_sectors.values()))
-
-def get_stock_list_by_sector(sector='IT'):
- return [key.strip()+'.NS' for key, val in nse_stock_sectors.items() if val == sector]
-
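Usage sketch for the two helpers above: build a sector-specific ticker universe first, then (optionally) feed it to yfinance elsewhere in the app. The ordering returned by `get_sector_list` is not guaranteed because it passes through a set, and the yfinance call is shown only as hypothetical context.

```python
print(get_sector_list()[:5])          # e.g. ['IT', 'Energy', 'Cement', ...] (order varies)
it_tickers = get_stock_list_by_sector('IT')
print(it_tickers[:4])                 # e.g. ['INFY.NS', 'TCS.NS', 'HCLTECH.NS', 'TECHM.NS']

# Hypothetical follow-up, for context only:
# import yfinance as yf
# prices = yf.download(it_tickers, period="6mo")["Close"]
```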
diff --git a/spaces/SiddharthK/dslim-bert-large-NER/README.md b/spaces/SiddharthK/dslim-bert-large-NER/README.md
deleted file mode 100644
index e10a300e3adc9f94825461de10f13897d051e179..0000000000000000000000000000000000000000
--- a/spaces/SiddharthK/dslim-bert-large-NER/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NER using BERT/large
-emoji: 🚀
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SidneyChen/mbti_prediction/app.py b/spaces/SidneyChen/mbti_prediction/app.py
deleted file mode 100644
index 3667c08a6683bcab2bace5d9ce85e78c2e972773..0000000000000000000000000000000000000000
--- a/spaces/SidneyChen/mbti_prediction/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-def update(name):
- return f"Welcome to Gradio, {name}!"
-import gradio as gr
-
-gui = gr.Interface(fn=update, #callable function
- inputs=gr.inputs.Textbox(label = '讓我來分析你最近的人格><', placeholder = '個性描述、自己的故事或是曾經發過的文章'), #input format
- outputs=gr.outputs.Textbox(label = '只有我最了解你,你是一位...'),
- title = "AI-MBTI knows U.",
-                     description = 'Come on. Let us predict your MBTI type! We will tell you what kind of movie you should watch!',
- theme = 'grass',
-                    # examples = [['User', 'description, MBTI Type & Movie Fits in U']],
-
-
- ) #output format
-
-
-#display the interface
-gui.launch(share=True)
\ No newline at end of file
diff --git a/spaces/Silentlin/DiffSinger/utils/text_norm.py b/spaces/Silentlin/DiffSinger/utils/text_norm.py
deleted file mode 100644
index d0973cebc91e0525aeb6657e70012a1d37b5e6ff..0000000000000000000000000000000000000000
--- a/spaces/Silentlin/DiffSinger/utils/text_norm.py
+++ /dev/null
@@ -1,790 +0,0 @@
-# coding=utf-8
-# Authors:
-# 2019.5 Zhiyang Zhou (https://github.com/Joee1995/chn_text_norm.git)
-# 2019.9 Jiayu DU
-#
-# requirements:
-# - python 3.X
-# notes: python 2.X WILL fail or produce misleading results
-
-import sys, os, argparse, codecs, string, re
-
-# ================================================================================ #
-# basic constant
-# ================================================================================ #
-CHINESE_DIGIS = u'零一二三四五六七八九'
-BIG_CHINESE_DIGIS_SIMPLIFIED = u'零壹贰叁肆伍陆柒捌玖'
-BIG_CHINESE_DIGIS_TRADITIONAL = u'零壹貳參肆伍陸柒捌玖'
-SMALLER_BIG_CHINESE_UNITS_SIMPLIFIED = u'十百千万'
-SMALLER_BIG_CHINESE_UNITS_TRADITIONAL = u'拾佰仟萬'
-LARGER_CHINESE_NUMERING_UNITS_SIMPLIFIED = u'亿兆京垓秭穰沟涧正载'
-LARGER_CHINESE_NUMERING_UNITS_TRADITIONAL = u'億兆京垓秭穰溝澗正載'
-SMALLER_CHINESE_NUMERING_UNITS_SIMPLIFIED = u'十百千万'
-SMALLER_CHINESE_NUMERING_UNITS_TRADITIONAL = u'拾佰仟萬'
-
-ZERO_ALT = u'〇'
-ONE_ALT = u'幺'
-TWO_ALTS = [u'两', u'兩']
-
-POSITIVE = [u'正', u'正']
-NEGATIVE = [u'负', u'負']
-POINT = [u'点', u'點']
-# PLUS = [u'加', u'加']
-# SIL = [u'杠', u'槓']
-
-# Chinese numeral system types
-NUMBERING_TYPES = ['low', 'mid', 'high']
-
-CURRENCY_NAMES = '(人民币|美元|日元|英镑|欧元|马克|法郎|加拿大元|澳元|港币|先令|芬兰马克|爱尔兰镑|' \
- '里拉|荷兰盾|埃斯库多|比塞塔|印尼盾|林吉特|新西兰元|比索|卢布|新加坡元|韩元|泰铢)'
-CURRENCY_UNITS = '((亿|千万|百万|万|千|百)|(亿|千万|百万|万|千|百|)元|(亿|千万|百万|万|千|百|)块|角|毛|分)'
-COM_QUANTIFIERS = '(匹|张|座|回|场|尾|条|个|首|阙|阵|网|炮|顶|丘|棵|只|支|袭|辆|挑|担|颗|壳|窠|曲|墙|群|腔|' \
- '砣|座|客|贯|扎|捆|刀|令|打|手|罗|坡|山|岭|江|溪|钟|队|单|双|对|出|口|头|脚|板|跳|枝|件|贴|' \
- '针|线|管|名|位|身|堂|课|本|页|家|户|层|丝|毫|厘|分|钱|两|斤|担|铢|石|钧|锱|忽|(千|毫|微)克|' \
- '毫|厘|分|寸|尺|丈|里|寻|常|铺|程|(千|分|厘|毫|微)米|撮|勺|合|升|斗|石|盘|碗|碟|叠|桶|笼|盆|' \
- '盒|杯|钟|斛|锅|簋|篮|盘|桶|罐|瓶|壶|卮|盏|箩|箱|煲|啖|袋|钵|年|月|日|季|刻|时|周|天|秒|分|旬|' \
- '纪|岁|世|更|夜|春|夏|秋|冬|代|伏|辈|丸|泡|粒|颗|幢|堆|条|根|支|道|面|片|张|颗|块)'
-
-# punctuation information are based on Zhon project (https://github.com/tsroten/zhon.git)
-CHINESE_PUNC_STOP = '!?。。'
-CHINESE_PUNC_NON_STOP = '"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏'
-CHINESE_PUNC_LIST = CHINESE_PUNC_STOP + CHINESE_PUNC_NON_STOP
-
-
-# ================================================================================ #
-# basic class
-# ================================================================================ #
-class ChineseChar(object):
- """
-    A Chinese character.
-    Each character has a simplified and a traditional form,
-    e.g. simplified = '负', traditional = '負'.
-    It can be converted to either form.
- """
-
- def __init__(self, simplified, traditional):
- self.simplified = simplified
- self.traditional = traditional
- # self.__repr__ = self.__str__
-
- def __str__(self):
- return self.simplified or self.traditional or None
-
- def __repr__(self):
- return self.__str__()
-
-
-class ChineseNumberUnit(ChineseChar):
- """
-    A Chinese number/unit character.
-    Besides the simplified and traditional forms, each character also has an uppercase ("big") form,
-    e.g. '陆' and '陸'.
- """
-
- def __init__(self, power, simplified, traditional, big_s, big_t):
- super(ChineseNumberUnit, self).__init__(simplified, traditional)
- self.power = power
- self.big_s = big_s
- self.big_t = big_t
-
- def __str__(self):
- return '10^{}'.format(self.power)
-
- @classmethod
- def create(cls, index, value, numbering_type=NUMBERING_TYPES[1], small_unit=False):
-
- if small_unit:
- return ChineseNumberUnit(power=index + 1,
- simplified=value[0], traditional=value[1], big_s=value[1], big_t=value[1])
- elif numbering_type == NUMBERING_TYPES[0]:
- return ChineseNumberUnit(power=index + 8,
- simplified=value[0], traditional=value[1], big_s=value[0], big_t=value[1])
- elif numbering_type == NUMBERING_TYPES[1]:
- return ChineseNumberUnit(power=(index + 2) * 4,
- simplified=value[0], traditional=value[1], big_s=value[0], big_t=value[1])
- elif numbering_type == NUMBERING_TYPES[2]:
- return ChineseNumberUnit(power=pow(2, index + 3),
- simplified=value[0], traditional=value[1], big_s=value[0], big_t=value[1])
- else:
- raise ValueError(
- 'Counting type should be in {0} ({1} provided).'.format(NUMBERING_TYPES, numbering_type))
-
-
-class ChineseNumberDigit(ChineseChar):
- """
-    A Chinese digit character.
- """
-
- def __init__(self, value, simplified, traditional, big_s, big_t, alt_s=None, alt_t=None):
- super(ChineseNumberDigit, self).__init__(simplified, traditional)
- self.value = value
- self.big_s = big_s
- self.big_t = big_t
- self.alt_s = alt_s
- self.alt_t = alt_t
-
- def __str__(self):
- return str(self.value)
-
- @classmethod
- def create(cls, i, v):
- return ChineseNumberDigit(i, v[0], v[1], v[2], v[3])
-
-
-class ChineseMath(ChineseChar):
- """
-    A Chinese math symbol character.
- """
-
- def __init__(self, simplified, traditional, symbol, expression=None):
- super(ChineseMath, self).__init__(simplified, traditional)
- self.symbol = symbol
- self.expression = expression
- self.big_s = simplified
- self.big_t = traditional
-
-
-CC, CNU, CND, CM = ChineseChar, ChineseNumberUnit, ChineseNumberDigit, ChineseMath
-
-
-class NumberSystem(object):
- """
-    The Chinese numbering system.
- """
- pass
-
-
-class MathSymbol(object):
- """
-    Math symbols (traditional/simplified) used by the Chinese numbering system, e.g.
- positive = ['正', '正']
- negative = ['负', '負']
- point = ['点', '點']
- """
-
- def __init__(self, positive, negative, point):
- self.positive = positive
- self.negative = negative
- self.point = point
-
- def __iter__(self):
- for v in self.__dict__.values():
- yield v
-
-
-# class OtherSymbol(object):
-# """
-#     Other symbols
-# """
-#
-# def __init__(self, sil):
-# self.sil = sil
-#
-# def __iter__(self):
-# for v in self.__dict__.values():
-# yield v
-
-
-# ================================================================================ #
-# basic utils
-# ================================================================================ #
-def create_system(numbering_type=NUMBERING_TYPES[1]):
- """
-    Create the numbering system for the given type (default: 'mid').
-    NUMBERING_TYPES = ['low', 'mid', 'high'] are the Chinese numbering system types:
-        low:  '兆' = '亿' * '十' = $10^{9}$,  '京' = '兆' * '十', etc.
-        mid:  '兆' = '亿' * '万' = $10^{12}$, '京' = '兆' * '万', etc.
-        high: '兆' = '亿' * '亿' = $10^{16}$, '京' = '兆' * '兆', etc.
-    Returns the corresponding NumberSystem.
- """
-
- # chinese number units of '亿' and larger
- all_larger_units = zip(
- LARGER_CHINESE_NUMERING_UNITS_SIMPLIFIED, LARGER_CHINESE_NUMERING_UNITS_TRADITIONAL)
- larger_units = [CNU.create(i, v, numbering_type, False)
- for i, v in enumerate(all_larger_units)]
- # chinese number units of '十, 百, 千, 万'
- all_smaller_units = zip(
- SMALLER_CHINESE_NUMERING_UNITS_SIMPLIFIED, SMALLER_CHINESE_NUMERING_UNITS_TRADITIONAL)
- smaller_units = [CNU.create(i, v, small_unit=True)
- for i, v in enumerate(all_smaller_units)]
- # digis
- chinese_digis = zip(CHINESE_DIGIS, CHINESE_DIGIS,
- BIG_CHINESE_DIGIS_SIMPLIFIED, BIG_CHINESE_DIGIS_TRADITIONAL)
- digits = [CND.create(i, v) for i, v in enumerate(chinese_digis)]
- digits[0].alt_s, digits[0].alt_t = ZERO_ALT, ZERO_ALT
- digits[1].alt_s, digits[1].alt_t = ONE_ALT, ONE_ALT
- digits[2].alt_s, digits[2].alt_t = TWO_ALTS[0], TWO_ALTS[1]
-
- # symbols
- positive_cn = CM(POSITIVE[0], POSITIVE[1], '+', lambda x: x)
- negative_cn = CM(NEGATIVE[0], NEGATIVE[1], '-', lambda x: -x)
- point_cn = CM(POINT[0], POINT[1], '.', lambda x,
- y: float(str(x) + '.' + str(y)))
- # sil_cn = CM(SIL[0], SIL[1], '-', lambda x, y: float(str(x) + '-' + str(y)))
- system = NumberSystem()
- system.units = smaller_units + larger_units
- system.digits = digits
- system.math = MathSymbol(positive_cn, negative_cn, point_cn)
- # system.symbols = OtherSymbol(sil_cn)
- return system
-
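To make the 'low'/'mid'/'high' distinction above concrete, here is a small sketch. It assumes the deleted module can be imported as `text_norm`; that import path is an assumption about packaging, not something stated in this diff.

```python
# Hedged sketch: inspect the power assigned to '兆' (the second "large" unit)
# under each numbering type. The import path `text_norm` is hypothetical.
from text_norm import create_system

for ntype in ('low', 'mid', 'high'):
    system = create_system(ntype)
    zhao = system.units[5]      # the 4 small units (十百千万) come first, so index 5 is '兆'
    print(ntype, zhao.power)    # low -> 9, mid -> 12, high -> 16
```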
-
-def chn2num(chinese_string, numbering_type=NUMBERING_TYPES[1]):
- def get_symbol(char, system):
- for u in system.units:
- if char in [u.traditional, u.simplified, u.big_s, u.big_t]:
- return u
- for d in system.digits:
- if char in [d.traditional, d.simplified, d.big_s, d.big_t, d.alt_s, d.alt_t]:
- return d
- for m in system.math:
- if char in [m.traditional, m.simplified]:
- return m
-
- def string2symbols(chinese_string, system):
- int_string, dec_string = chinese_string, ''
- for p in [system.math.point.simplified, system.math.point.traditional]:
- if p in chinese_string:
- int_string, dec_string = chinese_string.split(p)
- break
- return [get_symbol(c, system) for c in int_string], \
- [get_symbol(c, system) for c in dec_string]
-
- def correct_symbols(integer_symbols, system):
- """
-        e.g. 一百八 is completed to 一百八十 (the trailing digit gets the implied unit),
-        and 一亿一千三百万 is regrouped as 一亿 一千万 三百万.
- """
-
- if integer_symbols and isinstance(integer_symbols[0], CNU):
- if integer_symbols[0].power == 1:
- integer_symbols = [system.digits[1]] + integer_symbols
-
- if len(integer_symbols) > 1:
- if isinstance(integer_symbols[-1], CND) and isinstance(integer_symbols[-2], CNU):
- integer_symbols.append(
- CNU(integer_symbols[-2].power - 1, None, None, None, None))
-
- result = []
- unit_count = 0
- for s in integer_symbols:
- if isinstance(s, CND):
- result.append(s)
- unit_count = 0
- elif isinstance(s, CNU):
- current_unit = CNU(s.power, None, None, None, None)
- unit_count += 1
-
- if unit_count == 1:
- result.append(current_unit)
- elif unit_count > 1:
- for i in range(len(result)):
- if isinstance(result[-i - 1], CNU) and result[-i - 1].power < current_unit.power:
- result[-i - 1] = CNU(result[-i - 1].power +
- current_unit.power, None, None, None, None)
- return result
-
- def compute_value(integer_symbols):
- """
- Compute the value.
-        When the current unit is larger than the previous one, all previously accumulated values are multiplied by the current unit,
- e.g. '两千万' = 2000 * 10000 not 2000 + 10000
- """
- value = [0]
- last_power = 0
- for s in integer_symbols:
- if isinstance(s, CND):
- value[-1] = s.value
- elif isinstance(s, CNU):
- value[-1] *= pow(10, s.power)
- if s.power > last_power:
- value[:-1] = list(map(lambda v: v *
- pow(10, s.power), value[:-1]))
- last_power = s.power
- value.append(0)
- return sum(value)
-
- system = create_system(numbering_type)
- int_part, dec_part = string2symbols(chinese_string, system)
- int_part = correct_symbols(int_part, system)
- int_str = str(compute_value(int_part))
- dec_str = ''.join([str(d.value) for d in dec_part])
- if dec_part:
- return '{0}.{1}'.format(int_str, dec_str)
- else:
- return int_str
-
-
-def num2chn(number_string, numbering_type=NUMBERING_TYPES[1], big=False,
- traditional=False, alt_zero=False, alt_one=False, alt_two=True,
- use_zeros=True, use_units=True):
- def get_value(value_string, use_zeros=True):
-
- striped_string = value_string.lstrip('0')
-
- # record nothing if all zeros
- if not striped_string:
- return []
-
-        # record a single digit
- elif len(striped_string) == 1:
- if use_zeros and len(value_string) != len(striped_string):
- return [system.digits[0], system.digits[int(striped_string)]]
- else:
- return [system.digits[int(striped_string)]]
-
- # recursively record multiple digits
- else:
- result_unit = next(u for u in reversed(
- system.units) if u.power < len(striped_string))
- result_string = value_string[:-result_unit.power]
- return get_value(result_string) + [result_unit] + get_value(striped_string[-result_unit.power:])
-
- system = create_system(numbering_type)
-
- int_dec = number_string.split('.')
- if len(int_dec) == 1:
- int_string = int_dec[0]
- dec_string = ""
- elif len(int_dec) == 2:
- int_string = int_dec[0]
- dec_string = int_dec[1]
- else:
- raise ValueError(
- "invalid input num string with more than one dot: {}".format(number_string))
-
- if use_units and len(int_string) > 1:
- result_symbols = get_value(int_string)
- else:
- result_symbols = [system.digits[int(c)] for c in int_string]
- dec_symbols = [system.digits[int(c)] for c in dec_string]
- if dec_string:
- result_symbols += [system.math.point] + dec_symbols
-
- if alt_two:
- liang = CND(2, system.digits[2].alt_s, system.digits[2].alt_t,
- system.digits[2].big_s, system.digits[2].big_t)
- for i, v in enumerate(result_symbols):
- if isinstance(v, CND) and v.value == 2:
- next_symbol = result_symbols[i +
- 1] if i < len(result_symbols) - 1 else None
- previous_symbol = result_symbols[i - 1] if i > 0 else None
- if isinstance(next_symbol, CNU) and isinstance(previous_symbol, (CNU, type(None))):
- if next_symbol.power != 1 and ((previous_symbol is None) or (previous_symbol.power != 1)):
- result_symbols[i] = liang
-
- # if big is True, '两' will not be used and `alt_two` has no impact on output
- if big:
- attr_name = 'big_'
- if traditional:
- attr_name += 't'
- else:
- attr_name += 's'
- else:
- if traditional:
- attr_name = 'traditional'
- else:
- attr_name = 'simplified'
-
- result = ''.join([getattr(s, attr_name) for s in result_symbols])
-
- # if not use_zeros:
- # result = result.strip(getattr(system.digits[0], attr_name))
-
- if alt_zero:
- result = result.replace(
- getattr(system.digits[0], attr_name), system.digits[0].alt_s)
-
- if alt_one:
- result = result.replace(
- getattr(system.digits[1], attr_name), system.digits[1].alt_s)
-
- for i, p in enumerate(POINT):
- if result.startswith(p):
- return CHINESE_DIGIS[0] + result
-
- # ^10, 11, .., 19
- if len(result) >= 2 and result[1] in [SMALLER_CHINESE_NUMERING_UNITS_SIMPLIFIED[0],
- SMALLER_CHINESE_NUMERING_UNITS_TRADITIONAL[0]] and \
- result[0] in [CHINESE_DIGIS[1], BIG_CHINESE_DIGIS_SIMPLIFIED[1], BIG_CHINESE_DIGIS_TRADITIONAL[1]]:
- result = result[1:]
-
- return result
-
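A minimal round-trip sketch of the two converters above. It again assumes the module is importable as `text_norm` (an assumption); the expected outputs follow from the defaults and docstrings in this file.

```python
# Hedged usage sketch; the import path is hypothetical.
from text_norm import chn2num, num2chn

print(chn2num('两千万'))      # '20000000'  (2000 * 10000, not 2000 + 10000)
print(chn2num('一百八'))      # '180'       (the trailing digit gets the implied unit)
print(num2chn('20000000'))    # '两千万'    (alt_two=True substitutes 两 for 二)
```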
-
-# ================================================================================ #
-# different types of rewriters
-# ================================================================================ #
-class Cardinal:
- """
-    CARDINAL class
- """
-
- def __init__(self, cardinal=None, chntext=None):
- self.cardinal = cardinal
- self.chntext = chntext
-
- def chntext2cardinal(self):
- return chn2num(self.chntext)
-
- def cardinal2chntext(self):
- return num2chn(self.cardinal)
-
-
-class Digit:
- """
-    DIGIT class
- """
-
- def __init__(self, digit=None, chntext=None):
- self.digit = digit
- self.chntext = chntext
-
- # def chntext2digit(self):
- # return chn2num(self.chntext)
-
- def digit2chntext(self):
- return num2chn(self.digit, alt_two=False, use_units=False)
-
-
-class TelePhone:
- """
-    TELEPHONE class
- """
-
- def __init__(self, telephone=None, raw_chntext=None, chntext=None):
- self.telephone = telephone
- self.raw_chntext = raw_chntext
- self.chntext = chntext
-
- # def chntext2telephone(self):
-    #     sil_parts = self.raw_chntext.split('<SIL>')
- # self.telephone = '-'.join([
- # str(chn2num(p)) for p in sil_parts
- # ])
- # return self.telephone
-
- def telephone2chntext(self, fixed=False):
-
- if fixed:
- sil_parts = self.telephone.split('-')
-            self.raw_chntext = '<SIL>'.join([
-                num2chn(part, alt_two=False, use_units=False) for part in sil_parts
-            ])
-            self.chntext = self.raw_chntext.replace('<SIL>', '')
-        else:
-            sp_parts = self.telephone.strip('+').split()
-            self.raw_chntext = '<SP>'.join([
-                num2chn(part, alt_two=False, use_units=False) for part in sp_parts
-            ])
-            self.chntext = self.raw_chntext.replace('<SP>', '')
- return self.chntext
-
-
-class Fraction:
- """
-    FRACTION class
- """
-
- def __init__(self, fraction=None, chntext=None):
- self.fraction = fraction
- self.chntext = chntext
-
- def chntext2fraction(self):
- denominator, numerator = self.chntext.split('分之')
- return chn2num(numerator) + '/' + chn2num(denominator)
-
- def fraction2chntext(self):
- numerator, denominator = self.fraction.split('/')
- return num2chn(denominator) + '分之' + num2chn(numerator)
-
-
-class Date:
- """
-    DATE class
- """
-
- def __init__(self, date=None, chntext=None):
- self.date = date
- self.chntext = chntext
-
- # def chntext2date(self):
- # chntext = self.chntext
- # try:
- # year, other = chntext.strip().split('年', maxsplit=1)
- # year = Digit(chntext=year).digit2chntext() + '年'
- # except ValueError:
- # other = chntext
- # year = ''
- # if other:
- # try:
- # month, day = other.strip().split('月', maxsplit=1)
- # month = Cardinal(chntext=month).chntext2cardinal() + '月'
- # except ValueError:
- # day = chntext
- # month = ''
- # if day:
- # day = Cardinal(chntext=day[:-1]).chntext2cardinal() + day[-1]
- # else:
- # month = ''
- # day = ''
- # date = year + month + day
- # self.date = date
- # return self.date
-
- def date2chntext(self):
- date = self.date
- try:
- year, other = date.strip().split('年', 1)
- year = Digit(digit=year).digit2chntext() + '年'
- except ValueError:
- other = date
- year = ''
- if other:
- try:
- month, day = other.strip().split('月', 1)
- month = Cardinal(cardinal=month).cardinal2chntext() + '月'
- except ValueError:
- day = date
- month = ''
- if day:
- day = Cardinal(cardinal=day[:-1]).cardinal2chntext() + day[-1]
- else:
- month = ''
- day = ''
- chntext = year + month + day
- self.chntext = chntext
- return self.chntext
-
-
-class Money:
- """
-    MONEY class
- """
-
- def __init__(self, money=None, chntext=None):
- self.money = money
- self.chntext = chntext
-
- # def chntext2money(self):
- # return self.money
-
- def money2chntext(self):
- money = self.money
- pattern = re.compile(r'(\d+(\.\d+)?)')
- matchers = pattern.findall(money)
- if matchers:
- for matcher in matchers:
- money = money.replace(matcher[0], Cardinal(cardinal=matcher[0]).cardinal2chntext())
- self.chntext = money
- return self.chntext
-
-
-class Percentage:
- """
-    PERCENTAGE class
- """
-
- def __init__(self, percentage=None, chntext=None):
- self.percentage = percentage
- self.chntext = chntext
-
- def chntext2percentage(self):
- return chn2num(self.chntext.strip().strip('百分之')) + '%'
-
- def percentage2chntext(self):
- return '百分之' + num2chn(self.percentage.strip().strip('%'))
-
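A short sketch of how the rewriter classes above are meant to be driven. The import path is an assumption; the outputs shown are what this code produces with its defaults.

```python
# Hedged sketch of the rewriters defined above; import path is hypothetical.
from text_norm import Fraction, Percentage, Date

print(Fraction(fraction='32477/76391').fraction2chntext())   # '七万六千三百九十一分之三万两千四百七十七'
print(Percentage(percentage='80.03%').percentage2chntext())  # '百分之八十点零三'
print(Date(date='1999年2月20日').date2chntext())              # '一九九九年二月二十日'
```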
-
-# ================================================================================ #
-# NSW Normalizer
-# ================================================================================ #
-class NSWNormalizer:
- def __init__(self, raw_text):
- self.raw_text = '^' + raw_text + '$'
- self.norm_text = ''
-
- def _particular(self):
- text = self.norm_text
- pattern = re.compile(r"(([a-zA-Z]+)二([a-zA-Z]+))")
- matchers = pattern.findall(text)
- if matchers:
- # print('particular')
- for matcher in matchers:
- text = text.replace(matcher[0], matcher[1] + '2' + matcher[2], 1)
- self.norm_text = text
- return self.norm_text
-
- def normalize(self, remove_punc=True):
- text = self.raw_text
-
-        # normalize dates
- pattern = re.compile(r"\D+((([089]\d|(19|20)\d{2})年)?(\d{1,2}月(\d{1,2}[日号])?)?)")
- matchers = pattern.findall(text)
- if matchers:
- # print('date')
- for matcher in matchers:
- text = text.replace(matcher[0], Date(date=matcher[0]).date2chntext(), 1)
-
-        # normalize money amounts
- pattern = re.compile(r"\D+((\d+(\.\d+)?)[多余几]?" + CURRENCY_UNITS + r"(\d" + CURRENCY_UNITS + r"?)?)")
- matchers = pattern.findall(text)
- if matchers:
- # print('money')
- for matcher in matchers:
- text = text.replace(matcher[0], Money(money=matcher[0]).money2chntext(), 1)
-
-        # normalize landline / mobile phone numbers
-        # mobile numbers
-        # http://www.jihaoba.com/news/show/13680
-        # China Mobile: 139, 138, 137, 136, 135, 134, 159, 158, 157, 150, 151, 152, 188, 187, 182, 183, 184, 178, 198
-        # China Unicom: 130, 131, 132, 156, 155, 186, 185, 176
-        # China Telecom: 133, 153, 189, 180, 181, 177
- pattern = re.compile(r"\D((\+?86 ?)?1([38]\d|5[0-35-9]|7[678]|9[89])\d{8})\D")
- matchers = pattern.findall(text)
- if matchers:
- # print('telephone')
- for matcher in matchers:
- text = text.replace(matcher[0], TelePhone(telephone=matcher[0]).telephone2chntext(), 1)
-        # landlines
- pattern = re.compile(r"\D((0(10|2[1-3]|[3-9]\d{2})-?)?[1-9]\d{6,7})\D")
- matchers = pattern.findall(text)
- if matchers:
- # print('fixed telephone')
- for matcher in matchers:
- text = text.replace(matcher[0], TelePhone(telephone=matcher[0]).telephone2chntext(fixed=True), 1)
-
-        # normalize fractions
- pattern = re.compile(r"(\d+/\d+)")
- matchers = pattern.findall(text)
- if matchers:
- # print('fraction')
- for matcher in matchers:
- text = text.replace(matcher, Fraction(fraction=matcher).fraction2chntext(), 1)
-
-        # normalize percentages
-        text = text.replace('％', '%')  # full-width percent sign -> ASCII '%'
- pattern = re.compile(r"(\d+(\.\d+)?%)")
- matchers = pattern.findall(text)
- if matchers:
- # print('percentage')
- for matcher in matchers:
- text = text.replace(matcher[0], Percentage(percentage=matcher[0]).percentage2chntext(), 1)
-
-        # normalize plain numbers followed by quantifiers
- pattern = re.compile(r"(\d+(\.\d+)?)[多余几]?" + COM_QUANTIFIERS)
- matchers = pattern.findall(text)
- if matchers:
- # print('cardinal+quantifier')
- for matcher in matchers:
- text = text.replace(matcher[0], Cardinal(cardinal=matcher[0]).cardinal2chntext(), 1)
-
-        # normalize long digit strings (IDs, serial numbers)
- pattern = re.compile(r"(\d{4,32})")
- matchers = pattern.findall(text)
- if matchers:
- # print('digit')
- for matcher in matchers:
- text = text.replace(matcher, Digit(digit=matcher).digit2chntext(), 1)
-
-        # normalize plain numbers
- pattern = re.compile(r"(\d+(\.\d+)?)")
- matchers = pattern.findall(text)
- if matchers:
- # print('cardinal')
- for matcher in matchers:
- text = text.replace(matcher[0], Cardinal(cardinal=matcher[0]).cardinal2chntext(), 1)
-
- self.norm_text = text
- self._particular()
-
- text = self.norm_text.lstrip('^').rstrip('$')
- if remove_punc:
- # Punctuations removal
- old_chars = CHINESE_PUNC_LIST + string.punctuation # includes all CN and EN punctuations
- new_chars = ' ' * len(old_chars)
- del_chars = ''
- text = text.translate(str.maketrans(old_chars, new_chars, del_chars))
- return text
-
-
-def nsw_test_case(raw_text):
- print('I:' + raw_text)
- print('O:' + NSWNormalizer(raw_text).normalize())
- print('')
-
-
-def nsw_test():
- nsw_test_case('固话:0595-23865596或23880880。')
- nsw_test_case('固话:0595-23865596或23880880。')
- nsw_test_case('手机:+86 19859213959或15659451527。')
- nsw_test_case('分数:32477/76391。')
- nsw_test_case('百分数:80.03%。')
- nsw_test_case('编号:31520181154418。')
- nsw_test_case('纯数:2983.07克或12345.60米。')
- nsw_test_case('日期:1999年2月20日或09年3月15号。')
- nsw_test_case('金钱:12块5,34.5元,20.1万')
- nsw_test_case('特殊:O2O或B2C。')
- nsw_test_case('3456万吨')
- nsw_test_case('2938个')
- nsw_test_case('938')
- nsw_test_case('今天吃了115个小笼包231个馒头')
- nsw_test_case('有62%的概率')
-
-
-if __name__ == '__main__':
- # nsw_test()
-
- p = argparse.ArgumentParser()
- p.add_argument('ifile', help='input filename, assume utf-8 encoding')
- p.add_argument('ofile', help='output filename')
- p.add_argument('--to_upper', action='store_true', help='convert to upper case')
- p.add_argument('--to_lower', action='store_true', help='convert to lower case')
- p.add_argument('--has_key', action='store_true', help="input text has Kaldi's key as first field.")
- p.add_argument('--log_interval', type=int, default=10000, help='log interval in number of processed lines')
- args = p.parse_args()
-
- ifile = codecs.open(args.ifile, 'r', 'utf8')
- ofile = codecs.open(args.ofile, 'w+', 'utf8')
-
- n = 0
- for l in ifile:
- key = ''
- text = ''
- if args.has_key:
- cols = l.split(maxsplit=1)
- key = cols[0]
- if len(cols) == 2:
- text = cols[1]
- else:
- text = ''
- else:
- text = l
-
- # cases
- if args.to_upper and args.to_lower:
- sys.stderr.write('text norm: to_upper OR to_lower?')
- exit(1)
- if args.to_upper:
- text = text.upper()
- if args.to_lower:
- text = text.lower()
-
- # NSW(Non-Standard-Word) normalization
- text = NSWNormalizer(text).normalize()
-
- #
- if args.has_key:
- ofile.write(key + '\t' + text)
- else:
- ofile.write(text)
-
- n += 1
- if n % args.log_interval == 0:
- sys.stderr.write("text norm: {} lines done.\n".format(n))
-
- sys.stderr.write("text norm: {} lines done in total.\n".format(n))
-
- ifile.close()
- ofile.close()
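The `__main__` block above is a line-by-line file filter. A minimal programmatic equivalent is sketched below; the file names are placeholders and the `text_norm` import path is an assumption.

```python
# Hedged sketch mirroring the __main__ loop above; 'raw.txt' / 'norm.txt' are placeholders.
import codecs
from text_norm import NSWNormalizer   # hypothetical import path

with codecs.open('raw.txt', 'r', 'utf8') as fin, codecs.open('norm.txt', 'w', 'utf8') as fout:
    for line in fin:
        fout.write(NSWNormalizer(line).normalize())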
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/html.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/html.py
deleted file mode 100644
index a7a29fc748cf0a72e9ccdcb66c38b79aa5d1ebba..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/html.py
+++ /dev/null
@@ -1,314 +0,0 @@
-import json
-import jinja2
-
-
-HTML_TEMPLATE = jinja2.Template(
- """
-{%- if fullhtml -%}
-
-
-
-{%- endif %}
-
-{%- if not requirejs %}
-
- {%- if mode == 'vega-lite' %}
-
- {%- endif %}
-
-{%- endif %}
-{%- if fullhtml %}
-{%- if requirejs %}
-
-
-{%- endif %}
-
-
-{%- endif %}
-
-
-{%- if fullhtml %}
-
-
-{%- endif %}
-"""
-)
-
-
-HTML_TEMPLATE_UNIVERSAL = jinja2.Template(
- """
-
-
-
-"""
-)
-
-
-# This is like the HTML_TEMPLATE template, but includes vega javascript inline
-# so that the resulting file is not dependent on external resources. This was
-# ported over from altair_saver.
-#
-# implies requirejs=False and full_html=True
-INLINE_HTML_TEMPLATE = jinja2.Template(
- """\
-
-
-
-
-
-
-
-
-
-
-
-"""
-)
-
-
-TEMPLATES = {
- "standard": HTML_TEMPLATE,
- "universal": HTML_TEMPLATE_UNIVERSAL,
- "inline": INLINE_HTML_TEMPLATE,
-}
-
-
-def spec_to_html(
- spec,
- mode,
- vega_version,
- vegaembed_version,
- vegalite_version=None,
- base_url="https://cdn.jsdelivr.net/npm",
- output_div="vis",
- embed_options=None,
- json_kwds=None,
- fullhtml=True,
- requirejs=False,
- template="standard",
-):
- """Embed a Vega/Vega-Lite spec into an HTML page
-
- Parameters
- ----------
- spec : dict
- a dictionary representing a vega-lite plot spec.
- mode : string {'vega' | 'vega-lite'}
- The rendering mode. This value is overridden by embed_options['mode'],
- if it is present.
- vega_version : string
- For html output, the version of vega.js to use.
- vegalite_version : string
- For html output, the version of vegalite.js to use.
- vegaembed_version : string
- For html output, the version of vegaembed.js to use.
- base_url : string (optional)
- The base url from which to load the javascript libraries.
- output_div : string (optional)
- The id of the div element where the plot will be shown.
- embed_options : dict (optional)
- Dictionary of options to pass to the vega-embed script. Default
- entry is {'mode': mode}.
- json_kwds : dict (optional)
- Dictionary of keywords to pass to json.dumps().
- fullhtml : boolean (optional)
- If True (default) then return a full html page. If False, then return
- an HTML snippet that can be embedded into an HTML page.
- requirejs : boolean (optional)
-        If False (default) then load libraries from base_url using <script>
-        tags.
-        # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-        # Concat extra elements so that the total adds up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
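The pad-and-reshape trick used in `_relative_position_to_absolute_position` is easier to see on a tiny standalone tensor. The sketch below reproduces it with plain `torch.nn.functional.pad` instead of `commons.convert_pad_shape` (which lives elsewhere in this repository), so it is an illustration rather than the class's own code.

```python
# Standalone illustration of the relative -> absolute skew trick (l = 3).
import torch
import torch.nn.functional as F

b, h, l = 1, 1, 3
x = torch.arange(b * h * l * (2 * l - 1), dtype=torch.float32).view(b, h, l, 2 * l - 1)
x = F.pad(x, (0, 1))                              # pad one column: [b, h, l, 2*l]
x_flat = F.pad(x.view(b, h, l * 2 * l), (0, l - 1))
x_final = x_flat.view(b, h, l + 1, 2 * l - 1)[:, :, :l, l - 1:]
print(x_final.shape)                              # torch.Size([1, 1, 3, 3])
```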
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
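For reference, the proximal bias described in the docstring above is just `-log1p(|i - j|)`; a quick numeric check for a length of 3 (the wrapping into shape [1, 1, length, length] is omitted here):

```python
# Quick numeric check of the proximal bias for length = 3.
import torch

r = torch.arange(3, dtype=torch.float32)
diff = r.unsqueeze(0) - r.unsqueeze(1)
print(-torch.log1p(torch.abs(diff)))
# tensor([[ 0.0000, -0.6931, -1.0986],
#         [-0.6931,  0.0000, -0.6931],
#         [-1.0986, -0.6931,  0.0000]])
```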
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
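A hedged shape check for the FFN above: both padding modes preserve the time dimension, so the block can be stacked freely. This sketch assumes the module's own imports (torch, and the repository's `commons.convert_pad_shape`) are available; it is not taken from the deleted file itself.

```python
# Sketch only; depends on torch and the repository's commons.convert_pad_shape.
import torch

ffn = FFN(in_channels=4, out_channels=4, filter_channels=8, kernel_size=3, causal=True)
x = torch.randn(2, 4, 10)            # [batch, channels, time]
x_mask = torch.ones(2, 1, 10)
print(ffn(x, x_mask).shape)          # torch.Size([2, 4, 10])
```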
diff --git a/spaces/ccds/vits_onnx/app/text/symbols.py b/spaces/ccds/vits_onnx/app/text/symbols.py
deleted file mode 100644
index 7678bdc5c60b1216b840d5cc8c5de171b8976807..0000000000000000000000000000000000000000
--- a/spaces/ccds/vits_onnx/app/text/symbols.py
+++ /dev/null
@@ -1,39 +0,0 @@
-'''
-Defines the set of symbols used in text input to the model.
-'''
-
-'''# japanese_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-'''
-# jp_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-
-
-
-# japanese_cleaners2
-# _pad = '_'
-# _punctuation = ',.!?-~…'
-# _letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
-
-
-'''# korean_cleaners
-_pad = '_'
-_punctuation = ',.!?…~'
-_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
-'''
-
-'''# chinese_cleaners
-_pad = '_'
-_punctuation = ',。!?—…'
-_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
-'''
-
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
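A tiny sanity check of the exported symbol table. The import path `text.symbols` is the usual layout in these VITS-style spaces but is an assumption, not something the diff states.

```python
# Hedged sketch; the import path is an assumption.
from text.symbols import symbols, SPACE_ID

print(len(symbols))             # 1 pad + 5 punctuation marks + the letter set
print(repr(symbols[SPACE_ID]))  # "' '"
```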
diff --git a/spaces/ccmusic-database/README/README.md b/spaces/ccmusic-database/README/README.md
deleted file mode 100644
index 856dfd9f43ec335ee6b903dac9d166e58d5f7817..0000000000000000000000000000000000000000
--- a/spaces/ccmusic-database/README/README.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title: README
-emoji: 💻
-colorFrom: gray
-colorTo: indigo
-sdk: static
-pinned: true
-license: mit
----
-
-
-Acquiring high-quality datasets is a persistent challenge for computational musicology (CM) researchers. Existing datasets are scattered, which makes collecting them tedious and sometimes difficult, and some are not publicly available at all. CCMusic, our open data platform for CM research, not only hosts high-quality datasets that have already been used in several research projects and applications, but is also a community in which researchers can contribute datasets and benefit the whole field. We believe CCMusic can serve as a bridge between researchers and dataset builders and make a lasting contribution to building and maintaining high-quality, publicly available datasets for CM research.
-
-
-## Cite
-```
-@dataset{zhaorui_liu_2021_5676893,
- author = {Zhaorui Liu, Monan Zhou, Shenyang Xu, Zhaowen Wang, Wei Li and Zijin Li},
- title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
- month = {nov},
- year = {2021},
- publisher = {Zenodo},
- version = {1.1},
- doi = {10.5281/zenodo.5676893},
- url = {https://doi.org/10.5281/zenodo.5676893}
-}
-```
\ No newline at end of file
diff --git a/spaces/ceckenrode/SelfCareDimensionsPositiveReframing/README.md b/spaces/ceckenrode/SelfCareDimensionsPositiveReframing/README.md
deleted file mode 100644
index b9679be44e4307167b94e56db0ef15bcfbb46a6c..0000000000000000000000000000000000000000
--- a/spaces/ceckenrode/SelfCareDimensionsPositiveReframing/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: SelfCareDimensionsPositiveReframing
-emoji: 🔥
-colorFrom: pink
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/OpenVINO/README.md b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/OpenVINO/README.md
deleted file mode 100644
index 559708f13f2f21bbb16ae331f50a625014a7b28b..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/OpenVINO/README.md
+++ /dev/null
@@ -1,4 +0,0 @@
-## YOLOX for OpenVINO
-
-* [C++ Demo](./cpp)
-* [Python Demo](./python)
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/finetune_trainer.py b/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/finetune_trainer.py
deleted file mode 100644
index 4e186c96d8c2186ec0f023822ec70aac4fce4693..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/finetune_trainer.py
+++ /dev/null
@@ -1,375 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import os
-import sys
-from dataclasses import dataclass, field
-from typing import Optional
-
-from seq2seq_trainer import Seq2SeqTrainer
-from seq2seq_training_args import Seq2SeqTrainingArguments
-
-import transformers
-from transformers import (
- AutoConfig,
- AutoModelForSeq2SeqLM,
- AutoTokenizer,
- HfArgumentParser,
- MBartTokenizer,
- MBartTokenizerFast,
- set_seed,
-)
-from transformers.trainer_utils import EvaluationStrategy, is_main_process
-from transformers.training_args import ParallelMode
-from utils import (
- Seq2SeqDataCollator,
- Seq2SeqDataset,
- assert_all_frozen,
- build_compute_metrics_fn,
- check_output_dir,
- freeze_embeds,
- freeze_params,
- lmap,
- save_json,
- use_task_specific_params,
- write_txt_file,
-)
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class ModelArguments:
- """
- Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
- """
-
- model_name_or_path: str = field(
- metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
- )
- config_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
- )
- tokenizer_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
- )
- cache_dir: Optional[str] = field(
- default=None,
- metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
- )
-    freeze_encoder: bool = field(default=False, metadata={"help": "Whether to freeze the encoder."})
- freeze_embeds: bool = field(default=False, metadata={"help": "Whether to freeze the embeddings."})
-
-
-@dataclass
-class DataTrainingArguments:
- """
- Arguments pertaining to what data we are going to input our model for training and eval.
- """
-
- data_dir: str = field(
- metadata={"help": "The input data dir. Should contain the .tsv files (or other data files) for the task."}
- )
- task: Optional[str] = field(
- default="summarization",
- metadata={"help": "Task name, summarization (or summarization_{dataset} for pegasus) or translation"},
- )
- max_source_length: Optional[int] = field(
- default=1024,
- metadata={
- "help": (
- "The maximum total input sequence length after tokenization. Sequences longer "
- "than this will be truncated, sequences shorter will be padded."
- )
- },
- )
- max_target_length: Optional[int] = field(
- default=128,
- metadata={
- "help": (
- "The maximum total sequence length for target text after tokenization. Sequences longer "
- "than this will be truncated, sequences shorter will be padded."
- )
- },
- )
- val_max_target_length: Optional[int] = field(
- default=142,
- metadata={
- "help": (
- "The maximum total sequence length for validation target text after tokenization. Sequences longer "
- "than this will be truncated, sequences shorter will be padded. "
- "This argument is also used to override the ``max_length`` param of ``model.generate``, which is used "
- "during ``evaluate`` and ``predict``."
- )
- },
- )
- test_max_target_length: Optional[int] = field(
- default=142,
- metadata={
- "help": (
- "The maximum total sequence length for test target text after tokenization. Sequences longer "
- "than this will be truncated, sequences shorter will be padded."
- )
- },
- )
- n_train: Optional[int] = field(default=-1, metadata={"help": "# training examples. -1 means use all."})
- n_val: Optional[int] = field(default=-1, metadata={"help": "# validation examples. -1 means use all."})
- n_test: Optional[int] = field(default=-1, metadata={"help": "# test examples. -1 means use all."})
- src_lang: Optional[str] = field(default=None, metadata={"help": "Source language id for translation."})
- tgt_lang: Optional[str] = field(default=None, metadata={"help": "Target language id for translation."})
- eval_beams: Optional[int] = field(default=None, metadata={"help": "# num_beams to use for evaluation."})
- ignore_pad_token_for_loss: bool = field(
- default=True,
- metadata={"help": "If only pad tokens should be ignored. This assumes that `config.pad_token_id` is defined."},
- )
-
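These dataclasses are consumed by `HfArgumentParser` in `main()` below. A hedged sketch of parsing them directly; the model name, paths, and flags are placeholders, not values taken from this diff.

```python
# Illustrative only; argument values are placeholders.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses(
    args=[
        "--model_name_or_path", "t5-small",
        "--data_dir", "./data",
        "--output_dir", "./output",
        "--do_train",
        "--task", "summarization",
    ]
)
```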
-
-def handle_metrics(split, metrics, output_dir):
- """
- Log and save metrics
-
- Args:
- - split: one of train, val, test
- - metrics: metrics dict
- - output_dir: where to save the metrics
- """
-
- logger.info(f"***** {split} metrics *****")
- for key in sorted(metrics.keys()):
- logger.info(f" {key} = {metrics[key]}")
- save_json(metrics, os.path.join(output_dir, f"{split}_results.json"))
-
-
-def main():
- # See all possible arguments in src/transformers/training_args.py
- # or by passing the --help flag to this script.
- # We now keep distinct sets of args, for a cleaner separation of concerns.
-
- parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
-
- if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
- # If we pass only one argument to the script and it's the path to a json file,
- # let's parse it to get our arguments.
- model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
- else:
- model_args, data_args, training_args = parser.parse_args_into_dataclasses()
-
- check_output_dir(training_args)
-
- # Setup logging
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
- )
- logger.warning(
- "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
- training_args.local_rank,
- training_args.device,
- training_args.n_gpu,
- bool(training_args.parallel_mode == ParallelMode.DISTRIBUTED),
- training_args.fp16,
- )
- transformers.utils.logging.enable_default_handler()
- transformers.utils.logging.enable_explicit_format()
- # Set the verbosity to info of the Transformers logger (on main process only):
- if is_main_process(training_args.local_rank):
- transformers.utils.logging.set_verbosity_info()
- logger.info("Training/evaluation parameters %s", training_args)
-
- # Set seed
- set_seed(training_args.seed)
-
- # Load pretrained model and tokenizer
- #
- # Distributed training:
- # The .from_pretrained methods guarantee that only one local process can concurrently
- # download model & vocab.
-
- config = AutoConfig.from_pretrained(
- model_args.config_name if model_args.config_name else model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- )
-
- extra_model_params = ("encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout")
- for p in extra_model_params:
- if getattr(training_args, p, None):
- assert hasattr(config, p), f"({config.__class__.__name__}) doesn't have a `{p}` attribute"
- setattr(config, p, getattr(training_args, p))
-
- tokenizer = AutoTokenizer.from_pretrained(
- model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- )
- model = AutoModelForSeq2SeqLM.from_pretrained(
- model_args.model_name_or_path,
- from_tf=".ckpt" in model_args.model_name_or_path,
- config=config,
- cache_dir=model_args.cache_dir,
- )
-
- # use task specific params
- use_task_specific_params(model, data_args.task)
-
- # set num_beams for evaluation
- if data_args.eval_beams is None:
- data_args.eval_beams = model.config.num_beams
-
- # set decoder_start_token_id for MBart
- if model.config.decoder_start_token_id is None and isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)):
- assert (
- data_args.tgt_lang is not None and data_args.src_lang is not None
- ), "mBart requires --tgt_lang and --src_lang"
- if isinstance(tokenizer, MBartTokenizer):
- model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.tgt_lang]
- else:
- model.config.decoder_start_token_id = tokenizer.convert_tokens_to_ids(data_args.tgt_lang)
-
- if model_args.freeze_embeds:
- freeze_embeds(model)
- if model_args.freeze_encoder:
- freeze_params(model.get_encoder())
- assert_all_frozen(model.get_encoder())
-
- dataset_class = Seq2SeqDataset
-
- # Get datasets
- train_dataset = (
- dataset_class(
- tokenizer,
- type_path="train",
- data_dir=data_args.data_dir,
- n_obs=data_args.n_train,
- max_target_length=data_args.max_target_length,
- max_source_length=data_args.max_source_length,
- prefix=model.config.prefix or "",
- )
- if training_args.do_train
- else None
- )
- eval_dataset = (
- dataset_class(
- tokenizer,
- type_path="val",
- data_dir=data_args.data_dir,
- n_obs=data_args.n_val,
- max_target_length=data_args.val_max_target_length,
- max_source_length=data_args.max_source_length,
- prefix=model.config.prefix or "",
- )
- if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO
- else None
- )
- test_dataset = (
- dataset_class(
- tokenizer,
- type_path="test",
- data_dir=data_args.data_dir,
- n_obs=data_args.n_test,
- max_target_length=data_args.test_max_target_length,
- max_source_length=data_args.max_source_length,
- prefix=model.config.prefix or "",
- )
- if training_args.do_predict
- else None
- )
-
- # Initialize our Trainer
- compute_metrics_fn = (
- build_compute_metrics_fn(data_args.task, tokenizer) if training_args.predict_with_generate else None
- )
- trainer = Seq2SeqTrainer(
- model=model,
- args=training_args,
- data_args=data_args,
- train_dataset=train_dataset,
- eval_dataset=eval_dataset,
- data_collator=Seq2SeqDataCollator(
- tokenizer, data_args, model.config.decoder_start_token_id, training_args.tpu_num_cores
- ),
- compute_metrics=compute_metrics_fn,
- tokenizer=tokenizer,
- )
-
- all_metrics = {}
- # Training
- if training_args.do_train:
- logger.info("*** Train ***")
-
- train_result = trainer.train(
- model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
- )
- metrics = train_result.metrics
- metrics["train_n_objs"] = data_args.n_train
-
- trainer.save_model() # this also saves the tokenizer
-
- if trainer.is_world_process_zero():
- handle_metrics("train", metrics, training_args.output_dir)
- all_metrics.update(metrics)
-
- # Need to save the state, since Trainer.save_model saves only the tokenizer with the model
- trainer.state.save_to_json(os.path.join(training_args.output_dir, "trainer_state.json"))
-
- # For convenience, we also re-save the tokenizer to the same directory,
- # so that you can share your model easily on huggingface.co/models =)
- tokenizer.save_pretrained(training_args.output_dir)
-
- # Evaluation
- if training_args.do_eval:
- logger.info("*** Evaluate ***")
-
- metrics = trainer.evaluate(metric_key_prefix="val")
- metrics["val_n_objs"] = data_args.n_val
- metrics["val_loss"] = round(metrics["val_loss"], 4)
-
- if trainer.is_world_process_zero():
- handle_metrics("val", metrics, training_args.output_dir)
- all_metrics.update(metrics)
-
- if training_args.do_predict:
- logger.info("*** Predict ***")
-
- test_output = trainer.predict(test_dataset=test_dataset, metric_key_prefix="test")
- metrics = test_output.metrics
- metrics["test_n_objs"] = data_args.n_test
-
- if trainer.is_world_process_zero():
- metrics["test_loss"] = round(metrics["test_loss"], 4)
- handle_metrics("test", metrics, training_args.output_dir)
- all_metrics.update(metrics)
-
- if training_args.predict_with_generate:
- test_preds = tokenizer.batch_decode(
- test_output.predictions, skip_special_tokens=True, clean_up_tokenization_spaces=True
- )
- test_preds = lmap(str.strip, test_preds)
- write_txt_file(test_preds, os.path.join(training_args.output_dir, "test_generations.txt"))
-
- if trainer.is_world_process_zero():
- save_json(all_metrics, os.path.join(training_args.output_dir, "all_results.json"))
-
- return all_metrics
-
-
-def _mp_fn(index):
- # For xla_spawn (TPUs)
- main()
-
-
-if __name__ == "__main__":
- main()
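The fine-tuning script above parses its three argument dataclasses either from the command line or, when a single `.json` path is passed, via `HfArgumentParser.parse_json_file`. A minimal sketch of that JSON path, using an invented `DemoArguments` dataclass as a stand-in for the real `ModelArguments`/`DataTrainingArguments` (the checkpoint name is just an example value):

import json
import tempfile
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser


@dataclass
class DemoArguments:
    # Hypothetical stand-in for the script's argument dataclasses.
    model_name_or_path: str = field(metadata={"help": "Checkpoint to fine-tune."})
    eval_beams: Optional[int] = field(default=None, metadata={"help": "Beams for evaluation."})


parser = HfArgumentParser(DemoArguments)

# Write a config file and parse it, mirroring the `sys.argv[1].endswith(".json")` branch above.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"model_name_or_path": "sshleifer/tiny-mbart", "eval_beams": 2}, f)
    config_path = f.name

(demo_args,) = parser.parse_json_file(json_file=config_path)
print(demo_args)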
diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/language-modeling/run_mlm_no_trainer.py b/spaces/chendl/compositional_test/transformers/examples/pytorch/language-modeling/run_mlm_no_trainer.py
deleted file mode 100644
index 0eb40e840a6df907d2332723144f5fa5e4dfa821..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/pytorch/language-modeling/run_mlm_no_trainer.py
+++ /dev/null
@@ -1,730 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Fine-tuning the library models for masked language modeling (BERT, ALBERT, RoBERTa...)
-on a text file or a dataset without using HuggingFace Trainer.
-
-Here is the full list of checkpoints on the hub that can be fine-tuned by this script:
-https://huggingface.co/models?filter=fill-mask
-"""
-# You can also adapt this script on your own mlm task. Pointers for this are left as comments.
-
-import argparse
-import json
-import logging
-import math
-import os
-import random
-from itertools import chain
-from pathlib import Path
-
-import datasets
-import torch
-from accelerate import Accelerator, DistributedType
-from accelerate.logging import get_logger
-from accelerate.utils import set_seed
-from datasets import load_dataset
-from huggingface_hub import Repository, create_repo
-from torch.utils.data import DataLoader
-from tqdm.auto import tqdm
-
-import transformers
-from transformers import (
- CONFIG_MAPPING,
- MODEL_MAPPING,
- AutoConfig,
- AutoModelForMaskedLM,
- AutoTokenizer,
- DataCollatorForLanguageModeling,
- SchedulerType,
- get_scheduler,
-)
-from transformers.utils import check_min_version, get_full_repo_name, send_example_telemetry
-from transformers.utils.versions import require_version
-
-
-# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.28.0")
-
-logger = get_logger(__name__)
-require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")
-MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys())
-MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description="Finetune a transformers model on a Masked Language Modeling task")
- parser.add_argument(
- "--dataset_name",
- type=str,
- default=None,
- help="The name of the dataset to use (via the datasets library).",
- )
- parser.add_argument(
- "--dataset_config_name",
- type=str,
- default=None,
- help="The configuration name of the dataset to use (via the datasets library).",
- )
- parser.add_argument(
- "--train_file", type=str, default=None, help="A csv or a json file containing the training data."
- )
- parser.add_argument(
- "--validation_file", type=str, default=None, help="A csv or a json file containing the validation data."
- )
- parser.add_argument(
- "--validation_split_percentage",
- default=5,
- help="The percentage of the train set used as validation set in case there's no validation split",
- )
- parser.add_argument(
- "--pad_to_max_length",
- action="store_true",
- help="If passed, pad all samples to `max_length`. Otherwise, dynamic padding is used.",
- )
- parser.add_argument(
- "--model_name_or_path",
- type=str,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- required=False,
- )
- parser.add_argument(
- "--config_name",
- type=str,
- default=None,
- help="Pretrained config name or path if not the same as model_name",
- )
- parser.add_argument(
- "--tokenizer_name",
- type=str,
- default=None,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--use_slow_tokenizer",
- action="store_true",
- help="If passed, will use a slow tokenizer (not backed by the 🤗 Tokenizers library).",
- )
- parser.add_argument(
- "--per_device_train_batch_size",
- type=int,
- default=8,
- help="Batch size (per device) for the training dataloader.",
- )
- parser.add_argument(
- "--per_device_eval_batch_size",
- type=int,
- default=8,
- help="Batch size (per device) for the evaluation dataloader.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=5e-5,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.")
- parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.")
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=None,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--lr_scheduler_type",
- type=SchedulerType,
- default="linear",
- help="The scheduler type to use.",
- choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
- )
- parser.add_argument(
- "--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--model_type",
- type=str,
- default=None,
- help="Model type to use if training from scratch.",
- choices=MODEL_TYPES,
- )
- parser.add_argument(
- "--max_seq_length",
- type=int,
- default=None,
- help=(
- "The maximum total input sequence length after tokenization. Sequences longer than this will be truncated."
- ),
- )
- parser.add_argument(
- "--line_by_line",
- type=bool,
- default=False,
- help="Whether distinct lines of text in the dataset are to be handled as distinct sequences.",
- )
- parser.add_argument(
- "--preprocessing_num_workers",
- type=int,
- default=None,
- help="The number of processes to use for the preprocessing.",
- )
- parser.add_argument(
- "--overwrite_cache", action="store_true", help="Overwrite the cached training and evaluation sets"
- )
- parser.add_argument(
- "--mlm_probability", type=float, default=0.15, help="Ratio of tokens to mask for masked language modeling loss"
- )
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument(
- "--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`."
- )
- parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--checkpointing_steps",
- type=str,
- default=None,
- help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
- )
- parser.add_argument(
- "--resume_from_checkpoint",
- type=str,
- default=None,
- help="If the training should continue from a checkpoint folder.",
- )
- parser.add_argument(
- "--with_tracking",
- action="store_true",
- help="Whether to enable experiment trackers for logging.",
- )
- parser.add_argument(
- "--report_to",
- type=str,
- default="all",
- help=(
- 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,'
- ' `"wandb"`, `"comet_ml"` and `"clearml"`. Use `"all"` (default) to report to all integrations.'
- "Only applicable when `--with_tracking` is passed."
- ),
- )
- parser.add_argument(
- "--low_cpu_mem_usage",
- action="store_true",
- help=(
- "It is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded."
- "If passed, LLM loading time and RAM consumption will be benefited."
- ),
- )
- args = parser.parse_args()
-
- # Sanity checks
- if args.dataset_name is None and args.train_file is None and args.validation_file is None:
- raise ValueError("Need either a dataset name or a training/validation file.")
- else:
- if args.train_file is not None:
- extension = args.train_file.split(".")[-1]
- if extension not in ["csv", "json", "txt"]:
- raise ValueError("`train_file` should be a csv, json or txt file.")
- if args.validation_file is not None:
- extension = args.validation_file.split(".")[-1]
- if extension not in ["csv", "json", "txt"]:
- raise ValueError("`validation_file` should be a csv, json or txt file.")
-
- if args.push_to_hub:
- assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed."
-
- return args
-
-
-def main():
- args = parse_args()
-
- # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
- # information sent is the one passed as arguments along with your Python/PyTorch versions.
- send_example_telemetry("run_mlm_no_trainer", args)
-
- # Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
- # If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers
- # in the environment
- accelerator_log_kwargs = {}
-
- if args.with_tracking:
- accelerator_log_kwargs["log_with"] = args.report_to
- accelerator_log_kwargs["logging_dir"] = args.output_dir
-
- accelerator = Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps, **accelerator_log_kwargs)
-
- # Make one log on every process with the configuration for debugging.
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO,
- )
- logger.info(accelerator.state, main_process_only=False)
- if accelerator.is_local_main_process:
- datasets.utils.logging.set_verbosity_warning()
- transformers.utils.logging.set_verbosity_info()
- else:
- datasets.utils.logging.set_verbosity_error()
- transformers.utils.logging.set_verbosity_error()
-
- # If passed along, set the training seed now.
- if args.seed is not None:
- set_seed(args.seed)
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.push_to_hub:
- if args.hub_model_id is None:
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
- else:
- repo_name = args.hub_model_id
- create_repo(repo_name, exist_ok=True, token=args.hub_token)
- repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token)
-
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
- if "step_*" not in gitignore:
- gitignore.write("step_*\n")
- if "epoch_*" not in gitignore:
- gitignore.write("epoch_*\n")
- elif args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
- accelerator.wait_for_everyone()
-
- # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
- # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
- # (the dataset will be downloaded automatically from the datasets Hub).
- #
- # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
- # 'text' is found. You can easily tweak this behavior (see below).
- #
- # In distributed training, the load_dataset function guarantee that only one local process can concurrently
- # download the dataset.
- if args.dataset_name is not None:
- # Downloading and loading a dataset from the hub.
- raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name)
- if "validation" not in raw_datasets.keys():
- raw_datasets["validation"] = load_dataset(
- args.dataset_name,
- args.dataset_config_name,
- split=f"train[:{args.validation_split_percentage}%]",
- )
- raw_datasets["train"] = load_dataset(
- args.dataset_name,
- args.dataset_config_name,
- split=f"train[{args.validation_split_percentage}%:]",
- )
- else:
- data_files = {}
- if args.train_file is not None:
- data_files["train"] = args.train_file
- if args.validation_file is not None:
- data_files["validation"] = args.validation_file
- extension = args.train_file.split(".")[-1]
- if extension == "txt":
- extension = "text"
- raw_datasets = load_dataset(extension, data_files=data_files)
- # If no validation data is there, validation_split_percentage will be used to divide the dataset.
- if "validation" not in raw_datasets.keys():
- raw_datasets["validation"] = load_dataset(
- extension,
- data_files=data_files,
- split=f"train[:{args.validation_split_percentage}%]",
- )
- raw_datasets["train"] = load_dataset(
- extension,
- data_files=data_files,
- split=f"train[{args.validation_split_percentage}%:]",
- )
-
- # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
- # https://huggingface.co/docs/datasets/loading_datasets.html.
-
- # Load pretrained model and tokenizer
- #
- # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
- # download model & vocab.
- if args.config_name:
- config = AutoConfig.from_pretrained(args.config_name)
- elif args.model_name_or_path:
- config = AutoConfig.from_pretrained(args.model_name_or_path)
- else:
- config = CONFIG_MAPPING[args.model_type]()
- logger.warning("You are instantiating a new config instance from scratch.")
-
- if args.tokenizer_name:
- tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, use_fast=not args.use_slow_tokenizer)
- elif args.model_name_or_path:
- tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, use_fast=not args.use_slow_tokenizer)
- else:
- raise ValueError(
- "You are instantiating a new tokenizer from scratch. This is not supported by this script."
- "You can do it from another script, save it, and load it from here, using --tokenizer_name."
- )
-
- if args.model_name_or_path:
- model = AutoModelForMaskedLM.from_pretrained(
- args.model_name_or_path,
- from_tf=bool(".ckpt" in args.model_name_or_path),
- config=config,
- low_cpu_mem_usage=args.low_cpu_mem_usage,
- )
- else:
- logger.info("Training new model from scratch")
- model = AutoModelForMaskedLM.from_config(config)
-
- # We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
- # on a small vocab and want a smaller embedding size, remove this test.
- embedding_size = model.get_input_embeddings().weight.shape[0]
- if len(tokenizer) > embedding_size:
- model.resize_token_embeddings(len(tokenizer))
-
- # Preprocessing the datasets.
- # First we tokenize all the texts.
- column_names = raw_datasets["train"].column_names
- text_column_name = "text" if "text" in column_names else column_names[0]
-
- if args.max_seq_length is None:
- max_seq_length = tokenizer.model_max_length
- if max_seq_length > 1024:
- logger.warning(
- "The chosen tokenizer supports a `model_max_length` that is longer than the default `block_size` value"
- " of 1024. If you would like to use a longer `block_size` up to `tokenizer.model_max_length` you can"
- " override this default with `--block_size xxx`."
- )
- max_seq_length = 1024
- else:
- if args.max_seq_length > tokenizer.model_max_length:
- logger.warning(
- f"The max_seq_length passed ({args.max_seq_length}) is larger than the maximum length for the"
- f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
- )
- max_seq_length = min(args.max_seq_length, tokenizer.model_max_length)
-
- if args.line_by_line:
- # When using line_by_line, we just tokenize each nonempty line.
- padding = "max_length" if args.pad_to_max_length else False
-
- def tokenize_function(examples):
- # Remove empty lines
- examples[text_column_name] = [
- line for line in examples[text_column_name] if len(line) > 0 and not line.isspace()
- ]
- return tokenizer(
- examples[text_column_name],
- padding=padding,
- truncation=True,
- max_length=max_seq_length,
- # We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it
- # receives the `special_tokens_mask`.
- return_special_tokens_mask=True,
- )
-
- with accelerator.main_process_first():
- tokenized_datasets = raw_datasets.map(
- tokenize_function,
- batched=True,
- num_proc=args.preprocessing_num_workers,
- remove_columns=[text_column_name],
- load_from_cache_file=not args.overwrite_cache,
- desc="Running tokenizer on dataset line_by_line",
- )
- else:
- # Otherwise, we tokenize every text, then concatenate them together before splitting them in smaller parts.
- # We use `return_special_tokens_mask=True` because DataCollatorForLanguageModeling (see below) is more
- # efficient when it receives the `special_tokens_mask`.
- def tokenize_function(examples):
- return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
-
- with accelerator.main_process_first():
- tokenized_datasets = raw_datasets.map(
- tokenize_function,
- batched=True,
- num_proc=args.preprocessing_num_workers,
- remove_columns=column_names,
- load_from_cache_file=not args.overwrite_cache,
- desc="Running tokenizer on every text in dataset",
- )
-
- # Main data processing function that will concatenate all texts from our dataset and generate chunks of
- # max_seq_length.
- def group_texts(examples):
- # Concatenate all texts.
- concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
- total_length = len(concatenated_examples[list(examples.keys())[0]])
- # We drop the small remainder; we could instead pad if the model supported it, and you can
- # customize this part to your needs.
- if total_length >= max_seq_length:
- total_length = (total_length // max_seq_length) * max_seq_length
- # Split by chunks of max_len.
- result = {
- k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
- for k, t in concatenated_examples.items()
- }
- return result
-
- # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a
- # remainder for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value
- # might be slower to preprocess.
- #
- # To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
- # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
-
- with accelerator.main_process_first():
- tokenized_datasets = tokenized_datasets.map(
- group_texts,
- batched=True,
- num_proc=args.preprocessing_num_workers,
- load_from_cache_file=not args.overwrite_cache,
- desc=f"Grouping texts in chunks of {max_seq_length}",
- )
-
- train_dataset = tokenized_datasets["train"]
- eval_dataset = tokenized_datasets["validation"]
-
- # Conditional for small test subsets
- if len(train_dataset) > 3:
- # Log a few random samples from the training set:
- for index in random.sample(range(len(train_dataset)), 3):
- logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
-
- # Data collator
- # This one will take care of randomly masking the tokens.
- data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=args.mlm_probability)
-
- # DataLoaders creation:
- train_dataloader = DataLoader(
- train_dataset, shuffle=True, collate_fn=data_collator, batch_size=args.per_device_train_batch_size
- )
- eval_dataloader = DataLoader(eval_dataset, collate_fn=data_collator, batch_size=args.per_device_eval_batch_size)
-
- # Optimizer
- # Split weights in two groups, one with weight decay and the other not.
- no_decay = ["bias", "LayerNorm.weight"]
- optimizer_grouped_parameters = [
- {
- "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
- "weight_decay": args.weight_decay,
- },
- {
- "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
- "weight_decay": 0.0,
- },
- ]
- optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
-
- # Note: the training dataloader needs to be prepared before we grab its length below (because its length will be
- # shorter in a multi-process setup)
-
- # Scheduler and math around the number of training steps.
- overrode_max_train_steps = False
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- overrode_max_train_steps = True
-
- lr_scheduler = get_scheduler(
- name=args.lr_scheduler_type,
- optimizer=optimizer,
- num_warmup_steps=args.num_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
- )
-
- # Prepare everything with our `accelerator`.
- model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
- model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
- )
-
- # On TPU, the tie weights in our model have been disconnected, so we need to restore the ties.
- if accelerator.distributed_type == DistributedType.TPU:
- model.tie_weights()
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if overrode_max_train_steps:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- # Afterwards we recalculate our number of training epochs
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- # Figure out how many steps we should save the Accelerator states
- checkpointing_steps = args.checkpointing_steps
- if checkpointing_steps is not None and checkpointing_steps.isdigit():
- checkpointing_steps = int(checkpointing_steps)
-
- # We need to initialize the trackers we use, and also store our configuration.
- # The trackers initialize automatically on the main process.
- if args.with_tracking:
- experiment_config = vars(args)
- # TensorBoard cannot log Enums, need the raw value
- experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value
- accelerator.init_trackers("mlm_no_trainer", experiment_config)
-
- # Train!
- total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
- completed_steps = 0
- starting_epoch = 0
-
- # Potentially load in the weights and states from a previous save
- if args.resume_from_checkpoint:
- if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
- accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
- accelerator.load_state(args.resume_from_checkpoint)
- path = os.path.basename(args.resume_from_checkpoint)
- else:
- # Get the most recent checkpoint
- dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
- dirs.sort(key=os.path.getctime)
- path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
- # Extract `epoch_{i}` or `step_{i}`
- training_difference = os.path.splitext(path)[0]
-
- if "epoch" in training_difference:
- starting_epoch = int(training_difference.replace("epoch_", "")) + 1
- resume_step = None
- else:
- # need to multiply `gradient_accumulation_steps` to reflect real steps
- resume_step = int(training_difference.replace("step_", "")) * args.gradient_accumulation_steps
- starting_epoch = resume_step // len(train_dataloader)
- resume_step -= starting_epoch * len(train_dataloader)
-
- # update the progress_bar if load from checkpoint
- progress_bar.update(starting_epoch * num_update_steps_per_epoch)
- completed_steps = starting_epoch * num_update_steps_per_epoch
-
- for epoch in range(starting_epoch, args.num_train_epochs):
- model.train()
- if args.with_tracking:
- total_loss = 0
- for step, batch in enumerate(train_dataloader):
- # We need to skip steps until we reach the resumed step
- if args.resume_from_checkpoint and epoch == starting_epoch:
- if resume_step is not None and step < resume_step:
- if step % args.gradient_accumulation_steps == 0:
- progress_bar.update(1)
- completed_steps += 1
- continue
-
- with accelerator.accumulate(model):
- outputs = model(**batch)
- loss = outputs.loss
- # We keep track of the loss at each epoch
- if args.with_tracking:
- total_loss += loss.detach().float()
- accelerator.backward(loss)
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- completed_steps += 1
-
- if isinstance(checkpointing_steps, int):
- if completed_steps % checkpointing_steps == 0:
- output_dir = f"step_{completed_steps }"
- if args.output_dir is not None:
- output_dir = os.path.join(args.output_dir, output_dir)
- accelerator.save_state(output_dir)
-
- if completed_steps >= args.max_train_steps:
- break
-
- model.eval()
- losses = []
- for step, batch in enumerate(eval_dataloader):
- with torch.no_grad():
- outputs = model(**batch)
-
- loss = outputs.loss
- losses.append(accelerator.gather_for_metrics(loss.repeat(args.per_device_eval_batch_size)))
-
- losses = torch.cat(losses)
- try:
- eval_loss = torch.mean(losses)
- perplexity = math.exp(eval_loss)
- except OverflowError:
- perplexity = float("inf")
-
- logger.info(f"epoch {epoch}: perplexity: {perplexity}")
-
- if args.with_tracking:
- accelerator.log(
- {
- "perplexity": perplexity,
- "eval_loss": eval_loss,
- "train_loss": total_loss.item() / len(train_dataloader),
- "epoch": epoch,
- "step": completed_steps,
- },
- step=completed_steps,
- )
-
- if args.push_to_hub and epoch < args.num_train_epochs - 1:
- accelerator.wait_for_everyone()
- unwrapped_model = accelerator.unwrap_model(model)
- unwrapped_model.save_pretrained(
- args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
- )
- if accelerator.is_main_process:
- tokenizer.save_pretrained(args.output_dir)
- repo.push_to_hub(
- commit_message=f"Training in progress epoch {epoch}", blocking=False, auto_lfs_prune=True
- )
-
- if args.checkpointing_steps == "epoch":
- output_dir = f"epoch_{epoch}"
- if args.output_dir is not None:
- output_dir = os.path.join(args.output_dir, output_dir)
- accelerator.save_state(output_dir)
-
- if args.with_tracking:
- accelerator.end_training()
-
- if args.output_dir is not None:
- accelerator.wait_for_everyone()
- unwrapped_model = accelerator.unwrap_model(model)
- unwrapped_model.save_pretrained(
- args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
- )
- if accelerator.is_main_process:
- tokenizer.save_pretrained(args.output_dir)
- if args.push_to_hub:
- repo.push_to_hub(commit_message="End of training", auto_lfs_prune=True)
-
- with open(os.path.join(args.output_dir, "all_results.json"), "w") as f:
- json.dump({"perplexity": perplexity}, f)
-
-
-if __name__ == "__main__":
- main()
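The `group_texts` helper in the script above concatenates tokenized examples and re-splits them into fixed-length chunks, dropping the remainder. A standalone sketch of that logic on made-up token ids (no tokenizer involved, toy chunk size):

from itertools import chain

max_seq_length = 4  # toy chunk size; the script derives this from --max_seq_length / the tokenizer


def group_texts(examples):
    # Concatenate all lists per key, then cut into max_seq_length-sized chunks.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    if total_length >= max_seq_length:
        total_length = (total_length // max_seq_length) * max_seq_length  # drop the remainder
    return {
        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated.items()
    }


batch = {"input_ids": [[101, 7, 8, 102], [101, 9, 10, 11, 102]]}
print(group_texts(batch))
# 9 concatenated tokens become two chunks of 4; the last token is dropped.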
diff --git a/spaces/chrisvnz/IFC-Extract-Properties/README.md b/spaces/chrisvnz/IFC-Extract-Properties/README.md
deleted file mode 100644
index 30e1856aca40ade7f6e82cb1e8b8dd1b8da1a2a5..0000000000000000000000000000000000000000
--- a/spaces/chrisvnz/IFC-Extract-Properties/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: IFC Extract Properties
-emoji: 😻
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
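The README above is only Space metadata (`sdk: gradio`, `app_file: app.py`); the actual app code is not part of this hunk. For orientation, a generic minimal app.py consistent with that configuration might look like the following — this is purely illustrative and not the real IFC-Extract-Properties application:

import gradio as gr


def greet(name: str) -> str:
    return f"Hello, {name}!"


demo = gr.Interface(fn=greet, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()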
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/click/formatting.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/click/formatting.py
deleted file mode 100644
index ddd2a2f825f206164eb9efb0a5c41528365beb85..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/click/formatting.py
+++ /dev/null
@@ -1,301 +0,0 @@
-import typing as t
-from contextlib import contextmanager
-from gettext import gettext as _
-
-from ._compat import term_len
-from .parser import split_opt
-
-# Can force a width. This is used by the test system
-FORCED_WIDTH: t.Optional[int] = None
-
-
-def measure_table(rows: t.Iterable[t.Tuple[str, str]]) -> t.Tuple[int, ...]:
- widths: t.Dict[int, int] = {}
-
- for row in rows:
- for idx, col in enumerate(row):
- widths[idx] = max(widths.get(idx, 0), term_len(col))
-
- return tuple(y for x, y in sorted(widths.items()))
-
-
-def iter_rows(
- rows: t.Iterable[t.Tuple[str, str]], col_count: int
-) -> t.Iterator[t.Tuple[str, ...]]:
- for row in rows:
- yield row + ("",) * (col_count - len(row))
-
-
-def wrap_text(
- text: str,
- width: int = 78,
- initial_indent: str = "",
- subsequent_indent: str = "",
- preserve_paragraphs: bool = False,
-) -> str:
- """A helper function that intelligently wraps text. By default, it
- assumes that it operates on a single paragraph of text but if the
- `preserve_paragraphs` parameter is provided it will intelligently
- handle paragraphs (defined by two empty lines).
-
- If paragraphs are handled, a paragraph can be prefixed with an empty
- line containing the ``\\b`` character (``\\x08``) to indicate that
- no rewrapping should happen in that block.
-
- :param text: the text that should be rewrapped.
- :param width: the maximum width for the text.
- :param initial_indent: the initial indent that should be placed on the
- first line as a string.
- :param subsequent_indent: the indent string that should be placed on
- each consecutive line.
- :param preserve_paragraphs: if this flag is set then the wrapping will
- intelligently handle paragraphs.
- """
- from ._textwrap import TextWrapper
-
- text = text.expandtabs()
- wrapper = TextWrapper(
- width,
- initial_indent=initial_indent,
- subsequent_indent=subsequent_indent,
- replace_whitespace=False,
- )
- if not preserve_paragraphs:
- return wrapper.fill(text)
-
- p: t.List[t.Tuple[int, bool, str]] = []
- buf: t.List[str] = []
- indent = None
-
- def _flush_par() -> None:
- if not buf:
- return
- if buf[0].strip() == "\b":
- p.append((indent or 0, True, "\n".join(buf[1:])))
- else:
- p.append((indent or 0, False, " ".join(buf)))
- del buf[:]
-
- for line in text.splitlines():
- if not line:
- _flush_par()
- indent = None
- else:
- if indent is None:
- orig_len = term_len(line)
- line = line.lstrip()
- indent = orig_len - term_len(line)
- buf.append(line)
- _flush_par()
-
- rv = []
- for indent, raw, text in p:
- with wrapper.extra_indent(" " * indent):
- if raw:
- rv.append(wrapper.indent_only(text))
- else:
- rv.append(wrapper.fill(text))
-
- return "\n\n".join(rv)
-
-
-class HelpFormatter:
- """This class helps with formatting text-based help pages. It's
- usually just needed for very special internal cases, but it's also
- exposed so that developers can write their own fancy outputs.
-
- At present, it always writes into memory.
-
- :param indent_increment: the additional increment for each level.
- :param width: the width for the text. This defaults to the terminal
- width clamped to a maximum of 78.
- """
-
- def __init__(
- self,
- indent_increment: int = 2,
- width: t.Optional[int] = None,
- max_width: t.Optional[int] = None,
- ) -> None:
- import shutil
-
- self.indent_increment = indent_increment
- if max_width is None:
- max_width = 80
- if width is None:
- width = FORCED_WIDTH
- if width is None:
- width = max(min(shutil.get_terminal_size().columns, max_width) - 2, 50)
- self.width = width
- self.current_indent = 0
- self.buffer: t.List[str] = []
-
- def write(self, string: str) -> None:
- """Writes a unicode string into the internal buffer."""
- self.buffer.append(string)
-
- def indent(self) -> None:
- """Increases the indentation."""
- self.current_indent += self.indent_increment
-
- def dedent(self) -> None:
- """Decreases the indentation."""
- self.current_indent -= self.indent_increment
-
- def write_usage(
- self, prog: str, args: str = "", prefix: t.Optional[str] = None
- ) -> None:
- """Writes a usage line into the buffer.
-
- :param prog: the program name.
- :param args: whitespace separated list of arguments.
- :param prefix: The prefix for the first line. Defaults to
- ``"Usage: "``.
- """
- if prefix is None:
- prefix = f"{_('Usage:')} "
-
- usage_prefix = f"{prefix:>{self.current_indent}}{prog} "
- text_width = self.width - self.current_indent
-
- if text_width >= (term_len(usage_prefix) + 20):
- # The arguments will fit to the right of the prefix.
- indent = " " * term_len(usage_prefix)
- self.write(
- wrap_text(
- args,
- text_width,
- initial_indent=usage_prefix,
- subsequent_indent=indent,
- )
- )
- else:
- # The prefix is too long, put the arguments on the next line.
- self.write(usage_prefix)
- self.write("\n")
- indent = " " * (max(self.current_indent, term_len(prefix)) + 4)
- self.write(
- wrap_text(
- args, text_width, initial_indent=indent, subsequent_indent=indent
- )
- )
-
- self.write("\n")
-
- def write_heading(self, heading: str) -> None:
- """Writes a heading into the buffer."""
- self.write(f"{'':>{self.current_indent}}{heading}:\n")
-
- def write_paragraph(self) -> None:
- """Writes a paragraph into the buffer."""
- if self.buffer:
- self.write("\n")
-
- def write_text(self, text: str) -> None:
- """Writes re-indented text into the buffer. This rewraps and
- preserves paragraphs.
- """
- indent = " " * self.current_indent
- self.write(
- wrap_text(
- text,
- self.width,
- initial_indent=indent,
- subsequent_indent=indent,
- preserve_paragraphs=True,
- )
- )
- self.write("\n")
-
- def write_dl(
- self,
- rows: t.Sequence[t.Tuple[str, str]],
- col_max: int = 30,
- col_spacing: int = 2,
- ) -> None:
- """Writes a definition list into the buffer. This is how options
- and commands are usually formatted.
-
- :param rows: a list of two item tuples for the terms and values.
- :param col_max: the maximum width of the first column.
- :param col_spacing: the number of spaces between the first and
- second column.
- """
- rows = list(rows)
- widths = measure_table(rows)
- if len(widths) != 2:
- raise TypeError("Expected two columns for definition list")
-
- first_col = min(widths[0], col_max) + col_spacing
-
- for first, second in iter_rows(rows, len(widths)):
- self.write(f"{'':>{self.current_indent}}{first}")
- if not second:
- self.write("\n")
- continue
- if term_len(first) <= first_col - col_spacing:
- self.write(" " * (first_col - term_len(first)))
- else:
- self.write("\n")
- self.write(" " * (first_col + self.current_indent))
-
- text_width = max(self.width - first_col - 2, 10)
- wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True)
- lines = wrapped_text.splitlines()
-
- if lines:
- self.write(f"{lines[0]}\n")
-
- for line in lines[1:]:
- self.write(f"{'':>{first_col + self.current_indent}}{line}\n")
- else:
- self.write("\n")
-
- @contextmanager
- def section(self, name: str) -> t.Iterator[None]:
- """Helpful context manager that writes a paragraph, a heading,
- and the indents.
-
- :param name: the section name that is written as heading.
- """
- self.write_paragraph()
- self.write_heading(name)
- self.indent()
- try:
- yield
- finally:
- self.dedent()
-
- @contextmanager
- def indentation(self) -> t.Iterator[None]:
- """A context manager that increases the indentation."""
- self.indent()
- try:
- yield
- finally:
- self.dedent()
-
- def getvalue(self) -> str:
- """Returns the buffer contents."""
- return "".join(self.buffer)
-
-
-def join_options(options: t.Sequence[str]) -> t.Tuple[str, bool]:
- """Given a list of option strings this joins them in the most appropriate
- way and returns them in the form ``(formatted_string,
- any_prefix_is_slash)`` where the second item in the tuple is a flag that
- indicates if any of the option prefixes was a slash.
- """
- rv = []
- any_prefix_is_slash = False
-
- for opt in options:
- prefix = split_opt(opt)[0]
-
- if prefix == "/":
- any_prefix_is_slash = True
-
- rv.append((len(prefix), opt))
-
- rv.sort(key=lambda x: x[0])
- return ", ".join(x[1] for x in rv), any_prefix_is_slash
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/settings.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/settings.py
deleted file mode 100644
index 502c9d4db77481690c57feed3c5e04f8f643c4f7..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/settings.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# encoding: utf-8
-
-"""Settings object, providing access to document-level settings"""
-
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-from docx.shared import ElementProxy
-
-
-class Settings(ElementProxy):
- """Provides access to document-level settings for a document.
-
- Accessed using the :attr:`.Document.settings` property.
- """
-
- __slots__ = ()
-
- @property
- def odd_and_even_pages_header_footer(self):
- """True if this document has distinct odd and even page headers and footers.
-
- Read/write.
- """
- return self._element.evenAndOddHeaders_val
-
- @odd_and_even_pages_header_footer.setter
- def odd_and_even_pages_header_footer(self, value):
- self._element.evenAndOddHeaders_val = value
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/dataframe.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/dataframe.py
deleted file mode 100644
index f0ed13adafa0573f5ab06bfb6773148aed4f0671..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/dataframe.py
+++ /dev/null
@@ -1,304 +0,0 @@
-"""gr.Dataframe() component"""
-
-from __future__ import annotations
-
-from typing import TYPE_CHECKING, Any, Callable, Literal
-
-import numpy as np
-import pandas as pd
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import JSONSerializable
-
-from gradio import utils
-from gradio.components.base import IOComponent, _Keywords
-from gradio.events import (
- Changeable,
- EventListenerMethod,
- Inputable,
- Selectable,
-)
-
-if TYPE_CHECKING:
- from typing import TypedDict
-
- class DataframeData(TypedDict):
- headers: list[str]
- data: list[list[str | int | bool]]
-
-
-set_documentation_group("component")
-
-
-@document()
-class Dataframe(Changeable, Inputable, Selectable, IOComponent, JSONSerializable):
- """
- Accepts or displays 2D input through a spreadsheet-like component for dataframes.
- Preprocessing: passes the uploaded spreadsheet data as a {pandas.DataFrame}, {numpy.array}, {List[List]}, or {List} depending on `type`
- Postprocessing: expects a {pandas.DataFrame}, {numpy.array}, {List[List]}, {List}, a {Dict} with keys `data` (and optionally `headers`), or {str} path to a csv, which is rendered in the spreadsheet.
- Examples-format: a {str} filepath to a csv with data, a pandas dataframe, or a list of lists (excluding headers) where each sublist is a row of data.
- Demos: filter_records, matrix_transpose, tax_calculator
- """
-
- markdown_parser = None
-
- def __init__(
- self,
- value: list[list[Any]] | Callable | None = None,
- *,
- headers: list[str] | None = None,
- row_count: int | tuple[int, str] = (1, "dynamic"),
- col_count: int | tuple[int, str] | None = None,
- datatype: str | list[str] = "str",
- type: Literal["pandas", "numpy", "array"] = "pandas",
- max_rows: int | None = 20,
- max_cols: int | None = None,
- overflow_row_behaviour: Literal["paginate", "show_ends"] = "paginate",
- label: str | None = None,
- every: float | None = None,
- show_label: bool = True,
- scale: int | None = None,
- min_width: int = 160,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- wrap: bool = False,
- **kwargs,
- ):
- """
- Parameters:
- value: Default value as a 2-dimensional list of values. If callable, the function will be called whenever the app loads to set the initial value of the component.
- headers: List of str header names. If None, no headers are shown.
- row_count: Limit number of rows for input and decide whether user can create new rows. The first element of the tuple is an `int`, the row count; the second should be 'fixed' or 'dynamic', the new row behaviour. If an `int` is passed the rows default to 'dynamic'
- col_count: Limit number of columns for input and decide whether user can create new columns. The first element of the tuple is an `int`, the number of columns; the second should be 'fixed' or 'dynamic', the new column behaviour. If an `int` is passed the columns default to 'dynamic'
- datatype: Datatype of values in sheet. Can be provided per column as a list of strings, or for the entire sheet as a single string. Valid datatypes are "str", "number", "bool", "date", and "markdown".
- type: Type of value to be returned by component. "pandas" for pandas dataframe, "numpy" for numpy array, or "array" for a Python array.
- max_rows: Maximum number of rows to display at once. Set to None for infinite.
- max_cols: Maximum number of columns to display at once. Set to None for infinite.
- overflow_row_behaviour: If set to "paginate", will create pages for overflow rows. If set to "show_ends", will show initial and final rows and truncate middle rows.
- label: component name in interface.
- every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
- show_label: if True, will display label.
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- interactive: if True, will allow users to edit the dataframe; if False, can only be used to display data. If not provided, this is inferred based on whether the component is used as an input or output.
- visible: If False, component will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- wrap: if True text in table cells will wrap when appropriate, if False the table will scroll horizontally. Defaults to False.
- """
-
- self.wrap = wrap
- self.row_count = self.__process_counts(row_count)
- self.col_count = self.__process_counts(
- col_count, len(headers) if headers else 3
- )
-
- self.__validate_headers(headers, self.col_count[0])
-
- self.headers = (
- headers if headers is not None else list(range(1, self.col_count[0] + 1))
- )
- self.datatype = (
- datatype if isinstance(datatype, list) else [datatype] * self.col_count[0]
- )
- valid_types = ["pandas", "numpy", "array"]
- if type not in valid_types:
- raise ValueError(
- f"Invalid value for parameter `type`: {type}. Please choose from one of: {valid_types}"
- )
- self.type = type
- values = {
- "str": "",
- "number": 0,
- "bool": False,
- "date": "01/01/1970",
- "markdown": "",
- "html": "",
- }
- column_dtypes = (
- [datatype] * self.col_count[0] if isinstance(datatype, str) else datatype
- )
- self.empty_input = [
- [values[c] for c in column_dtypes] for _ in range(self.row_count[0])
- ]
-
- self.max_rows = max_rows
- self.max_cols = max_cols
- self.overflow_row_behaviour = overflow_row_behaviour
- self.select: EventListenerMethod
- """
- Event listener for when the user selects cell within Dataframe.
- Uses event data gradio.SelectData to carry `value` referring to value of selected cell, and `index` tuple to refer to index row and column.
- See EventData documentation on how to use this event data.
- """
- IOComponent.__init__(
- self,
- label=label,
- every=every,
- show_label=show_label,
- scale=scale,
- min_width=min_width,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- **kwargs,
- )
-
- def get_config(self):
- return {
- "headers": self.headers,
- "datatype": self.datatype,
- "row_count": self.row_count,
- "col_count": self.col_count,
- "value": self.value,
- "max_rows": self.max_rows,
- "max_cols": self.max_cols,
- "overflow_row_behaviour": self.overflow_row_behaviour,
- "wrap": self.wrap,
- **IOComponent.get_config(self),
- }
-
- @staticmethod
- def update(
- value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE,
- max_rows: int | None = None,
- max_cols: str | None = None,
- label: str | None = None,
- show_label: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- interactive: bool | None = None,
- visible: bool | None = None,
- ):
- return {
- "max_rows": max_rows,
- "max_cols": max_cols,
- "label": label,
- "show_label": show_label,
- "scale": scale,
- "min_width": min_width,
- "interactive": interactive,
- "visible": visible,
- "value": value,
- "__type__": "update",
- }
-
- def preprocess(self, x: DataframeData):
- """
- Parameters:
- x: 2D array of str, numeric, or bool data
- Returns:
- Dataframe in requested format
- """
- if self.type == "pandas":
- if x.get("headers") is not None:
- return pd.DataFrame(x["data"], columns=x.get("headers"))
- else:
- return pd.DataFrame(x["data"])
- if self.type == "numpy":
- return np.array(x["data"])
- elif self.type == "array":
- return x["data"]
- else:
- raise ValueError(
- "Unknown type: "
- + str(self.type)
- + ". Please choose from: 'pandas', 'numpy', 'array'."
- )
-
- def postprocess(
- self, y: str | pd.DataFrame | np.ndarray | list[list[str | float]] | dict
- ) -> dict:
- """
- Parameters:
- y: dataframe in given format
- Returns:
- JSON object with key 'headers' for list of header names, 'data' for 2D array of string or numeric data
- """
- if y is None:
- return self.postprocess(self.empty_input)
- if isinstance(y, dict):
- return y
- if isinstance(y, str):
- dataframe = pd.read_csv(y)
- return {
- "headers": list(dataframe.columns),
- "data": Dataframe.__process_markdown(
- dataframe.to_dict(orient="split")["data"], self.datatype
- ),
- }
- if isinstance(y, pd.DataFrame):
- return {
- "headers": list(y.columns), # type: ignore
- "data": Dataframe.__process_markdown(
- y.to_dict(orient="split")["data"], self.datatype # type: ignore
- ),
- }
- if isinstance(y, (np.ndarray, list)):
- if len(y) == 0:
- return self.postprocess([[]])
- if isinstance(y, np.ndarray):
- y = y.tolist()
- assert isinstance(y, list), "output cannot be converted to list"
-
- _headers = self.headers
-
- if len(self.headers) < len(y[0]):
- _headers = [
- *self.headers,
- *list(range(len(self.headers) + 1, len(y[0]) + 1)),
- ]
- elif len(self.headers) > len(y[0]):
- _headers = self.headers[: len(y[0])]
-
- return {
- "headers": _headers,
- "data": Dataframe.__process_markdown(y, self.datatype),
- }
- raise ValueError("Cannot process value as a Dataframe")
-
- @staticmethod
- def __process_counts(count, default=3) -> tuple[int, str]:
- if count is None:
- return (default, "dynamic")
- if type(count) == int or type(count) == float:
- return (int(count), "dynamic")
- else:
- return count
-
- @staticmethod
- def __validate_headers(headers: list[str] | None, col_count: int):
- if headers is not None and len(headers) != col_count:
- raise ValueError(
- f"The length of the headers list must be equal to the col_count int.\n"
- f"The column count is set to {col_count} but `headers` has {len(headers)} items. "
- f"Check the values passed to `col_count` and `headers`."
- )
-
- @classmethod
- def __process_markdown(cls, data: list[list[Any]], datatype: list[str]):
- if "markdown" not in datatype:
- return data
-
- if cls.markdown_parser is None:
- cls.markdown_parser = utils.get_markdown_parser()
-
- for i in range(len(data)):
- for j in range(len(data[i])):
- if datatype[j] == "markdown":
- data[i][j] = cls.markdown_parser.render(data[i][j])
-
- return data
-
- def as_example(self, input_data: pd.DataFrame | np.ndarray | str | None):
- if input_data is None:
- return ""
- elif isinstance(input_data, pd.DataFrame):
- return input_data.head(n=5).to_dict(orient="split")["data"] # type: ignore
- elif isinstance(input_data, np.ndarray):
- return input_data.tolist()
- return input_data
diff --git a/spaces/cihyFjudo/fairness-paper-search/Fuga da Los Angeles sub download la missione impossibile di Snake Plissken per salvare il mondo.md b/spaces/cihyFjudo/fairness-paper-search/Fuga da Los Angeles sub download la missione impossibile di Snake Plissken per salvare il mondo.md
deleted file mode 100644
index 6610f5c62778caf5b31312e20377b4760187f77d..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Fuga da Los Angeles sub download la missione impossibile di Snake Plissken per salvare il mondo.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Harvard Aircraft Maintenance Manuals Weight and Balance Data and Emergency Procedures for Harvard IIA.md b/spaces/cihyFjudo/fairness-paper-search/Harvard Aircraft Maintenance Manuals Weight and Balance Data and Emergency Procedures for Harvard IIA.md
deleted file mode 100644
index b348ccce3af999cb3c374e95dbc62f1157a26b9a..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Harvard Aircraft Maintenance Manuals Weight and Balance Data and Emergency Procedures for Harvard IIA.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Aircraft maintenance technician decision-making and actions have resulted in aircraft system errors causing aircraft incidents and accidents. Aircraft accident investigators and researchers examined the factors that influence aircraft maintenance technician errors and categorized the types of errors in an attempt to prevent similar occurrences. New aircraft technology introduced to improve aviation safety and efficiency incur failures that have no information contained in the aircraft maintenance manuals. According to the Federal Aviation Administration, aircraft maintenance technicians must use only approved aircraft maintenance documents to repair, modify, and service aircraft. This qualitative research used a grounded theory approach to explore the decision-making processes and actions taken by aircraft maintenance technicians when confronted with an aircraft problem not contained in the aircraft maintenance manuals. The target population for the research was Federal Aviation Administration licensed aircraft and power plant mechanics from across the United States. Nonprobability purposeful sampling was used to obtain aircraft maintenance technicians with the experience sought in the study problem. The sample population recruitment yielded 19 participants for eight focus group sessions to obtain opinions, perceptions, and experiences related to the study problem. All data collected was entered into the Atlas ti qualitative analysis software. The emergence of Aircraft Maintenance Technician decision-making themes regarding Aircraft Maintenance Manual content, Aircraft Maintenance Technician experience, and legal implications of not following Aircraft Maintenance Manuals surfaced. Conclusions from this study suggest Aircraft Maintenance Technician decision-making were influenced by experience, gaps in the Aircraft Maintenance Manuals, reliance on others, realizing the impact of decisions concerning aircraft airworthiness, management pressures, and legal concerns related to decision-making. Recommendations included an in-depth systematic review of the Aircraft Maintenance Manuals, development of a Federal Aviation Administration approved standardized Aircraft Maintenance Technician decision-making flow diagram, and implementation of risk based decision-making training. The benefit of this study is to save the airline industry revenue by preventing poor decision-making practices that result in inefficient maintenance actions and aircraft incidents and accidents.
-
Your CV is the most effective tool you have in gaining the advantage in a competitive job market. The good news is, perfecting your CV does not have to be difficult. We give you the information you need to figure out the best ways to highlight your professional skills. Our aircraft maintenance engineer CV example takes the mystery and confusion out of issues like formatting, organization, and phrasing. We also include some helpful writing tips that explain important principles to keep in mind as you craft and refine your CV.
The MMP/Casemate Single series of publications has developed into a very useful tool for modelers who want to add authentic details to their models. Each book usually covers one type of aircraft, and in this case one particular airplane, and provides extensive detail, including accurate view drawings in two scales, photos of the actual aircraft in service, copies of maintenance manual pages showing specific details of components, photos of surviving examples, and colorful four-view general arrangement drawings.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/InventorProfessional2013PortableTorrent.md b/spaces/cihyFjudo/fairness-paper-search/InventorProfessional2013PortableTorrent.md
deleted file mode 100644
index 88eb867c9c518ca56a45fe24e99139417647978f..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/InventorProfessional2013PortableTorrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cis-lmu/glotlid-space/constants.py b/spaces/cis-lmu/glotlid-space/constants.py
deleted file mode 100644
index ee27ee44e2cb8c576ae559d15d5e9986b0ef446f..0000000000000000000000000000000000000000
--- a/spaces/cis-lmu/glotlid-space/constants.py
+++ /dev/null
@@ -1,4 +0,0 @@
-CHOICE_TEXT = "Input Text"
-CHOICE_FILE = "Upload File"
-TITLE = "GlotLID: Language Identification for Around 2000 Languages"
-MODEL_NAME = "cis-lmu/glotlid"
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiofiles/threadpool/text.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiofiles/threadpool/text.py
deleted file mode 100644
index 0e625909b6c960ebed4a0ed99941b28156fbf2d1..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiofiles/threadpool/text.py
+++ /dev/null
@@ -1,64 +0,0 @@
-from ..base import AsyncBase, AsyncIndirectBase
-from .utils import delegate_to_executor, proxy_method_directly, proxy_property_directly
-
-
-@delegate_to_executor(
- "close",
- "flush",
- "isatty",
- "read",
- "readable",
- "readline",
- "readlines",
- "seek",
- "seekable",
- "tell",
- "truncate",
- "write",
- "writable",
- "writelines",
-)
-@proxy_method_directly("detach", "fileno", "readable")
-@proxy_property_directly(
- "buffer",
- "closed",
- "encoding",
- "errors",
- "line_buffering",
- "newlines",
- "name",
- "mode",
-)
-class AsyncTextIOWrapper(AsyncBase):
- """The asyncio executor version of io.TextIOWrapper."""
-
-
-@delegate_to_executor(
- "close",
- "flush",
- "isatty",
- "read",
- "readable",
- "readline",
- "readlines",
- "seek",
- "seekable",
- "tell",
- "truncate",
- "write",
- "writable",
- "writelines",
-)
-@proxy_method_directly("detach", "fileno", "readable")
-@proxy_property_directly(
- "buffer",
- "closed",
- "encoding",
- "errors",
- "line_buffering",
- "newlines",
- "name",
- "mode",
-)
-class AsyncTextIndirectIOWrapper(AsyncIndirectBase):
- """The indirect asyncio executor version of io.TextIOWrapper."""
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/otlLib/builder.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/otlLib/builder.py
deleted file mode 100644
index 2a02c20045904915b50bdd9f1ef3644e3db5258f..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/otlLib/builder.py
+++ /dev/null
@@ -1,2916 +0,0 @@
-from collections import namedtuple, OrderedDict
-import os
-from fontTools.misc.fixedTools import fixedToFloat
-from fontTools import ttLib
-from fontTools.ttLib.tables import otTables as ot
-from fontTools.ttLib.tables.otBase import (
- ValueRecord,
- valueRecordFormatDict,
- OTTableWriter,
- CountReference,
-)
-from fontTools.ttLib.tables import otBase
-from fontTools.feaLib.ast import STATNameStatement
-from fontTools.otlLib.optimize.gpos import (
- _compression_level_from_env,
- compact_lookup,
-)
-from fontTools.otlLib.error import OpenTypeLibError
-from functools import reduce
-import logging
-import copy
-
-
-log = logging.getLogger(__name__)
-
-
-def buildCoverage(glyphs, glyphMap):
- """Builds a coverage table.
-
-    Coverage tables (as defined in the OpenType spec)
- are used in all OpenType Layout lookups apart from the Extension type, and
- define the glyphs involved in a layout subtable. This allows shaping engines
- to compare the glyph stream with the coverage table and quickly determine
- whether a subtable should be involved in a shaping operation.
-
- This function takes a list of glyphs and a glyphname-to-ID map, and
- returns a ``Coverage`` object representing the coverage table.
-
- Example::
-
- glyphMap = font.getReverseGlyphMap()
- glyphs = [ "A", "B", "C" ]
- coverage = buildCoverage(glyphs, glyphMap)
-
- Args:
- glyphs: a sequence of glyph names.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- An ``otTables.Coverage`` object or ``None`` if there are no glyphs
- supplied.
- """
-
- if not glyphs:
- return None
- self = ot.Coverage()
- self.glyphs = sorted(set(glyphs), key=glyphMap.__getitem__)
- return self
-
-
-LOOKUP_FLAG_RIGHT_TO_LEFT = 0x0001
-LOOKUP_FLAG_IGNORE_BASE_GLYPHS = 0x0002
-LOOKUP_FLAG_IGNORE_LIGATURES = 0x0004
-LOOKUP_FLAG_IGNORE_MARKS = 0x0008
-LOOKUP_FLAG_USE_MARK_FILTERING_SET = 0x0010
-
-
-def buildLookup(subtables, flags=0, markFilterSet=None):
- """Turns a collection of rules into a lookup.
-
-    A Lookup (as defined in the OpenType Spec)
- wraps the individual rules in a layout operation (substitution or
- positioning) in a data structure expressing their overall lookup type -
- for example, single substitution, mark-to-base attachment, and so on -
- as well as the lookup flags and any mark filtering sets. You may import
- the following constants to express lookup flags:
-
- - ``LOOKUP_FLAG_RIGHT_TO_LEFT``
- - ``LOOKUP_FLAG_IGNORE_BASE_GLYPHS``
- - ``LOOKUP_FLAG_IGNORE_LIGATURES``
- - ``LOOKUP_FLAG_IGNORE_MARKS``
- - ``LOOKUP_FLAG_USE_MARK_FILTERING_SET``
-
- Args:
- subtables: A list of layout subtable objects (e.g.
- ``MultipleSubst``, ``PairPos``, etc.) or ``None``.
- flags (int): This lookup's flags.
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
-
- Returns:
- An ``otTables.Lookup`` object or ``None`` if there are no subtables
- supplied.
- """
- if subtables is None:
- return None
- subtables = [st for st in subtables if st is not None]
- if not subtables:
- return None
- assert all(
- t.LookupType == subtables[0].LookupType for t in subtables
- ), "all subtables must have the same LookupType; got %s" % repr(
- [t.LookupType for t in subtables]
- )
- self = ot.Lookup()
- self.LookupType = subtables[0].LookupType
- self.LookupFlag = flags
- self.SubTable = subtables
- self.SubTableCount = len(self.SubTable)
- if markFilterSet is not None:
- self.LookupFlag |= LOOKUP_FLAG_USE_MARK_FILTERING_SET
- assert isinstance(markFilterSet, int), markFilterSet
- self.MarkFilteringSet = markFilterSet
- else:
- assert (self.LookupFlag & LOOKUP_FLAG_USE_MARK_FILTERING_SET) == 0, (
- "if markFilterSet is None, flags must not set "
- "LOOKUP_FLAG_USE_MARK_FILTERING_SET; flags=0x%04x" % flags
- )
- return self
-
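# Editor's illustrative sketch (not part of the original module): a typical
# buildLookup() call wraps a subtable built with one of the helpers defined
# further down, here buildSingleSubstSubtable().  "font" is assumed to be a
# fontTools.ttLib.TTFont whose glyph order contains the glyphs used.
def _example_smcp_lookup(font):
    subtable = buildSingleSubstSubtable({"a": "a.sc", "b": "b.sc"})
    return buildLookup([subtable], flags=LOOKUP_FLAG_IGNORE_MARKS)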
-
-class LookupBuilder(object):
- SUBTABLE_BREAK_ = "SUBTABLE_BREAK"
-
- def __init__(self, font, location, table, lookup_type):
- self.font = font
- self.glyphMap = font.getReverseGlyphMap()
- self.location = location
- self.table, self.lookup_type = table, lookup_type
- self.lookupflag = 0
- self.markFilterSet = None
- self.lookup_index = None # assigned when making final tables
- assert table in ("GPOS", "GSUB")
-
- def equals(self, other):
- return (
- isinstance(other, self.__class__)
- and self.table == other.table
- and self.lookupflag == other.lookupflag
- and self.markFilterSet == other.markFilterSet
- )
-
- def inferGlyphClasses(self):
- """Infers glyph glasses for the GDEF table, such as {"cedilla":3}."""
- return {}
-
- def getAlternateGlyphs(self):
- """Helper for building 'aalt' features."""
- return {}
-
- def buildLookup_(self, subtables):
- return buildLookup(subtables, self.lookupflag, self.markFilterSet)
-
- def buildMarkClasses_(self, marks):
- """{"cedilla": ("BOTTOM", ast.Anchor), ...} --> {"BOTTOM":0, "TOP":1}
-
-        Helper for MarkBasePosBuilder, MarkLigPosBuilder, and
- MarkMarkPosBuilder. Seems to return the same numeric IDs
- for mark classes as the AFDKO makeotf tool.
- """
- ids = {}
- for mark in sorted(marks.keys(), key=self.font.getGlyphID):
- markClassName, _markAnchor = marks[mark]
- if markClassName not in ids:
- ids[markClassName] = len(ids)
- return ids
-
- def setBacktrackCoverage_(self, prefix, subtable):
- subtable.BacktrackGlyphCount = len(prefix)
- subtable.BacktrackCoverage = []
- for p in reversed(prefix):
- coverage = buildCoverage(p, self.glyphMap)
- subtable.BacktrackCoverage.append(coverage)
-
- def setLookAheadCoverage_(self, suffix, subtable):
- subtable.LookAheadGlyphCount = len(suffix)
- subtable.LookAheadCoverage = []
- for s in suffix:
- coverage = buildCoverage(s, self.glyphMap)
- subtable.LookAheadCoverage.append(coverage)
-
- def setInputCoverage_(self, glyphs, subtable):
- subtable.InputGlyphCount = len(glyphs)
- subtable.InputCoverage = []
- for g in glyphs:
- coverage = buildCoverage(g, self.glyphMap)
- subtable.InputCoverage.append(coverage)
-
- def setCoverage_(self, glyphs, subtable):
- subtable.GlyphCount = len(glyphs)
- subtable.Coverage = []
- for g in glyphs:
- coverage = buildCoverage(g, self.glyphMap)
- subtable.Coverage.append(coverage)
-
- def build_subst_subtables(self, mapping, klass):
- substitutions = [{}]
- for key in mapping:
- if key[0] == self.SUBTABLE_BREAK_:
- substitutions.append({})
- else:
- substitutions[-1][key] = mapping[key]
- subtables = [klass(s) for s in substitutions]
- return subtables
-
- def add_subtable_break(self, location):
- """Add an explicit subtable break.
-
- Args:
- location: A string or tuple representing the location in the
- original source which produced this break, or ``None`` if
- no location is provided.
- """
- log.warning(
- OpenTypeLibError(
- 'unsupported "subtable" statement for lookup type', location
- )
- )
-
-
-class AlternateSubstBuilder(LookupBuilder):
- """Builds an Alternate Substitution (GSUB3) lookup.
-
- Users are expected to manually add alternate glyph substitutions to
- the ``alternates`` attribute after the object has been initialized,
- e.g.::
-
- builder.alternates["A"] = ["A.alt1", "A.alt2"]
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
- alternates: An ordered dictionary of alternates, mapping glyph names
- to a list of names of alternates.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GSUB", 3)
- self.alternates = OrderedDict()
-
- def equals(self, other):
- return LookupBuilder.equals(self, other) and self.alternates == other.alternates
-
- def build(self):
- """Build the lookup.
-
- Returns:
- An ``otTables.Lookup`` object representing the alternate
- substitution lookup.
- """
- subtables = self.build_subst_subtables(
- self.alternates, buildAlternateSubstSubtable
- )
- return self.buildLookup_(subtables)
-
- def getAlternateGlyphs(self):
- return self.alternates
-
- def add_subtable_break(self, location):
- self.alternates[(self.SUBTABLE_BREAK_, location)] = self.SUBTABLE_BREAK_
-
-
-class ChainContextualRule(
- namedtuple("ChainContextualRule", ["prefix", "glyphs", "suffix", "lookups"])
-):
- @property
- def is_subtable_break(self):
- return self.prefix == LookupBuilder.SUBTABLE_BREAK_
-
-
-class ChainContextualRuleset:
- def __init__(self):
- self.rules = []
-
- def addRule(self, rule):
- self.rules.append(rule)
-
- @property
- def hasPrefixOrSuffix(self):
- # Do we have any prefixes/suffixes? If this is False for all
- # rulesets, we can express the whole lookup as GPOS5/GSUB7.
- for rule in self.rules:
- if len(rule.prefix) > 0 or len(rule.suffix) > 0:
- return True
- return False
-
- @property
- def hasAnyGlyphClasses(self):
- # Do we use glyph classes anywhere in the rules? If this is False
- # we can express this subtable as a Format 1.
- for rule in self.rules:
- for coverage in (rule.prefix, rule.glyphs, rule.suffix):
- if any(len(x) > 1 for x in coverage):
- return True
- return False
-
- def format2ClassDefs(self):
- PREFIX, GLYPHS, SUFFIX = 0, 1, 2
- classDefBuilders = []
- for ix in [PREFIX, GLYPHS, SUFFIX]:
- context = []
- for r in self.rules:
- context.append(r[ix])
- classes = self._classBuilderForContext(context)
- if not classes:
- return None
- classDefBuilders.append(classes)
- return classDefBuilders
-
- def _classBuilderForContext(self, context):
- classdefbuilder = ClassDefBuilder(useClass0=False)
- for position in context:
- for glyphset in position:
- glyphs = set(glyphset)
- if not classdefbuilder.canAdd(glyphs):
- return None
- classdefbuilder.add(glyphs)
- return classdefbuilder
-
-
-class ChainContextualBuilder(LookupBuilder):
- def equals(self, other):
- return LookupBuilder.equals(self, other) and self.rules == other.rules
-
- def rulesets(self):
- # Return a list of ChainContextRuleset objects, taking explicit
- # subtable breaks into account
- ruleset = [ChainContextualRuleset()]
- for rule in self.rules:
- if rule.is_subtable_break:
- ruleset.append(ChainContextualRuleset())
- continue
- ruleset[-1].addRule(rule)
- # Squish any empty subtables
- return [x for x in ruleset if len(x.rules) > 0]
-
- def getCompiledSize_(self, subtables):
- size = 0
- for st in subtables:
- w = OTTableWriter()
- w["LookupType"] = CountReference(
- {"LookupType": st.LookupType}, "LookupType"
- )
- # We need to make a copy here because compiling
- # modifies the subtable (finalizing formats etc.)
- copy.deepcopy(st).compile(w, self.font)
- size += len(w.getAllData())
- return size
-
- def build(self):
- """Build the lookup.
-
- Returns:
- An ``otTables.Lookup`` object representing the chained
- contextual positioning lookup.
- """
- subtables = []
-
- rulesets = self.rulesets()
- chaining = any(ruleset.hasPrefixOrSuffix for ruleset in rulesets)
-
- # https://github.com/fonttools/fonttools/issues/2539
- #
- # Unfortunately, as of 2022-03-07, Apple's CoreText renderer does not
- # correctly process GPOS7 lookups, so for now we force contextual
- # positioning lookups to be chaining (GPOS8).
- #
- # This seems to be fixed as of macOS 13.2, but we keep disabling this
- # for now until we are no longer concerned about old macOS versions.
- # But we allow people to opt-out of this with the config key below.
- write_gpos7 = self.font.cfg.get("fontTools.otlLib.builder:WRITE_GPOS7")
- # horrible separation of concerns breach
- if not write_gpos7 and self.subtable_type == "Pos":
- chaining = True
-
- for ruleset in rulesets:
- # Determine format strategy. We try to build formats 1, 2 and 3
- # subtables and then work out which is best. candidates list holds
- # the subtables in each format for this ruleset (including a dummy
- # "format 0" to make the addressing match the format numbers).
-
- # We can always build a format 3 lookup by accumulating each of
- # the rules into a list, so start with that.
- candidates = [None, None, None, []]
- for rule in ruleset.rules:
- candidates[3].append(self.buildFormat3Subtable(rule, chaining))
-
- # Can we express the whole ruleset as a format 2 subtable?
- classdefs = ruleset.format2ClassDefs()
- if classdefs:
- candidates[2] = [
- self.buildFormat2Subtable(ruleset, classdefs, chaining)
- ]
-
- if not ruleset.hasAnyGlyphClasses:
- candidates[1] = [self.buildFormat1Subtable(ruleset, chaining)]
-
- for i in [1, 2, 3]:
- if candidates[i]:
- try:
- self.getCompiledSize_(candidates[i])
- except Exception as e:
- log.warning(
- "Contextual format %i at %s overflowed (%s)"
- % (i, str(self.location), e)
- )
- candidates[i] = None
-
- candidates = [x for x in candidates if x is not None]
- if not candidates:
- raise OpenTypeLibError("All candidates overflowed", self.location)
-
- winner = min(candidates, key=self.getCompiledSize_)
- subtables.extend(winner)
-
- # If we are not chaining, lookup type will be automatically fixed by
- # buildLookup_
- return self.buildLookup_(subtables)
-
- def buildFormat1Subtable(self, ruleset, chaining=True):
- st = self.newSubtable_(chaining=chaining)
- st.Format = 1
- st.populateDefaults()
- coverage = set()
- rulesetsByFirstGlyph = {}
- ruleAttr = self.ruleAttr_(format=1, chaining=chaining)
-
- for rule in ruleset.rules:
- ruleAsSubtable = self.newRule_(format=1, chaining=chaining)
-
- if chaining:
- ruleAsSubtable.BacktrackGlyphCount = len(rule.prefix)
- ruleAsSubtable.LookAheadGlyphCount = len(rule.suffix)
- ruleAsSubtable.Backtrack = [list(x)[0] for x in reversed(rule.prefix)]
- ruleAsSubtable.LookAhead = [list(x)[0] for x in rule.suffix]
-
- ruleAsSubtable.InputGlyphCount = len(rule.glyphs)
- else:
- ruleAsSubtable.GlyphCount = len(rule.glyphs)
-
- ruleAsSubtable.Input = [list(x)[0] for x in rule.glyphs[1:]]
-
- self.buildLookupList(rule, ruleAsSubtable)
-
- firstGlyph = list(rule.glyphs[0])[0]
- if firstGlyph not in rulesetsByFirstGlyph:
- coverage.add(firstGlyph)
- rulesetsByFirstGlyph[firstGlyph] = []
- rulesetsByFirstGlyph[firstGlyph].append(ruleAsSubtable)
-
- st.Coverage = buildCoverage(coverage, self.glyphMap)
- ruleSets = []
- for g in st.Coverage.glyphs:
- ruleSet = self.newRuleSet_(format=1, chaining=chaining)
- setattr(ruleSet, ruleAttr, rulesetsByFirstGlyph[g])
- setattr(ruleSet, f"{ruleAttr}Count", len(rulesetsByFirstGlyph[g]))
- ruleSets.append(ruleSet)
-
- setattr(st, self.ruleSetAttr_(format=1, chaining=chaining), ruleSets)
- setattr(
- st, self.ruleSetAttr_(format=1, chaining=chaining) + "Count", len(ruleSets)
- )
-
- return st
-
- def buildFormat2Subtable(self, ruleset, classdefs, chaining=True):
- st = self.newSubtable_(chaining=chaining)
- st.Format = 2
- st.populateDefaults()
-
- if chaining:
- (
- st.BacktrackClassDef,
- st.InputClassDef,
- st.LookAheadClassDef,
- ) = [c.build() for c in classdefs]
- else:
- st.ClassDef = classdefs[1].build()
-
- inClasses = classdefs[1].classes()
-
- classSets = []
- for _ in inClasses:
- classSet = self.newRuleSet_(format=2, chaining=chaining)
- classSets.append(classSet)
-
- coverage = set()
- classRuleAttr = self.ruleAttr_(format=2, chaining=chaining)
-
- for rule in ruleset.rules:
- ruleAsSubtable = self.newRule_(format=2, chaining=chaining)
- if chaining:
- ruleAsSubtable.BacktrackGlyphCount = len(rule.prefix)
- ruleAsSubtable.LookAheadGlyphCount = len(rule.suffix)
- # The glyphs in the rule may be list, tuple, odict_keys...
- # Order is not important anyway because they are guaranteed
- # to be members of the same class.
- ruleAsSubtable.Backtrack = [
- st.BacktrackClassDef.classDefs[list(x)[0]]
- for x in reversed(rule.prefix)
- ]
- ruleAsSubtable.LookAhead = [
- st.LookAheadClassDef.classDefs[list(x)[0]] for x in rule.suffix
- ]
-
- ruleAsSubtable.InputGlyphCount = len(rule.glyphs)
- ruleAsSubtable.Input = [
- st.InputClassDef.classDefs[list(x)[0]] for x in rule.glyphs[1:]
- ]
- setForThisRule = classSets[
- st.InputClassDef.classDefs[list(rule.glyphs[0])[0]]
- ]
- else:
- ruleAsSubtable.GlyphCount = len(rule.glyphs)
- ruleAsSubtable.Class = [ # The spec calls this InputSequence
- st.ClassDef.classDefs[list(x)[0]] for x in rule.glyphs[1:]
- ]
- setForThisRule = classSets[
- st.ClassDef.classDefs[list(rule.glyphs[0])[0]]
- ]
-
- self.buildLookupList(rule, ruleAsSubtable)
- coverage |= set(rule.glyphs[0])
-
- getattr(setForThisRule, classRuleAttr).append(ruleAsSubtable)
- setattr(
- setForThisRule,
- f"{classRuleAttr}Count",
- getattr(setForThisRule, f"{classRuleAttr}Count") + 1,
- )
- setattr(st, self.ruleSetAttr_(format=2, chaining=chaining), classSets)
- setattr(
- st, self.ruleSetAttr_(format=2, chaining=chaining) + "Count", len(classSets)
- )
- st.Coverage = buildCoverage(coverage, self.glyphMap)
- return st
-
- def buildFormat3Subtable(self, rule, chaining=True):
- st = self.newSubtable_(chaining=chaining)
- st.Format = 3
- if chaining:
- self.setBacktrackCoverage_(rule.prefix, st)
- self.setLookAheadCoverage_(rule.suffix, st)
- self.setInputCoverage_(rule.glyphs, st)
- else:
- self.setCoverage_(rule.glyphs, st)
- self.buildLookupList(rule, st)
- return st
-
- def buildLookupList(self, rule, st):
- for sequenceIndex, lookupList in enumerate(rule.lookups):
- if lookupList is not None:
- if not isinstance(lookupList, list):
- # Can happen with synthesised lookups
- lookupList = [lookupList]
- for l in lookupList:
- if l.lookup_index is None:
- if isinstance(self, ChainContextPosBuilder):
- other = "substitution"
- else:
- other = "positioning"
- raise OpenTypeLibError(
- "Missing index of the specified "
- f"lookup, might be a {other} lookup",
- self.location,
- )
- rec = self.newLookupRecord_(st)
- rec.SequenceIndex = sequenceIndex
- rec.LookupListIndex = l.lookup_index
-
- def add_subtable_break(self, location):
- self.rules.append(
- ChainContextualRule(
- self.SUBTABLE_BREAK_,
- self.SUBTABLE_BREAK_,
- self.SUBTABLE_BREAK_,
- [self.SUBTABLE_BREAK_],
- )
- )
-
- def newSubtable_(self, chaining=True):
- subtablename = f"Context{self.subtable_type}"
- if chaining:
- subtablename = "Chain" + subtablename
- st = getattr(ot, subtablename)() # ot.ChainContextPos()/ot.ChainSubst()/etc.
- setattr(st, f"{self.subtable_type}Count", 0)
- setattr(st, f"{self.subtable_type}LookupRecord", [])
- return st
-
- # Format 1 and format 2 GSUB5/GSUB6/GPOS7/GPOS8 rulesets and rules form a family:
- #
- # format 1 ruleset format 1 rule format 2 ruleset format 2 rule
- # GSUB5 SubRuleSet SubRule SubClassSet SubClassRule
- # GSUB6 ChainSubRuleSet ChainSubRule ChainSubClassSet ChainSubClassRule
- # GPOS7 PosRuleSet PosRule PosClassSet PosClassRule
- # GPOS8 ChainPosRuleSet ChainPosRule ChainPosClassSet ChainPosClassRule
- #
- # The following functions generate the attribute names and subtables according
- # to this naming convention.
- def ruleSetAttr_(self, format=1, chaining=True):
- if format == 1:
- formatType = "Rule"
- elif format == 2:
- formatType = "Class"
- else:
-            raise AssertionError(format)
- subtablename = f"{self.subtable_type[0:3]}{formatType}Set" # Sub, not Subst.
- if chaining:
- subtablename = "Chain" + subtablename
- return subtablename
-
- def ruleAttr_(self, format=1, chaining=True):
- if format == 1:
- formatType = ""
- elif format == 2:
- formatType = "Class"
- else:
-            raise AssertionError(format)
- subtablename = f"{self.subtable_type[0:3]}{formatType}Rule" # Sub, not Subst.
- if chaining:
- subtablename = "Chain" + subtablename
- return subtablename
-
- def newRuleSet_(self, format=1, chaining=True):
- st = getattr(
- ot, self.ruleSetAttr_(format, chaining)
- )() # ot.ChainPosRuleSet()/ot.SubRuleSet()/etc.
- st.populateDefaults()
- return st
-
- def newRule_(self, format=1, chaining=True):
- st = getattr(
- ot, self.ruleAttr_(format, chaining)
- )() # ot.ChainPosClassRule()/ot.SubClassRule()/etc.
- st.populateDefaults()
- return st
-
- def attachSubtableWithCount_(
- self, st, subtable_name, count_name, existing=None, index=None, chaining=False
- ):
- if chaining:
- subtable_name = "Chain" + subtable_name
- count_name = "Chain" + count_name
-
- if not hasattr(st, count_name):
- setattr(st, count_name, 0)
- setattr(st, subtable_name, [])
-
- if existing:
- new_subtable = existing
- else:
- # Create a new, empty subtable from otTables
- new_subtable = getattr(ot, subtable_name)()
-
- setattr(st, count_name, getattr(st, count_name) + 1)
-
- if index:
- getattr(st, subtable_name).insert(index, new_subtable)
- else:
- getattr(st, subtable_name).append(new_subtable)
-
- return new_subtable
-
- def newLookupRecord_(self, st):
- return self.attachSubtableWithCount_(
- st,
- f"{self.subtable_type}LookupRecord",
- f"{self.subtable_type}Count",
- chaining=False,
- ) # Oddly, it isn't ChainSubstLookupRecord
-
-
-class ChainContextPosBuilder(ChainContextualBuilder):
- """Builds a Chained Contextual Positioning (GPOS8) lookup.
-
- Users are expected to manually add rules to the ``rules`` attribute after
- the object has been initialized, e.g.::
-
- # pos [A B] [C D] x' lookup lu1 y' z' lookup lu2 E;
-
- prefix = [ ["A", "B"], ["C", "D"] ]
- suffix = [ ["E"] ]
- glyphs = [ ["x"], ["y"], ["z"] ]
- lookups = [ [lu1], None, [lu2] ]
-        builder.rules.append(ChainContextualRule(prefix, glyphs, suffix, lookups))
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
- rules: A list of tuples representing the rules in this lookup.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GPOS", 8)
- self.rules = []
- self.subtable_type = "Pos"
-
- def find_chainable_single_pos(self, lookups, glyphs, value):
- """Helper for add_single_pos_chained_()"""
- res = None
- for lookup in lookups[::-1]:
- if lookup == self.SUBTABLE_BREAK_:
- return res
- if isinstance(lookup, SinglePosBuilder) and all(
- lookup.can_add(glyph, value) for glyph in glyphs
- ):
- res = lookup
- return res
-
-
-class ChainContextSubstBuilder(ChainContextualBuilder):
- """Builds a Chained Contextual Substitution (GSUB6) lookup.
-
- Users are expected to manually add rules to the ``rules`` attribute after
- the object has been initialized, e.g.::
-
- # sub [A B] [C D] x' lookup lu1 y' z' lookup lu2 E;
-
- prefix = [ ["A", "B"], ["C", "D"] ]
- suffix = [ ["E"] ]
- glyphs = [ ["x"], ["y"], ["z"] ]
- lookups = [ [lu1], None, [lu2] ]
-        builder.rules.append(ChainContextualRule(prefix, glyphs, suffix, lookups))
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
- rules: A list of tuples representing the rules in this lookup.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GSUB", 6)
- self.rules = [] # (prefix, input, suffix, lookups)
- self.subtable_type = "Subst"
-
- def getAlternateGlyphs(self):
- result = {}
- for rule in self.rules:
- if rule.is_subtable_break:
- continue
- for lookups in rule.lookups:
- if not isinstance(lookups, list):
- lookups = [lookups]
- for lookup in lookups:
- if lookup is not None:
- alts = lookup.getAlternateGlyphs()
- for glyph, replacements in alts.items():
- result.setdefault(glyph, set()).update(replacements)
- return result
-
- def find_chainable_single_subst(self, mapping):
- """Helper for add_single_subst_chained_()"""
- res = None
- for rule in self.rules[::-1]:
- if rule.is_subtable_break:
- return res
- for sub in rule.lookups:
- if isinstance(sub, SingleSubstBuilder) and not any(
- g in mapping and mapping[g] != sub.mapping[g] for g in sub.mapping
- ):
- res = sub
- return res
-
-
-class LigatureSubstBuilder(LookupBuilder):
- """Builds a Ligature Substitution (GSUB4) lookup.
-
- Users are expected to manually add ligatures to the ``ligatures``
- attribute after the object has been initialized, e.g.::
-
-        # sub f f i by f_f_i;
- builder.ligatures[("f","f","i")] = "f_f_i"
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
- ligatures: An ordered dictionary mapping a tuple of glyph names to the
- ligature glyphname.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GSUB", 4)
- self.ligatures = OrderedDict() # {('f','f','i'): 'f_f_i'}
-
- def equals(self, other):
- return LookupBuilder.equals(self, other) and self.ligatures == other.ligatures
-
- def build(self):
- """Build the lookup.
-
- Returns:
- An ``otTables.Lookup`` object representing the ligature
- substitution lookup.
- """
- subtables = self.build_subst_subtables(
- self.ligatures, buildLigatureSubstSubtable
- )
- return self.buildLookup_(subtables)
-
- def add_subtable_break(self, location):
- self.ligatures[(self.SUBTABLE_BREAK_, location)] = self.SUBTABLE_BREAK_
-
-
-class MultipleSubstBuilder(LookupBuilder):
- """Builds a Multiple Substitution (GSUB2) lookup.
-
- Users are expected to manually add substitutions to the ``mapping``
- attribute after the object has been initialized, e.g.::
-
- # sub uni06C0 by uni06D5.fina hamza.above;
- builder.mapping["uni06C0"] = [ "uni06D5.fina", "hamza.above"]
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
- mapping: An ordered dictionary mapping a glyph name to a list of
- substituted glyph names.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GSUB", 2)
- self.mapping = OrderedDict()
-
- def equals(self, other):
- return LookupBuilder.equals(self, other) and self.mapping == other.mapping
-
- def build(self):
- subtables = self.build_subst_subtables(self.mapping, buildMultipleSubstSubtable)
- return self.buildLookup_(subtables)
-
- def add_subtable_break(self, location):
- self.mapping[(self.SUBTABLE_BREAK_, location)] = self.SUBTABLE_BREAK_
-
-
-class CursivePosBuilder(LookupBuilder):
- """Builds a Cursive Positioning (GPOS3) lookup.
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
- attachments: An ordered dictionary mapping a glyph name to a two-element
- tuple of ``otTables.Anchor`` objects.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GPOS", 3)
- self.attachments = {}
-
- def equals(self, other):
- return (
- LookupBuilder.equals(self, other) and self.attachments == other.attachments
- )
-
- def add_attachment(self, location, glyphs, entryAnchor, exitAnchor):
- """Adds attachment information to the cursive positioning lookup.
-
- Args:
- location: A string or tuple representing the location in the
- original source which produced this lookup. (Unused.)
- glyphs: A list of glyph names sharing these entry and exit
- anchor locations.
- entryAnchor: A ``otTables.Anchor`` object representing the
- entry anchor, or ``None`` if no entry anchor is present.
- exitAnchor: A ``otTables.Anchor`` object representing the
- exit anchor, or ``None`` if no exit anchor is present.
- """
- for glyph in glyphs:
- self.attachments[glyph] = (entryAnchor, exitAnchor)
-
- def build(self):
- """Build the lookup.
-
- Returns:
- An ``otTables.Lookup`` object representing the cursive
- positioning lookup.
- """
- st = buildCursivePosSubtable(self.attachments, self.glyphMap)
- return self.buildLookup_([st])
-
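# Editor's illustrative sketch (not part of the original module): typical use
# of CursivePosBuilder.  "font" is assumed to be a fontTools.ttLib.TTFont and
# the glyph names and coordinates are placeholders.
def _example_cursive_lookup(font):
    builder = CursivePosBuilder(font, location=None)
    builder.add_attachment(
        None,                    # location (unused)
        ["BehMed", "BehFin"],    # glyphs sharing these anchors
        buildAnchor(500, 250),   # entry anchor
        buildAnchor(0, 50),      # exit anchor
    )
    return builder.build()       # otTables.Lookup wrapping a GPOS3 subtable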
-
-class MarkBasePosBuilder(LookupBuilder):
- """Builds a Mark-To-Base Positioning (GPOS4) lookup.
-
- Users are expected to manually add marks and bases to the ``marks``
- and ``bases`` attributes after the object has been initialized, e.g.::
-
- builder.marks["acute"] = (0, a1)
- builder.marks["grave"] = (0, a1)
- builder.marks["cedilla"] = (1, a2)
- builder.bases["a"] = {0: a3, 1: a5}
- builder.bases["b"] = {0: a4, 1: a5}
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
-        marks: A dictionary mapping a glyph name to a two-element
-            tuple containing a mark class ID and an ``otTables.Anchor`` object.
-        bases: A dictionary mapping a glyph name to a dictionary mapping
-            mark class IDs to ``otTables.Anchor`` objects.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GPOS", 4)
- self.marks = {} # glyphName -> (markClassName, anchor)
- self.bases = {} # glyphName -> {markClassName: anchor}
-
- def equals(self, other):
- return (
- LookupBuilder.equals(self, other)
- and self.marks == other.marks
- and self.bases == other.bases
- )
-
- def inferGlyphClasses(self):
- result = {glyph: 1 for glyph in self.bases}
- result.update({glyph: 3 for glyph in self.marks})
- return result
-
- def build(self):
- """Build the lookup.
-
- Returns:
- An ``otTables.Lookup`` object representing the mark-to-base
- positioning lookup.
- """
- markClasses = self.buildMarkClasses_(self.marks)
- marks = {}
- for mark, (mc, anchor) in self.marks.items():
- if mc not in markClasses:
- raise ValueError(
- "Mark class %s not found for mark glyph %s" % (mc, mark)
- )
- marks[mark] = (markClasses[mc], anchor)
- bases = {}
- for glyph, anchors in self.bases.items():
- bases[glyph] = {}
- for mc, anchor in anchors.items():
- if mc not in markClasses:
- raise ValueError(
- "Mark class %s not found for base glyph %s" % (mc, glyph)
- )
- bases[glyph][markClasses[mc]] = anchor
- subtables = buildMarkBasePos(marks, bases, self.glyphMap)
- return self.buildLookup_(subtables)
-
-
-class MarkLigPosBuilder(LookupBuilder):
- """Builds a Mark-To-Ligature Positioning (GPOS5) lookup.
-
- Users are expected to manually add marks and bases to the ``marks``
- and ``ligatures`` attributes after the object has been initialized, e.g.::
-
- builder.marks["acute"] = (0, a1)
- builder.marks["grave"] = (0, a1)
- builder.marks["cedilla"] = (1, a2)
- builder.ligatures["f_i"] = [
- { 0: a3, 1: a5 }, # f
- { 0: a4, 1: a5 } # i
- ]
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
-        marks: A dictionary mapping a glyph name to a two-element
-            tuple containing a mark class ID and an ``otTables.Anchor`` object.
-        ligatures: A dictionary mapping a glyph name to an array with one
-            element for each ligature component. Each array element should be
-            a dictionary mapping mark class IDs to ``otTables.Anchor`` objects.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GPOS", 5)
- self.marks = {} # glyphName -> (markClassName, anchor)
- self.ligatures = {} # glyphName -> [{markClassName: anchor}, ...]
-
- def equals(self, other):
- return (
- LookupBuilder.equals(self, other)
- and self.marks == other.marks
- and self.ligatures == other.ligatures
- )
-
- def inferGlyphClasses(self):
- result = {glyph: 2 for glyph in self.ligatures}
- result.update({glyph: 3 for glyph in self.marks})
- return result
-
- def build(self):
- """Build the lookup.
-
- Returns:
- An ``otTables.Lookup`` object representing the mark-to-ligature
- positioning lookup.
- """
- markClasses = self.buildMarkClasses_(self.marks)
- marks = {
- mark: (markClasses[mc], anchor) for mark, (mc, anchor) in self.marks.items()
- }
- ligs = {}
- for lig, components in self.ligatures.items():
- ligs[lig] = []
- for c in components:
- ligs[lig].append({markClasses[mc]: a for mc, a in c.items()})
- subtables = buildMarkLigPos(marks, ligs, self.glyphMap)
- return self.buildLookup_(subtables)
-
-
-class MarkMarkPosBuilder(LookupBuilder):
- """Builds a Mark-To-Mark Positioning (GPOS6) lookup.
-
- Users are expected to manually add marks and bases to the ``marks``
- and ``baseMarks`` attributes after the object has been initialized, e.g.::
-
- builder.marks["acute"] = (0, a1)
- builder.marks["grave"] = (0, a1)
- builder.marks["cedilla"] = (1, a2)
- builder.baseMarks["acute"] = {0: a3}
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
-        marks: A dictionary mapping a glyph name to a two-element
-            tuple containing a mark class ID and an ``otTables.Anchor`` object.
-        baseMarks: A dictionary mapping a glyph name to a dictionary
-            containing one item: a mark class ID and an ``otTables.Anchor`` object.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GPOS", 6)
- self.marks = {} # glyphName -> (markClassName, anchor)
- self.baseMarks = {} # glyphName -> {markClassName: anchor}
-
- def equals(self, other):
- return (
- LookupBuilder.equals(self, other)
- and self.marks == other.marks
- and self.baseMarks == other.baseMarks
- )
-
- def inferGlyphClasses(self):
- result = {glyph: 3 for glyph in self.baseMarks}
- result.update({glyph: 3 for glyph in self.marks})
- return result
-
- def build(self):
- """Build the lookup.
-
- Returns:
- An ``otTables.Lookup`` object representing the mark-to-mark
- positioning lookup.
- """
- markClasses = self.buildMarkClasses_(self.marks)
- markClassList = sorted(markClasses.keys(), key=markClasses.get)
- marks = {
- mark: (markClasses[mc], anchor) for mark, (mc, anchor) in self.marks.items()
- }
-
- st = ot.MarkMarkPos()
- st.Format = 1
- st.ClassCount = len(markClasses)
- st.Mark1Coverage = buildCoverage(marks, self.glyphMap)
- st.Mark2Coverage = buildCoverage(self.baseMarks, self.glyphMap)
- st.Mark1Array = buildMarkArray(marks, self.glyphMap)
- st.Mark2Array = ot.Mark2Array()
- st.Mark2Array.Mark2Count = len(st.Mark2Coverage.glyphs)
- st.Mark2Array.Mark2Record = []
- for base in st.Mark2Coverage.glyphs:
- anchors = [self.baseMarks[base].get(mc) for mc in markClassList]
- st.Mark2Array.Mark2Record.append(buildMark2Record(anchors))
- return self.buildLookup_([st])
-
-
-class ReverseChainSingleSubstBuilder(LookupBuilder):
- """Builds a Reverse Chaining Contextual Single Substitution (GSUB8) lookup.
-
-    Users are expected to manually add substitutions to the ``rules``
- attribute after the object has been initialized, e.g.::
-
- # reversesub [a e n] d' by d.alt;
- prefix = [ ["a", "e", "n"] ]
- suffix = []
- mapping = { "d": "d.alt" }
-        builder.rules.append( (prefix, suffix, mapping) )
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
-        rules: A list of three-element tuples, each consisting of a prefix
-            sequence, a suffix sequence, and a dictionary of single substitutions.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GSUB", 8)
- self.rules = [] # (prefix, suffix, mapping)
-
- def equals(self, other):
- return LookupBuilder.equals(self, other) and self.rules == other.rules
-
- def build(self):
- """Build the lookup.
-
- Returns:
-            An ``otTables.Lookup`` object representing the reverse chaining
-            contextual single substitution lookup.
- """
- subtables = []
- for prefix, suffix, mapping in self.rules:
- st = ot.ReverseChainSingleSubst()
- st.Format = 1
- self.setBacktrackCoverage_(prefix, st)
- self.setLookAheadCoverage_(suffix, st)
- st.Coverage = buildCoverage(mapping.keys(), self.glyphMap)
- st.GlyphCount = len(mapping)
- st.Substitute = [mapping[g] for g in st.Coverage.glyphs]
- subtables.append(st)
- return self.buildLookup_(subtables)
-
- def add_subtable_break(self, location):
- # Nothing to do here, each substitution is in its own subtable.
- pass
-
-
-class SingleSubstBuilder(LookupBuilder):
- """Builds a Single Substitution (GSUB1) lookup.
-
- Users are expected to manually add substitutions to the ``mapping``
- attribute after the object has been initialized, e.g.::
-
- # sub x by y;
- builder.mapping["x"] = "y"
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
- mapping: A dictionary mapping a single glyph name to another glyph name.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GSUB", 1)
- self.mapping = OrderedDict()
-
- def equals(self, other):
- return LookupBuilder.equals(self, other) and self.mapping == other.mapping
-
- def build(self):
- """Build the lookup.
-
- Returns:
-            An ``otTables.Lookup`` object representing the single
-            substitution lookup.
- """
- subtables = self.build_subst_subtables(self.mapping, buildSingleSubstSubtable)
- return self.buildLookup_(subtables)
-
- def getAlternateGlyphs(self):
- return {glyph: set([repl]) for glyph, repl in self.mapping.items()}
-
- def add_subtable_break(self, location):
- self.mapping[(self.SUBTABLE_BREAK_, location)] = self.SUBTABLE_BREAK_
-
-
-class ClassPairPosSubtableBuilder(object):
- """Builds class-based Pair Positioning (GPOS2 format 2) subtables.
-
- Note that this does *not* build a GPOS2 ``otTables.Lookup`` directly,
- but builds a list of ``otTables.PairPos`` subtables. It is used by the
- :class:`PairPosBuilder` below.
-
- Attributes:
- builder (PairPosBuilder): A pair positioning lookup builder.
- """
-
- def __init__(self, builder):
- self.builder_ = builder
- self.classDef1_, self.classDef2_ = None, None
- self.values_ = {} # (glyphclass1, glyphclass2) --> (value1, value2)
- self.forceSubtableBreak_ = False
- self.subtables_ = []
-
- def addPair(self, gc1, value1, gc2, value2):
- """Add a pair positioning rule.
-
- Args:
- gc1: A set of glyph names for the "left" glyph
- value1: An ``otTables.ValueRecord`` object for the left glyph's
- positioning.
- gc2: A set of glyph names for the "right" glyph
- value2: An ``otTables.ValueRecord`` object for the right glyph's
- positioning.
- """
- mergeable = (
- not self.forceSubtableBreak_
- and self.classDef1_ is not None
- and self.classDef1_.canAdd(gc1)
- and self.classDef2_ is not None
- and self.classDef2_.canAdd(gc2)
- )
- if not mergeable:
- self.flush_()
- self.classDef1_ = ClassDefBuilder(useClass0=True)
- self.classDef2_ = ClassDefBuilder(useClass0=False)
- self.values_ = {}
- self.classDef1_.add(gc1)
- self.classDef2_.add(gc2)
- self.values_[(gc1, gc2)] = (value1, value2)
-
- def addSubtableBreak(self):
- """Add an explicit subtable break at this point."""
- self.forceSubtableBreak_ = True
-
- def subtables(self):
- """Return the list of ``otTables.PairPos`` subtables constructed."""
- self.flush_()
- return self.subtables_
-
- def flush_(self):
- if self.classDef1_ is None or self.classDef2_ is None:
- return
- st = buildPairPosClassesSubtable(self.values_, self.builder_.glyphMap)
- if st.Coverage is None:
- return
- self.subtables_.append(st)
- self.forceSubtableBreak_ = False
-
-
-class PairPosBuilder(LookupBuilder):
- """Builds a Pair Positioning (GPOS2) lookup.
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
- pairs: An array of class-based pair positioning tuples. Usually
- manipulated with the :meth:`addClassPair` method below.
- glyphPairs: A dictionary mapping a tuple of glyph names to a tuple
- of ``otTables.ValueRecord`` objects. Usually manipulated with the
- :meth:`addGlyphPair` method below.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GPOS", 2)
- self.pairs = [] # [(gc1, value1, gc2, value2)*]
- self.glyphPairs = {} # (glyph1, glyph2) --> (value1, value2)
- self.locations = {} # (gc1, gc2) --> (filepath, line, column)
-
- def addClassPair(self, location, glyphclass1, value1, glyphclass2, value2):
- """Add a class pair positioning rule to the current lookup.
-
- Args:
- location: A string or tuple representing the location in the
- original source which produced this rule. Unused.
- glyphclass1: A set of glyph names for the "left" glyph in the pair.
- value1: A ``otTables.ValueRecord`` for positioning the left glyph.
- glyphclass2: A set of glyph names for the "right" glyph in the pair.
- value2: A ``otTables.ValueRecord`` for positioning the right glyph.
- """
- self.pairs.append((glyphclass1, value1, glyphclass2, value2))
-
- def addGlyphPair(self, location, glyph1, value1, glyph2, value2):
- """Add a glyph pair positioning rule to the current lookup.
-
- Args:
- location: A string or tuple representing the location in the
- original source which produced this rule.
- glyph1: A glyph name for the "left" glyph in the pair.
- value1: A ``otTables.ValueRecord`` for positioning the left glyph.
- glyph2: A glyph name for the "right" glyph in the pair.
- value2: A ``otTables.ValueRecord`` for positioning the right glyph.
- """
- key = (glyph1, glyph2)
- oldValue = self.glyphPairs.get(key, None)
- if oldValue is not None:
- # the Feature File spec explicitly allows specific pairs generated
- # by an 'enum' rule to be overridden by preceding single pairs
- otherLoc = self.locations[key]
- log.debug(
- "Already defined position for pair %s %s at %s; "
- "choosing the first value",
- glyph1,
- glyph2,
- otherLoc,
- )
- else:
- self.glyphPairs[key] = (value1, value2)
- self.locations[key] = location
-
- def add_subtable_break(self, location):
- self.pairs.append(
- (
- self.SUBTABLE_BREAK_,
- self.SUBTABLE_BREAK_,
- self.SUBTABLE_BREAK_,
- self.SUBTABLE_BREAK_,
- )
- )
-
- def equals(self, other):
- return (
- LookupBuilder.equals(self, other)
- and self.glyphPairs == other.glyphPairs
- and self.pairs == other.pairs
- )
-
- def build(self):
- """Build the lookup.
-
- Returns:
- An ``otTables.Lookup`` object representing the pair positioning
- lookup.
- """
- builders = {}
- builder = ClassPairPosSubtableBuilder(self)
- for glyphclass1, value1, glyphclass2, value2 in self.pairs:
- if glyphclass1 is self.SUBTABLE_BREAK_:
- builder.addSubtableBreak()
- continue
- builder.addPair(glyphclass1, value1, glyphclass2, value2)
- subtables = []
- if self.glyphPairs:
- subtables.extend(buildPairPosGlyphs(self.glyphPairs, self.glyphMap))
- subtables.extend(builder.subtables())
- lookup = self.buildLookup_(subtables)
-
- # Compact the lookup
- # This is a good moment to do it because the compaction should create
- # smaller subtables, which may prevent overflows from happening.
- # Keep reading the value from the ENV until ufo2ft switches to the config system
- level = self.font.cfg.get(
- "fontTools.otlLib.optimize.gpos:COMPRESSION_LEVEL",
- default=_compression_level_from_env(),
- )
- if level != 0:
- log.info("Compacting GPOS...")
- compact_lookup(self.font, level, lookup)
-
- return lookup
-
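# Editor's illustrative sketch (not part of the original module): kerning one
# specific glyph pair with PairPosBuilder.  The glyph names and the -40 unit
# adjustment are placeholders.
def _example_kern_lookup(font):
    kern = ValueRecord()
    kern.XAdvance = -40
    builder = PairPosBuilder(font, location=None)
    builder.addGlyphPair(None, "A", kern, "V", None)  # no adjustment on "V"
    return builder.build()                            # otTables.Lookup (GPOS2)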
-
-class SinglePosBuilder(LookupBuilder):
- """Builds a Single Positioning (GPOS1) lookup.
-
- Attributes:
- font (``fontTools.TTLib.TTFont``): A font object.
- location: A string or tuple representing the location in the original
- source which produced this lookup.
-        mapping: A dictionary mapping a glyph name to an ``otTables.ValueRecord``
-            object. Usually manipulated with the :meth:`add_pos` method below.
- lookupflag (int): The lookup's flag
- markFilterSet: Either ``None`` if no mark filtering set is used, or
- an integer representing the filtering set to be used for this
- lookup. If a mark filtering set is provided,
- `LOOKUP_FLAG_USE_MARK_FILTERING_SET` will be set on the lookup's
- flags.
- """
-
- def __init__(self, font, location):
- LookupBuilder.__init__(self, font, location, "GPOS", 1)
- self.locations = {} # glyph -> (filename, line, column)
- self.mapping = {} # glyph -> ot.ValueRecord
-
- def add_pos(self, location, glyph, otValueRecord):
- """Add a single positioning rule.
-
- Args:
- location: A string or tuple representing the location in the
- original source which produced this lookup.
- glyph: A glyph name.
-            otValueRecord: An ``otTables.ValueRecord`` used to position the
- glyph.
- """
- if not self.can_add(glyph, otValueRecord):
- otherLoc = self.locations[glyph]
- raise OpenTypeLibError(
- 'Already defined different position for glyph "%s" at %s'
- % (glyph, otherLoc),
- location,
- )
- if otValueRecord:
- self.mapping[glyph] = otValueRecord
- self.locations[glyph] = location
-
- def can_add(self, glyph, value):
- assert isinstance(value, ValueRecord)
- curValue = self.mapping.get(glyph)
- return curValue is None or curValue == value
-
- def equals(self, other):
- return LookupBuilder.equals(self, other) and self.mapping == other.mapping
-
- def build(self):
- """Build the lookup.
-
- Returns:
- An ``otTables.Lookup`` object representing the single positioning
- lookup.
- """
- subtables = buildSinglePos(self.mapping, self.glyphMap)
- return self.buildLookup_(subtables)
-
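# Editor's illustrative sketch (not part of the original module): adjusting a
# single glyph's advance width with SinglePosBuilder.  The glyph name and the
# 120-unit value are placeholders.
def _example_single_pos_lookup(font):
    widen = ValueRecord()
    widen.XAdvance = 120
    builder = SinglePosBuilder(font, location=None)
    builder.add_pos(None, "space", widen)
    return builder.build()       # otTables.Lookup wrapping a GPOS1 subtable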
-
-# GSUB
-
-
-def buildSingleSubstSubtable(mapping):
- """Builds a single substitution (GSUB1) subtable.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
-    :py:class:`fontTools.otlLib.builder.SingleSubstBuilder` instead.
-
- Args:
- mapping: A dictionary mapping input glyph names to output glyph names.
-
- Returns:
- An ``otTables.SingleSubst`` object, or ``None`` if the mapping dictionary
- is empty.
- """
- if not mapping:
- return None
- self = ot.SingleSubst()
- self.mapping = dict(mapping)
- return self
-
-
-def buildMultipleSubstSubtable(mapping):
- """Builds a multiple substitution (GSUB2) subtable.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
-    :py:class:`fontTools.otlLib.builder.MultipleSubstBuilder` instead.
-
- Example::
-
- # sub uni06C0 by uni06D5.fina hamza.above
- # sub uni06C2 by uni06C1.fina hamza.above;
-
- subtable = buildMultipleSubstSubtable({
- "uni06C0": [ "uni06D5.fina", "hamza.above"],
- "uni06C2": [ "uni06D1.fina", "hamza.above"]
- })
-
- Args:
- mapping: A dictionary mapping input glyph names to a list of output
- glyph names.
-
- Returns:
- An ``otTables.MultipleSubst`` object or ``None`` if the mapping dictionary
- is empty.
- """
- if not mapping:
- return None
- self = ot.MultipleSubst()
- self.mapping = dict(mapping)
- return self
-
-
-def buildAlternateSubstSubtable(mapping):
- """Builds an alternate substitution (GSUB3) subtable.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
-    :py:class:`fontTools.otlLib.builder.AlternateSubstBuilder` instead.
-
- Args:
- mapping: A dictionary mapping input glyph names to a list of output
- glyph names.
-
- Returns:
- An ``otTables.AlternateSubst`` object or ``None`` if the mapping dictionary
- is empty.
- """
- if not mapping:
- return None
- self = ot.AlternateSubst()
- self.alternates = dict(mapping)
- return self
-
-
-def _getLigatureKey(components):
- # Computes a key for ordering ligatures in a GSUB Type-4 lookup.
-
- # When building the OpenType lookup, we need to make sure that
- # the longest sequence of components is listed first, so we
- # use the negative length as the primary key for sorting.
- # To make buildLigatureSubstSubtable() deterministic, we use the
- # component sequence as the secondary key.
-
- # For example, this will sort (f,f,f) < (f,f,i) < (f,f) < (f,i) < (f,l).
- return (-len(components), components)
-
-
-def buildLigatureSubstSubtable(mapping):
- """Builds a ligature substitution (GSUB4) subtable.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
-    :py:class:`fontTools.otlLib.builder.LigatureSubstBuilder` instead.
-
- Example::
-
- # sub f f i by f_f_i;
- # sub f i by f_i;
-
- subtable = buildLigatureSubstSubtable({
- ("f", "f", "i"): "f_f_i",
- ("f", "i"): "f_i",
- })
-
- Args:
- mapping: A dictionary mapping tuples of glyph names to output
- glyph names.
-
- Returns:
- An ``otTables.LigatureSubst`` object or ``None`` if the mapping dictionary
- is empty.
- """
-
- if not mapping:
- return None
- self = ot.LigatureSubst()
- # The following single line can replace the rest of this function
- # with fontTools >= 3.1:
- # self.ligatures = dict(mapping)
- self.ligatures = {}
- for components in sorted(mapping.keys(), key=_getLigatureKey):
- ligature = ot.Ligature()
- ligature.Component = components[1:]
- ligature.CompCount = len(ligature.Component) + 1
- ligature.LigGlyph = mapping[components]
- firstGlyph = components[0]
- self.ligatures.setdefault(firstGlyph, []).append(ligature)
- return self
-
-
-# GPOS
-
-
-def buildAnchor(x, y, point=None, deviceX=None, deviceY=None):
- """Builds an Anchor table.
-
- This determines the appropriate anchor format based on the passed parameters.
-
- Args:
- x (int): X coordinate.
- y (int): Y coordinate.
- point (int): Index of glyph contour point, if provided.
- deviceX (``otTables.Device``): X coordinate device table, if provided.
- deviceY (``otTables.Device``): Y coordinate device table, if provided.
-
- Returns:
- An ``otTables.Anchor`` object.
- """
- self = ot.Anchor()
- self.XCoordinate, self.YCoordinate = x, y
- self.Format = 1
- if point is not None:
- self.AnchorPoint = point
- self.Format = 2
- if deviceX is not None or deviceY is not None:
- assert (
- self.Format == 1
- ), "Either point, or both of deviceX/deviceY, must be None."
- self.XDeviceTable = deviceX
- self.YDeviceTable = deviceY
- self.Format = 3
- return self
-
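-# A minimal usage sketch for buildAnchor() (the coordinates and ppem deltas
-# below are invented), showing how each anchor format is selected:
-#
-#     a1 = buildAnchor(250, 450)               # Format 1: bare x/y coordinates
-#     a2 = buildAnchor(250, 450, point=7)      # Format 2: adds contour point 7
-#     dev = buildDevice({11: 1, 12: 1})        # +1 pixel at 11 and 12 ppem
-#     a3 = buildAnchor(250, 450, deviceX=dev)  # Format 3: device-adjusted anchor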
-
-def buildBaseArray(bases, numMarkClasses, glyphMap):
- """Builds a base array record.
-
- As part of building mark-to-base positioning rules, you will need to define
- a ``BaseArray`` record, which "defines for each base glyph an array of
- anchors, one for each mark class." This function builds the base array
- subtable.
-
- Example::
-
- bases = {"a": {0: a3, 1: a5}, "b": {0: a4, 1: a5}}
- basearray = buildBaseArray(bases, 2, font.getReverseGlyphMap())
-
- Args:
- bases (dict): A dictionary mapping glyphs to anchors; the keys being
- glyph names, and the values being dictionaries mapping mark class ID
- to the appropriate ``otTables.Anchor`` object used for attaching marks
- of that class.
- numMarkClasses (int): The total number of mark classes for which anchors
- are defined.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- An ``otTables.BaseArray`` object.
- """
- self = ot.BaseArray()
- self.BaseRecord = []
- for base in sorted(bases, key=glyphMap.__getitem__):
- b = bases[base]
- anchors = [b.get(markClass) for markClass in range(numMarkClasses)]
- self.BaseRecord.append(buildBaseRecord(anchors))
- self.BaseCount = len(self.BaseRecord)
- return self
-
-
-def buildBaseRecord(anchors):
- # [otTables.Anchor, otTables.Anchor, ...] --> otTables.BaseRecord
- self = ot.BaseRecord()
- self.BaseAnchor = anchors
- return self
-
-
-def buildComponentRecord(anchors):
- """Builds a component record.
-
- As part of building mark-to-ligature positioning rules, you will need to
- define ``ComponentRecord`` objects, which contain "an array of offsets...
- to the Anchor tables that define all the attachment points used to attach
- marks to the component." This function builds the component record.
-
- Args:
- anchors: A list of ``otTables.Anchor`` objects or ``None``.
-
- Returns:
- A ``otTables.ComponentRecord`` object or ``None`` if no anchors are
- supplied.
- """
- if not anchors:
- return None
- self = ot.ComponentRecord()
- self.LigatureAnchor = anchors
- return self
-
-
-def buildCursivePosSubtable(attach, glyphMap):
- """Builds a cursive positioning (GPOS3) subtable.
-
- Cursive positioning lookups are made up of a coverage table of glyphs,
- and a set of ``EntryExitRecord`` records containing the anchors for
- each glyph. This function builds the cursive positioning subtable.
-
- Example::
-
- subtable = buildCursivePosSubtable({
- "AlifIni": (None, buildAnchor(0, 50)),
- "BehMed": (buildAnchor(500,250), buildAnchor(0,50)),
- # ...
- }, font.getReverseGlyphMap())
-
- Args:
- attach (dict): A mapping between glyph names and a tuple of two
- ``otTables.Anchor`` objects representing entry and exit anchors.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- An ``otTables.CursivePos`` object, or ``None`` if the attachment
- dictionary was empty.
- """
- if not attach:
- return None
- self = ot.CursivePos()
- self.Format = 1
- self.Coverage = buildCoverage(attach.keys(), glyphMap)
- self.EntryExitRecord = []
- for glyph in self.Coverage.glyphs:
- entryAnchor, exitAnchor = attach[glyph]
- rec = ot.EntryExitRecord()
- rec.EntryAnchor = entryAnchor
- rec.ExitAnchor = exitAnchor
- self.EntryExitRecord.append(rec)
- self.EntryExitCount = len(self.EntryExitRecord)
- return self
-
-
-def buildDevice(deltas):
- """Builds a Device record as part of a ValueRecord or Anchor.
-
- Device tables specify size-specific adjustments to value records
- and anchors to reflect changes based on the resolution of the output.
- For example, one could specify that an anchor's Y position should be
- increased by 1 pixel when displayed at 8 pixels per em. This routine
- builds device records.
-
- Args:
- deltas: A dictionary mapping pixels-per-em sizes to the delta
- adjustment in pixels when the font is displayed at that size.
-
- Returns:
- An ``otTables.Device`` object if any deltas were supplied, or
- ``None`` otherwise.
- """
- if not deltas:
- return None
- self = ot.Device()
- keys = deltas.keys()
- self.StartSize = startSize = min(keys)
- self.EndSize = endSize = max(keys)
- assert 0 <= startSize <= endSize
- self.DeltaValue = deltaValues = [
- deltas.get(size, 0) for size in range(startSize, endSize + 1)
- ]
- maxDelta = max(deltaValues)
- minDelta = min(deltaValues)
- assert minDelta > -129 and maxDelta < 128
- if minDelta > -3 and maxDelta < 2:
- self.DeltaFormat = 1
- elif minDelta > -9 and maxDelta < 8:
- self.DeltaFormat = 2
- else:
- self.DeltaFormat = 3
- return self
-
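-# A small sketch of buildDevice() with invented deltas; because every delta
-# fits in the range -2..1, the function selects DeltaFormat 1:
-#
-#     dev = buildDevice({9: 1, 10: 1, 12: -1})
-#     # dev.StartSize == 9, dev.EndSize == 12, dev.DeltaValue == [1, 1, 0, -1]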
-
-def buildLigatureArray(ligs, numMarkClasses, glyphMap):
- """Builds a LigatureArray subtable.
-
- As part of building a mark-to-ligature lookup, you will need to define
- the set of anchors (for each mark class) on each component of the ligature
- where marks can be attached. For example, for an Arabic divine name ligature
- (lam lam heh), you may want to specify mark attachment positioning for
- superior marks (fatha, etc.) and inferior marks (kasra, etc.) on each glyph
- of the ligature. This routine builds the ligature array record.
-
- Example::
-
- buildLigatureArray({
- "lam-lam-heh": [
- { 0: superiorAnchor1, 1: inferiorAnchor1 }, # attach points for lam1
- { 0: superiorAnchor2, 1: inferiorAnchor2 }, # attach points for lam2
- { 0: superiorAnchor3, 1: inferiorAnchor3 }, # attach points for heh
- ]
- }, 2, font.getReverseGlyphMap())
-
- Args:
- ligs (dict): A mapping of ligature names to an array of dictionaries:
- for each component glyph in the ligature, a dictionary mapping
- mark class IDs to anchors.
- numMarkClasses (int): The number of mark classes.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- An ``otTables.LigatureArray`` object if deltas were supplied.
- """
- self = ot.LigatureArray()
- self.LigatureAttach = []
- for lig in sorted(ligs, key=glyphMap.__getitem__):
- anchors = []
- for component in ligs[lig]:
- anchors.append([component.get(mc) for mc in range(numMarkClasses)])
- self.LigatureAttach.append(buildLigatureAttach(anchors))
- self.LigatureCount = len(self.LigatureAttach)
- return self
-
-
-def buildLigatureAttach(components):
- # [[Anchor, Anchor], [Anchor, Anchor, Anchor]] --> LigatureAttach
- self = ot.LigatureAttach()
- self.ComponentRecord = [buildComponentRecord(c) for c in components]
- self.ComponentCount = len(self.ComponentRecord)
- return self
-
-
-def buildMarkArray(marks, glyphMap):
- """Builds a mark array subtable.
-
- As part of building mark-to-* positioning rules, you will need to define
- a MarkArray subtable, which "defines the class and the anchor point
- for a mark glyph." This function builds the mark array subtable.
-
- Example::
-
- marks = {
- "acute": (0, buildAnchor(300,712)),
- # ...
- }
- markarray = buildMarkArray(marks, font.getReverseGlyphMap())
-
- Args:
- marks (dict): A dictionary mapping glyphs to anchors; the keys being
- glyph names, and the values being a tuple of mark class number and
- an ``otTables.Anchor`` object representing the mark's attachment
- point.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- An ``otTables.MarkArray`` object.
- """
- self = ot.MarkArray()
- self.MarkRecord = []
- for mark in sorted(marks.keys(), key=glyphMap.__getitem__):
- markClass, anchor = marks[mark]
- markrec = buildMarkRecord(markClass, anchor)
- self.MarkRecord.append(markrec)
- self.MarkCount = len(self.MarkRecord)
- return self
-
-
-def buildMarkBasePos(marks, bases, glyphMap):
- """Build a list of MarkBasePos (GPOS4) subtables.
-
- This routine turns a set of marks and bases into a list of mark-to-base
- positioning subtables. Currently the list will contain a single subtable
- containing all marks and bases, although at a later date it may return the
- optimal list of subtables subsetting the marks and bases into groups which
- save space. See :func:`buildMarkBasePosSubtable` below.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
- :py:class:`fontTools.otlLib.lookupBuilders.MarkBasePosBuilder` instead.
-
- Example::
-
- # a1, a2, a3, a4, a5 = buildAnchor(500, 100), ...
-
- marks = {"acute": (0, a1), "grave": (0, a1), "cedilla": (1, a2)}
- bases = {"a": {0: a3, 1: a5}, "b": {0: a4, 1: a5}}
- markbaseposes = buildMarkBasePos(marks, bases, font.getReverseGlyphMap())
-
- Args:
- marks (dict): A dictionary mapping glyphs to anchors; the keys being
- glyph names, and the values being a tuple of mark class number and
- an ``otTables.Anchor`` object representing the mark's attachment
- point. (See :func:`buildMarkArray`.)
- bases (dict): A dictionary mapping glyphs to anchors; the keys being
- glyph names, and the values being dictionaries mapping mark class ID
- to the appropriate ``otTables.Anchor`` object used for attaching marks
- of that class. (See :func:`buildBaseArray`.)
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- A list of ``otTables.MarkBasePos`` objects.
- """
- # TODO: Consider emitting multiple subtables to save space.
- # Partition the marks and bases into disjoint subsets, so that
- # MarkBasePos rules would only access glyphs from a single
- # subset. This would likely lead to smaller mark/base
- # matrices, so we might be able to omit many of the empty
- # anchor tables that we currently produce. Of course, this
- # would only work if the MarkBasePos rules of real-world fonts
- # allow partitioning into multiple subsets. We should find out
- # whether this is the case; if so, implement the optimization.
- # On the other hand, a very large number of subtables could
- # slow down layout engines; so this would need profiling.
- return [buildMarkBasePosSubtable(marks, bases, glyphMap)]
-
-
-def buildMarkBasePosSubtable(marks, bases, glyphMap):
- """Build a single MarkBasePos (GPOS4) subtable.
-
- This builds a mark-to-base lookup subtable containing all of the referenced
- marks and bases. See :func:`buildMarkBasePos`.
-
- Args:
- marks (dict): A dictionary mapping glyphs to anchors; the keys being
- glyph names, and the values being a tuple of mark class number and
- an ``otTables.Anchor`` object representing the mark's attachment
- point. (See :func:`buildMarkArray`.)
- bases (dict): A dictionary mapping glyphs to anchors; the keys being
- glyph names, and the values being dictionaries mapping mark class ID
- to the appropriate ``otTables.Anchor`` object used for attaching marks
- of that class. (See :func:`buildBaseArray`.)
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- A ``otTables.MarkBasePos`` object.
- """
- self = ot.MarkBasePos()
- self.Format = 1
- self.MarkCoverage = buildCoverage(marks, glyphMap)
- self.MarkArray = buildMarkArray(marks, glyphMap)
- self.ClassCount = max([mc for mc, _ in marks.values()]) + 1
- self.BaseCoverage = buildCoverage(bases, glyphMap)
- self.BaseArray = buildBaseArray(bases, self.ClassCount, glyphMap)
- return self
-
-
-def buildMarkLigPos(marks, ligs, glyphMap):
- """Build a list of MarkLigPos (GPOS5) subtables.
-
- This routine turns a set of marks and ligatures into a list of mark-to-ligature
- positioning subtables. Currently the list will contain a single subtable
- containing all marks and ligatures, although at a later date it may return
- the optimal list of subtables subsetting the marks and ligatures into groups
- which save space. See :func:`buildMarkLigPosSubtable` below.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
- :py:class:`fontTools.otlLib.lookupBuilders.MarkLigPosBuilder` instead.
-
- Example::
-
- # a1, a2, a3, a4, a5 = buildAnchor(500, 100), ...
- marks = {
- "acute": (0, a1),
- "grave": (0, a1),
- "cedilla": (1, a2)
- }
- ligs = {
- "f_i": [
- { 0: a3, 1: a5 }, # f
- { 0: a4, 1: a5 } # i
- ],
- # "c_t": [{...}, {...}]
- }
- markligposes = buildMarkLigPos(marks, ligs,
- font.getReverseGlyphMap())
-
- Args:
- marks (dict): A dictionary mapping glyphs to anchors; the keys being
- glyph names, and the values being a tuple of mark class number and
- an ``otTables.Anchor`` object representing the mark's attachment
- point. (See :func:`buildMarkArray`.)
- ligs (dict): A mapping of ligature names to an array of dictionaries:
- for each component glyph in the ligature, a dictionary mapping
- mark class IDs to anchors. (See :func:`buildLigatureArray`.)
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- A list of ``otTables.MarkLigPos`` objects.
-
- """
- # TODO: Consider splitting into multiple subtables to save space,
- # as with MarkBasePos, this would be a trade-off that would need
- # profiling. And, depending on how typical fonts are structured,
- # it might not be worth doing at all.
- return [buildMarkLigPosSubtable(marks, ligs, glyphMap)]
-
-
-def buildMarkLigPosSubtable(marks, ligs, glyphMap):
- """Build a single MarkLigPos (GPOS5) subtable.
-
- This builds a mark-to-ligature lookup subtable containing all of the referenced
- marks and ligatures. See :func:`buildMarkLigPos`.
-
- Args:
- marks (dict): A dictionary mapping glyphs to anchors; the keys being
- glyph names, and the values being a tuple of mark class number and
- an ``otTables.Anchor`` object representing the mark's attachment
- point. (See :func:`buildMarkArray`.)
- ligs (dict): A mapping of ligature names to an array of dictionaries:
- for each component glyph in the ligature, a dictionary mapping
- mark class IDs to anchors. (See :func:`buildLigatureArray`.)
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- A ``otTables.MarkLigPos`` object.
- """
- self = ot.MarkLigPos()
- self.Format = 1
- self.MarkCoverage = buildCoverage(marks, glyphMap)
- self.MarkArray = buildMarkArray(marks, glyphMap)
- self.ClassCount = max([mc for mc, _ in marks.values()]) + 1
- self.LigatureCoverage = buildCoverage(ligs, glyphMap)
- self.LigatureArray = buildLigatureArray(ligs, self.ClassCount, glyphMap)
- return self
-
-
-def buildMarkRecord(classID, anchor):
- assert isinstance(classID, int)
- assert isinstance(anchor, ot.Anchor)
- self = ot.MarkRecord()
- self.Class = classID
- self.MarkAnchor = anchor
- return self
-
-
-def buildMark2Record(anchors):
- # [otTables.Anchor, otTables.Anchor, ...] --> otTables.Mark2Record
- self = ot.Mark2Record()
- self.Mark2Anchor = anchors
- return self
-
-
-def _getValueFormat(f, values, i):
- # Helper for buildPairPos{Glyphs|Classes}Subtable.
- if f is not None:
- return f
- mask = 0
- for value in values:
- if value is not None and value[i] is not None:
- mask |= value[i].getFormat()
- return mask
-
-
-def buildPairPosClassesSubtable(pairs, glyphMap, valueFormat1=None, valueFormat2=None):
- """Builds a class pair adjustment (GPOS2 format 2) subtable.
-
- Kerning tables are generally expressed as pair positioning tables using
- class-based pair adjustments. This routine builds format 2 PairPos
- subtables.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
- :py:class:`fontTools.otlLib.lookupBuilders.ClassPairPosSubtableBuilder`
- instead, as this takes care of ensuring that the supplied pairs can be
- formed into non-overlapping classes and emitting individual subtables
- whenever the non-overlapping requirement means that a new subtable is
- required.
-
- Example::
-
- pairs = {}
-
- pairs[(
- [ "K", "X" ],
- [ "W", "V" ]
- )] = ( buildValue({"xAdvance": +5}), buildValue({}) )
- # pairs[(... , ...)] = (..., ...)
-
- pairpos = buildPairPosClassesSubtable(pairs, font.getReverseGlyphMap())
-
- Args:
- pairs (dict): Pair positioning data; the keys being a two-element
- tuple of lists of glyphnames, and the values being a two-element
- tuple of ``otTables.ValueRecord`` objects.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
- valueFormat1: Force the "left" value records to the given format.
- valueFormat2: Force the "right" value records to the given format.
-
- Returns:
- A ``otTables.PairPos`` object.
- """
- coverage = set()
- classDef1 = ClassDefBuilder(useClass0=True)
- classDef2 = ClassDefBuilder(useClass0=False)
- for gc1, gc2 in sorted(pairs):
- coverage.update(gc1)
- classDef1.add(gc1)
- classDef2.add(gc2)
- self = ot.PairPos()
- self.Format = 2
- valueFormat1 = self.ValueFormat1 = _getValueFormat(valueFormat1, pairs.values(), 0)
- valueFormat2 = self.ValueFormat2 = _getValueFormat(valueFormat2, pairs.values(), 1)
- self.Coverage = buildCoverage(coverage, glyphMap)
- self.ClassDef1 = classDef1.build()
- self.ClassDef2 = classDef2.build()
- classes1 = classDef1.classes()
- classes2 = classDef2.classes()
- self.Class1Record = []
- for c1 in classes1:
- rec1 = ot.Class1Record()
- rec1.Class2Record = []
- self.Class1Record.append(rec1)
- for c2 in classes2:
- rec2 = ot.Class2Record()
- val1, val2 = pairs.get((c1, c2), (None, None))
- rec2.Value1 = (
- ValueRecord(src=val1, valueFormat=valueFormat1)
- if valueFormat1
- else None
- )
- rec2.Value2 = (
- ValueRecord(src=val2, valueFormat=valueFormat2)
- if valueFormat2
- else None
- )
- rec1.Class2Record.append(rec2)
- self.Class1Count = len(self.Class1Record)
- self.Class2Count = len(classes2)
- return self
-
-
-def buildPairPosGlyphs(pairs, glyphMap):
- """Builds a list of glyph-based pair adjustment (GPOS2 format 1) subtables.
-
- This organises a list of pair positioning adjustments into subtables based
- on common value record formats.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
- :py:class:`fontTools.otlLib.lookupBuilders.PairPosBuilder`
- instead.
-
- Example::
-
- pairs = {
- ("K", "W"): ( buildValue(xAdvance=+5), buildValue() ),
- ("K", "V"): ( buildValue(xAdvance=+5), buildValue() ),
- # ...
- }
-
- subtables = buildPairPosGlyphs(pairs, font.getReverseGlyphMap())
-
- Args:
- pairs (dict): Pair positioning data; the keys being a two-element
- tuple of glyphnames, and the values being a two-element
- tuple of ``otTables.ValueRecord`` objects.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- A list of ``otTables.PairPos`` objects.
- """
-
- p = {} # (formatA, formatB) --> {(glyphA, glyphB): (valA, valB)}
- for (glyphA, glyphB), (valA, valB) in pairs.items():
- formatA = valA.getFormat() if valA is not None else 0
- formatB = valB.getFormat() if valB is not None else 0
- pos = p.setdefault((formatA, formatB), {})
- pos[(glyphA, glyphB)] = (valA, valB)
- return [
- buildPairPosGlyphsSubtable(pos, glyphMap, formatA, formatB)
- for ((formatA, formatB), pos) in sorted(p.items())
- ]
-
-
-def buildPairPosGlyphsSubtable(pairs, glyphMap, valueFormat1=None, valueFormat2=None):
- """Builds a single glyph-based pair adjustment (GPOS2 format 1) subtable.
-
- This builds a PairPos subtable from a dictionary of glyph pairs and
- their positioning adjustments. See also :func:`buildPairPosGlyphs`.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
- :py:class:`fontTools.otlLib.lookupBuilders.PairPosBuilder` instead.
-
- Example::
-
- pairs = {
- ("K", "W"): ( buildValue(xAdvance=+5), buildValue() ),
- ("K", "V"): ( buildValue(xAdvance=+5), buildValue() ),
- # ...
- }
-
- pairpos = buildPairPosGlyphsSubtable(pairs, font.getReverseGlyphMap())
-
- Args:
- pairs (dict): Pair positioning data; the keys being a two-element
- tuple of glyphnames, and the values being a two-element
- tuple of ``otTables.ValueRecord`` objects.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
- valueFormat1: Force the "left" value records to the given format.
- valueFormat2: Force the "right" value records to the given format.
-
- Returns:
- A ``otTables.PairPos`` object.
- """
- self = ot.PairPos()
- self.Format = 1
- valueFormat1 = self.ValueFormat1 = _getValueFormat(valueFormat1, pairs.values(), 0)
- valueFormat2 = self.ValueFormat2 = _getValueFormat(valueFormat2, pairs.values(), 1)
- p = {}
- for (glyphA, glyphB), (valA, valB) in pairs.items():
- p.setdefault(glyphA, []).append((glyphB, valA, valB))
- self.Coverage = buildCoverage({g for g, _ in pairs.keys()}, glyphMap)
- self.PairSet = []
- for glyph in self.Coverage.glyphs:
- ps = ot.PairSet()
- ps.PairValueRecord = []
- self.PairSet.append(ps)
- for glyph2, val1, val2 in sorted(p[glyph], key=lambda x: glyphMap[x[0]]):
- pvr = ot.PairValueRecord()
- pvr.SecondGlyph = glyph2
- pvr.Value1 = (
- ValueRecord(src=val1, valueFormat=valueFormat1)
- if valueFormat1
- else None
- )
- pvr.Value2 = (
- ValueRecord(src=val2, valueFormat=valueFormat2)
- if valueFormat2
- else None
- )
- ps.PairValueRecord.append(pvr)
- ps.PairValueCount = len(ps.PairValueRecord)
- self.PairSetCount = len(self.PairSet)
- return self
-
-
-def buildSinglePos(mapping, glyphMap):
- """Builds a list of single adjustment (GPOS1) subtables.
-
- This builds a list of SinglePos subtables from a dictionary of glyph
- names and their positioning adjustments. The formats of the subtables are
- determined to optimize the size of the resulting subtables.
- See also :func:`buildSinglePosSubtable`.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
- :py:class:`fontTools.otlLib.lookupBuilders.SinglePosBuilder` instead.
-
- Example::
-
- mapping = {
- "V": buildValue({ "xAdvance" : +5 }),
- # ...
- }
-
- subtables = buildSinglePos(mapping, font.getReverseGlyphMap())
-
- Args:
- mapping (dict): A mapping between glyphnames and
- ``otTables.ValueRecord`` objects.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- A list of ``otTables.SinglePos`` objects.
- """
- result, handled = [], set()
- # In SinglePos format 1, the covered glyphs all share the same ValueRecord.
- # In format 2, each glyph has its own ValueRecord, but these records
- all have the same properties (e.g., all have an X but no Y placement).
- coverages, masks, values = {}, {}, {}
- for glyph, value in mapping.items():
- key = _getSinglePosValueKey(value)
- coverages.setdefault(key, []).append(glyph)
- masks.setdefault(key[0], []).append(key)
- values[key] = value
-
- # If a ValueRecord is shared between multiple glyphs, we generate
- # a SinglePos format 1 subtable; that is the most compact form.
- for key, glyphs in coverages.items():
- # 5 ushorts is the length of introducing another sublookup
- if len(glyphs) * _getSinglePosValueSize(key) > 5:
- format1Mapping = {g: values[key] for g in glyphs}
- result.append(buildSinglePosSubtable(format1Mapping, glyphMap))
- handled.add(key)
-
- # In the remaining ValueRecords, look for those whose valueFormat
- # (the set of used properties) is shared between multiple records.
- # These will get encoded in format 2.
- for valueFormat, keys in masks.items():
- f2 = [k for k in keys if k not in handled]
- if len(f2) > 1:
- format2Mapping = {}
- for k in f2:
- format2Mapping.update((g, values[k]) for g in coverages[k])
- result.append(buildSinglePosSubtable(format2Mapping, glyphMap))
- handled.update(f2)
-
- # The remaining ValueRecords are only used by a few glyphs, normally
- # one. We encode these in format 1 again.
- for key, glyphs in coverages.items():
- if key not in handled:
- for g in glyphs:
- st = buildSinglePosSubtable({g: values[key]}, glyphMap)
- result.append(st)
-
- # When the OpenType layout engine traverses the subtables, it will
- # stop after the first matching subtable. Therefore, we sort the
- # resulting subtables by decreasing coverage size; this increases
- # the chance that the layout engine can do an early exit. (Of course,
- # this would only be true if all glyphs were equally frequent, which
- # is not really the case; but we do not know their distribution).
- # If two subtables cover the same number of glyphs, we sort them
- # by glyph ID so that our output is deterministic.
- result.sort(key=lambda t: _getSinglePosTableKey(t, glyphMap))
- return result
-
-
-def buildSinglePosSubtable(values, glyphMap):
- """Builds a single adjustment (GPOS1) subtable.
-
- This builds a single SinglePos subtable from a dictionary of glyph
- names and their positioning adjustments. The format of the subtable is
- determined to optimize the size of the output.
- See also :func:`buildSinglePos`.
-
- Note that if you are implementing a layout compiler, you may find it more
- flexible to use
- :py:class:`fontTools.otlLib.lookupBuilders.SinglePosBuilder` instead.
-
- Example::
-
- mapping = {
- "V": buildValue({ "xAdvance" : +5 }),
- # ...
- }
-
- subtable = buildSinglePosSubtable(mapping, font.getReverseGlyphMap())
-
- Args:
- values (dict): A mapping between glyphnames and
- ``otTables.ValueRecord`` objects.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- A ``otTables.SinglePos`` object.
- """
- self = ot.SinglePos()
- self.Coverage = buildCoverage(values.keys(), glyphMap)
- valueFormat = self.ValueFormat = reduce(
- int.__or__, [v.getFormat() for v in values.values()], 0
- )
- valueRecords = [
- ValueRecord(src=values[g], valueFormat=valueFormat)
- for g in self.Coverage.glyphs
- ]
- if all(v == valueRecords[0] for v in valueRecords):
- self.Format = 1
- if self.ValueFormat != 0:
- self.Value = valueRecords[0]
- else:
- self.Value = None
- else:
- self.Format = 2
- self.Value = valueRecords
- self.ValueCount = len(self.Value)
- return self
-
-
-def _getSinglePosTableKey(subtable, glyphMap):
- assert isinstance(subtable, ot.SinglePos), subtable
- glyphs = subtable.Coverage.glyphs
- return (-len(glyphs), glyphMap[glyphs[0]])
-
-
-def _getSinglePosValueKey(valueRecord):
- # otBase.ValueRecord --> (2, ("YPlacement", 12))
- assert isinstance(valueRecord, ValueRecord), valueRecord
- valueFormat, result = 0, []
- for name, value in valueRecord.__dict__.items():
- if isinstance(value, ot.Device):
- result.append((name, _makeDeviceTuple(value)))
- else:
- result.append((name, value))
- valueFormat |= valueRecordFormatDict[name][0]
- result.sort()
- result.insert(0, valueFormat)
- return tuple(result)
-
-
-_DeviceTuple = namedtuple("_DeviceTuple", "DeltaFormat StartSize EndSize DeltaValue")
-
-
-def _makeDeviceTuple(device):
- # otTables.Device --> tuple, for making device tables unique
- return _DeviceTuple(
- device.DeltaFormat,
- device.StartSize,
- device.EndSize,
- () if device.DeltaFormat & 0x8000 else tuple(device.DeltaValue),
- )
-
-
-def _getSinglePosValueSize(valueKey):
- # Returns how many ushorts this valueKey (short form of ValueRecord) takes up
- count = 0
- for _, v in valueKey[1:]:
- if isinstance(v, _DeviceTuple):
- count += len(v.DeltaValue) + 3
- else:
- count += 1
- return count
-
-
-def buildValue(value):
- """Builds a positioning value record.
-
- Value records are used to specify coordinates and adjustments for
- positioning and attaching glyphs. Many of the positioning functions
- in this library take ``otTables.ValueRecord`` objects as arguments.
- This function builds value records from dictionaries.
-
- Args:
- value (dict): A dictionary with zero or more of the following keys:
- - ``xPlacement``
- - ``yPlacement``
- - ``xAdvance``
- - ``yAdvance``
- - ``xPlaDevice``
- - ``yPlaDevice``
- - ``xAdvDevice``
- - ``yAdvDevice``
-
- Returns:
- An ``otTables.ValueRecord`` object.
- """
- self = ValueRecord()
- for k, v in value.items():
- setattr(self, k, v)
- return self
-
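-# A minimal sketch of buildValue() (the adjustment is invented): each key of
-# the dictionary simply becomes an attribute of the returned ValueRecord.
-#
-#     vr = buildValue({"xAdvance": -40})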
-
-# GDEF
-
-
-def buildAttachList(attachPoints, glyphMap):
- """Builds an AttachList subtable.
-
- A GDEF table may contain an Attachment Point List table (AttachList)
- which stores the contour indices of attachment points for glyphs with
- attachment points. This routine builds AttachList subtables.
-
- Args:
- attachPoints (dict): A mapping between glyph names and a list of
- contour indices.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- An ``otTables.AttachList`` object if attachment points are supplied,
- or ``None`` otherwise.
- """
- if not attachPoints:
- return None
- self = ot.AttachList()
- self.Coverage = buildCoverage(attachPoints.keys(), glyphMap)
- self.AttachPoint = [buildAttachPoint(attachPoints[g]) for g in self.Coverage.glyphs]
- self.GlyphCount = len(self.AttachPoint)
- return self
-
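-# Usage sketch for buildAttachList(); the glyph names and contour point
-# indices are invented, and `font` is assumed to be a TTFont instance:
-#
-#     attachList = buildAttachList(
-#         {"a": [7, 19], "e": [3]},  # contour point indices per glyph
-#         font.getReverseGlyphMap(),
-#     )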
-
-def buildAttachPoint(points):
- # [4, 23, 41] --> otTables.AttachPoint
- # Only used by above.
- if not points:
- return None
- self = ot.AttachPoint()
- self.PointIndex = sorted(set(points))
- self.PointCount = len(self.PointIndex)
- return self
-
-
-def buildCaretValueForCoord(coord):
- # 500 --> otTables.CaretValue, format 1
- # (500, DeviceTable) --> otTables.CaretValue, format 3
- self = ot.CaretValue()
- if isinstance(coord, tuple):
- self.Format = 3
- self.Coordinate, self.DeviceTable = coord
- else:
- self.Format = 1
- self.Coordinate = coord
- return self
-
-
-def buildCaretValueForPoint(point):
- # 4 --> otTables.CaretValue, format 2
- self = ot.CaretValue()
- self.Format = 2
- self.CaretValuePoint = point
- return self
-
-
-def buildLigCaretList(coords, points, glyphMap):
- """Builds a ligature caret list table.
-
- Ligatures appear as a single glyph representing multiple characters; however
- when, for example, editing text containing a ``f_i`` ligature, the user may
- want to place the cursor between the ``f`` and the ``i``. The ligature caret
- list in the GDEF table specifies the position to display the "caret" (the
- character insertion indicator, typically a flashing vertical bar) "inside"
- the ligature to represent an insertion point. The insertion positions may
- be specified either by coordinate or by contour point.
-
- Example::
-
- coords = {
- "f_f_i": [300, 600] # f|fi cursor at 300 units, ff|i cursor at 600.
- }
- points = {
- "c_t": [28] # c|t cursor appears at coordinate of contour point 28.
- }
- ligcaretlist = buildLigCaretList(coords, points, font.getReverseGlyphMap())
-
- Args:
- coords: A mapping between glyph names and a list of coordinates for
- the insertion point of each ligature component after the first one.
- points: A mapping between glyph names and a list of contour points for
- the insertion point of each ligature component after the first one.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- A ``otTables.LigCaretList`` object if any carets are present, or
- ``None`` otherwise."""
- glyphs = set(coords.keys()) if coords else set()
- if points:
- glyphs.update(points.keys())
- carets = {g: buildLigGlyph(coords.get(g), points.get(g)) for g in glyphs}
- carets = {g: c for g, c in carets.items() if c is not None}
- if not carets:
- return None
- self = ot.LigCaretList()
- self.Coverage = buildCoverage(carets.keys(), glyphMap)
- self.LigGlyph = [carets[g] for g in self.Coverage.glyphs]
- self.LigGlyphCount = len(self.LigGlyph)
- return self
-
-
-def buildLigGlyph(coords, points):
- # ([500], [4]) --> otTables.LigGlyph; None for empty coords/points
- carets = []
- if coords:
- coords = sorted(coords, key=lambda c: c[0] if isinstance(c, tuple) else c)
- carets.extend([buildCaretValueForCoord(c) for c in coords])
- if points:
- carets.extend([buildCaretValueForPoint(p) for p in sorted(points)])
- if not carets:
- return None
- self = ot.LigGlyph()
- self.CaretValue = carets
- self.CaretCount = len(self.CaretValue)
- return self
-
-
-def buildMarkGlyphSetsDef(markSets, glyphMap):
- """Builds a mark glyph sets definition table.
-
- OpenType Layout lookups may choose to use mark filtering sets to consider
- or ignore particular combinations of marks. These sets are specified by
- setting a flag on the lookup, but the mark filtering sets are defined in
- the ``GDEF`` table. This routine builds the subtable containing the mark
- glyph set definitions.
-
- Example::
-
- set0 = {"acute", "grave"}
- set1 = {"caron", "grave"}
-
- markglyphsets = buildMarkGlyphSetsDef([set0, set1], font.getReverseGlyphMap())
-
- Args:
-
- markSets: A list of sets of glyphnames.
- glyphMap: a glyph name to ID map, typically returned from
- ``font.getReverseGlyphMap()``.
-
- Returns:
- An ``otTables.MarkGlyphSetsDef`` object, or ``None`` if no mark sets are supplied.
- """
- if not markSets:
- return None
- self = ot.MarkGlyphSetsDef()
- self.MarkSetTableFormat = 1
- self.Coverage = [buildCoverage(m, glyphMap) for m in markSets]
- self.MarkSetCount = len(self.Coverage)
- return self
-
-
-class ClassDefBuilder(object):
- """Helper for building ClassDef tables."""
-
- def __init__(self, useClass0):
- self.classes_ = set()
- self.glyphs_ = {}
- self.useClass0_ = useClass0
-
- def canAdd(self, glyphs):
- if isinstance(glyphs, (set, frozenset)):
- glyphs = sorted(glyphs)
- glyphs = tuple(glyphs)
- if glyphs in self.classes_:
- return True
- for glyph in glyphs:
- if glyph in self.glyphs_:
- return False
- return True
-
- def add(self, glyphs):
- if isinstance(glyphs, (set, frozenset)):
- glyphs = sorted(glyphs)
- glyphs = tuple(glyphs)
- if glyphs in self.classes_:
- return
- self.classes_.add(glyphs)
- for glyph in glyphs:
- if glyph in self.glyphs_:
- raise OpenTypeLibError(
- f"Glyph {glyph} is already present in class.", None
- )
- self.glyphs_[glyph] = glyphs
-
- def classes(self):
- # In ClassDef1 tables, class id #0 does not need to be encoded
- # because zero is the default. Therefore, we use id #0 for the
- # glyph class that has the largest number of members. However,
- # in other tables than ClassDef1, 0 means "every other glyph"
- # so we should not use that ID for any real glyph classes;
- # we implement this by inserting an empty set at position 0.
- #
- # TODO: Instead of counting the number of glyphs in each class,
- # we should determine the encoded size. If the glyphs in a large
- # class form a contiguous range, the encoding is actually quite
- # compact, whereas a non-contiguous set might need a lot of bytes
- # in the output file. We don't get this right with the key below.
- result = sorted(self.classes_, key=lambda s: (len(s), s), reverse=True)
- if not self.useClass0_:
- result.insert(0, frozenset())
- return result
-
- def build(self):
- glyphClasses = {}
- for classID, glyphs in enumerate(self.classes()):
- if classID == 0:
- continue
- for glyph in glyphs:
- glyphClasses[glyph] = classID
- classDef = ot.ClassDef()
- classDef.classDefs = glyphClasses
- return classDef
-
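-# Usage sketch for ClassDefBuilder as buildPairPosClassesSubtable() uses it
-# (glyph names invented):
-#
-#     classDef2 = ClassDefBuilder(useClass0=False)
-#     classDef2.add({"V", "W"})
-#     classDef2.add({"A", "T"})
-#     classdef = classDef2.build()
-#     # classdef.classDefs == {"V": 1, "W": 1, "A": 2, "T": 2}; class 0 stays
-#     # free because useClass0 is False ("every other glyph" in ClassDef2).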
-
-AXIS_VALUE_NEGATIVE_INFINITY = fixedToFloat(-0x80000000, 16)
-AXIS_VALUE_POSITIVE_INFINITY = fixedToFloat(0x7FFFFFFF, 16)
-
-
-def buildStatTable(
- ttFont, axes, locations=None, elidedFallbackName=2, windowsNames=True, macNames=True
-):
- """Add a 'STAT' table to 'ttFont'.
-
- 'axes' is a list of dictionaries describing axes and their
- values.
-
- Example::
-
- axes = [
- dict(
- tag="wght",
- name="Weight",
- ordering=0, # optional
- values=[
- dict(value=100, name='Thin'),
- dict(value=300, name='Light'),
- dict(value=400, name='Regular', flags=0x2),
- dict(value=900, name='Black'),
- ],
- )
- ]
-
- Each axis dict must have 'tag' and 'name' items. 'tag' maps
- to the 'AxisTag' field. 'name' can be a name ID (int), a string,
- or a dictionary containing multilingual names (see the
- addMultilingualName() name table method), and will translate to
- the AxisNameID field.
-
- An axis dict may contain an 'ordering' item that maps to the
- AxisOrdering field. If omitted, the order of the axes list is
- used to calculate AxisOrdering fields.
-
- The axis dict may contain a 'values' item, which is a list of
- dictionaries describing AxisValue records belonging to this axis.
-
- Each value dict must have a 'name' item, which can be a name ID
- (int), a string, or a dictionary containing multilingual names,
- like the axis name. It translates to the ValueNameID field.
-
- Optionally the value dict can contain a 'flags' item. It maps to
- the AxisValue Flags field, and will be 0 when omitted.
-
- The format of the AxisValue is determined by the remaining contents
- of the value dictionary:
-
- If the value dict contains a 'value' item, an AxisValue record
- Format 1 is created. If in addition to the 'value' item it contains
- a 'linkedValue' item, an AxisValue record Format 3 is built.
-
- If the value dict contains a 'nominalValue' item, an AxisValue
- record Format 2 is built. Optionally it may contain 'rangeMinValue'
- and 'rangeMaxValue' items. These map to -Infinity and +Infinity
- respectively if omitted.
-
- You cannot specify Format 4 AxisValue tables this way, as they are
- not tied to a single axis, and specify a name for a location that
- is defined by multiple axes values. Instead, you need to supply the
- 'locations' argument.
-
- The optional 'locations' argument specifies AxisValue Format 4
- tables. It should be a list of dicts, where each dict has a 'name'
- item, which works just like the value dicts above, an optional
- 'flags' item (defaulting to 0x0), and a 'location' dict. A
- location dict key is an axis tag, and the associated value is the
- location on the specified axis. They map to the AxisIndex and Value
- fields of the AxisValueRecord.
-
- Example::
-
- locations = [
- dict(name='Regular ABCD', location=dict(wght=300, ABCD=100)),
- dict(name='Bold ABCD XYZ', location=dict(wght=600, ABCD=200)),
- ]
-
- The optional 'elidedFallbackName' argument can be a name ID (int),
- a string, a dictionary containing multilingual names, or a list of
- STATNameStatements. It translates to the ElidedFallbackNameID field.
-
- The 'ttFont' argument must be a TTFont instance that already has a
- 'name' table. If a 'STAT' table already exists, it will be
- overwritten by the newly created one.
- """
- ttFont["STAT"] = ttLib.newTable("STAT")
- statTable = ttFont["STAT"].table = ot.STAT()
- nameTable = ttFont["name"]
- statTable.ElidedFallbackNameID = _addName(
- nameTable, elidedFallbackName, windows=windowsNames, mac=macNames
- )
-
- # 'locations' contains data for AxisValue Format 4
- axisRecords, axisValues = _buildAxisRecords(
- axes, nameTable, windowsNames=windowsNames, macNames=macNames
- )
- if not locations:
- statTable.Version = 0x00010001
- else:
- # We'll be adding Format 4 AxisValue records, which
- # requires a higher table version
- statTable.Version = 0x00010002
- multiAxisValues = _buildAxisValuesFormat4(
- locations, axes, nameTable, windowsNames=windowsNames, macNames=macNames
- )
- axisValues = multiAxisValues + axisValues
- nameTable.names.sort()
-
- # Store AxisRecords
- axisRecordArray = ot.AxisRecordArray()
- axisRecordArray.Axis = axisRecords
- # XXX these should not be hard-coded but computed automatically
- statTable.DesignAxisRecordSize = 8
- statTable.DesignAxisRecord = axisRecordArray
- statTable.DesignAxisCount = len(axisRecords)
-
- statTable.AxisValueCount = 0
- statTable.AxisValueArray = None
- if axisValues:
- # Store AxisValueRecords
- axisValueArray = ot.AxisValueArray()
- axisValueArray.AxisValue = axisValues
- statTable.AxisValueArray = axisValueArray
- statTable.AxisValueCount = len(axisValues)
-
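-# End-to-end sketch for buildStatTable(); the font path, axis and value names
-# are invented, but the call shape follows the docstring above:
-#
-#     from fontTools.ttLib import TTFont
-#     font = TTFont("MyVariable.ttf")  # must already contain a 'name' table
-#     axes = [
-#         dict(tag="wght", name="Weight", values=[
-#             dict(value=400, name="Regular", flags=0x2, linkedValue=700),
-#             dict(value=700, name="Bold"),
-#         ]),
-#     ]
-#     buildStatTable(font, axes, elidedFallbackName="Regular")
-#     font.save("MyVariable-STAT.ttf")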
-
-def _buildAxisRecords(axes, nameTable, windowsNames=True, macNames=True):
- axisRecords = []
- axisValues = []
- for axisRecordIndex, axisDict in enumerate(axes):
- axis = ot.AxisRecord()
- axis.AxisTag = axisDict["tag"]
- axis.AxisNameID = _addName(
- nameTable, axisDict["name"], 256, windows=windowsNames, mac=macNames
- )
- axis.AxisOrdering = axisDict.get("ordering", axisRecordIndex)
- axisRecords.append(axis)
-
- for axisVal in axisDict.get("values", ()):
- axisValRec = ot.AxisValue()
- axisValRec.AxisIndex = axisRecordIndex
- axisValRec.Flags = axisVal.get("flags", 0)
- axisValRec.ValueNameID = _addName(
- nameTable, axisVal["name"], windows=windowsNames, mac=macNames
- )
-
- if "value" in axisVal:
- axisValRec.Value = axisVal["value"]
- if "linkedValue" in axisVal:
- axisValRec.Format = 3
- axisValRec.LinkedValue = axisVal["linkedValue"]
- else:
- axisValRec.Format = 1
- elif "nominalValue" in axisVal:
- axisValRec.Format = 2
- axisValRec.NominalValue = axisVal["nominalValue"]
- axisValRec.RangeMinValue = axisVal.get(
- "rangeMinValue", AXIS_VALUE_NEGATIVE_INFINITY
- )
- axisValRec.RangeMaxValue = axisVal.get(
- "rangeMaxValue", AXIS_VALUE_POSITIVE_INFINITY
- )
- else:
- raise ValueError("Can't determine format for AxisValue")
-
- axisValues.append(axisValRec)
- return axisRecords, axisValues
-
-
-def _buildAxisValuesFormat4(
- locations, axes, nameTable, windowsNames=True, macNames=True
-):
- axisTagToIndex = {}
- for axisRecordIndex, axisDict in enumerate(axes):
- axisTagToIndex[axisDict["tag"]] = axisRecordIndex
-
- axisValues = []
- for axisLocationDict in locations:
- axisValRec = ot.AxisValue()
- axisValRec.Format = 4
- axisValRec.ValueNameID = _addName(
- nameTable, axisLocationDict["name"], windows=windowsNames, mac=macNames
- )
- axisValRec.Flags = axisLocationDict.get("flags", 0)
- axisValueRecords = []
- for tag, value in axisLocationDict["location"].items():
- avr = ot.AxisValueRecord()
- avr.AxisIndex = axisTagToIndex[tag]
- avr.Value = value
- axisValueRecords.append(avr)
- axisValueRecords.sort(key=lambda avr: avr.AxisIndex)
- axisValRec.AxisCount = len(axisValueRecords)
- axisValRec.AxisValueRecord = axisValueRecords
- axisValues.append(axisValRec)
- return axisValues
-
-
-def _addName(nameTable, value, minNameID=0, windows=True, mac=True):
- if isinstance(value, int):
- # Already a nameID
- return value
- if isinstance(value, str):
- names = dict(en=value)
- elif isinstance(value, dict):
- names = value
- elif isinstance(value, list):
- nameID = nameTable._findUnusedNameID()
- for nameRecord in value:
- if isinstance(nameRecord, STATNameStatement):
- nameTable.setName(
- nameRecord.string,
- nameID,
- nameRecord.platformID,
- nameRecord.platEncID,
- nameRecord.langID,
- )
- else:
- raise TypeError("value must be a list of STATNameStatements")
- return nameID
- else:
- raise TypeError("value must be int, str, dict or list")
- return nameTable.addMultilingualName(
- names, windows=windows, mac=mac, minNameID=minNameID
- )
diff --git a/spaces/codertoro/gpt-academic/request_llm/bridge_tgui.py b/spaces/codertoro/gpt-academic/request_llm/bridge_tgui.py
deleted file mode 100644
index 22a407557fa884f23dd768164b009d7bed841dd9..0000000000000000000000000000000000000000
--- a/spaces/codertoro/gpt-academic/request_llm/bridge_tgui.py
+++ /dev/null
@@ -1,167 +0,0 @@
-'''
-Contributed by SagsMug. Modified by binary-husky
-https://github.com/oobabooga/text-generation-webui/pull/175
-'''
-
-import asyncio
-import json
-import random
-import string
-import websockets
-import logging
-import time
-import threading
-import importlib
-from toolbox import get_conf, update_ui
-LLM_MODEL, = get_conf('LLM_MODEL')
-
-# "TGUI:galactica-1.3b@localhost:7860"
-model_name, addr_port = LLM_MODEL.split('@')
-assert ':' in addr_port, "LLM_MODEL is not in the expected format! " + LLM_MODEL
-addr, port = addr_port.split(':')
-
-def random_hash():
- letters = string.ascii_lowercase + string.digits
- return ''.join(random.choice(letters) for i in range(9))
-
-async def run(context, max_token=512):
- params = {
- 'max_new_tokens': max_token,
- 'do_sample': True,
- 'temperature': 0.5,
- 'top_p': 0.9,
- 'typical_p': 1,
- 'repetition_penalty': 1.05,
- 'encoder_repetition_penalty': 1.0,
- 'top_k': 0,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': True,
- 'seed': -1,
- }
- session = random_hash()
-
- async with websockets.connect(f"ws://{addr}:{port}/queue/join") as websocket:
- while content := json.loads(await websocket.recv()):
- #Python3.10 syntax, replace with if elif on older
- if content["msg"] == "send_hash":
- await websocket.send(json.dumps({
- "session_hash": session,
- "fn_index": 12
- }))
- elif content["msg"] == "estimation":
- pass
- elif content["msg"] == "send_data":
- await websocket.send(json.dumps({
- "session_hash": session,
- "fn_index": 12,
- "data": [
- context,
- params['max_new_tokens'],
- params['do_sample'],
- params['temperature'],
- params['top_p'],
- params['typical_p'],
- params['repetition_penalty'],
- params['encoder_repetition_penalty'],
- params['top_k'],
- params['min_length'],
- params['no_repeat_ngram_size'],
- params['num_beams'],
- params['penalty_alpha'],
- params['length_penalty'],
- params['early_stopping'],
- params['seed'],
- ]
- }))
- elif content["msg"] == "process_starts":
- pass
- elif content["msg"] in ["process_generating", "process_completed"]:
- yield content["output"]["data"][0]
- # You can search for your desired end indicator and
- # stop generation by closing the websocket here
- if (content["msg"] == "process_completed"):
- break
-
-
-
-
-
-def predict_tgui(inputs, top_p, temperature, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
- Send the request to chatGPT and fetch the output as a stream.
- Used for the basic chat functionality.
- inputs is the input of the current query
- top_p, temperature are chatGPT's internal tuning parameters
- history is the list of previous turns (note that if either inputs or history is too long, a token-overflow error will be triggered)
- chatbot is the conversation list shown in the WebUI; modify it and then yield it out to update the chat display directly
- additional_fn indicates which button was clicked; the buttons are defined in functional.py
- """
- if additional_fn is not None:
- import core_functional
- importlib.reload(core_functional) # hot-reload the prompt definitions
- core_functional = core_functional.get_core_functions()
- if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
- inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
-
- raw_input = "What I would like to say is the following: " + inputs
- logging.info(f'[raw_input] {raw_input}')
- history.extend([inputs, ""])
- chatbot.append([inputs, ""])
- yield from update_ui(chatbot=chatbot, history=history, msg="Waiting for response") # refresh the UI
-
- prompt = inputs
- tgui_say = ""
-
- mutable = ["", time.time()]
- def run_coorotine(mutable):
- async def get_result(mutable):
- async for response in run(prompt):
- print(response[len(mutable[0]):])
- mutable[0] = response
- if (time.time() - mutable[1]) > 3:
- print('exit when no listener')
- break
- asyncio.run(get_result(mutable))
-
- thread_listen = threading.Thread(target=run_coorotine, args=(mutable,), daemon=True)
- thread_listen.start()
-
- while thread_listen.is_alive():
- time.sleep(1)
- mutable[1] = time.time()
- # Print intermediate steps
- if tgui_say != mutable[0]:
- tgui_say = mutable[0]
- history[-1] = tgui_say
- chatbot[-1] = (history[-2], history[-1])
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- logging.info(f'[response] {tgui_say}')
-
-
-
-def predict_tgui_no_ui(inputs, top_p, temperature, history=[], sys_prompt=""):
- raw_input = "What I would like to say is the following: " + inputs
- prompt = inputs
- tgui_say = ""
- mutable = ["", time.time()]
- def run_coorotine(mutable):
- async def get_result(mutable):
- async for response in run(prompt, max_token=20):
- print(response[len(mutable[0]):])
- mutable[0] = response
- if (time.time() - mutable[1]) > 3:
- print('exit when no listener')
- break
- asyncio.run(get_result(mutable))
- thread_listen = threading.Thread(target=run_coorotine, args=(mutable,))
- thread_listen.start()
- while thread_listen.is_alive():
- time.sleep(1)
- mutable[1] = time.time()
- tgui_say = mutable[0]
- return tgui_say
diff --git a/spaces/codys12/MergeLlama-7b/app.py b/spaces/codys12/MergeLlama-7b/app.py
deleted file mode 100644
index 8ddb60417d941f618b2a1cdacb89cad9d2711f19..0000000000000000000000000000000000000000
--- a/spaces/codys12/MergeLlama-7b/app.py
+++ /dev/null
@@ -1,156 +0,0 @@
-#!/usr/bin/env python
-
-import os
-from threading import Thread
-from typing import Iterator
-
-import gradio as gr
-import spaces
-import torch
-from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
-from peft import PeftModel, PeftConfig
-
-DESCRIPTION = "# MergeLlama-7b\nThis is a conversational interface powered by the MergeLlama-7b model, a finetune of CodeLlama-7b designed to assist developers in resolving merge conflicts in their code. "
-DESCRIPTION += "It leverages the capabilities of deep learning to provide suggestions for reconciling code differences, presenting potential resolutions for highlighted changes\n"
-DESCRIPTION += "The feedback from this space will help develop future versions including more powerful 13b and 34b versions."
-
-DESCRIPTION += "\n# How to use: \n"
-DESCRIPTION += "1. Input your merge conflict in the chat in the following format:\n```\n<<<<<<<\n[Current change]\n=======\n[Incoming change]\n>>>>>>>\n```\n"
-DESCRIPTION += "The model will generate the merge resolution. Context can be added before the conflict and multiple conflicts/resolutions can be chained together for context.\n"
-DESCRIPTION += "**Additional Information:**\n"
-DESCRIPTION += "- The model behind this tool is based on the MergeLlama dataset, which can be found [here](https://huggingface.co/datasets/codys12/MergeLlama).\n"
-DESCRIPTION += "- For more information about the MergeLlama-7b model, visit [here](https://huggingface.co/codys12/MergeLlama-7b).\n"
-DESCRIPTION += "- If you are interested in supporting the larger versions of this model, such as the 13b and 34b variants, you can check them out [here](https://www.dreamcatcher.co/ProjectPage?projectId=uibaxk4sfzetpkg7ch71ui).\n"
-DESCRIPTION += "- This model was trained on [DreamcatcherAI](https://www.dreamcatcher.co/Discover)\n"
-
-if not torch.cuda.is_available():
- DESCRIPTION += "\n
Running on CPU 🥶 This demo does not work on CPU.
"
-
-MAX_MAX_NEW_TOKENS = 2048
-DEFAULT_MAX_NEW_TOKENS = 256
-MAX_INPUT_TOKEN_LENGTH = 4096
-
-if torch.cuda.is_available():
- model_id = "codys12/MergeLlama-7b"
- model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16, device_map=0, cache_dir="/data")
- tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf", trust_remote_code=True)
- tokenizer.pad_token = tokenizer.eos_token
- tokenizer.padding_side = "right"
-
-
-@spaces.GPU
-def generate(
- message: str,
- chat_history: list[tuple[str, str]],
- max_new_tokens: int = 1024,
- #temperature: float = 0.6,
- #top_p: float = 0.9,
- #top_k: int = 50,
- #repetition_penalty: float = 1.2,
-) -> Iterator[str]:
- conversation = []
- current_input = ""
- for user, assistant in chat_history:
- current_input += user
- current_input += assistant
-
- history = current_input
- current_input += message
- current_input += "\n"
-
- device = "cuda:0"
- input_ids = tokenizer(current_input, return_tensors="pt").input_ids.to(device)
-
-
- if input_ids.shape[1] > MAX_INPUT_TOKEN_LENGTH:
- input_ids = input_ids[:, -MAX_INPUT_TOKEN_LENGTH:]
- gr.Warning(f"Trimmed input from conversation as it was longer than {MAX_INPUT_TOKEN_LENGTH} tokens.")
-
- streamer = TextIteratorStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
- generate_kwargs = dict(
- {"input_ids": input_ids},
- streamer=streamer,
- max_new_tokens=max_new_tokens,
- #do_sample=True,
- #top_p=top_p,
- #top_k=top_k,
- #temperature=temperature,
- #num_beams=1,
- #repetition_penalty=repetition_penalty,
- )
- t = Thread(target=model.generate, kwargs=generate_kwargs)
- t.start()
-
- outputs = []
- for text in streamer:
- outputs.append(text)
- combined_text = "".join(outputs)
- if "<<<<<<<" in combined_text:
- combined_text = combined_text.replace("<<<<<<<", "") # Remove the unwanted string
- yield combined_text
- break
- else:
- yield combined_text
-
-
-chat_interface = gr.ChatInterface(
- fn=generate,
- additional_inputs=[
- gr.Slider(
- label="Max new tokens",
- minimum=1,
- maximum=MAX_MAX_NEW_TOKENS,
- step=1,
- value=DEFAULT_MAX_NEW_TOKENS,
- ),
- # gr.Slider(
- # label="Temperature",
- # minimum=0.1,
- # maximum=4.0,
- # step=0.1,
- # value=0.6,
- # ),
- # gr.Slider(
- # label="Top-p (nucleus sampling)",
- # minimum=0.05,
- # maximum=1.0,
- # step=0.05,
- # value=0.9,
- # ),
- # gr.Slider(
- # label="Top-k",
- # minimum=1,
- # maximum=1000,
- # step=1,
- # value=50,
- # ),
- # gr.Slider(
- # label="Repetition penalty",
- # minimum=0.1,
- # maximum=2.0,
- # step=0.05,
- # value=1.2,
- # ),
- ],
- stop_btn=None,
- examples=[
- ["<<<<<<<\nlet x = max(y, 11)\n=======\nvar x = max(y, 12, z)\n>>>>>>>"],
- ["<<<<<<<\nclass Calculator { \nadd(a, b) {\n return a + b;\n }\n}\n=======\nclass Calculator {\n subtract(a, b) {\n return a - b;\n }\n}\n>>>>>>>"],
- ["<<<<<<<\nfunction greet(name) {\n return `Hello, ${name}! Have a good day.`;\n}\n=======\nfunction greet(name, time) {\n return `Good ${time}, ${name}!`;\n}\n>>>>>>>"],
- ["<<<<<<<\nconst user = {\n name: 'John',\n age: 30\n}\n=======\nconst user = {\n name: 'John',\n email: 'john@example.com'\n}\n>>>>>>>"],
- ["<<<<<<<\n.btn {\n background-color: blue;\n padding: 10px 20px;\n}\n=======\n.btn {\n border: 1px solid black;\n font-size: 16px;\n}\n>>>>>>>"],
- ["<<<<<<<\n var visibleSets = beatmapSets.Where(s => !s.Filtered).ToList();\n if (!visibleSets.Any())\n return;\n\n=======\n\n var visible = beatmapSets.Where(s => !s.Filtered).ToList();\n if (!visible.Any())\n return false;\n\n>>>>>>>"],
- ],
-)
-
-with gr.Blocks(css="style.css") as demo:
- gr.Markdown(DESCRIPTION)
- gr.DuplicateButton(
- value="Duplicate Space for private use",
- elem_id="duplicate-button",
- visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1",
- )
- chat_interface.render()
-
-if __name__ == "__main__":
- demo.queue(max_size=20).launch()
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g722dsp.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g722dsp.h
deleted file mode 100644
index c956a1e161a20e5465a53307e1a971adeb891355..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g722dsp.h
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Copyright (c) 2015 Peter Meerwald
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_G722DSP_H
-#define AVCODEC_G722DSP_H
-
-#include <stdint.h>
-
-typedef struct G722DSPContext {
- void (*apply_qmf)(const int16_t *prev_samples, int xout[2]);
-} G722DSPContext;
-
-void ff_g722dsp_init(G722DSPContext *c);
-void ff_g722dsp_init_arm(G722DSPContext *c);
-void ff_g722dsp_init_x86(G722DSPContext *c);
-
-#endif /* AVCODEC_G722DSP_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264dsp.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264dsp.h
deleted file mode 100644
index e0880c4d889c697b3c3eae77ff273fb18ce95602..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264dsp.h
+++ /dev/null
@@ -1,135 +0,0 @@
-/*
- * Copyright (c) 2003-2010 Michael Niedermayer
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * H.264 DSP functions.
- * @author Michael Niedermayer
- */
-
-#ifndef AVCODEC_H264DSP_H
-#define AVCODEC_H264DSP_H
-
-#include <stdint.h>
-#include <stddef.h>
-
-typedef void (*h264_weight_func)(uint8_t *block, ptrdiff_t stride, int height,
- int log2_denom, int weight, int offset);
-typedef void (*h264_biweight_func)(uint8_t *dst, uint8_t *src,
- ptrdiff_t stride, int height, int log2_denom,
- int weightd, int weights, int offset);
-
-/**
- * Context for storing H.264 DSP functions
- */
-typedef struct H264DSPContext {
- /* weighted MC */
- h264_weight_func weight_h264_pixels_tab[4];
- h264_biweight_func biweight_h264_pixels_tab[4];
-
- /* loop filter */
- void (*h264_v_loop_filter_luma)(uint8_t *pix /*align 16*/, ptrdiff_t stride,
- int alpha, int beta, int8_t *tc0);
- void (*h264_h_loop_filter_luma)(uint8_t *pix /*align 4 */, ptrdiff_t stride,
- int alpha, int beta, int8_t *tc0);
- void (*h264_h_loop_filter_luma_mbaff)(uint8_t *pix /*align 16*/, ptrdiff_t stride,
- int alpha, int beta, int8_t *tc0);
- /* v/h_loop_filter_luma_intra: align 16 */
- void (*h264_v_loop_filter_luma_intra)(uint8_t *pix, ptrdiff_t stride,
- int alpha, int beta);
- void (*h264_h_loop_filter_luma_intra)(uint8_t *pix, ptrdiff_t stride,
- int alpha, int beta);
- void (*h264_h_loop_filter_luma_mbaff_intra)(uint8_t *pix /*align 16*/,
- ptrdiff_t stride, int alpha, int beta);
- void (*h264_v_loop_filter_chroma)(uint8_t *pix /*align 8*/, ptrdiff_t stride,
- int alpha, int beta, int8_t *tc0);
- void (*h264_h_loop_filter_chroma)(uint8_t *pix /*align 4*/, ptrdiff_t stride,
- int alpha, int beta, int8_t *tc0);
- void (*h264_h_loop_filter_chroma_mbaff)(uint8_t *pix /*align 8*/,
- ptrdiff_t stride, int alpha, int beta,
- int8_t *tc0);
- void (*h264_v_loop_filter_chroma_intra)(uint8_t *pix /*align 8*/,
- ptrdiff_t stride, int alpha, int beta);
- void (*h264_h_loop_filter_chroma_intra)(uint8_t *pix /*align 8*/,
- ptrdiff_t stride, int alpha, int beta);
- void (*h264_h_loop_filter_chroma_mbaff_intra)(uint8_t *pix /*align 8*/,
- ptrdiff_t stride, int alpha, int beta);
- // h264_loop_filter_strength: simd only. the C version is inlined in h264_loopfilter.c
- void (*h264_loop_filter_strength)(int16_t bS[2][4][4], uint8_t nnz[40],
- int8_t ref[2][40], int16_t mv[2][40][2],
- int bidir, int edges, int step,
- int mask_mv0, int mask_mv1, int field);
-
- /* IDCT */
- void (*h264_idct_add)(uint8_t *dst /*align 4*/,
- int16_t *block /*align 16*/, int stride);
- void (*h264_idct8_add)(uint8_t *dst /*align 8*/,
- int16_t *block /*align 16*/, int stride);
- void (*h264_idct_dc_add)(uint8_t *dst /*align 4*/,
- int16_t *block /*align 16*/, int stride);
- void (*h264_idct8_dc_add)(uint8_t *dst /*align 8*/,
- int16_t *block /*align 16*/, int stride);
-
- void (*h264_idct_add16)(uint8_t *dst /*align 16*/, const int *blockoffset,
- int16_t *block /*align 16*/, int stride,
- const uint8_t nnzc[5 * 8]);
- void (*h264_idct8_add4)(uint8_t *dst /*align 16*/, const int *blockoffset,
- int16_t *block /*align 16*/, int stride,
- const uint8_t nnzc[5 * 8]);
- void (*h264_idct_add8)(uint8_t **dst /*align 16*/, const int *blockoffset,
- int16_t *block /*align 16*/, int stride,
- const uint8_t nnzc[15 * 8]);
- void (*h264_idct_add16intra)(uint8_t *dst /*align 16*/, const int *blockoffset,
- int16_t *block /*align 16*/,
- int stride, const uint8_t nnzc[5 * 8]);
- void (*h264_luma_dc_dequant_idct)(int16_t *output,
- int16_t *input /*align 16*/, int qmul);
- void (*h264_chroma_dc_dequant_idct)(int16_t *block, int qmul);
-
- /* bypass-transform */
- void (*h264_add_pixels8_clear)(uint8_t *dst, int16_t *block, int stride);
- void (*h264_add_pixels4_clear)(uint8_t *dst, int16_t *block, int stride);
-
- /**
- * Search buf from the start for up to size bytes. Return the index
- * of a zero byte, or >= size if not found. Ideally, use lookahead
- * to filter out any zero bytes that are known to not be followed by
- * one or more further zero bytes and a one byte. Better still, filter
- * out any bytes that form the trailing_zero_8bits syntax element too.
- */
- int (*startcode_find_candidate)(const uint8_t *buf, int size);
-} H264DSPContext;
-
-void ff_h264dsp_init(H264DSPContext *c, const int bit_depth,
- const int chroma_format_idc);
-void ff_h264dsp_init_aarch64(H264DSPContext *c, const int bit_depth,
- const int chroma_format_idc);
-void ff_h264dsp_init_arm(H264DSPContext *c, const int bit_depth,
- const int chroma_format_idc);
-void ff_h264dsp_init_ppc(H264DSPContext *c, const int bit_depth,
- const int chroma_format_idc);
-void ff_h264dsp_init_x86(H264DSPContext *c, const int bit_depth,
- const int chroma_format_idc);
-void ff_h264dsp_init_mips(H264DSPContext *c, const int bit_depth,
- const int chroma_format_idc);
-void ff_h264dsp_init_loongarch(H264DSPContext *c, const int bit_depth,
- const int chroma_format_idc);
-
-#endif /* AVCODEC_H264DSP_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpeg2000.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpeg2000.h
deleted file mode 100644
index d004c08f104b4e6c59fb64685f8633e0f94622c1..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpeg2000.h
+++ /dev/null
@@ -1,311 +0,0 @@
-/*
- * JPEG 2000 common defines, structures and functions
- * Copyright (c) 2007 Kamil Nowosad
- * Copyright (c) 2013 Nicolas Bertrand
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_JPEG2000_H
-#define AVCODEC_JPEG2000_H
-
-/**
- * @file
- * JPEG 2000 structures and defines common
- * to encoder and decoder
- */
-
-#include <stdint.h>
-
-#include "avcodec.h"
-#include "mqc.h"
-#include "jpeg2000dwt.h"
-
-enum Jpeg2000Markers {
- JPEG2000_SOC = 0xff4f, // start of codestream
- JPEG2000_SIZ = 0xff51, // image and tile size
- JPEG2000_COD, // coding style default
- JPEG2000_COC, // coding style component
- JPEG2000_TLM = 0xff55, // tile-part length, main header
- JPEG2000_PLM = 0xff57, // packet length, main header
- JPEG2000_PLT, // packet length, tile-part header
- JPEG2000_QCD = 0xff5c, // quantization default
- JPEG2000_QCC, // quantization component
- JPEG2000_RGN, // region of interest
- JPEG2000_POC, // progression order change
- JPEG2000_PPM, // packed packet headers, main header
- JPEG2000_PPT, // packed packet headers, tile-part header
- JPEG2000_CRG = 0xff63, // component registration
- JPEG2000_COM, // comment
- JPEG2000_SOT = 0xff90, // start of tile-part
- JPEG2000_SOP, // start of packet
- JPEG2000_EPH, // end of packet header
- JPEG2000_SOD, // start of data
- JPEG2000_EOC = 0xffd9, // end of codestream
-};
-
-#define JPEG2000_SOP_FIXED_BYTES 0xFF910004
-#define JPEG2000_SOP_BYTE_LENGTH 6
-
-enum Jpeg2000Quantsty { // quantization style
- JPEG2000_QSTY_NONE, // no quantization
- JPEG2000_QSTY_SI, // scalar derived
- JPEG2000_QSTY_SE // scalar expounded
-};
-
-#define JPEG2000_MAX_DECLEVELS 33
-#define JPEG2000_MAX_RESLEVELS (JPEG2000_MAX_DECLEVELS + 1)
-
-#define JPEG2000_MAX_PASSES 100
-
-// T1 flags
-// flags determining significance of neighbor coefficients
-#define JPEG2000_T1_SIG_N 0x0001
-#define JPEG2000_T1_SIG_E 0x0002
-#define JPEG2000_T1_SIG_W 0x0004
-#define JPEG2000_T1_SIG_S 0x0008
-#define JPEG2000_T1_SIG_NE 0x0010
-#define JPEG2000_T1_SIG_NW 0x0020
-#define JPEG2000_T1_SIG_SE 0x0040
-#define JPEG2000_T1_SIG_SW 0x0080
-#define JPEG2000_T1_SIG_NB (JPEG2000_T1_SIG_N | JPEG2000_T1_SIG_E | \
- JPEG2000_T1_SIG_S | JPEG2000_T1_SIG_W | \
- JPEG2000_T1_SIG_NE | JPEG2000_T1_SIG_NW | \
- JPEG2000_T1_SIG_SE | JPEG2000_T1_SIG_SW)
-// flags determining sign bit of neighbor coefficients
-#define JPEG2000_T1_SGN_N 0x0100
-#define JPEG2000_T1_SGN_S 0x0200
-#define JPEG2000_T1_SGN_W 0x0400
-#define JPEG2000_T1_SGN_E 0x0800
-
-#define JPEG2000_T1_VIS 0x1000
-#define JPEG2000_T1_SIG 0x2000
-#define JPEG2000_T1_REF 0x4000
-
-#define JPEG2000_T1_SGN 0x8000
-
-// Codeblock coding styles
-#define JPEG2000_CBLK_BYPASS 0x01 // Selective arithmetic coding bypass
-#define JPEG2000_CBLK_RESET 0x02 // Reset context probabilities
-#define JPEG2000_CBLK_TERMALL 0x04 // Terminate after each coding pass
-#define JPEG2000_CBLK_VSC 0x08 // Vertical stripe causal context formation
-#define JPEG2000_CBLK_PREDTERM 0x10 // Predictable termination
-#define JPEG2000_CBLK_SEGSYM 0x20 // Segmentation symbols present
-
-// Coding styles
-#define JPEG2000_CSTY_PREC 0x01 // Precincts defined in coding style
-#define JPEG2000_CSTY_SOP 0x02 // SOP marker present
-#define JPEG2000_CSTY_EPH 0x04 // EPH marker present
-#define JPEG2000_CTSY_HTJ2K_F 0x40 // Only HT code-blocks (Rec. ITU-T T.814 | ISO/IEC 15444-15) are present
-#define JPEG2000_CTSY_HTJ2K_M 0xC0 // HT code-blocks (Rec. ITU-T T.814 | ISO/IEC 15444-15) can be present
-
-// Progression orders
-#define JPEG2000_PGOD_LRCP 0x00 // Layer-resolution level-component-position progression
-#define JPEG2000_PGOD_RLCP 0x01 // Resolution level-layer-component-position progression
-#define JPEG2000_PGOD_RPCL 0x02 // Resolution level-position-component-layer progression
-#define JPEG2000_PGOD_PCRL 0x03 // Position-component-resolution level-layer progression
-#define JPEG2000_PGOD_CPRL 0x04 // Component-position-resolution level-layer progression
-
-typedef struct Jpeg2000T1Context {
- int data[6144];
- uint16_t flags[6156];
- MqcState mqc;
- int stride;
-} Jpeg2000T1Context;
-
-typedef struct Jpeg2000TgtNode {
- uint8_t val;
- uint8_t temp_val;
- uint8_t vis;
- struct Jpeg2000TgtNode *parent;
-} Jpeg2000TgtNode;
-
-typedef struct Jpeg2000CodingStyle {
- int nreslevels; // number of resolution levels
- int nreslevels2decode; // number of resolution levels to decode
- uint8_t log2_cblk_width,
- log2_cblk_height; // exponent of codeblock size
- uint8_t transform; // DWT type
- uint8_t csty; // coding style
- uint8_t nlayers; // number of layers
- uint8_t mct; // multiple component transformation
- uint8_t cblk_style; // codeblock coding style
- uint8_t prog_order; // progression order
- uint8_t log2_prec_widths[JPEG2000_MAX_RESLEVELS]; // precincts size according resolution levels
- uint8_t log2_prec_heights[JPEG2000_MAX_RESLEVELS]; // TODO: initialize prec_size array with 0?
- uint8_t init;
-} Jpeg2000CodingStyle;
-
-typedef struct Jpeg2000QuantStyle {
- uint8_t expn[JPEG2000_MAX_DECLEVELS * 3]; // quantization exponent
- uint16_t mant[JPEG2000_MAX_DECLEVELS * 3]; // quantization mantissa
- uint8_t quantsty; // quantization style
- uint8_t nguardbits; // number of guard bits
-} Jpeg2000QuantStyle;
-
-typedef struct Jpeg2000Pass {
- uint16_t rate;
- int64_t disto;
- uint8_t flushed[4];
- int flushed_len;
-} Jpeg2000Pass;
-
-typedef struct Jpeg2000Layer {
- uint8_t *data_start;
- int data_len;
- int npasses;
- double disto;
- int cum_passes;
-} Jpeg2000Layer;
-
-typedef struct Jpeg2000Cblk {
- uint8_t npasses;
- uint8_t ninclpasses; // number coding of passes included in codestream
- uint8_t nonzerobits;
- uint8_t incl;
- uint16_t length;
- uint16_t *lengthinc;
- uint8_t nb_lengthinc;
- uint8_t lblock;
- uint8_t *data;
- size_t data_allocated;
- int nb_terminations;
- int nb_terminationsinc;
- int *data_start;
- Jpeg2000Pass *passes;
- Jpeg2000Layer *layers;
- int coord[2][2]; // border coordinates {{x0, x1}, {y0, y1}}
- /* specific to HT code-blocks */
- int zbp;
- int pass_lengths[2];
-} Jpeg2000Cblk; // code block
-
-typedef struct Jpeg2000Prec {
- int nb_codeblocks_width;
- int nb_codeblocks_height;
- Jpeg2000TgtNode *zerobits;
- Jpeg2000TgtNode *cblkincl;
- Jpeg2000Cblk *cblk;
- int decoded_layers;
- int coord[2][2]; // border coordinates {{x0, x1}, {y0, y1}}
-} Jpeg2000Prec; // precinct
-
-typedef struct Jpeg2000Band {
- int coord[2][2]; // border coordinates {{x0, x1}, {y0, y1}}
- uint16_t log2_cblk_width, log2_cblk_height;
- int i_stepsize; // quantization stepsize
- float f_stepsize; // quantization stepsize
- Jpeg2000Prec *prec;
-} Jpeg2000Band; // subband
-
-typedef struct Jpeg2000ResLevel {
- uint8_t nbands;
- int coord[2][2]; // border coordinates {{x0, x1}, {y0, y1}}
- int num_precincts_x, num_precincts_y; // number of precincts in x/y direction
- uint8_t log2_prec_width, log2_prec_height; // exponent of precinct size
- Jpeg2000Band *band;
-} Jpeg2000ResLevel; // resolution level
-
-typedef struct Jpeg2000Component {
- Jpeg2000ResLevel *reslevel;
- DWTContext dwt;
- float *f_data;
- int *i_data;
- int coord[2][2]; // border coordinates {{x0, x1}, {y0, y1}} -- can be reduced with lowres option
- int coord_o[2][2]; // border coordinates {{x0, x1}, {y0, y1}} -- original values from jpeg2000 headers
- uint8_t roi_shift; // ROI scaling value for the component
-} Jpeg2000Component;
-
-/* misc tools */
-static inline int ff_jpeg2000_ceildivpow2(int a, int b)
-{
- return -((-(int64_t)a) >> b);
-}
-
-static inline int ff_jpeg2000_ceildiv(int a, int64_t b)
-{
- return (a + b - 1) / b;
-}
-
-/* TIER-1 routines */
-
-/* Set up lookup tables used in TIER-1. */
-void ff_jpeg2000_init_tier1_luts(void);
-
-/* Update significance of a coefficient at current position (x,y) and
- * for neighbors. */
-void ff_jpeg2000_set_significance(Jpeg2000T1Context *t1,
- int x, int y, int negative);
-
-extern uint8_t ff_jpeg2000_sigctxno_lut[256][4];
-
-/* Get context label (number in range[0..8]) of a coefficient for significance
- * propagation and cleanup coding passes. */
-static inline int ff_jpeg2000_getsigctxno(int flag, int bandno)
-{
- return ff_jpeg2000_sigctxno_lut[flag & 255][bandno];
-}
-
-static const uint8_t refctxno_lut[2][2] = { { 14, 15 }, { 16, 16 } };
-
-/* Get context label (number in range[14..16]) of a coefficient for magnitude
- * refinement pass. */
-static inline int ff_jpeg2000_getrefctxno(int flag)
-{
- return refctxno_lut[(flag >> 14) & 1][(flag & 255) != 0];
-}
-
-extern uint8_t ff_jpeg2000_sgnctxno_lut[16][16];
-extern uint8_t ff_jpeg2000_xorbit_lut[16][16];
-
-/* Get context label (number in range[9..13]) for sign decoding. */
-static inline int ff_jpeg2000_getsgnctxno(int flag, int *xorbit)
-{
- *xorbit = ff_jpeg2000_xorbit_lut[flag & 15][(flag >> 8) & 15];
- return ff_jpeg2000_sgnctxno_lut[flag & 15][(flag >> 8) & 15];
-}
-
-int ff_jpeg2000_init_component(Jpeg2000Component *comp,
- Jpeg2000CodingStyle *codsty,
- Jpeg2000QuantStyle *qntsty,
- int cbps, int dx, int dy,
- AVCodecContext *ctx);
-
-void ff_jpeg2000_reinit(Jpeg2000Component *comp, Jpeg2000CodingStyle *codsty);
-
-void ff_jpeg2000_cleanup(Jpeg2000Component *comp, Jpeg2000CodingStyle *codsty);
-
-static inline int needs_termination(int style, int passno) {
- if (style & JPEG2000_CBLK_BYPASS) {
- int type = passno % 3;
- passno /= 3;
- if (type == 0 && passno > 2)
- return 2;
- if (type == 2 && passno > 2)
- return 1;
- if (style & JPEG2000_CBLK_TERMALL) {
- return passno > 2 ? 2 : 1;
- }
- }
- if (style & JPEG2000_CBLK_TERMALL)
- return 1;
- return 0;
-}
-
-void ff_tag_tree_zero(Jpeg2000TgtNode *t, int w, int h, int val);
-
-#endif /* AVCODEC_JPEG2000_H */
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download DC Heroes RPG (3rd Edition) PDF for Free - Experience the Thrill of Superhero Adventures.md b/spaces/congsaPfin/Manga-OCR/logs/Download DC Heroes RPG (3rd Edition) PDF for Free - Experience the Thrill of Superhero Adventures.md
deleted file mode 100644
index ac33432f4aeb88f7370cba70497599aedd8a0404..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download DC Heroes RPG (3rd Edition) PDF for Free - Experience the Thrill of Superhero Adventures.md
+++ /dev/null
@@ -1,170 +0,0 @@
-
-
DC Heroes RPG (3rd Edition PDF Download): A Guide for Superhero Fans
-
If you are a fan of superheroes and comics, you might have heard of DC Heroes RPG, a role-playing game based on the DC Universe. This game allows you to create and play as your favorite heroes or villains, or even your own original characters, in a variety of scenarios and adventures. Whether you want to fight crime, save the world, or conquer it, DC Heroes RPG gives you the opportunity to do so.
In this article, we will give you a comprehensive guide on DC Heroes RPG, especially the 3rd edition, which is considered by many fans as the best version of the game. We will cover the following topics:
-
-
What is DC Heroes RPG?
-
How to play DC Heroes RPG?
-
Where to find DC Heroes RPG (3rd edition PDF download)?
-
Why should you play DC Heroes RPG?
-
-
By the end of this article, you will have a better understanding of DC Heroes RPG and how to enjoy it. You will also find some useful links and resources to help you get started with the game. So, let's begin!
-
What is DC Heroes RPG?
-
DC Heroes RPG is a role-playing game that lets you create and play as characters from the DC Universe, or your own original characters, in a variety of settings and genres. You can choose from hundreds of heroes and villains, such as Superman, Batman, Wonder Woman, Joker, Lex Luthor, Darkseid, and more. You can also create your own characters using a flexible and customizable system that allows you to define their attributes, skills, powers, advantages, drawbacks, motivations, and backgrounds.
-
A brief history of the game
-
DC Heroes RPG was first published in 1985 by Mayfair Games under license from DC Comics. It was designed by Greg Gorden and used a system called MEGS (Mayfair Exponential Game System), which was based on exponential scales and logarithmic tables. The game was well-received by fans and critics alike, and won several awards and nominations.
-
The game went through several revisions and expansions over the years, including new rules, supplements, modules, sourcebooks, and character guides. The most notable ones were the 2nd edition in 1989, which introduced some changes and clarifications to the rules, and the 3rd edition in 1993, which was a major overhaul of the system and the content. The 3rd edition was also the last official edition of the game published by Mayfair Games before they lost their license from DC Comics in 1996.
-
The main features of the 3rd edition
-
The 3rd edition of DC Heroes RPG is widely regarded by fans as the best version of the game. It features several improvements and innovations over the previous editions, such as:
-
-
-
A revised and streamlined MEGS system that uses only d10s instead of d100s, simplifies some calculations and conversions, adds some optional rules and modifiers, and updates some powers and abilities.
-
A more comprehensive and detailed character creation and advancement system that allows more flexibility and customization for players. It also includes new powers, advantages, drawbacks, motivations, backgrounds, skills, subskills, specialties, hero points, wealth points, fame points, karma points, influence points, contacts points, equipment points, and more.
-
A more realistic and consistent power level system that balances the characters and the challenges they face. It also includes new power levels, such as Cosmic, Godlike, and Infinite, to accommodate the most powerful beings in the DC Universe.
-
A more diverse and rich setting and genre system that covers different eras, worlds, dimensions, and themes of the DC Universe. It also includes new settings, such as Elseworlds, Hypertime, and Kingdom Come, to explore alternative and possible scenarios and stories.
-
A more extensive and updated content and material system that includes more characters, events, organizations, locations, items, vehicles, weapons, gadgets, and more from the DC Universe. It also includes new content, such as the Death of Superman, the Knightfall Saga, the Zero Hour Crisis, and more.
-
-
The 3rd edition of DC Heroes RPG is a comprehensive and versatile game that can cater to any superhero fan's preferences and tastes. It is a game that can be played in any way you want, from a simple and fun adventure to a complex and epic saga.
-
How to play DC Heroes RPG?
-
Playing DC Heroes RPG is easy and fun. All you need are some dice (preferably d10s), some paper and pencils, some character sheets (which you can find online or create yourself), some rulebooks (which you can also find online or buy from official or unofficial sources), and some imagination and creativity.
-
The game is played by a group of players, usually between two to six. One of the players is the Gamemaster (GM), who is responsible for creating and running the scenarios and adventures for the other players. The other players are the Heroes (or Villains), who create and play as their own characters in the game.
-
The game is divided into sessions, which are usually between two to four hours long. Each session consists of a series of scenes, which are events or situations that the characters encounter or participate in. Each scene can involve different types of actions, such as dialogue, investigation, exploration, combat, skill use, power use, etc.
-
The game uses a system called MEGS (Mayfair Exponential Game System), which is based on exponential scales and logarithmic tables. The system uses four main attributes for each character: Physical (P), Mental (M), Mystical (My), and Spiritual (S). Each attribute has a value that ranges from 1 to 30 (or higher for some characters), which represents the character's relative power level in that attribute. For example, a normal human has an average value of 2 in each attribute, while Superman has a value of 25 in Physical.
-
The basic rules and mechanics
-
The basic rules and mechanics of DC Heroes RPG are simple and intuitive. They involve the following steps:
-
-
The GM sets the scene and describes what is happening to the players.
-
The players declare what their characters want to do in response to the scene.
-
The GM determines if their actions are possible or not, and if they require a roll or not.
-
If a roll is required, the player rolls a d10 and adds their attribute value to it. This is called their Action Value (AV).
-
The GM rolls a d10 and adds the difficulty value or the opposing attribute value to it. This is called their Opposing Value (OV).
-
The player compares their AV with the OV using a logarithmic table called the Universal Table (UT). The UT shows the result of the comparison in terms of RAPs (Result Points). RAPs indicate how well or how poorly the action succeeded or failed.
-
The GM applies the RAPs to the action's effect or outcome. For example, if the action was an attack, the RAPs are applied as damage to the target's Body attribute. If the action was a skill use, the RAPs are applied as information gained or task completed.
-
The GM narrates what happens next based on the result of the action.
-
-
These steps are repeated until the scene is resolved or until a new scene begins.
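To make the loop above concrete, here is a minimal Python sketch of a single check as the steps describe it: the player's d10 plus attribute forms the Action Value total, the GM's d10 plus difficulty forms the Opposing Value total, and the comparison yields RAPs. The real game reads RAPs off the published Universal Table; the margin-per-column rule below (and the `resolve_action` name itself) is only an illustrative stand-in, not the actual table.

```python
import random

def resolve_action(acting_value, opposing_value, column_step=2):
    """One simplified MEGS-style check: (d10 + AV) versus (d10 + OV).

    The real game converts the comparison into RAPs with the Universal Table;
    here the winning margin is just divided into 'columns' of `column_step`.
    Returns the number of RAPs scored, with 0 meaning the action failed.
    """
    acting_total = random.randint(1, 10) + acting_value      # player's roll
    opposing_total = random.randint(1, 10) + opposing_value  # GM's roll or difficulty
    margin = acting_total - opposing_total
    if margin <= 0:
        return 0
    return 1 + margin // column_step

if __name__ == "__main__":
    # Example: a hero with an effective AV of 8 attempts a task with OV 5.
    print(resolve_action(acting_value=8, opposing_value=5), "RAPs")
```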
-
The character creation and advancement
-
The character creation and advancement system of DC Heroes RPG is flexible and customizable. It allows players to create any type of character they want, from existing heroes or villains to original ones, and to improve their characters over time by gaining experience points (XP) and spending them on new or improved attributes, skills, powers, advantages, drawbacks, motivations, backgrounds, and more. Character creation follows these steps:
-
The player chooses a concept and a name for their character. The concept is a brief description of the character's identity, personality, origin, and role in the game. The name is the character's superhero or supervillain name, or their real name if they don't have one.
-
The player allocates points to their character's four main attributes: Physical (P), Mental (M), Mystical (My), and Spiritual (S). Each attribute has a base value of 2, and the player can increase or decrease it by spending or gaining points. The total number of points available depends on the power level of the game, which is determined by the GM. For example, a normal human character has 20 points to spend, while a cosmic-level character has 1000 (see the sketch after this list).
-
The player chooses skills, subskills, specialties, powers, advantages, drawbacks, motivations, backgrounds, and other traits for their character. Each trait has a value that represents its relative power or effect in the game. The player can buy or sell traits by spending or gaining points. The total number of points available depends on the power level of the game and the attribute values of the character. For example, a normal human character can have up to 20 points worth of skills and powers, while a cosmic-level character can have up to 1000 points worth.
-
The player calculates their character's secondary attributes and derived values. These include Body (B), Mind (M), Soul (S), Initiative (I), Hero Points (HP), Wealth Points (WP), Fame Points (FP), Karma Points (KP), Influence Points (IP), Contacts Points (CP), Equipment Points (EP), and more. They are derived from the character's main attributes and other traits using formulas or tables.
-
The player fills out their character sheet with all the relevant information and details about their character. They can also add notes, descriptions, illustrations, quotes, or anything else that helps flesh out the character.
-
The advancement system lets players improve their characters over time by gaining experience points (XP) and spending them on new or improved traits. XP are awarded by the GM at the end of each session based on the character's performance, role-playing, achievements, challenges, and goals. XP can be spent on any trait that the player wants to increase or acquire, as long as it is consistent with the character's concept and background. XP can also be saved for later use or exchanged for other types of points.
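As a toy illustration of the point budgets mentioned above, the following sketch checks whether a character fits within the budget for its power level. The `Character` class, the "Nightglider" example, the trait names, and the one-point-per-step costing are all invented for illustration; the actual game prices attributes and traits with its own cost tables.

```python
from dataclasses import dataclass, field

# Budgets taken from the examples in the text: 20 points for a normal-human
# game, 1000 points for a cosmic-level game.
POWER_LEVEL_BUDGETS = {"normal human": 20, "cosmic": 1000}

@dataclass
class Character:
    name: str
    power_level: str
    # The four main attributes all start at the base value of 2.
    attributes: dict = field(default_factory=lambda: {
        "Physical": 2, "Mental": 2, "Mystical": 2, "Spiritual": 2})
    traits: dict = field(default_factory=dict)  # skills, powers, advantages...

    def points_spent(self) -> int:
        # Assume one point per step above the base attribute value and one
        # point per trait level; the real game uses its own cost tables.
        attribute_cost = sum(value - 2 for value in self.attributes.values())
        return attribute_cost + sum(self.traits.values())

    def within_budget(self) -> bool:
        return self.points_spent() <= POWER_LEVEL_BUDGETS[self.power_level]

hero = Character(name="Nightglider", power_level="normal human")
hero.attributes["Physical"] = 6   # 4 points above the base of 2
hero.traits["Acrobatics"] = 5     # 5 points
hero.traits["Grapple Gun"] = 3    # 3 points
print(hero.points_spent(), hero.within_budget())  # -> 12 True
```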
The combat and action resolution
-
The combat and action resolution system of DC Heroes RPG is based on the same basic rules and mechanics as described above. However, there are some additional rules and modifiers that apply specifically to combat and action situations. These include:
-
-
The initiative order, which determines who acts first in each round of combat or action. The initiative order is based on the Initiative value of each character involved in the scene.
-
The attack action, which is any action that involves harming or affecting another character or object. The attack action requires an AV/OV roll using the attacker's attribute or skill value as AV and the defender's attribute or skill value as OV. The result of the roll is applied as RAPs to the target's Body attribute or another relevant attribute.
-
The defense action, which is any action that involves avoiding or resisting an attack action. The defense action requires an AV/OV roll using the defender's attribute or skill value as AV and the attacker's attribute or skill value as OV. The result of the roll is subtracted from the RAPs inflicted by the attack action.
-
The damage types, which are different categories of harm or effects that can be inflicted by an attack action. The damage types are Physical (P), Mental (M), Mystical (M), Spiritual (S), Killing (K), Bashing (B), Energy (E), Magic (Mg), Mental Blast (MB), Power Drain (PD), Power Transfer (PT), Power Reserve (PR), Power Loss (PL), Power Boost (PB), Illusion (I), Mind Control (MC), Emotion Control (EC), Fear Control (FC), Persuasion Control (PC), and more. Each damage type has a different effect on the target's attributes and abilities, and some characters may have resistance or vulnerability to certain damage types.
-
The modifiers, which are factors that can affect the outcome of an action or a roll. The modifiers can be positive or negative, and can be applied to the AV, the OV, or the RAPs. Some examples of modifiers are distance, cover, visibility, surprise, teamwork, critical hit, critical miss, hero points, karma points, and more.
-
-
The combat and action resolution system of DC Heroes RPG is designed to be fast and dynamic, while also being realistic and consistent. It allows for a wide range of possibilities and outcomes, from simple punches and kicks to spectacular feats and stunts. It also encourages creativity and strategy from both the players and the GM.
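As a rough sketch of one attack exchange described in the list above, the snippet below rolls the attack, lets a defense roll subtract RAPs, and applies what is left as damage to the target's Body. The `check` and `attack_round` helpers, the two-points-per-column conversion, and all of the numbers are assumptions made for illustration rather than the game's actual tables.

```python
import random

def check(av, ov):
    """Same simplified AV/OV roll as the earlier sketch: the margin of
    (d10 + AV) over (d10 + OV), converted to RAPs at two points per column."""
    margin = (random.randint(1, 10) + av) - (random.randint(1, 10) + ov)
    return 0 if margin <= 0 else 1 + margin // 2

def attack_round(attacker_av, defender_ov, defender_av, attacker_ov,
                 target_body, modifier=0):
    """One exchange: attack RAPs, reduced by a defense roll, applied to Body.

    `modifier` stands in for situational adjustments (cover, surprise, Hero
    Points, ...) and is applied here to the attacker's AV.
    """
    raps = check(attacker_av + modifier, defender_ov)           # attack roll
    if raps:
        raps = max(0, raps - check(defender_av, attacker_ov))   # defense subtracts RAPs
    return max(0, target_body - raps)                           # Body left after the hit

# All numbers below are made up purely for illustration.
print(attack_round(attacker_av=9, defender_ov=6, defender_av=5,
                   attacker_ov=9, target_body=8, modifier=1), "Body remaining")
```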
-
Where to find DC Heroes RPG (3rd edition PDF download)?
-
If you are interested in playing DC Heroes RPG (3rd edition), you might be wondering where you can find the PDF files of the rulebooks and other materials. Unfortunately, the game is no longer in print or officially supported by DC Comics or Mayfair Games. However, there are still some ways to get your hands on the game, such as:
-
The official sources and publishers
-
The official sources and publishers of DC Heroes RPG (3rd edition) are DC Comics and Mayfair Games. They are the ones who own the rights and licenses to the game and its content. However, they have not published or reprinted the game since 1996, when they lost their license agreement. Therefore, finding physical copies of the game from these sources is very difficult and expensive.
-
However, there is a possibility that DC Comics or Mayfair Games might re-release or re-license the game in the future, either in digital or physical format. This could happen if there is enough demand and interest from the fans and the market. Therefore, you can try contacting them directly or through their websites or social media platforms to express your support and request for the game.
-
The fan-made and unofficial sources
-
The fan-made and unofficial sources of DC Heroes RPG (3rd edition) are various websites, blogs, forums, groups, and individuals who have created or shared their own versions or copies of the game and its content. These sources are not authorized or endorsed by DC Comics or Mayfair Games, but they are made by fans for fans. Therefore, finding PDF files of the game from these sources is relatively easy and cheap.
-
However, there are some risks and drawbacks involved in using these sources. First of all, the quality and accuracy of the PDF files may vary depending on the source. Some files may be incomplete, outdated, corrupted, or infected with viruses or malware. Second of all, the legality and ethics of using these files may be questionable depending on the source. Some files may be pirated, stolen, or infringing on the rights and licenses of DC Comics or Mayfair Games. Therefore, you should be careful and responsible when using these sources.
-
The legal and ethical issues
-
The legal and ethical issues of finding DC Heroes RPG (3rd edition PDF download) are complex and controversial. On one hand, you might argue that downloading or sharing PDF files of the game is fair use or fair dealing under certain circumstances. For example, if you already own a physical copy of the game but want a digital backup for convenience or preservation purposes. Or if you want to use the game for personal or educational purposes only.
-
On the other hand, you might argue that downloading or sharing PDF files of the game is illegal or unethical under certain circumstances. For example, if you do not own a physical copy of the game but want to avoid paying for it. Or if you want to use the game for commercial or malicious purposes.
-
Ultimately, the decision is up to you as an individual player or GM. You should be aware of the consequences and implications of your actions, and respect the rights and wishes of DC Comics and Mayfair Games as the original creators and publishers of the game. You should also be respectful and supportive of the fan community and the fan-made content that they produce and share. You should also be grateful and appreciative of the game and its content, and enjoy it as much as you can.
-
Why should you play DC Heroes RPG?
-
Now that you know what DC Heroes RPG is, how to play it, and where to find it, you might be wondering why you should play it. What makes this game so special and appealing? What are the benefits and advantages of playing it? What are the challenges and drawbacks of playing it? What are the tips and tricks for a better gaming experience?
-
Here are some possible answers to these questions:
-
The benefits and advantages of the game
-
DC Heroes RPG has many benefits and advantages that make it a great game for superhero fans. Some of them are:
-
-
It is a game that allows you to create and play as any character you want, from existing heroes or villains to original ones. You can also mix and match different traits, powers, skills, backgrounds, and more to create your own unique characters.
-
It is a game that allows you to explore and experience the DC Universe in any way you want, from different eras, worlds, dimensions, and themes. You can also create your own scenarios and stories using the rich and diverse content and material available in the game.
-
It is a game that uses a simple and intuitive system that is easy to learn and play. It also uses a flexible and customizable system that can be adapted and modified to suit your preferences and tastes.
-
It is a game that is fun and exciting, as it involves a lot of action, adventure, drama, humor, suspense, mystery, romance, and more. It also involves a lot of creativity, imagination, strategy, teamwork, role-playing, and problem-solving.
-
It is a game that is educational and informative, as it teaches you about the history, culture, science, mythology, philosophy, psychology, sociology, politics, ethics, morality, and more of the DC Universe and the real world.
-
-
The challenges and drawbacks of the game
-
DC Heroes RPG also has some challenges and drawbacks that make it a difficult or problematic game for some players. Some of them are:
-
-
It is a game that requires a lot of preparation and research, as it involves a lot of rules, details, facts, figures, references, sources, and more. It also requires a lot of resources and materials, such as dice, paper, pencils, character sheets, rulebooks, and more. It also requires a lot of time and commitment, as it involves a lot of sessions, scenes, actions, and rolls.
-
It is a game that can be confusing and overwhelming, as it involves a lot of exponential scales, logarithmic tables, calculations, conversions, modifiers, and more. It also involves a lot of inconsistencies and contradictions, as it covers a lot of different versions, editions, revisions, expansions, and more of the DC Universe and the game itself.
-
It is a game that can be frustrating and disappointing, as it involves a lot of luck, chance, randomness, and unpredictability. It also involves a lot of imbalance and unfairness, as it covers a lot of different power levels, abilities, challenges, and outcomes.
-
It is a game that can be boring and repetitive, as it involves a lot of similar or identical actions, rolls, results, and effects. It also involves a lot of limitations and restrictions, as it covers a lot of rules, guidelines, conventions, and expectations.
-
It is a game that can be controversial and problematic, as it involves a lot of sensitive or offensive topics, such as violence, death, crime, war, politics, religion, morality, ethics, and more. It also involves a lot of legal or ethical issues, such as piracy, plagiarism, infringement, and more.
-
-
The tips and tricks for a better gaming experience
-
DC Heroes RPG can be a great game for superhero fans if played properly and wisely. Here are some tips and tricks that can help you have a better gaming experience:
-
-
Be flexible and adaptable. Don't be afraid to change or modify the rules or the content to suit your needs or preferences. Don't be afraid to try new or different things to spice up your game. Don't be afraid to make mistakes or learn from them.
-
Be creative and imaginative. Use your own ideas or inspirations to create or enhance your characters or scenarios. Use your own style or voice to narrate or role-play your actions or reactions. Use your own logic or intuition to solve or overcome your problems or challenges.
-
Be cooperative and respectful. Work with your GM and your fellow players to create or enjoy a shared story or adventure. Listen to their suggestions or feedback and give them yours. Respect their decisions or opinions and expect them to respect yours.
-
Be fun and exciting. Have fun with your game and make it fun for others. Be enthusiastic and passionate about your game and show it to others. Be adventurous and daring with your game and challenge yourself and others.
-
Be loyal and supportive. Support the game and its creators by buying or promoting their products or services. Support the fan community by joining or contributing to their websites or groups. Support the genre by reading or watching more superhero comics or movies.
-
-
Conclusion
-
DC Heroes RPG (3rd edition) is a role-playing game that lets you create and play as characters from the DC Universe, or your own original characters, in a variety of settings and genres. It is simple and intuitive, flexible and customizable, fun and exciting, educational and informative, and backed by a loyal and supportive fan community: a game that can cater to any superhero fan's preferences and tastes.
-
If you are interested in playing DC Heroes RPG (3rd edition), you can find PDF files of the rulebooks and other materials from various sources, such as the official publishers, fan-made websites, or legal and ethical platforms. However, you should be careful and responsible when using these sources, and respect the rights and wishes of the game's original creators and publishers.
-
We hope that this article has given you a comprehensive guide to DC Heroes RPG (3rd edition) and that you have learned something new and useful about the game and its content. If you have any questions or comments, feel free to contact us or leave them below. Thank you for reading and happy gaming!
-
FAQs
-
Here are some frequently asked questions about DC Heroes RPG (3rd edition) and their answers:
-
-
Q: How many players can play DC Heroes RPG?
-
A: DC Heroes RPG can be played by a group of two to six players, with one of them being the Gamemaster (GM) and the rest being the Heroes (or Villains).
-
Q: How long does a session of DC Heroes RPG last?
-
A: A session of DC Heroes RPG can last between two to four hours, depending on the scenario and the pace of the game.
-
Q: What kind of dice do I need to play DC Heroes RPG?
-
A: DC Heroes RPG uses only d10s (ten-sided dice) for its rolls and calculations.
-
Q: Where can I find more information or resources about DC Heroes RPG?
-
A: You can find more information or resources about DC Heroes RPG from various sources, such as:
-
-
The official websites or social media platforms of DC Comics or Mayfair Games.
-
The fan-made websites or groups dedicated to DC Heroes RPG, such as [DC Heroes RPG], [DC Universe RPG], [DC Adventures], [DC Roleplaying], [DC Heroes Wiki], [DC Heroes Facebook Group], [DC Heroes Reddit Community], and more.
-
The legal and ethical platforms that offer PDF files of DC Heroes RPG, such as [DriveThruRPG], [RPGNow], [Amazon], [eBay], [Google Books], [Archive.org], [Scribd], [Library Genesis], and more.
-
-
Q: How can I support DC Heroes RPG and its creators?
-
A: You can support DC Heroes RPG and its creators by buying or promoting their products or services, joining or contributing to their websites or groups, reading or watching more superhero comics or movies, and playing or enjoying their game.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download bhop pro for PC and Mac The Ultimate Bunnyhopping and Surfing Simulator.md b/spaces/congsaPfin/Manga-OCR/logs/Download bhop pro for PC and Mac The Ultimate Bunnyhopping and Surfing Simulator.md
deleted file mode 100644
index bfa8b03c2c79d9a9c040006eb3575bbc992284d6..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download bhop pro for PC and Mac The Ultimate Bunnyhopping and Surfing Simulator.md
+++ /dev/null
@@ -1,138 +0,0 @@
-
-
How to Download and Play Bhop Pro on PC
-
If you are a fan of first-person shooter games, you might have heard of or tried Bhop Pro, a popular mobile game that lets you practice your bunny hopping skills in various maps and modes. But did you know that you can also download and play Bhop Pro on your PC for a more immersive and enjoyable gaming experience? In this article, we will show you how to do that with different methods and tools, as well as some tips and tricks to optimize your performance. Let's get started!
Bhop Pro is a simulation game developed by begma, inspired by the bunny hopping technique used by professional Counter-Strike players. Bunny hopping is a way of moving faster in FPS games by jumping continuously while turning left or right in the air. It requires precise timing, coordination, and practice to master.
-
In Bhop Pro, you can jump and bunny hop in FPS mode across various maps and modes. You can prove that you are really a bhop master with the scores and durations you will get. You can also get new rankings by doing parkour quests, or compete with other players online. The game also offers a variety of skins, knives, gloves, spinners, and cases to customize your character.
-
Bhop Pro is one of the most realistic bunny hop games for Android devices, with smooth graphics, realistic physics, and easy controls. It has over 10 million downloads and a 4.5-star rating on the Google Play Store, and it is also available for iOS devices.
-
-
Why play Bhop Pro on PC?
-
While Bhop Pro is designed for mobile devices, playing it on PC can offer several advantages. For example:
-
-
You can enjoy a bigger screen size and higher resolution for better visuals.
-
You can use a keyboard and mouse or a gamepad for more accurate and comfortable controls.
-
You can avoid battery drain, overheating, or lag issues that might affect your mobile device.
-
You can access more features and settings that might not be available on mobile devices.
-
-
So, how can you download and play Bhop Pro on PC? There are two main ways: using an Android emulator or using an APK file. Let's take a look at each method in detail.
-
How to Download and Play Bhop Pro on PC with BlueStacks
-
What is BlueStacks and how does it work?
-
BlueStacks is one of the most popular Android emulators for PC. It allows you to run any Android app or game on your Windows or Mac computer. It has a user-friendly interface, a high-performance engine, and a rich library of games and apps. You can also customize your keyboard and mouse settings, record and stream your gameplay, and access exclusive features like Eco Mode, Multi-Instance Manager, and Smart Controls.
-
Step-by-step guide to install and run Bhop Pro on PC with BlueStacks
-
Here are the steps to download and play Bhop Pro on PC with BlueStacks:
-
-
Download and install BlueStacks from its official website. It is free and safe to use.
-
Launch BlueStacks and sign in with your Google account. If you don't have one, you can create one for free.
-
Go to the search bar on the top-right corner and type "Bhop Pro". You will see the game icon in the search results.
-
Click on the game icon and then click on "Install" to download and install Bhop Pro on your PC.
-
Once the installation is complete, you can click on "Open" to launch Bhop Pro on your PC.
-
You can also find the game icon on the home screen or the app drawer of BlueStacks. You can double-click on it to run Bhop Pro anytime.
-
-
Tips and tricks to optimize your Bhop Pro experience on PC with BlueStacks
-
Here are some tips and tricks to make the most out of playing Bhop Pro on PC with BlueStacks:
-
-
You can adjust the graphics quality, resolution, FPS, and other settings of Bhop Pro from the in-game menu or from the BlueStacks settings menu.
-
You can use the default keyboard and mouse controls for Bhop Pro or customize them according to your preference. You can also use a gamepad if you have one connected to your PC.
-
You can enable Eco Mode to reduce CPU and RAM usage and improve your PC's performance while playing Bhop Pro. You can also use Multi-Instance Manager to run multiple instances of Bhop Pro or other games at the same time.
-
You can use Smart Controls to automatically switch between keyboard and mouse controls depending on the game scene. For example, you can use the mouse to aim and shoot in FPS mode, and use the keyboard to jump and move in bunny hop mode.
-
You can use the Screen Recorder feature to record your Bhop Pro gameplay and save it as a video file. You can also use the Streaming Mode feature to stream your Bhop Pro gameplay to Twitch, YouTube, Facebook, or other platforms.
-
-
How to Download and Play Bhop Pro on PC with Other Emulators
-
What are other emulators and how do they differ from BlueStacks?
-
BlueStacks is not the only Android emulator for PC. There are many other options that you can try, such as MuMu Player, NoxPlayer, LDPlayer, MEmu, etc. Each emulator has its own advantages and disadvantages, such as compatibility, performance, features, interface, etc. You can compare them and choose the one that suits your needs best.
-
In this article, we will show you how to download and play Bhop Pro on PC with two other popular emulators: MuMu Player and NoxPlayer. Both of them are free, easy to use, and support a wide range of games and apps. Let's see how they work.
-
Step-by-step guide to install and run Bhop Pro on PC with MuMu Player
-
MuMu Player is an Android emulator developed by NetEase, a Chinese gaming company. It has a simple interface, a fast engine, and a smooth gaming experience. It also supports keyboard and mouse controls, gamepad support, screen recording, multi-instance mode, etc. Here are the steps to download and play Bhop Pro on PC with MuMu Player:
-
-
Download and install MuMu Player from its official website. It is available in English and Chinese versions.
-
Launch MuMu Player and sign in with your Google account. If you don't have one, you can create one for free.
-
Go to the Google Play Store app on MuMu Player and search for "Bhop Pro". You will see the game icon in the search results.
-
Click on the game icon and then click on "Install" to download and install Bhop Pro on your PC.
-
Once the installation is complete, you can click on "Open" to launch Bhop Pro on your PC.
-
You can also find the game icon on the home screen or the app drawer of MuMu Player . You can double-click on it to run Bhop Pro anytime.
-
-
Step-by-step guide to install and run Bhop Pro on PC with NoxPlayer
-
NoxPlayer is another Android emulator that is widely used by gamers. It has a sleek interface, a powerful engine, and a stable gaming experience. It also supports keyboard and mouse controls, gamepad support, screen recording, multi-instance mode, macro recorder, etc. Here are the steps to download and play Bhop Pro on PC with NoxPlayer:
-
-
Download and install NoxPlayer from its official website. It is available in multiple languages.
-
Launch NoxPlayer and sign in with your Google account. If you don't have one, you can create one for free.
-
Go to the Google Play Store app on NoxPlayer and search for "Bhop Pro". You will see the game icon in the search results.
-
Click on the game icon and then click on "Install" to download and install Bhop Pro on your PC.
-
Once the installation is complete, you can click on "Open" to launch Bhop Pro on your PC.
-
You can also find the game icon on the home screen or the app drawer of NoxPlayer. You can double-click on it to run Bhop Pro anytime.
-
-
Conclusion
-
A summary of the main points and benefits of playing Bhop Pro on PC
-
Bhop Pro is a fun and challenging game that lets you practice your bunny hopping skills in various maps and modes. It is one of the most realistic and popular bunny hop games for mobile devices, but you can also download and play it on your PC for a better gaming experience. Playing Bhop Pro on PC can give you several advantages, such as a bigger screen size, more accurate and comfortable controls, improved performance, and more features and settings.
-
To download and play Bhop Pro on PC, you can use an Android emulator or an APK file. An Android emulator is a program that allows you to run any Android app or game on your PC. There are many Android emulators to choose from, such as BlueStacks, MuMu Player, NoxPlayer, etc. Each emulator has its own pros and cons, so you can compare them and pick the one that suits your needs best. In this article, we have shown you how to download and play Bhop Pro on PC with three of the most popular emulators: BlueStacks, MuMu Player, and NoxPlayer. We have also given you some tips and tricks to optimize your Bhop Pro experience on PC with each emulator.
-
A call to action to download and play Bhop Pro on PC today
-
If you are ready to take your bunny hopping skills to the next level, don't wait any longer. Download and play Bhop Pro on PC today and enjoy the thrill of jumping and shooting in FPS mode across various maps and modes. You can also customize your character with different skins, knives, gloves, spinners, and cases. You can also compete with other players online or do parkour quests to get new rankings. Bhop Pro is a game that will keep you entertained for hours.
-
So what are you waiting for? Download and play Bhop Pro on PC today and become a bhop master!
-
FAQs
-
What are the system requirements for playing Bhop Pro on PC?
-
The system requirements for playing Bhop Pro on PC may vary depending on the emulator that you use. However, here are some general guidelines that you can follow:
-
-
You need a Windows or Mac computer with at least 4 GB of RAM and 4 GB of free disk space.
-
You need a stable internet connection to download and play Bhop Pro online.
-
You need a Google account to access the Google Play Store and install Bhop Pro.
-
You may need a keyboard and mouse or a gamepad for better controls.
-
-
How can I customize the controls for Bhop Pro on PC?
-
You can customize the controls for Bhop Pro on PC from the emulator settings menu or from the in-game menu. You can assign different keys or buttons for different actions, such as jumping, moving, aiming, shooting, etc. You can also adjust the sensitivity, acceleration, inversion, etc. of your mouse or gamepad. You can save your custom controls as a preset or reset them to default anytime.
-
How can I record and share my Bhop Pro gameplay on PC?
-
You can record and share your Bhop Pro gameplay on PC using the screen recording or streaming features of the emulator that you use. For example, you can use the Screen Recorder feature of BlueStacks or MuMu Player to record your Bhop Pro gameplay and save it as a video file. You can also use the Streaming Mode feature of BlueStacks or the Live feature of NoxPlayer to stream your Bhop Pro gameplay to Twitch, YouTube, Facebook, or other platforms. You can also edit, trim, or add effects to your recorded or streamed videos using the emulator tools or other software.
-
How can I access new maps and skins for Bhop Pro on PC?
-
You can access new maps and skins for Bhop Pro on PC by updating the game regularly or by purchasing them from the in-game store. You can also earn coins by playing the game or watching ads, and use them to buy cases that contain random items. You can also trade items with other players online or offline. Some maps and skins are exclusive to certain modes or events, so you need to check them out frequently.
-
How can I compete with other players in Bhop Pro on PC?
-
You can compete with other players in Bhop Pro on PC by joining the online mode or creating your own room. You can also invite your friends or other players to join your room. You can choose from different maps and modes, such as speedrun, deathmatch, zombie escape, etc. You can also chat with other players using the in-game chat system or voice chat. You can also check your rank and stats on the leaderboard or the profile page.
-
-
Evertale: A Captivating Fantasy RPG for Mobile Devices
-
Are you looking for a new role-playing game to play on your phone or tablet? Do you love catching, training, and evolving monsters in a stunning fantasy world? If so, you might want to check out Evertale, a game that has often been compared to Pokémon because of its battle style. The game has solid writing, a lovely art style, and strategic combat, and it has been well received by strategy RPG enthusiasts and beginners alike, with over 5 million downloads from the Google Play Store alone. However, some players have complained that the game's advertisements misrepresent it as a horror-themed take on Pokémon, and others have been disappointed by the online story, which does not allow free roaming or catching new monsters.
-
In this article, we will give you a comprehensive review of Evertale, covering its features, gameplay, story, modes, pros and cons, tips and tricks, and FAQs. By the end of this article, you will have a clear idea of what Evertale is all about and whether it is worth playing or not.
What is Evertale?
-
Evertale is an action-adventure role-playing game that was released in 2019 by ZigZaGame Inc., a Japanese game developer that also created Neo Monsters. The game is available for Android and iOS devices.
-
Evertale takes you to the fantasy world of Erden, where you take on a quest to liberate the land from the forces of Pandemonium. As you journey through the land's six regions, you will meet a variety of characters who may join you on your adventure, and each map is dotted with monsters to battle, capture, collect, and train.
-
Main quests drive your progress, while plenty of side quests offer rewards that help you strengthen your party faster. Each region is full of areas to discover and explore, with treasures and rare monsters waiting to be found.
-
Beyond the game’s story, joining other players in guilds can earn you unique benefits and engaging in real-time PvP leagues can help you earn more rewards and rank higher. Evertale is a game that combines elements of monster catching, turn-based combat, and story-driven adventure in a captivating way.
-
Why is Evertale worth playing?
-
Evertale is worth playing for many reasons, such as:
-
-
It has a rich and immersive story that spans over 180 chapters, with multiple endings and choices that affect the outcome.
-
It has a beautiful and diverse world that is inspired by Japanese folklore, mythology, and fairy tales, with over 200 hand-drawn locations to explore.
-
It has a large and varied collection of over 300 monsters to catch, train, and evolve, each with their own unique skills, traits, and personalities.
-
It has a strategic and challenging combat system that requires you to use your monsters' abilities wisely, as well as your own team spirit and elemental advantages.
-
It has a fun and rewarding online mode that lets you join guilds, participate in events, and compete in PvP leagues against other players from around the world.
-
-
If you are a fan of RPGs, especially those that involve monster collecting and battling, you will definitely enjoy Evertale and its many features.
-
How to download and play Evertale?
-
Evertale is available for both Android and iOS devices. You can download it from the Google Play Store or the App Store for a small fee of $0.99. However, the game also offers in-app purchases that can enhance your gaming experience, such as diamonds (the premium currency), bundles (that contain various items and resources), and subscriptions (that provide daily rewards and bonuses).
-
-
To play Evertale on your PC or Mac, you will need to use an emulator software that can run Android or iOS apps on your computer. Some of the popular emulators are BlueStacks, NoxPlayer, and LDPlayer. You will need to download and install the emulator of your choice, then launch it and search for Evertale in the app store. Once you find it, you can download and install it on your emulator and start playing.
-
How to catch, train, and evolve monsters in Evertale?
-
Catching, training, and evolving monsters are some of the core aspects of Evertale. Here are some tips on how to do them:
-
-
To catch a monster, you need to encounter it in the wild or in special events. You can use various items to lure or weaken the monster, such as bait, traps, potions, or weapons. Then, you need to throw a soul stone at it to capture it. The higher the monster's rarity and level, the harder it is to catch.
-
To train a monster, you need to use it in battles or quests to gain experience points (XP) and level up. You can also use items such as books or scrolls to boost its XP or stats. Each monster has a maximum level that depends on its rarity and evolution stage.
-
To evolve a monster, you need to meet certain requirements, such as reaching a specific level or having a specific item. Evolving a monster will increase its stats, skills, and appearance. Some monsters have multiple evolution paths that you can choose from.
-
-
You can view your monster collection in the menu under "Monsters". You can also customize your monster team by selecting up to 6 monsters for your active party. You can switch between different parties depending on the situation or your preference.
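-
To make the relationship between rarity, level, and catch difficulty a little more concrete, here is a toy Python sketch. It is purely illustrative: Evertale's real catch formula is not published, so the rarity weights and the catch_chance function below are assumptions for the example, not the game's actual mechanics.
-
```python
import random

# Assumed rarity penalties -- higher means harder to catch (not values from the game)
RARITY_PENALTY = {"common": 0.0, "rare": 0.25, "epic": 0.45, "legendary": 0.65}

def catch_chance(rarity: str, level: int, weakened: bool = False) -> float:
    """Illustrative only: higher rarity and level lower the chance, weakening raises it."""
    chance = 0.9 - RARITY_PENALTY[rarity] - 0.004 * level
    if weakened:  # e.g. after using bait, traps, potions, or weapons
        chance += 0.2
    return max(0.05, min(0.95, chance))

def throw_soul_stone(rarity: str, level: int, weakened: bool = False) -> bool:
    """Simulate one soul stone throw with the toy probability above."""
    return random.random() < catch_chance(rarity, level, weakened)

# A level 40 legendary is far easier to catch after weakening it first
print(catch_chance("legendary", 40))                 # ~0.09
print(catch_chance("legendary", 40, weakened=True))  # ~0.29
```
-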
How to progress in the story mode and online mode in Evertale?
-
Evertale has two main modes of play: the story mode and the online mode. Each mode has its own features, challenges, and rewards. Here are some differences and tips on how to progress in each mode:
-
Story Mode
-
The story mode is the offline mode of Evertale, where you can follow the main plot of the game and experience the adventures of the Crestbearers. The story mode is divided into 6 regions, each with its own map, quests, monsters, and bosses. You can explore each region at your own pace, but you need to complete certain quests to unlock the next region. You can also replay any region you have cleared to farm for more XP, items, or monsters.
-
The story mode also has a difficulty setting that you can change at any time. The higher the difficulty, the stronger the enemies and the better the rewards. You can choose from Normal, Hard, or Nightmare difficulty levels. You can also unlock a special difficulty level called Endless Mode after you finish the story mode. In Endless Mode, you can challenge randomly generated dungeons with increasing difficulty and rewards.
-
Online Mode
-
The online mode is the connected side of Evertale, where you can interact with other players and participate in various events and activities. It requires an internet connection and consumes mana, a special currency that regenerates over time or can be purchased with real money. The online mode has several features, such as:
-
-
Online Story: This is a continuation of the story mode, where you can follow the events that happen after the main plot. The online story has its own quests, monsters, and bosses, but it does not allow you to free roam or catch new monsters. You can only use the monsters you have collected in the story mode or from events.
-
Guilds: This is where you can join or create a guild with other players. Guilds can provide you with various benefits, such as guild chat, guild shop, guild quests, guild wars, and guild raids. You can also earn guild coins by contributing to your guild's activities, which you can use to buy exclusive items or monsters from the guild shop.
-
PvP Leagues: This is where you can compete against other players in real-time battles. PvP leagues have different tiers and seasons, each with its own rules and rewards. You can earn league points by winning battles, which you can use to rank up and unlock more rewards. You can also earn league coins by participating in battles, which you can use to buy special items or monsters from the league shop.
-
Events: These are special activities that happen periodically in the online mode. Events can offer unique quests, monsters, items, or rewards that are only available for a limited time. Some examples of events are story events, challenge events, colosseum events, and summon events.
-
-
To progress in the online mode, you need to balance your mana consumption and your participation in various features. You also need to constantly upgrade your monster team and your equipment to keep up with the increasing difficulty and competition.
What are the pros and cons of Evertale?
-
Evertale is not a perfect game, and it has its own advantages and disadvantages. Here are some of the pros and cons of Evertale that you should consider before playing it:
-
Pros
-
-
It has a captivating and immersive story that will keep you hooked for hours.
-
It has a stunning and diverse world that will make you want to explore every corner.
-
It has a large and varied collection of monsters that will satisfy your collector's itch.
-
It has a strategic and challenging combat system that will test your skills and tactics.
-
It has a fun and rewarding online mode that will let you interact with other players and compete for glory.
-
-
Cons
-
-
It has a misleading advertisement that may disappoint some players who expected a different game.
-
It has a restrictive online story that may frustrate some players who wanted more freedom and variety.
-
It has a high difficulty curve that may discourage some players who are not used to RPGs or monster games.
-
It has a pay-to-win aspect that may annoy some players who prefer a fair and balanced game.
-
It has a limited mana system that may limit some players who want to play more often or longer.
-
-
Evertale is a game that has a lot of potential, but it also has some flaws that need to be improved. Depending on your preferences and expectations, you may find Evertale to be either a gem or a dud.
-
What are some tips and tricks for beginners in Evertale?
-
If you are new to Evertale, you may feel overwhelmed by the game's complexity and difficulty. However, don't worry, because we have some tips and tricks that can help you get started and enjoy the game more. Here are some of them:
-
-
Save your diamonds for summoning events. Diamonds are the premium currency in Evertale, and they are hard to come by. You can use them to buy various items or resources, but the best way to use them is to summon new monsters during special events. These events offer higher chances of getting rare or legendary monsters, as well as exclusive monsters that are only available for a limited time.
-
Use auto-battle wisely. Auto-battle is a feature that lets you automate your battles and save time. However, auto-battle is not always the best option, especially when you face tougher enemies or bosses. Auto-battle does not take into account your monsters' skills, traits, or elemental advantages, and it may make suboptimal moves or waste your spirit points. Therefore, you should only use auto-battle when you are confident that you can win easily, or when you want to farm for XP or items.
-
Join a guild as soon as possible. Guilds are groups of players who can help each other out in various ways. By joining a guild, you can access the guild chat, where you can ask for advice, tips, or help from other members. You can also access the guild shop, where you can buy exclusive items or monsters with guild coins. You can also participate in guild quests, guild wars, and guild raids, where you can earn more rewards and have fun with your guildmates.
-
Experiment with different monster combinations. Evertale offers a lot of diversity and customization when it comes to your monster team. You can mix and match different monsters based on their skills, traits, personalities, or elements. You can also change their equipment, accessories, or runes to enhance their stats or abilities. You can also evolve them into different forms to unlock new skills or appearances. You should try out different monster combinations and see what works best for you.
-
Have fun and enjoy the game. Evertale is a game that can offer you hours of entertainment and satisfaction. You can immerse yourself in the story mode, explore the world mode, interact with other players in the online mode, or collect and train your monsters in the monster mode. You can play the game at your own pace and style, and discover new things along the way. You should have fun and enjoy the game as much as possible.
-
-
These are some of the tips and tricks that can help you get started and enjoy Evertale more. Of course, there are more things to learn and discover in the game, but we hope that this article has given you a good overview of what Evertale is all about.
-
Frequently Asked Questions
-
Here are some of the frequently asked questions about Evertale:
-
Q: How do I get more diamonds in Evertale?
-
A: There are several ways to get more diamonds in Evertale, such as:
-
-
Completing quests and achievements in the story mode or the online mode.
-
Participating in events and activities in the online mode.
-
Ranking up and unlocking rewards in the PvP leagues.
-
Buying them with real money from the shop.
-
-
Q: How do I get more mana in Evertale?
-
A: Mana is the currency that you need to play the online mode of Evertale. You can get more mana by:
-
-
Waiting for it to regenerate over time. You can regenerate up to 100 mana per day.
-
Using mana potions that you can buy from the shop or get from events or rewards.
-
Buying mana with diamonds from the shop.
-
-
Q: How do I get more soul stones in Evertale?
-
A: Soul stones are the items that you need to catch monsters in Evertale. You can get more soul stones by:
-
-
Buying them with gold or diamonds from the shop.
-
Finding them in chests or hidden spots in the story mode or the online mode.
-
Getting them from events or rewards.
-
-
Q: How do I get more gold in Evertale?
-
A: Gold is the currency that you need to buy items, resources, or equipment in Evertale. You can get more gold by:
-
-
Winning battles or quests in the story mode or the online mode.
-
Selling items, resources, or equipment that you don't need.
-
Getting them from events or rewards.
-
-
Q: How do I get more rare or legendary monsters in Evertale?
-
A: Rare or legendary monsters are the most powerful and desirable monsters in Evertale. You can get more rare or legendary monsters by:
-
-
Summoning them during special events that offer higher chances of getting them.
-
Catching them in the wild or in special locations in the story mode or the online mode.
-
Evolving your existing monsters into their rare or legendary forms.
-
-
I hope this article has helped you learn more about Evertale and how to play it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and have fun playing Evertale!
-
-
How to Download the OBB File for Hay Day on Android
-
Hay Day is one of the most popular farming games for Android devices. It lets you create your own farm, grow crops, raise animals, trade with neighbors, and explore the valley. However, to enjoy all these features, you need to download and install an additional file called an OBB file. In this article, we will explain what an OBB file is, why you need it, how to download it from a trusted source, and how to install it on your Android device.
-
What is an OBB File and Why Do You Need It?
-
An OBB file is an expansion file used by some Android apps distributed using the Google Play online store. It contains data that is not stored in the application's main package (.APK file), such as graphics, media files, and other large program assets. OBB files are often stored in a device's shared storage folder.
Hay Day uses an OBB file to provide graphics, sounds, and videos for the game. These files are separate from the app's APK file and can be used by different apps. For example, an OBB file with map data may be shared and used by Android apps that have been granted access rights to the map data in those .obb files.
-
How to Download the OBB File for Hay Day from a Trusted Source
-
Before downloading any files from external sources, you should be aware of the risks involved. Some websites or apps may offer fake or malicious files that can harm your device or compromise your privacy. Therefore, you should always use a reliable website or app to download the OBB file for Hay Day.
-
One trusted source that we recommend for downloading the OBB file for Hay Day is the website APKPure, which offers verified and safe APK and OBB files for many Android games and apps. To download the OBB file for Hay Day from APKPure, follow these steps:
-
-
Go to the Hay Day page on APKPure and click on the "Download APK" button.
-
Wait for the download to finish and then open the APK file to install Hay Day on your device. You may need to allow installation from unknown sources if you haven't done so before.
-
Go back to the Hay Day page on APKPure and click on the "OBB" tab. You will see a link to download the OBB file for Hay Day.
-
Click on the link and wait for the download to finish. You will get a ZIP file containing the OBB file for Hay Day.
-
-
How to Install the OBB File for Hay Day on Your Android Device
-
After downloading the OBB file for Hay Day, you need to install it on your device. This process involves extracting the OBB file from the ZIP file and moving it to a specific folder on your device storage. You can do this using a file manager app or a computer. Here are the steps to install the OBB file for Hay Day using a file manager app:
-
-
Open your file manager app and locate the ZIP file that you downloaded from APKPure. It should be in the "Download" folder or wherever you saved it.
-
Tap on the ZIP file and select "Extract" or "Unzip". You will get a folder named "com.supercell.hayday". This is the OBB folder for Hay Day.
-
Tap and hold on the OBB folder and select "Cut" or "Move". Then, navigate to the "Android" folder on your device storage. Inside this folder, you will find another folder named "OBB". If you don't see it, you can create it by tapping on the "+" or "New" button and naming it "OBB".
-
Open the "OBB" folder and paste or move the OBB folder for Hay Day there. Make sure that the path is "/Android/OBB/com.supercell.hayday".
-
Close your file manager app and launch Hay Day. You should be able to play the game with all its features.
-
-
If you want to use a computer to install the OBB file for Hay Day, you can follow these steps:
-
-
Connect your device to your computer using a USB cable. Make sure that your device is in file transfer mode.
-
Open your computer's file explorer and locate your device's storage. You should see a folder named "Android". Open it and look for another folder named "OBB". If you don't see it, you can create it by right-clicking and selecting "New Folder" and naming it "OBB".
-
Open another window of your file explorer and locate the ZIP file that you downloaded from APKPure. It should be in your "Downloads" folder or wherever you saved it.
-
Right-click on the ZIP file and select "Extract All" or "Unzip". You will get a folder named "com.supercell.hayday". This is the OBB folder for Hay Day.
-
Drag and drop the OBB folder for Hay Day into the "OBB" folder on your device's storage. Make sure that the path is "/Android/OBB/com.supercell.hayday".
-
Eject your device from your computer and launch Hay Day. You should be able to play the game with all its features.
-
-
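If you are comfortable with the command line, you can also copy the extracted folder to your phone with ADB (Android Debug Bridge) instead of dragging it around in a file explorer. The Python sketch below just wraps the standard adb push command; it assumes ADB is installed and on your PATH, USB debugging is enabled on the device, and the extracted com.supercell.hayday folder sits next to the script.
-
```python
import subprocess
from pathlib import Path

OBB_FOLDER = Path("com.supercell.hayday")  # the folder extracted from the ZIP
DEVICE_OBB_DIR = "/sdcard/Android/obb/"    # shared-storage OBB directory on the device

def push_obb() -> None:
    if not OBB_FOLDER.is_dir():
        raise SystemExit(f"Extract the ZIP first: {OBB_FOLDER} not found")
    # Copies the whole folder, ending up at /sdcard/Android/obb/com.supercell.hayday
    subprocess.run(["adb", "push", str(OBB_FOLDER), DEVICE_OBB_DIR], check=True)
    print("OBB folder copied - you can now launch Hay Day on the device.")

if __name__ == "__main__":
    push_obb()
```
-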
Conclusion
-
In this article, we have explained how to download and install the OBB file for Hay Day on Android devices. By following these steps, you can enjoy one of the best farming games with high-quality graphics, sounds, and videos. Here are some tips or suggestions for playing Hay Day:
-
-
Connect your game to Facebook or Google Play Games to save your progress and sync it across devices.
-
Join a neighborhood or create your own to chat with other players, help each other, and participate in events.
-
Expand your farm by buying more land, building more facilities, and unlocking new crops and animals.
-
Trade with other players using your roadside shop, truck, boat, or town visitors.
-
Explore the valley with your truck or car and collect rewards, tokens, and special items.
-
FAQs
-
Here are some of the frequently asked questions about downloading and installing the OBB file for Hay Day on Android devices:
-
What is the size of the OBB file for Hay Day?
-
The size of the OBB file for Hay Day may vary depending on the device and game version. However, according to APKPure, the latest OBB file for Hay Day (version 1.52.89) is about 144 MB.
-
-
What if I delete or lose the OBB file for Hay Day?
-
If you delete or lose the OBB file for Hay Day, you will not be able to play the game unless you download and install it again. The game will not run properly without the OBB file, as it contains essential data for the game. Therefore, you should always keep a backup of the OBB file or download it from a trusted source whenever you need it.
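-
A simple way to keep such a backup is to copy the OBB folder to another location on your computer. Here is a minimal Python sketch; the source and backup paths are placeholders that you would adjust to wherever the folder lives on your machine.
-
```python
import shutil
from pathlib import Path

# Placeholder paths -- adjust them to your own machine
SOURCE = Path("com.supercell.hayday")              # the extracted OBB folder
BACKUP = Path("backups") / "com.supercell.hayday"  # where the copy should go

def backup_obb() -> None:
    if not SOURCE.is_dir():
        raise SystemExit(f"OBB folder not found: {SOURCE}")
    BACKUP.parent.mkdir(parents=True, exist_ok=True)
    # dirs_exist_ok lets you re-run the backup after a game update (Python 3.8+)
    shutil.copytree(SOURCE, BACKUP, dirs_exist_ok=True)
    print(f"Backed up {SOURCE} to {BACKUP}")

if __name__ == "__main__":
    backup_obb()
```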
-
Can I play Hay Day without the OBB file?
-
No, you cannot play Hay Day without the OBB file. The OBB file is required for the game to function properly, as it contains graphics, sounds, and videos for the game. Without the OBB file, the game will not load or display correctly. You need to download and install the OBB file before launching Hay Day.
-
Can I share or transfer the OBB file for Hay Day to another device?
-
Yes, you can share or transfer the OBB file for Hay Day to another device, as long as it has enough storage space and supports Hay Day. You can use a USB cable, a Bluetooth connection, a Wi-Fi network, or a cloud service to transfer the OBB file from one device to another. However, you need to make sure that you also install Hay Day on the other device and move the OBB file to the correct folder (/Android/OBB/com.supercell.hayday).
-
How can I update the OBB file for Hay Day when there is a new version of the game?
-
You can update the OBB file for Hay Day by downloading and installing it from a trusted source, such as APKPure. You can follow the same steps as mentioned above to download and install the new OBB file. Alternatively, you can use the Google Play Store if it is available on your device. The Google Play Store will automatically download and install the latest version of Hay Day and its OBB file when there is an update.
-
-
My Talking Angela 3 Download: A Guide for Fans of the Virtual Pet Game
-
If you are looking for a fun and interactive game that lets you adopt, care for, and play with a cute virtual cat, then you might want to check out My Talking Angela 3. This is the latest installment in the popular Talking Tom & Friends series, developed by Outfit7. In this game, you can join Angela, the super fun virtual star, as she dances, sings, and explores her way to the top. You can also customize her look, play mini-games, collect stickers, and discover new activities in her 3D world.
-
My Talking Angela 3 is a simulation game that combines elements of chatterbot, dress-up, and casual gaming. It is suitable for players of all ages, especially those who love cats and fashion. The game has millions of fans around the world, who enjoy interacting with Angela and watching her grow from a kitten to an adult. The game also has regular updates that add new features and content.
If you are interested in playing My Talking Angela 3, you might be wondering how to download and install it on your device. Don't worry, we have got you covered. In this article, we will show you how to get My Talking Angela 3 on your iOS, Android, or Windows device. We will also give you some tips and tricks on how to make the most out of your gameplay experience. So, let's get started!
-
How to Download and Install My Talking Angela 3
-
My Talking Angela 3 is available for free on various platforms, such as iOS, Android, and Windows. However, the game also offers in-app purchases that allow you to buy coins, gems, or subscriptions using real money. You can use these currencies to buy items or access premium features in the game. If you don't want to spend money on the game, you can still enjoy it without making any in-app purchases.
-
To download and install My Talking Angela 3 on your device, follow these simple steps:
-
-
For iOS Devices
-
-
Go to the App Store on your iPhone or iPad.
-
Search for "My Talking Angela" in the App Store's search bar.
-
Tap on "Get" or "Install" and wait for the download to finish.
-
Once the download is complete, tap on "Open" or find the app icon on your home screen.
-
Enjoy playing My Talking Angela 3!
-
-
For Android Devices
-
-
Go to Google Play Store on your smartphone or tablet.
-
Search for "My Talking Angela" in the Google Play Store's search bar.
-
Tap on "Install" and wait for the download to finish.
-
Once the download is complete, tap on "Open" or find the app icon on your home screen.
-
Enjoy playing My Talking Angela 3!
-
-
For Windows Devices
-
-
Go to Microsoft Store on your PC or laptop.
-
Search for "My Talking Angela" in the Microsoft Store's search bar.
-
Click on "Get" or "Install" and wait for the download to finish.
-
Once the download is complete, click on "Open" or find the app in your Start menu.
-
Enjoy playing My Talking Angela 3!
-
-
How to Interact with Angela and Customize Her Appearance
-
For Interacting with Angela
-
Once Angela is installed, you can interact with her in many ways to keep her happy, healthy, and clean. For example, you can:
-
Tap on Angela to make her happy, smile, or laugh.
-
Swipe on Angela to pet her, tickle her, or make her purr.
-
Tap on the microphone icon to talk to Angela and hear her repeat what you say.
-
Tap on the camera icon to record a video of your interaction and share it with your friends.
-
Tap on the food icon to feed Angela with different foods and drinks.
-
Tap on the shower icon to bathe Angela and make her clean.
-
Tap on the toothbrush icon to brush Angela's teeth and make them shiny.
-
Tap on the bed icon to put Angela to sleep and turn off the lights.
-
-
For Customizing Angela's Appearance
-
-
Tap on the hanger icon to access Angela's wardrobe and choose from different outfits, accessories, hairstyles, makeup, and stickers.
-
Swipe left or right to browse through different categories and items.
-
Tap on an item to select it and see how it looks on Angela.
-
Tap on the checkmark icon to confirm your choice and save it.
-
Tap on the trash icon to remove an item from Angela.
-
Tap on the gift icon to buy more items with coins or gems.
-
Tap on the home icon to decorate Angela's home and change the background.
-
-
How to Play Mini-Games and Earn Coins and Diamonds
-
Besides interacting with Angela and customizing her appearance, you can also play mini-games and earn coins and diamonds in My Talking Angela 3. Coins and diamonds are the currencies of the game that you can use to buy items or access premium features. You can also earn stickers by playing mini-games, which you can use to unlock new outfits and backgrounds for Angela.
-
There are many mini-games that you can play in My Talking Angela 3, such as:
-
-
Name
Description
-
Cake Tower
Stack cakes as high as you can without dropping them.
-
Dancing Game
Follow the rhythm and tap the buttons at the right time.
-
Puzzle Game
Solve puzzles by matching three or more pieces of the same color.
-
Bubble Shooter
Shoot bubbles and pop them by making groups of three or more of the same color.
-
Rainbow Melody
Create music by tapping on colorful tiles.
-
Sky High
Bounce on clouds and collect stars while avoiding obstacles.
-
Tiny Puzzles
Complete mini-puzzles by finding hidden objects, spotting differences, or solving riddles.
-
Treasure Hunt
Dig for treasure and avoid traps in a mysterious island.
-
Wheel of Fortune
Spin the wheel and win prizes or bonuses.
-
-
To play mini-games and earn coins and diamonds, follow these simple steps:
-
-
Tap on the gamepad icon to access the mini-games menu.
-
Swipe left or right to browse through different mini-games.
-
Tap on a mini-game to start playing it. Follow the instructions on the screen to play the game. You can also tap on the question mark icon for more information about the game rules and controls.
-
Earn coins, diamonds, or stickers by completing levels or achieving high scores. You can also earn bonuses by watching ads or spinning the wheel of fortune.
-
You can play as many mini-games as you want, but some of them may require energy or tickets. You can refill your energy or tickets by waiting for some time, watching ads, spinning the wheel of fortune, or buying them with coins or gems.
-
-
How to Explore the City and Discover New Activities
-
A new feature that My Talking Angela 3 introduces is that you can explore the city with Angela and discover new activities. The city is a 3D world that you can navigate by swiping left or right. You can also zoom in or out by pinching your fingers. The city has different locations that you can visit, such as:
-
-
Name
Description
-
Dance Studio
A place where you can dance with Angela and learn new moves. You can also customize her outfit and accessories for the dance.
-
Music Studio
A place where you can make music with Angela and record your own songs. You can also choose from different genres, instruments, and effects for the music.
-
Photo Studio
A place where you can take photos with Angela and edit them with filters, stickers, and frames. You can also share your photos with your friends or save them to your gallery.
-
Shopping Mall
A place where you can shop for new items for Angela, such as clothes, shoes, jewelry, and more. You can also find special offers and discounts for the items.
-
Cinema
A place where you can watch movies with Angela and enjoy popcorn and drinks. You can also choose from different genres, such as comedy, romance, action, and more.
-
Park
A place where you can relax with Angela and enjoy nature. You can also play with other animals, such as squirrels, birds, and butterflies.
-
Beach
A place where you can have fun with Angela and enjoy the sun and the sea. You can also play with sand, surf, or swim.
-
-
To explore the city and discover new activities, follow these simple steps:
-
-
Tap on the map icon to access the city menu.
-
Swipe left or right to browse through different locations.
-
Tap on a location to visit it. Follow the instructions on the screen to interact with Angela and enjoy the activity. You can also tap on the question mark icon for more information about the location and the activity.
-
Earn coins, diamonds, or stickers by completing tasks or achieving goals in each location. You can also earn bonuses by watching ads or spinning the wheel of fortune.
-
You can visit as many locations as you want, but some of them may require energy or tickets. You can refill your energy or tickets by waiting for some time, watching ads, spinning the wheel of fortune, or buying them with coins or gems.
-
-
Tips and Tricks for My Talking Angela 3
-
Now that you know how to download and install My Talking Angela 3, how to interact with Angela and customize her appearance, how to play mini-games and earn coins and diamonds, and how to explore the city and discover new activities, you might be wondering how to improve your gameplay experience and make it more fun and rewarding. Here are some tips and tricks that you can use to level up faster, unlock more items, use child mode and parental controls, and avoid the hoax and rumors about the game.
-
How to Level Up Faster and Unlock More Items
-
One of the goals of My Talking Angela 3 is to level up Angela from a kitten to an adult. As you level up, you will unlock more items, such as outfits, accessories, hairstyles, makeup, stickers, backgrounds, mini-games, locations, and activities. You will also unlock more features, such as voice recording, video recording, photo editing, music making, dancing lessons, shopping offers, movie genres, sand art, surfing lessons, and more.
-
To level up faster and unlock more items in My Talking Angela 3, follow these simple steps:
-
-
Interact with Angela regularly and make her happy, healthy, and clean. You can do this by talking to her, petting her, feeding her, bathing her, brushing her teeth, and putting her to sleep.
-
Play mini-games and earn coins and diamonds. You can use these currencies to buy items or access premium features in the game.
-
Explore the city and discover new activities. You can also complete tasks or achieve goals in each location to earn coins, diamonds, or stickers.
-
Customize Angela's appearance and style. You can also collect stickers by playing mini-games or visiting locations. You can use these stickers to unlock new outfits and backgrounds for Angela.
-
Watch ads or spin the wheel of fortune to earn bonuses, such as coins, diamonds, energy, tickets, or items.
-
-
How to Use Child Mode and Parental Controls
-
My Talking Angela 3 is a game that is suitable for players of all ages, especially those who love cats and fashion. However, some parents may have concerns about the game's content or features, such as in-app purchases, ads, voice recording, video recording, photo editing, music making, dancing lessons, shopping offers, movie genres, sand art, surfing lessons, and more. If you are a parent who wants to limit or restrict some of these features for your child, you can use the child mode and parental controls in the game.
-
Child mode is a feature that disables some features that may not be appropriate for younger players, such as voice and video recording, photo editing, music making, dancing lessons, shopping offers, and other online or creative activities. It also reduces the number of ads that appear in the game. To activate child mode in My Talking Angela 3, follow these simple steps:
-
-
Tap on the settings icon to access the settings menu.
-
Tap on the child mode icon to toggle it on or off.
-
Enter your birth year to confirm your age and activate child mode.
-
You can also tap on the parental controls icon to access more options, such as disabling in-app purchases, ads, or notifications.
-
-
How to Avoid the Hoax and Rumors about the Game
-
My Talking Angela 3 is a game that is designed to be fun and entertaining for players of all ages. However, there are some hoax and rumors that circulate on the internet that claim that the game is dangerous or harmful. Some of these hoax and rumors include:
-
-
The game has a hidden camera that spies on the players and their surroundings.
-
The game has a hacker or a kidnapper that talks to the players and tries to lure them into danger.
-
The game has a virus or a malware that infects the devices and steals personal information.
-
The game has a curse or a demon that haunts the players and causes bad luck or misfortune.
-
-
These hoax and rumors are completely false and have no basis in reality. They are created by people who want to scare or prank others, or who have malicious intentions. They are not supported by any evidence or proof, and they are contradicted by the facts and testimonials of millions of players who enjoy the game safely and happily.
-
To avoid the hoax and rumors about the game, follow these simple steps:
-
-
Do not believe everything you read or hear on the internet. Always check the source and the credibility of the information before you trust it.
-
Do not share or spread the hoax and rumors to others. This will only cause more confusion and panic among the players and the public.
-
Do not download or install the game from unofficial or suspicious websites or links. Always use the official platforms, such as App Store, Google Play Store, or Microsoft Store, to get the game.
-
Do not give out your personal information or location to anyone in the game or online. Always protect your privacy and security when using any app or service.
-
-
Conclusion
-
My Talking Angela 3 is a simulation game that lets you adopt, care for, and play with a cute virtual cat named Angela. You can interact with her in various ways, customize her appearance and style, play mini-games and earn coins and diamonds, explore the city and discover new activities, and more. The game is free to download and play on iOS, Android, and Windows devices, but it also offers in-app purchases that allow you to buy coins, gems, or subscriptions using real money.
-
If you are a fan of cats and fashion, you will love My Talking Angela 3. The game is fun and interactive, with stunning graphics and sound effects, and it receives regular updates that add new features and content. You can also connect with other players, share your videos, photos, or songs with them, and follow the game's official website and social media accounts for more news, tips, and fun.
-
So, what are you waiting for? Download My Talking Angela 3 today and join Angela on her amazing adventure. You will have a blast playing with her and watching her grow from a kitten to an adult. You will also make her happy and help her achieve her dreams. You will be her best friend forever!
-
Do you have any questions or comments about My Talking Angela 3? Let us know in the comment section below. We would love to hear from you!
-
FAQs
-
What is the difference between My Talking Angela 3 and My Talking Angela 2?
-
My Talking Angela 3 is the latest installment in the popular Talking Tom & Friends series, developed by Outfit7. It is an upgraded version of My Talking Angela 2, with new features and content. Some of these features include:
-
-
A 3D world that you can explore with Angela and discover new activities.
-
A music studio that you can use to make music with Angela and record your own songs.
-
A dance studio that you can use to dance with Angela and learn new moves.
-
A photo studio that you can use to take photos with Angela and edit them with filters, stickers, and frames.
-
A shopping mall that you can use to shop for new items for Angela, such as clothes, shoes, jewelry, and more.
A cinema that you can use to watch movies with Angela and enjoy popcorn and drinks.
-
-
My Talking Angela 2 is the previous installment in the series. It has features similar to those of My Talking Angela 3, but with less variety and polish. Some of these features include:
-
-
A 2D world that you can explore with Angela and discover some activities.
-
A music player that you can use to listen to music with Angela.
-
A dance floor that you can use to dance with Angela and watch her perform.
-
A camera that you can use to take photos with Angela and add some stickers.
-
A wardrobe that you can use to dress up Angela with some outfits and accessories.
-
-
How can I get free gems in My Talking Angela 3?
-
Gems are one of the currencies of My Talking Angela 3, which you can use to buy items or access premium features in the game. You can also use gems to speed up the progress or skip some tasks in the game. You can get gems by using real money or by using some methods that do not require spending money. Some of these methods include:
-
-
Playing mini-games and earning gems as rewards or bonuses.
-
Exploring the city and earning gems as rewards or bonuses.
-
Watching ads or spinning the wheel of fortune and earning gems as bonuses.
-
Completing achievements or daily challenges and earning gems as rewards.
-
Collecting stickers and earning gems as rewards.
-
Connecting your game to your Facebook account and earning gems as a bonus.
-
Inviting your friends to play the game and earning gems as a bonus.
-
-
How can I turn off the ads in My Talking Angela 3?
-
Ads are one of the ways that My Talking Angela 3 generates revenue and supports its development. Ads also provide some benefits for the players, such as bonuses, discounts, or offers. However, some players may find ads annoying or distracting, and may want to turn them off. There are two ways to turn off the ads in My Talking Angela 3:
-
-
Buying a subscription that removes all the ads from the game. You can choose from different subscription plans, such as monthly, quarterly, or yearly. You can also cancel your subscription at any time.
-
Using child mode that reduces the number of ads in the game. You can activate child mode by tapping on the settings icon, then tapping on the child mode icon, then entering your birth year. You can also use parental controls to disable ads completely.
-
-
Is My Talking Angela 3 safe for kids?
-
My Talking Angela 3 is a game that is suitable for players of all ages, especially those who love cats and fashion. The game is designed to be fun and entertaining, with no violence, gore, or inappropriate content. The game also has child mode and parental controls that allow parents to limit or restrict some features that may not be appropriate for younger players, such as voice recording, video recording, photo editing, music making, dancing lessons, shopping offers, movie genres, sand art, surfing lessons, and more.
-
However, like any other app or service, My Talking Angela 3 may have some risks or dangers that parents and kids should be aware of and avoid. Some of these risks or dangers include:
-
-
In-app purchases that may tempt kids to spend real money on the game without their parents' permission or knowledge.
-
Ads that may expose kids to inappropriate or harmful content or products.
-
Voice and video recording that may capture kids' personal information or location without their parents' permission or knowledge.
-
Creative and social features, such as photo editing, music making, dancing lessons, shopping offers, movies, sand art, and surfing lessons, that may expose kids to content or products that are not age-appropriate.
-
-
To ensure that My Talking Angela 3 is safe for kids, follow these simple steps:
-
-
Monitor your kids' gameplay and set limits on their time and spending.
-
Use child mode and parental controls to disable or restrict some features that may not be appropriate for younger players.
-
Teach your kids to protect their privacy and security when using any app or service.
-
Report any hoax or rumors that claim that the game is dangerous or harmful.
-
-
Can I play My Talking Angela 3 offline?
-
My Talking Angela 3 is a game that requires an internet connection to download and install, as well as to access some features and content, such as in-app purchases, ads, voice recording, video recording, photo editing, music making, dancing lessons, shopping offers, movie genres, sand art, surfing lessons, and more. However, you can also play the game offline without an internet connection, but with some limitations. Some of these limitations include:
-
-
You cannot buy items or access premium features with coins or gems.
-
You cannot watch ads or spin the wheel of fortune to earn bonuses.
-
You cannot record videos or share them with your friends.
-
You cannot edit photos or share them with your friends.
-
You cannot make music or share it with your friends.
-
You cannot dance or learn new moves.
-
You cannot shop for new items or find special offers.
-
You cannot watch movies or enjoy popcorn and drinks.
-
You cannot play with sand or surf.
-
-
To play My Talking Angela 3 offline, follow these simple steps:
-
-
Make sure you have downloaded and installed the game on your device.
-
Turn off your internet connection or switch to airplane mode on your device.
-
Launch the game and enjoy playing it offline.
-
-
-
How to Play Geometry Dash Lite on PC
-
Do you love arcade games that challenge your reflexes and skills? Do you want to enjoy a rhythmic-based action platformer with stunning graphics and soundtracks? Do you want to play it on a bigger screen with better performance and controls? If you answered yes to any of these questions, then you should try Geometry Dash Lite, a popular game that you can download and play on your PC using an emulator.
In this article, we will show you what Geometry Dash Lite is, why you should play it on PC, and how to download and install it using two different emulators (BlueStacks and NoxPlayer). We will also cover how to play it on PC with some tips and tricks, what the game's features are, how to customize your character, how to use practice mode, and how to access more levels. Finally, we will answer some frequently asked questions about the game at the end of the article.
-
What is Geometry Dash Lite?
-
Geometry Dash Lite is an arcade game developed by RobTop Games. It is a simplified version of the full game Geometry Dash, which has more levels, soundtracks, achievements, an online level editor, and more. In Geometry Dash Lite, you have to jump and fly your way through danger in a rhythm-based action platformer. You have to tap in time with the music to clear spikes and other obstacles, and outside of practice mode a single mistake sends you back to the start of the level.
How to Play Geometry Dash Lite on PC?
-
Now that you have downloaded and installed Geometry Dash Lite on PC using an emulator, you are ready to play the game. The gameplay is simple but challenging: you have to tap the screen (or press the spacebar on your keyboard) to make your character jump and avoid hitting obstacles. You have to reach the end of the level without crashing or falling. The game will test your reflexes, timing, and patience, as you have to memorize the patterns and rhythms of the obstacles and the music.
-
Here are some tips and tricks that might help you play Geometry Dash Lite on PC better:
-
-
Adjust the settings of the game according to your preferences. You can change the sound volume, the graphics quality, the color mode, and the language of the game.
-
Use headphones or speakers to enjoy the amazing soundtracks of the game. The music will help you sync your jumps with the beats and create a more immersive experience.
-
Don't give up easily. The game is very hard and frustrating, but also very rewarding and satisfying. You will feel a great sense of accomplishment when you complete a level that you have been struggling with for a long time.
-
Practice makes perfect. You can use the practice mode in the game to practice any level that you want. In practice mode, you can place checkpoints anywhere in the level and resume from there if you die. This way, you can learn the layout and timing of the level without having to start over every time.
-
Have fun. Geometry Dash Lite is a game that is meant to be enjoyed and not taken too seriously. Don't let the difficulty or frustration ruin your mood or your fun. Just relax and have a good time playing this awesome game.
-
-
What are the Features of Geometry Dash Lite?
-
Geometry Dash Lite is a game that has many features that make it fun and exciting. Here are some of the features of the game:
-
-
The game has 13 levels with different themes, difficulties, and soundtracks. Each level has its own unique design, style, and atmosphere.
-
The game has a simple one-touch gameplay that is easy to learn but hard to master. You just have to tap the screen (or press the spacebar) to jump and avoid obstacles.
-
The game has a colorful and vibrant graphics that are pleasing to the eye. The game also has a smooth and fluid animation that makes the game look more dynamic and lively.
-
The game has an awesome soundtrack that matches the mood and rhythm of each level. The music is catchy, energetic, and immersive.
-
The game has a leaderboard system that allows you to compare your scores with other players around the world. You can also see your rank, achievements, and stats in the game.
-
How to Customize Your Character in Geometry Dash Lite?
-
One of the fun aspects of Geometry Dash Lite is that you can customize your character in the game. You can change the shape, color, and design of your character to make it look more cool and unique. You can also unlock new icons and colors by completing achievements and collecting stars in the game.
-
To customize your character in Geometry Dash Lite, you have to follow these steps:
-
-
Go to the main menu of the game and tap on the icon that looks like a square with a face on it. This will take you to the customization screen.
-
On the customization screen, you can see different tabs that let you change different aspects of your character. You can tap on the tabs to switch between them.
-
The first tab is the icon tab, where you can change the shape of your character. You can scroll through the icons and tap on the one that you like. You can also see how many stars and achievements you need to unlock each icon.
-
The second tab is the primary color tab, where you can change the main color of your character. You can scroll through the colors and tap on the one that you like. You can also see how many stars and achievements you need to unlock each color.
-
The third tab is the secondary color tab, where you can change the secondary color of your character. This is the color that appears on some parts of your character, such as the eyes, mouth, or outline. You can scroll through the colors and tap on the one that you like. You can also see how many stars and achievements you need to unlock each color.
-
The fourth tab is the special tab, where you can change the special design of your character. This is an optional feature that adds some extra details or effects to your character, such as flames, spikes, or glow. You can scroll through the designs and tap on the one that you like. You can also see how many stars and achievements you need to unlock each design.
-
Once you are happy with your character, you can tap on the back button to save your changes and return to the main menu.
-
-
Now you have a customized character that reflects your personality and style in Geometry Dash Lite.
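-
As a rough illustration of how these unlock requirements work, here is a toy Python sketch. The icon names and the star and achievement counts are made up for the example; they are not the game's real unlock thresholds.
-
```python
# Toy unlock table: icon -> (stars required, achievements required); values are invented
ICON_REQUIREMENTS = {
    "default cube": (0, 0),
    "spiky cube":   (10, 2),
    "glowing cube": (40, 5),
}

def unlocked_icons(stars: int, achievements: int) -> list[str]:
    """Return every icon whose star and achievement requirements are both met."""
    return [
        name
        for name, (need_stars, need_achieve) in ICON_REQUIREMENTS.items()
        if stars >= need_stars and achievements >= need_achieve
    ]

print(unlocked_icons(stars=15, achievements=3))  # ['default cube', 'spiky cube']
```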
-
How to Use Practice Mode in Geometry Dash Lite?
-
Practice mode is a feature in Geometry Dash Lite that allows you to practice any level that you want without having to worry about dying or restarting. In practice mode, you can place checkpoints anywhere in the level and resume from there if you die. This way, you can learn the layout and timing of the level without having to start over every time.
-
-
To use practice mode in Geometry Dash Lite, you have to follow these steps:
-
-
Go to the level selection screen of the game and tap on any level that you want to practice.
-
On the level screen, tap on the icon that looks like a flag with a play button on it. This will start practice mode for that level.
-
In practice mode, you can play the level as usual, but with some differences. First, you will see a green flag at the beginning of the level. This is your first checkpoint. Second, you will see a yellow flag at every place where you tap the screen (or press the spacebar). This is where you place your checkpoints. Third, if you die or fall off the screen, you will not restart from the beginning of the level, but from your last checkpoint.
-
You can place as many checkpoints as you want in practice mode. You can also delete a checkpoint by tapping on it again, or skip one by tapping on it twice.
-
You can exit practice mode at any time by tapping on the pause button and then tapping on the exit button. This will take you back to the level screen.
-
-
Practice mode is a great way to improve your skills and confidence in Geometry Dash Lite. You can use it to master any level that you find difficult or challenging.
-
How to Access More Levels in Geometry Dash Lite?
-
Geometry Dash Lite has 13 levels that you can play for free, but if you want to access more levels, you have to buy the full version of the game. The full version of Geometry Dash has over 40 levels, plus an online level editor that allows you to create and share your own levels with other players. You can also play the levels created by other players and rate them.
-
To buy the full version of Geometry Dash, you have to follow these steps:
-
-
Go to the main menu of Geometry Dash Lite and tap on the icon that looks like a lock with a plus sign on it. This will take you to the store screen.
-
On the store screen, you will see the price and the description of the full version of Geometry Dash. You will also see some screenshots and videos of the game.
-
If you want to buy the game, tap on the buy button and follow the instructions to complete the payment. You will need a valid credit card or a Google Play account to buy the game.
-
Once you have bought the game, you can download and install it on your device. You can also transfer your progress and achievements from Geometry Dash Lite to Geometry Dash by using the account system in the game.
-
Now you can enjoy playing more levels and creating your own levels in Geometry Dash.
-
-
The full version of Geometry Dash is worth buying if you love the game and want to experience more content and features. It is also a way to support the developer and appreciate their hard work.
-
Conclusion
-
Geometry Dash Lite is a fun and challenging arcade game that you can play on your PC using an emulator. It has simple one-touch gameplay, colorful graphics, an awesome soundtrack, and a variety of levels that will test your skills and reflexes. You can also customize your character, use practice mode, and access more levels by buying the full version of the game.
-
If you are looking for a game that will keep you entertained and engaged for hours, you should try Geometry Dash Lite. It is a game that will make you jump, fly, and dash through danger in a rhythm-based action platformer. It is a game that will make you feel frustrated and satisfied at the same time. It is a game that will make you love geometry.
-
So what are you waiting for? Download Geometry Dash Lite on PC today and enjoy this amazing game.
-
FAQs
-
Here are some frequently asked questions about Geometry Dash Lite:
-
Q1: What are the system requirements for playing Geometry Dash Lite on PC?
-
A1: The system requirements for playing Geometry Dash Lite on PC are not very high. You just need a Windows or Mac computer with at least 2 GB of RAM, 4 GB of disk space, and a decent graphics card. You also need an emulator like BlueStacks or NoxPlayer to run the game.
-
Q2: Is Geometry Dash Lite free to play?
-
A2: Yes, Geometry Dash Lite is free to play. You can download and play it on your PC without paying anything. However, if you want to access more levels and features, you have to buy the full version of Geometry Dash for $1.99.
-
Q3: How many levels are there in Geometry Dash Lite?
-
A3: There are 13 levels in Geometry Dash Lite, each with its own theme, difficulty, and soundtrack. The levels are:
-
-
Stereo Madness
-
Back On Track
-
Polargeist
-
Dry Out
-
Base After Base
-
Can't Let Go
-
Jumper
-
Time Machine
-
Cycles
-
xStep
-
Clutterfunk
-
Theory of Everything
-
Electroman Adventures
-
-
Q4: Can I create my own levels in Geometry Dash Lite?
-
A4: No, you cannot create your own levels in Geometry Dash Lite. This feature is only available in the full version of Geometry Dash, which has an online level editor that allows you to create and share your own levels with other players.
-
Q5: Can I play Geometry Dash Lite offline?
-
A5: Yes, you can play Geometry Dash Lite offline. You do not need an internet connection to play the game, except for downloading and installing it on your PC using an emulator. However, if you want to access some online features like leaderboards, achievements, or user-generated levels, you will need an internet connection.
-
I hope this article has helped you learn more about Geometry Dash Lite and how to play it on PC. If you have any questions or feedback, please leave a comment below. Thank you for reading and have a great day.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Acoustica Cd Dvd Label Maker 3.40 Keygen Crack [EXCLUSIVE].md b/spaces/contluForse/HuggingGPT/assets/Acoustica Cd Dvd Label Maker 3.40 Keygen Crack [EXCLUSIVE].md
deleted file mode 100644
index d843294c0d7128cb53ea21c3f9f9dc9cdac949b3..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Acoustica Cd Dvd Label Maker 3.40 Keygen Crack [EXCLUSIVE].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
Besides a number of design elements, this tool lets you create your own CD and DVD labels, cases, and documents that can be printed on your CD or DVD. You can also export your document or DVD case directly to a special file format, and you can combine two or more files. Release date: December 18, 2008. The program supports English and many international languages. Package size: 1062.25 kB.
Create your own CD and DVD labels, case templates, and documents that can be printed on your CD or DVD, plus design your own title page for the disc with text and graphics. The software prints directly to CD with the utmost simplicity and flexibility, and it offers an easy interface with a large selection of templates and effects. Release date: December 18, 2008.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Corel VideoStudio Pro X7 Keygen Only How to Get Full Pro Version Without Paying.md b/spaces/contluForse/HuggingGPT/assets/Corel VideoStudio Pro X7 Keygen Only How to Get Full Pro Version Without Paying.md
deleted file mode 100644
index b2a50045e2255869efd6ebcb28e5313f7a4782f2..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Corel VideoStudio Pro X7 Keygen Only How to Get Full Pro Version Without Paying.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
";if(a)return Promise.resolve(m);if(l){l(null,m);return}return m}if(a)return Promise.reject(s);if(l){l(s);return}throw s}}function ll(g,a){return(l,s,m)=>{typeof s=="function"&&(m=s,s=null);const p={...s};s={...fe.defaults,...p};const v=wo(s.silent,s.async,m);if(typeof l>"u"||l===null)return v(new Error("marked(): input parameter is undefined or null"));if(typeof l!="string")return v(new Error("marked(): input parameter is of type "+Object.prototype.toString.call(l)+", string expected"));if(vo(s,m),s.hooks&&(s.hooks.options=s),m){const y=s.highlight;let k;try{s.hooks&&(l=s.hooks.preprocess(l)),k=g(l,s)}catch(Y){return v(Y)}const N=function(Y){let te;if(!Y)try{s.walkTokens&&fe.walkTokens(k,s.walkTokens),te=a(k,s),s.hooks&&(te=s.hooks.postprocess(te))}catch(j){Y=j}return s.highlight=y,Y?v(Y):m(null,te)};if(!y||y.length<3||(delete s.highlight,!k.length))return N();let U=0;fe.walkTokens(k,function(Y){Y.type==="code"&&(U++,setTimeout(()=>{y(Y.text,Y.lang,function(te,j){if(te)return N(te);j!=null&&j!==Y.text&&(Y.text=j,Y.escaped=!0),U--,U===0&&N()})},0))}),U===0&&N();return}if(s.async)return Promise.resolve(s.hooks?s.hooks.preprocess(l):l).then(y=>g(y,s)).then(y=>s.walkTokens?Promise.all(fe.walkTokens(y,s.walkTokens)).then(()=>y):y).then(y=>a(y,s)).then(y=>s.hooks?s.hooks.postprocess(y):y).catch(v);try{s.hooks&&(l=s.hooks.preprocess(l));const y=g(l,s);s.walkTokens&&fe.walkTokens(y,s.walkTokens);let k=a(y,s);return s.hooks&&(k=s.hooks.postprocess(k)),k}catch(y){return v(y)}}}function fe(g,a,l){return ll(d0.lex,p0.parse)(g,a,l)}fe.options=fe.setOptions=function(g){return fe.defaults={...fe.defaults,...g},no(fe.defaults),fe};fe.getDefaults=el;fe.defaults=_0;fe.use=function(...g){const a=fe.defaults.extensions||{renderers:{},childTokens:{}};g.forEach(l=>{const s={...l};if(s.async=fe.defaults.async||s.async||!1,l.extensions&&(l.extensions.forEach(m=>{if(!m.name)throw new Error("extension name required");if(m.renderer){const p=a.renderers[m.name];p?a.renderers[m.name]=function(...v){let y=m.renderer.apply(this,v);return y===!1&&(y=p.apply(this,v)),y}:a.renderers[m.name]=m.renderer}if(m.tokenizer){if(!m.level||m.level!=="block"&&m.level!=="inline")throw new Error("extension level must be 'block' or 'inline'");a[m.level]?a[m.level].unshift(m.tokenizer):a[m.level]=[m.tokenizer],m.start&&(m.level==="block"?a.startBlock?a.startBlock.push(m.start):a.startBlock=[m.start]:m.level==="inline"&&(a.startInline?a.startInline.push(m.start):a.startInline=[m.start]))}m.childTokens&&(a.childTokens[m.name]=m.childTokens)}),s.extensions=a),l.renderer){const m=fe.defaults.renderer||new Vn;for(const p in l.renderer){const v=m[p];m[p]=(...y)=>{let k=l.renderer[p].apply(m,y);return k===!1&&(k=v.apply(m,y)),k}}s.renderer=m}if(l.tokenizer){const m=fe.defaults.tokenizer||new Gn;for(const p in l.tokenizer){const v=m[p];m[p]=(...y)=>{let k=l.tokenizer[p].apply(m,y);return k===!1&&(k=v.apply(m,y)),k}}s.tokenizer=m}if(l.hooks){const m=fe.defaults.hooks||new Ln;for(const p in l.hooks){const v=m[p];Ln.passThroughHooks.has(p)?m[p]=y=>{if(fe.defaults.async)return Promise.resolve(l.hooks[p].call(m,y)).then(N=>v.call(m,N));const k=l.hooks[p].call(m,y);return v.call(m,k)}:m[p]=(...y)=>{let k=l.hooks[p].apply(m,y);return k===!1&&(k=v.apply(m,y)),k}}s.hooks=m}if(l.walkTokens){const m=fe.defaults.walkTokens;s.walkTokens=function(p){let v=[];return v.push(l.walkTokens.call(this,p)),m&&(v=v.concat(m.call(this,p))),v}}fe.setOptions(s)})};fe.walkTokens=function(g,a){let l=[];for(const s of 
g)switch(l=l.concat(a.call(fe,s)),s.type){case"table":{for(const m of s.header)l=l.concat(fe.walkTokens(m.tokens,a));for(const m of s.rows)for(const p of m)l=l.concat(fe.walkTokens(p.tokens,a));break}case"list":{l=l.concat(fe.walkTokens(s.items,a));break}default:fe.defaults.extensions&&fe.defaults.extensions.childTokens&&fe.defaults.extensions.childTokens[s.type]?fe.defaults.extensions.childTokens[s.type].forEach(function(m){l=l.concat(fe.walkTokens(s[m],a))}):s.tokens&&(l=l.concat(fe.walkTokens(s.tokens,a)))}return l};fe.parseInline=ll(d0.lexInline,p0.parseInline);fe.Parser=p0;fe.parser=p0.parse;fe.Renderer=Vn;fe.TextRenderer=al;fe.Lexer=d0;fe.lexer=d0.lex;fe.Tokenizer=Gn;fe.Slugger=il;fe.Hooks=Ln;fe.parse=fe;fe.options;fe.setOptions;fe.use;fe.walkTokens;fe.parseInline;p0.parse;d0.lex;function xo(g){if(typeof g=="function"&&(g={highlight:g}),!g||typeof g.highlight!="function")throw new Error("Must provide highlight function");return typeof g.langPrefix!="string"&&(g.langPrefix="language-"),{async:!!g.async,walkTokens(a){if(a.type!=="code")return;const l=ko(a);if(g.async)return Promise.resolve(g.highlight(a.text,l)).then(yi(a));const s=g.highlight(a.text,l);yi(a)(s)},renderer:{code(a,l,s){const m=(l||"").match(/\S*/)[0],p=m?` class="${g.langPrefix}${xi(m)}"`:"";return a=a.replace(/\n$/,""),`
${s?a:xi(a,!0)}
-
`}}}}function ko(g){return(g.lang||"").match(/\S*/)[0]}function yi(g){return a=>{typeof a=="string"&&a!==g.text&&(g.escaped=!0,g.text=a)}}const sl=/[&<>"']/,So=new RegExp(sl.source,"g"),ol=/[<>"']|&(?!(#\d{1,7}|#[Xx][a-fA-F0-9]{1,6}|\w+);)/,Ao=new RegExp(ol.source,"g"),To={"&":"&","<":"<",">":">",'"':""","'":"'"},wi=g=>To[g];function xi(g,a){if(a){if(sl.test(g))return g.replace(So,wi)}else if(ol.test(g))return g.replace(Ao,wi);return g}var ul={exports:{}};(function(g){var a=typeof window<"u"?window:typeof WorkerGlobalScope<"u"&&self instanceof WorkerGlobalScope?self:{};/**
- * Prism: Lightweight, robust, elegant syntax highlighting
- *
- * @license MIT
- * @author Lea Verou
- * @namespace
- * @public
- */var l=function(s){var m=/(?:^|\s)lang(?:uage)?-([\w-]+)(?=\s|$)/i,p=0,v={},y={manual:s.Prism&&s.Prism.manual,disableWorkerMessageHandler:s.Prism&&s.Prism.disableWorkerMessageHandler,util:{encode:function B(z){return z instanceof k?new k(z.type,B(z.content),z.alias):Array.isArray(z)?z.map(B):z.replace(/&/g,"&").replace(/"u")return null;if("currentScript"in document&&1<2)return document.currentScript;try{throw new Error}catch(D){var B=(/at [^(\r\n]*\((.*):[^:]+:[^:]+\)$/i.exec(D.stack)||[])[1];if(B){var z=document.getElementsByTagName("script");for(var R in z)if(z[R].src==B)return z[R]}return null}},isActive:function(B,z,R){for(var D="no-"+z;B;){var L=B.classList;if(L.contains(z))return!0;if(L.contains(D))return!1;B=B.parentElement}return!!R}},languages:{plain:v,plaintext:v,text:v,txt:v,extend:function(B,z){var R=y.util.clone(y.languages[B]);for(var D in z)R[D]=z[D];return R},insertBefore:function(B,z,R,D){D=D||y.languages;var L=D[B],X={};for(var ae in L)if(L.hasOwnProperty(ae)){if(ae==z)for(var K in R)R.hasOwnProperty(K)&&(X[K]=R[K]);R.hasOwnProperty(ae)||(X[ae]=L[ae])}var G=D[B];return D[B]=X,y.languages.DFS(y.languages,function(W,Ce){Ce===G&&W!=B&&(this[W]=X)}),X},DFS:function B(z,R,D,L){L=L||{};var X=y.util.objId;for(var ae in z)if(z.hasOwnProperty(ae)){R.call(z,ae,z[ae],D||ae);var K=z[ae],G=y.util.type(K);G==="Object"&&!L[X(K)]?(L[X(K)]=!0,B(K,R,null,L)):G==="Array"&&!L[X(K)]&&(L[X(K)]=!0,B(K,R,ae,L))}}},plugins:{},highlightAll:function(B,z){y.highlightAllUnder(document,B,z)},highlightAllUnder:function(B,z,R){var D={callback:R,container:B,selector:'code[class*="language-"], [class*="language-"] code, code[class*="lang-"], [class*="lang-"] code'};y.hooks.run("before-highlightall",D),D.elements=Array.prototype.slice.apply(D.container.querySelectorAll(D.selector)),y.hooks.run("before-all-elements-highlight",D);for(var L=0,X;X=D.elements[L++];)y.highlightElement(X,z===!0,D.callback)},highlightElement:function(B,z,R){var D=y.util.getLanguage(B),L=y.languages[D];y.util.setLanguage(B,D);var X=B.parentElement;X&&X.nodeName.toLowerCase()==="pre"&&y.util.setLanguage(X,D);var ae=B.textContent,K={element:B,language:D,grammar:L,code:ae};function G(Ce){K.highlightedCode=Ce,y.hooks.run("before-insert",K),K.element.innerHTML=K.highlightedCode,y.hooks.run("after-highlight",K),y.hooks.run("complete",K),R&&R.call(K.element)}if(y.hooks.run("before-sanity-check",K),X=K.element.parentElement,X&&X.nodeName.toLowerCase()==="pre"&&!X.hasAttribute("tabindex")&&X.setAttribute("tabindex","0"),!K.code){y.hooks.run("complete",K),R&&R.call(K.element);return}if(y.hooks.run("before-highlight",K),!K.grammar){G(y.util.encode(K.code));return}if(z&&s.Worker){var W=new Worker(y.filename);W.onmessage=function(Ce){G(Ce.data)},W.postMessage(JSON.stringify({language:K.language,code:K.code,immediateClose:!0}))}else G(y.highlight(K.code,K.grammar,K.language))},highlight:function(B,z,R){var D={code:B,grammar:z,language:R};if(y.hooks.run("before-tokenize",D),!D.grammar)throw new Error('The language "'+D.language+'" has no grammar.');return D.tokens=y.tokenize(D.code,D.grammar),y.hooks.run("after-tokenize",D),k.stringify(y.util.encode(D.tokens),D.language)},tokenize:function(B,z){var R=z.rest;if(R){for(var D in R)z[D]=R[D];delete z.rest}var L=new Y;return te(L,L.head,B),U(B,L,z,L.head,0),ve(L)},hooks:{all:{},add:function(B,z){var R=y.hooks.all;R[B]=R[B]||[],R[B].push(z)},run:function(B,z){var R=y.hooks.all[B];if(!(!R||!R.length))for(var D=0,L;L=R[D++];)L(z)}},Token:k};s.Prism=y;function 
k(B,z,R,D){this.type=B,this.content=z,this.alias=R,this.length=(D||"").length|0}k.stringify=function B(z,R){if(typeof z=="string")return z;if(Array.isArray(z)){var D="";return z.forEach(function(G){D+=B(G,R)}),D}var L={type:z.type,content:B(z.content,R),tag:"span",classes:["token",z.type],attributes:{},language:R},X=z.alias;X&&(Array.isArray(X)?Array.prototype.push.apply(L.classes,X):L.classes.push(X)),y.hooks.run("wrap",L);var ae="";for(var K in L.attributes)ae+=" "+K+'="'+(L.attributes[K]||"").replace(/"/g,""")+'"';return"<"+L.tag+' class="'+L.classes.join(" ")+'"'+ae+">"+L.content+""+L.tag+">"};function N(B,z,R,D){B.lastIndex=z;var L=B.exec(R);if(L&&D&&L[1]){var X=L[1].length;L.index+=X,L[0]=L[0].slice(X)}return L}function U(B,z,R,D,L,X){for(var ae in R)if(!(!R.hasOwnProperty(ae)||!R[ae])){var K=R[ae];K=Array.isArray(K)?K:[K];for(var G=0;G=X.reach);_e+=Ve.value.length,Ve=Ve.next){var _t=Ve.value;if(z.length>B.length)return;if(!(_t instanceof k)){var ee=1,Ke;if(Ge){if(Ke=N(g0,_e,B,ie),!Ke||Ke.index>=B.length)break;var lt=Ke.index,Ie=Ke.index+Ke[0].length,je=_e;for(je+=Ve.value.length;lt>=je;)Ve=Ve.next,je+=Ve.value.length;if(je-=Ve.value.length,_e=je,Ve.value instanceof k)continue;for(var gt=Ve;gt!==z.tail&&(jeX.reach&&(X.reach=vt);var st=Ve.prev;Kt&&(st=te(z,st,Kt),_e+=Kt.length),j(z,st,ee);var b0=new k(ae,Ce?y.tokenize(Bt,Ce):Bt,Gt,Bt);if(Ve=te(z,st,b0),v0&&te(z,Ve,v0),ee>1){var Vt={cause:ae+","+G,reach:vt};U(B,z,R,Ve.prev,_e,Vt),X&&Vt.reach>X.reach&&(X.reach=Vt.reach)}}}}}}function Y(){var B={value:null,prev:null,next:null},z={value:null,prev:B,next:null};B.next=z,this.head=B,this.tail=z,this.length=0}function te(B,z,R){var D=z.next,L={value:R,prev:z,next:D};return z.next=L,D.prev=L,B.length++,L}function j(B,z,R){for(var D=z.next,L=0;L/,greedy:!0},prolog:{pattern:/<\?[\s\S]+?\?>/,greedy:!0},doctype:{pattern:/"'[\]]|"[^"]*"|'[^']*')+(?:\[(?:[^<"'\]]|"[^"]*"|'[^']*'|<(?!!--)|)*\]\s*)?>/i,greedy:!0,inside:{"internal-subset":{pattern:/(^[^\[]*\[)[\s\S]+(?=\]>$)/,lookbehind:!0,greedy:!0,inside:null},string:{pattern:/"[^"]*"|'[^']*'/,greedy:!0},punctuation:/^$|[[\]]/,"doctype-tag":/^DOCTYPE/i,name:/[^\s<>'"]+/}},cdata:{pattern://i,greedy:!0},tag:{pattern:/<\/?(?!\d)[^\s>\/=$<%]+(?:\s(?:\s*[^\s>\/=]+(?:\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+(?=[\s>]))|(?=[\s/>])))+)?\s*\/?>/,greedy:!0,inside:{tag:{pattern:/^<\/?[^\s>\/]+/,inside:{punctuation:/^<\/?/,namespace:/^[^\s>\/:]+:/}},"special-attr":[],"attr-value":{pattern:/=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+)/,inside:{punctuation:[{pattern:/^=/,alias:"attr-equals"},{pattern:/^(\s*)["']|["']$/,lookbehind:!0}]}},punctuation:/\/?>/,"attr-name":{pattern:/[^\s>\/]+/,inside:{namespace:/^[^\s>\/:]+:/}}}},entity:[{pattern:/&[\da-z]{1,8};/i,alias:"named-entity"},/?[\da-f]{1,8};/i]},l.languages.markup.tag.inside["attr-value"].inside.entity=l.languages.markup.entity,l.languages.markup.doctype.inside["internal-subset"].inside=l.languages.markup,l.hooks.add("wrap",function(s){s.type==="entity"&&(s.attributes.title=s.content.replace(/&/,"&"))}),Object.defineProperty(l.languages.markup.tag,"addInlined",{value:function(m,p){var v={};v["language-"+p]={pattern:/(^$)/i,lookbehind:!0,inside:l.languages[p]},v.cdata=/^$/i;var y={"included-cdata":{pattern://i,inside:v}};y["language-"+p]={pattern:/[\s\S]+/,inside:l.languages[p]};var k={};k[m]={pattern:RegExp(/(<__[^>]*>)(?:))*\]\]>|(?!)/.source.replace(/__/g,function(){return 
m}),"i"),lookbehind:!0,greedy:!0,inside:y},l.languages.insertBefore("markup","cdata",k)}}),Object.defineProperty(l.languages.markup.tag,"addAttribute",{value:function(s,m){l.languages.markup.tag.inside["special-attr"].push({pattern:RegExp(/(^|["'\s])/.source+"(?:"+s+")"+/\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+(?=[\s>]))/.source,"i"),lookbehind:!0,inside:{"attr-name":/^[^\s=]+/,"attr-value":{pattern:/=[\s\S]+/,inside:{value:{pattern:/(^=\s*(["']|(?!["'])))\S[\s\S]*(?=\2$)/,lookbehind:!0,alias:[m,"language-"+m],inside:l.languages[m]},punctuation:[{pattern:/^=/,alias:"attr-equals"},/"|'/]}}}})}}),l.languages.html=l.languages.markup,l.languages.mathml=l.languages.markup,l.languages.svg=l.languages.markup,l.languages.xml=l.languages.extend("markup",{}),l.languages.ssml=l.languages.xml,l.languages.atom=l.languages.xml,l.languages.rss=l.languages.xml,function(s){var m=/(?:"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"|'(?:\\(?:\r\n|[\s\S])|[^'\\\r\n])*')/;s.languages.css={comment:/\/\*[\s\S]*?\*\//,atrule:{pattern:RegExp("@[\\w-](?:"+/[^;{\s"']|\s+(?!\s)/.source+"|"+m.source+")*?"+/(?:;|(?=\s*\{))/.source),inside:{rule:/^@[\w-]+/,"selector-function-argument":{pattern:/(\bselector\s*\(\s*(?![\s)]))(?:[^()\s]|\s+(?![\s)])|\((?:[^()]|\([^()]*\))*\))+(?=\s*\))/,lookbehind:!0,alias:"selector"},keyword:{pattern:/(^|[^\w-])(?:and|not|only|or)(?![\w-])/,lookbehind:!0}}},url:{pattern:RegExp("\\burl\\((?:"+m.source+"|"+/(?:[^\\\r\n()"']|\\[\s\S])*/.source+")\\)","i"),greedy:!0,inside:{function:/^url/i,punctuation:/^\(|\)$/,string:{pattern:RegExp("^"+m.source+"$"),alias:"url"}}},selector:{pattern:RegExp(`(^|[{}\\s])[^{}\\s](?:[^{};"'\\s]|\\s+(?![\\s{])|`+m.source+")*(?=\\s*\\{)"),lookbehind:!0},string:{pattern:m,greedy:!0},property:{pattern:/(^|[^-\w\xA0-\uFFFF])(?!\s)[-_a-z\xA0-\uFFFF](?:(?!\s)[-\w\xA0-\uFFFF])*(?=\s*:)/i,lookbehind:!0},important:/!important\b/i,function:{pattern:/(^|[^-a-z0-9])[-a-z0-9]+(?=\()/i,lookbehind:!0},punctuation:/[(){};:,]/},s.languages.css.atrule.inside.rest=s.languages.css;var 
p=s.languages.markup;p&&(p.tag.addInlined("style","css"),p.tag.addAttribute("style","css"))}(l),l.languages.clike={comment:[{pattern:/(^|[^\\])\/\*[\s\S]*?(?:\*\/|$)/,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],string:{pattern:/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0},"class-name":{pattern:/(\b(?:class|extends|implements|instanceof|interface|new|trait)\s+|\bcatch\s+\()[\w.\\]+/i,lookbehind:!0,inside:{punctuation:/[.\\]/}},keyword:/\b(?:break|catch|continue|do|else|finally|for|function|if|in|instanceof|new|null|return|throw|try|while)\b/,boolean:/\b(?:false|true)\b/,function:/\b\w+(?=\()/,number:/\b0x[\da-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?/i,operator:/[<>]=?|[!=]=?=?|--?|\+\+?|&&?|\|\|?|[?*/~^%]/,punctuation:/[{}[\];(),.:]/},l.languages.javascript=l.languages.extend("clike",{"class-name":[l.languages.clike["class-name"],{pattern:/(^|[^$\w\xA0-\uFFFF])(?!\s)[_$A-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\.(?:constructor|prototype))/,lookbehind:!0}],keyword:[{pattern:/((?:^|\})\s*)catch\b/,lookbehind:!0},{pattern:/(^|[^.]|\.\.\.\s*)\b(?:as|assert(?=\s*\{)|async(?=\s*(?:function\b|\(|[$\w\xA0-\uFFFF]|$))|await|break|case|class|const|continue|debugger|default|delete|do|else|enum|export|extends|finally(?=\s*(?:\{|$))|for|from(?=\s*(?:['"]|$))|function|(?:get|set)(?=\s*(?:[#\[$\w\xA0-\uFFFF]|$))|if|implements|import|in|instanceof|interface|let|new|null|of|package|private|protected|public|return|static|super|switch|this|throw|try|typeof|undefined|var|void|while|with|yield)\b/,lookbehind:!0}],function:/#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*(?:\.\s*(?:apply|bind|call)\s*)?\()/,number:{pattern:RegExp(/(^|[^\w$])/.source+"(?:"+(/NaN|Infinity/.source+"|"+/0[bB][01]+(?:_[01]+)*n?/.source+"|"+/0[oO][0-7]+(?:_[0-7]+)*n?/.source+"|"+/0[xX][\dA-Fa-f]+(?:_[\dA-Fa-f]+)*n?/.source+"|"+/\d+(?:_\d+)*n/.source+"|"+/(?:\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\.\d+(?:_\d+)*)(?:[Ee][+-]?\d+(?:_\d+)*)?/.source)+")"+/(?![\w$])/.source),lookbehind:!0},operator:/--|\+\+|\*\*=?|=>|&&=?|\|\|=?|[!=]==|<<=?|>>>?=?|[-+*/%&|^!=<>]=?|\.{3}|\?\?=?|\?\.?|[~:]/}),l.languages.javascript["class-name"][0].pattern=/(\b(?:class|extends|implements|instanceof|interface|new)\s+)[\w.\\]+/,l.languages.insertBefore("javascript","keyword",{regex:{pattern:RegExp(/((?:^|[^$\w\xA0-\uFFFF."'\])\s]|\b(?:return|yield))\s*)/.source+/\//.source+"(?:"+/(?:\[(?:[^\]\\\r\n]|\\.)*\]|\\.|[^/\\\[\r\n])+\/[dgimyus]{0,7}/.source+"|"+/(?:\[(?:[^[\]\\\r\n]|\\.|\[(?:[^[\]\\\r\n]|\\.|\[(?:[^[\]\\\r\n]|\\.)*\])*\])*\]|\\.|[^/\\\[\r\n])+\/[dgimyus]{0,7}v[dgimyus]{0,7}/.source+")"+/(?=(?:\s|\/\*(?:[^*]|\*(?!\/))*\*\/)*(?:$|[\r\n,.;:})\]]|\/\/))/.source),lookbehind:!0,greedy:!0,inside:{"regex-source":{pattern:/^(\/)[\s\S]+(?=\/[a-z]*$)/,lookbehind:!0,alias:"language-regex",inside:l.languages.regex},"regex-delimiter":/^\/|\/$/,"regex-flags":/^[a-z]+$/}},"function-variable":{pattern:/#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*[=:]\s*(?:async\s*)?(?:\bfunction\b|(?:\((?:[^()]|\([^()]*\))*\)|(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)\s*=>))/,alias:"function"},parameter:[{pattern:/(function(?:\s+(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)?\s*\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\))/,lookbehind:!0,inside:l.languages.javascript},{pattern:/(^|[^$\w\xA0-\uFFFF])(?!\s)[_$a-z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*=>)/i,lookbehind:!0,inside:l.languages.javascript},{pattern:/(\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=
\s*\)\s*=>)/,lookbehind:!0,inside:l.languages.javascript},{pattern:/((?:\b|\s|^)(?!(?:as|async|await|break|case|catch|class|const|continue|debugger|default|delete|do|else|enum|export|extends|finally|for|from|function|get|if|implements|import|in|instanceof|interface|let|new|null|of|package|private|protected|public|return|set|static|super|switch|this|throw|try|typeof|undefined|var|void|while|with|yield)(?![$\w\xA0-\uFFFF]))(?:(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*\s*)\(\s*|\]\s*\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\)\s*\{)/,lookbehind:!0,inside:l.languages.javascript}],constant:/\b[A-Z](?:[A-Z_]|\dx?)*\b/}),l.languages.insertBefore("javascript","string",{hashbang:{pattern:/^#!.*/,greedy:!0,alias:"comment"},"template-string":{pattern:/`(?:\\[\s\S]|\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}|(?!\$\{)[^\\`])*`/,greedy:!0,inside:{"template-punctuation":{pattern:/^`|`$/,alias:"string"},interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}/,lookbehind:!0,inside:{"interpolation-punctuation":{pattern:/^\$\{|\}$/,alias:"punctuation"},rest:l.languages.javascript}},string:/[\s\S]+/}},"string-property":{pattern:/((?:^|[,{])[ \t]*)(["'])(?:\\(?:\r\n|[\s\S])|(?!\2)[^\\\r\n])*\2(?=\s*:)/m,lookbehind:!0,greedy:!0,alias:"property"}}),l.languages.insertBefore("javascript","operator",{"literal-property":{pattern:/((?:^|[,{])[ \t]*)(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*:)/m,lookbehind:!0,alias:"property"}}),l.languages.markup&&(l.languages.markup.tag.addInlined("script","javascript"),l.languages.markup.tag.addAttribute(/on(?:abort|blur|change|click|composition(?:end|start|update)|dblclick|error|focus(?:in|out)?|key(?:down|up)|load|mouse(?:down|enter|leave|move|out|over|up)|reset|resize|scroll|select|slotchange|submit|unload|wheel)/.source,"javascript")),l.languages.js=l.languages.javascript,function(){if(typeof l>"u"||typeof document>"u")return;Element.prototype.matches||(Element.prototype.matches=Element.prototype.msMatchesSelector||Element.prototype.webkitMatchesSelector);var s="Loading…",m=function(pe,le){return"✖ Error "+pe+" while fetching file: "+le},p="✖ Error: File does not exist or is empty",v={js:"javascript",py:"python",rb:"ruby",ps1:"powershell",psm1:"powershell",sh:"bash",bat:"batch",h:"c",tex:"latex"},y="data-src-status",k="loading",N="loaded",U="failed",Y="pre[data-src]:not(["+y+'="'+N+'"]):not(['+y+'="'+k+'"])';function te(pe,le,O){var B=new XMLHttpRequest;B.open("GET",pe,!0),B.onreadystatechange=function(){B.readyState==4&&(B.status<400&&B.responseText?le(B.responseText):B.status>=400?O(m(B.status,B.statusText)):O(p))},B.send(null)}function j(pe){var le=/^\s*(\d+)\s*(?:(,)\s*(?:(\d+)\s*)?)?$/.exec(pe||"");if(le){var O=Number(le[1]),B=le[2],z=le[3];return B?z?[O,Number(z)]:[O,void 0]:[O,O]}}l.hooks.add("before-highlightall",function(pe){pe.selector+=", "+Y}),l.hooks.add("before-sanity-check",function(pe){var le=pe.element;if(le.matches(Y)){pe.code="",le.setAttribute(y,k);var O=le.appendChild(document.createElement("CODE"));O.textContent=s;var B=le.getAttribute("data-src"),z=pe.language;if(z==="none"){var R=(/\.(\w+)$/.exec(B)||[,"none"])[1];z=v[R]||R}l.util.setLanguage(O,z),l.util.setLanguage(le,z);var D=l.plugins.autoloader;D&&D.loadLanguages(z),te(B,function(L){le.setAttribute(y,N);var X=j(le.getAttribute("data-range"));if(X){var 
ae=L.split(/\r\n?|\n/g),K=X[0],G=X[1]==null?ae.length:X[1];K<0&&(K+=ae.length),K=Math.max(0,Math.min(K-1,ae.length)),G<0&&(G+=ae.length),G=Math.max(0,Math.min(G,ae.length)),L=ae.slice(K,G).join(`
-`),le.hasAttribute("data-start")||le.setAttribute("data-start",String(K+1))}O.textContent=L,l.highlightElement(O)},function(L){le.setAttribute(y,U),O.textContent=L})}}),l.plugins.fileHighlight={highlight:function(le){for(var O=(le||document).querySelectorAll(Y),B=0,z;z=O[B++];)l.highlightElement(z)}};var ve=!1;l.fileHighlight=function(){ve||(console.warn("Prism.fileHighlight is deprecated. Use `Prism.plugins.fileHighlight.highlight` instead."),ve=!0),l.plugins.fileHighlight.highlight.apply(this,arguments)}}()})(ul);var _o=ul.exports;const En=Xi(_o);Prism.languages.python={comment:{pattern:/(^|[^\\])#.*/,lookbehind:!0,greedy:!0},"string-interpolation":{pattern:/(?:f|fr|rf)(?:("""|''')[\s\S]*?\1|("|')(?:\\.|(?!\2)[^\\\r\n])*\2)/i,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^{])(?:\{\{)*)\{(?!\{)(?:[^{}]|\{(?!\{)(?:[^{}]|\{(?!\{)(?:[^{}])+\})+\})+\}/,lookbehind:!0,inside:{"format-spec":{pattern:/(:)[^:(){}]+(?=\}$)/,lookbehind:!0},"conversion-option":{pattern://,alias:"punctuation"},rest:null}},string:/[\s\S]+/}},"triple-quoted-string":{pattern:/(?:[rub]|br|rb)?("""|''')[\s\S]*?\1/i,greedy:!0,alias:"string"},string:{pattern:/(?:[rub]|br|rb)?("|')(?:\\.|(?!\1)[^\\\r\n])*\1/i,greedy:!0},function:{pattern:/((?:^|\s)def[ \t]+)[a-zA-Z_]\w*(?=\s*\()/g,lookbehind:!0},"class-name":{pattern:/(\bclass\s+)\w+/i,lookbehind:!0},decorator:{pattern:/(^[\t ]*)@\w+(?:\.\w+)*/m,lookbehind:!0,alias:["annotation","punctuation"],inside:{punctuation:/\./}},keyword:/\b(?:_(?=\s*:)|and|as|assert|async|await|break|case|class|continue|def|del|elif|else|except|exec|finally|for|from|global|if|import|in|is|lambda|match|nonlocal|not|or|pass|print|raise|return|try|while|with|yield)\b/,builtin:/\b(?:__import__|abs|all|any|apply|ascii|basestring|bin|bool|buffer|bytearray|bytes|callable|chr|classmethod|cmp|coerce|compile|complex|delattr|dict|dir|divmod|enumerate|eval|execfile|file|filter|float|format|frozenset|getattr|globals|hasattr|hash|help|hex|id|input|int|intern|isinstance|issubclass|iter|len|list|locals|long|map|max|memoryview|min|next|object|oct|open|ord|pow|property|range|raw_input|reduce|reload|repr|reversed|round|set|setattr|slice|sorted|staticmethod|str|sum|super|tuple|type|unichr|unicode|vars|xrange|zip)\b/,boolean:/\b(?:False|None|True)\b/,number:/\b0(?:b(?:_?[01])+|o(?:_?[0-7])+|x(?:_?[a-f0-9])+)\b|(?:\b\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\B\.\d+(?:_\d+)*)(?:e[+-]?\d+(?:_\d+)*)?j?(?!\w)/i,operator:/[-+%=]=?|!=|:=|\*\*?=?|\/\/?=?|<[<=>]?|>[=>]?|[&|^~]/,punctuation:/[{}[\];(),.:]/};Prism.languages.python["string-interpolation"].inside.interpolation.inside.rest=Prism.languages.python;Prism.languages.py=Prism.languages.python;(function(g){var 
a=/\\(?:[^a-z()[\]]|[a-z*]+)/i,l={"equation-command":{pattern:a,alias:"regex"}};g.languages.latex={comment:/%.*/,cdata:{pattern:/(\\begin\{((?:lstlisting|verbatim)\*?)\})[\s\S]*?(?=\\end\{\2\})/,lookbehind:!0},equation:[{pattern:/\$\$(?:\\[\s\S]|[^\\$])+\$\$|\$(?:\\[\s\S]|[^\\$])+\$|\\\([\s\S]*?\\\)|\\\[[\s\S]*?\\\]/,inside:l,alias:"string"},{pattern:/(\\begin\{((?:align|eqnarray|equation|gather|math|multline)\*?)\})[\s\S]*?(?=\\end\{\2\})/,lookbehind:!0,inside:l,alias:"string"}],keyword:{pattern:/(\\(?:begin|cite|documentclass|end|label|ref|usepackage)(?:\[[^\]]+\])?\{)[^}]+(?=\})/,lookbehind:!0},url:{pattern:/(\\url\{)[^}]+(?=\})/,lookbehind:!0},headline:{pattern:/(\\(?:chapter|frametitle|paragraph|part|section|subparagraph|subsection|subsubparagraph|subsubsection|subsubsubparagraph)\*?(?:\[[^\]]+\])?\{)[^}]+(?=\})/,lookbehind:!0,alias:"class-name"},function:{pattern:a,alias:"selector"},punctuation:/[[\]{}&]/},g.languages.tex=g.languages.latex,g.languages.context=g.languages.latex})(Prism);const Mo=``,zo=``,ki=``,cl=/[&<>"']/,Eo=new RegExp(cl.source,"g"),hl=/[<>"']|&(?!(#\d{1,7}|#[Xx][a-fA-F0-9]{1,6}|\w+);)/,Co=new RegExp(hl.source,"g"),Bo={"&":"&","<":"<",">":">",'"':""","'":"'"},Si=g=>Bo[g]||"";function Cn(g,a){if(a){if(cl.test(g))return g.replace(Eo,Si)}else if(hl.test(g))return g.replace(Co,Si);return g}const Do={code(g,a,l){const s=(a??"").match(/\S*/)?.[0]??"";if(this.options.highlight){const m=this.options.highlight(g,s);m!=null&&m!==g&&(l=!0,g=m)}return g=g.replace(/\n$/,"")+`
-`,s?'
-
-Baby-Doll - Fabulous birthday.avi !FULL! · Baby-Doll - Dreamlike Birthday.avi. Fairytale birthday.
-Download for free! Download the film Fabulous Birthday, an adventure directed by Sergei Gerasimov (USSR), in high quality and without registration on Tvigle.ru.
-Fairytale birthday.
-Country: USSR.
-Director: Sergey Gerasimov.
-Starring: Alexander Demyanenko.
-Download in the archive!
-A film for children.
-Year of release: 1984.
-Genre: adventure.
-Original title: Fabulous Birthday.
-Download for free Fabulous birthday.
-Country: USSR. 8a78ff9644
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Ctroniq Snook C70L Flash File MT6735 6.0 Firmware Stock Rom.md b/spaces/diacanFperku/AutoGPT/Ctroniq Snook C70L Flash File MT6735 6.0 Firmware Stock Rom.md
deleted file mode 100644
index a96ace5d7127d2948ed0f95bb5d7e4d8802a8e58..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Ctroniq Snook C70L Flash File MT6735 6.0 Firmware Stock Rom.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
On this page, you will find the iThink T233 Tab new phone CM2 read flash file/firmware/stock ROM free for your computer. The firmware comes in a zip package, which contains the iThink T233 Tab FRP reset hang logo fix flash file, flash tool, and USB driver. Download the iThink T233 Tab flash file firmware stock ROM for your iThink T233 Tab Android 5.0.2/5.1.1/5.2/5.3 phone.
-
Download Samsung Galaxy S5 firmware. Almost everyone today knows what the Android operating system is, why it is so popular among millions of users, how to use its full potential, and how to root an Android Samsung Galaxy phone and unroot it back to the stock firmware or ROM. There are a lot of different mobile firmwares and customized ROMs for rooted Android devices, but sometimes we have to go back to the stock firmware. There can be different reasons to download and upgrade the Samsung Galaxy S5 firmware: to go back to the original stock firmware, to unroot the phone, to recover a bricked phone, or to use stock apps and OS upgrades.
On this page, you will find the S-Color U709 new phone CM2 read flash file/firmware/stock ROM free for your computer. The firmware comes in a zip package, which contains the S-Color U709 FRP reset hang logo fix flash file, flash tool, and USB driver. Download the S-Color U709 flash file firmware stock ROM for your S-Color U709 Android 5.1 phone.
-
Sometimes Android smartphone firmware is broken by other software or attacked by hacking code. The phone then shows unwanted behaviours like FRP lock, getting stuck on the logo, a blank or white display, or even going dead after flashing. If you would like to recover your Gright G626 mobile, you ought to flash it using the Gright G626 flash file firmware or stock ROM.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Jetbrains Resharper 8 Keygen 13 WORK.md b/spaces/diacanFperku/AutoGPT/Jetbrains Resharper 8 Keygen 13 WORK.md
deleted file mode 100644
index aad75f74ef0b3f7e917250f499b92c2d420d4532..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Jetbrains Resharper 8 Keygen 13 WORK.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-JetBrains is an advanced software vendor specializing in intelligent development tools, including IntelliJ IDEA, the leading Java IDE, ... Download free JetBrains IntelliJ IDEA 2017 (Russian version) in Russian for Windows.
-IntelliJ IDEA is a very flexible and user-friendly development environment.
-IntelliJ IDEA - download IntelliJ IDEA Community Edition 2019.3.1 for free.
-IntelliJ IDEA - The most popular development environment for Java and other programming languages.
-8a78ff9644
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Manajemen Sumber Daya Manusia Ebook Download.md b/spaces/diacanFperku/AutoGPT/Manajemen Sumber Daya Manusia Ebook Download.md
deleted file mode 100644
index fd680174a847fb38ecce085a86ac723f8dc1f1a9..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Manajemen Sumber Daya Manusia Ebook Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-